British Association for American Studies

Ralph Willett, Hard-Boiled Detective Fiction

BAAS Pamphlet No. 23 (First Published 1992)

ISBN: 0 946488 13 4
  1. Introduction
  2. Dashiell Hammett: Taking the Lid Off Life
  3. Raymond Chandler: The Exaggeration of the Possible
  4. Ross Macdonald: Unhappy Families
  5. Honky-Tonking and Working Out
  6. Crime Comes to Harlem and LA
  7. Miami Blues
  8. Urban Deserts and Desert Landscapes
  9. Lay That Pistol Down, Babe
  10. Conclusion
  11. Guide to Further Reading
  12. Notes
British Association for American Studies. All rights reserved. No part of this pamphlet may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without permission in writing from the publisher, except by a reviewer who may quote brief passages in a review. The publication of a pamphlet by the British Association for American Studies does not necessarily imply the Association’s official approbation of the opinions expressed therein.

hard-boiled fiction: a tough unsentimental style of American crime writing that brought a new tone of earthy realism or naturalism to the field of detective fiction. Hard-boiled fiction uses graphic sex and violence, vivid but often sordid urban backgrounds, and fast-paced, slangy dialogue.
New Encyclopedia Britannica

1. Introduction

Twenty years ago, over-reacting to the popularity of Mickey Spillane’s books, George Grella announced that the day of the private detective was over[1]: he had been replaced by a monster of sadism and fascism. The recent renaissance of the hard-boiled novel has shown this assertion to be premature. One explanation for the revival is that the termination of the Cold War has brought about the eclipse of the spy thriller, creating more space in the market for the fictionalization of crime which, as in the period of Al Capone and Prohibition, fills the newspaper headlines. Another factor is the continued availability of new landscapes and settings: run-down, rusty Detroit, the glitzy casinos of Atlantic City and the pastel Art Deco of Miami Beach all feature in the thrillers of Elmore Leonard. Racial and gendered minorities have found voice and visibility in Tony Hillerman’s Navajo Tribal Policemen and the homosexual detectives – male and female – of Joseph Hansen, Barbara Wilson, Mary Wings and others. These examples, and the creation of a German, American-style private eye, Bernie Gunther, by the British writer Philip Kerr, indicate the extent to which the formulas and restraints of the genre can be stretched for particular purposes and meanings.

Yet perhaps the most compelling reason for the recent popularity is provided, ironically, by Grella himself. Emphasizing the romantic and literary nature of hard-boiled fiction, he refers to it as “an expanding metaphor for universal sinfulness” (a perception especially applicable to the later novels of Ross Macdonald). In the contemporary world a dominant example of such a metaphor is the image of the city as a waste land devastated by drugs, violence, pollution, garbage and a decaying physical infrastructure. Only detectives, cops and their surrogates temporarily check the enfolding chaos, but ancient moral oversimplifications are out of place and the role of the detective figure is increasingly problematized.

Todorov’s claim that to improve upon detective fiction is to write “literature” is a highly questionable and elitist construction of categories of discourse. The process of undermining that position was beginning in the late 20s when Hammett was able to move rapidly from the pulp aura of Black Mask to the respectability of publisher A.A. Knopf. Hard-boiled detective fiction in the twentieth century both emulates High Art Literature and challenges the hierarchies produced by categories through intertextuality. The blurring of the distinction between Popular and High Art in the work of such novelists as Robert B. Parker enables hard-boiled fiction to be regarded as a new hybrid and self-consciously intertextual form.

Hard-boiled fiction has repeatedly acknowledged earlier American literature and culture, in particular the self-reliance and stoicism of the frontier experience and the Puritan perspective of seeing life as moral drama, which animate such works of classic American literature as Moby-Dick and The Scarlet Letter. Henry Bamford Parkes, in his important essay “Metamorphoses of Leatherstocking”, identified Natty Bumppo as the archetype for the hard-boiled detective. In The Drowning Pool (1950) Ross Macdonald’s sleuth Lew Archer refers to himself as Leatherstocking, and in the later novel, The Zebra-Striped Hearse (1962), uses the name Bumppo. The two idealised figures of detective and pioneer share a multiplicity of characteristics: professional skills, physical courage affirmed as masculine potency, fortitude, moral strength, a fierce desire for justice, social marginality and a degree of anti-intellectualism. Bouncing between two worlds, they are fully integrated into neither. However, the politics of deference which Leatherstocking represents has long since disappeared. Hard-boiled dicks are conspicuously bloody-minded towards official institutions and their agents, such as the incompetent, brutal or corrupt police officers who enforce “the law”.

The antipathy towards the intellectual and the scholarly (implied in the very term hard-boiled) has diminished and been reversed. It was in any case often deflected towards the target of the sexually aberrant or merely effete, such as the men in Chandler’s novels who have carefully manicured nails or who use cologne and jewelry. The Maltese Falcon (1930) and Farewell, My Lovely (1940) provide instances which document this manoeuvre: in the latter novel Marlowe describes the living quarters of Lindsay Marriott (ex-Harvard),

The carpet almost tickled my ankles. There was a concert grand piano closed down. On one corner of it stood a tall silver vase on a strip of peach-coloured velvet and a single yellow rose in the vase…. It was the kind of room where people sit with their feet in their laps and sip absinthe through lumped sugar and talk with high affected voices and sometimes just squeak. It was a room where anything could happen except work.[2]

The hard-boiled detective novel is one of the few fictional genres where the depiction of work is a major concern, sometimes pushing the original crime to the periphery. The central focus becomes the detective at his job, reflecting, phoning, making notes, following leads and suspects, interviewing witnesses – and engaging in violent acts. The fictional PI Kinsey Millhone gives an ironic account of herself at work in A is for Alibi (1982): “The basic characteristics of any good investigator are a plodding nature and infinite patience. Society has inadvertently been grooming women to this end for years.” (pp. 30-31) A notably economical use of workplaces is found in Barbara Wilson’s Murder in the Collective (1984), when the print workshops run by a left-wing group and a lesbian collective become the setting both for murder and for the exploration of sexual and other issues.

In the case of Hammett’s Continental Op, who skilfully elicits information from telephonists, cashiers, taxi drivers and transport workers, professionalism and satisfaction with the work coincide, as the Op himself announces in “The Gutting of Couffignal” (1925). Unlike his successors, he works for employers, the Continental Detective Agency, a business organization representing in microcosm the capitalist state. It is based on Pinkerton’s, for whom Hammett was an agent both before and after World War One, and whose motto “We Never Sleep”, along with its logo of a staring eye, implied the unseen surveillance of a security network. Cold and cynical, the Op demonstrates how devotion to his job makes him both an ideal employee and an amoral, irresponsible manipulator.

Foucault described detective fiction as the discourse of the law; others have regarded it similarly as the re-affirmation of the socioeconomic order. The fictional narrative in the hard-boiled novel reproduces the bourgeois individualistic diegesis of capitalist society, discovering crime but mystifying and concealing class, race and gender relations. Traditionally however the hard-boiled detective has been a kind of people’s champion answering, as Philip Marlowe does, the cries of voices heard in the darkness of night. As the voice of the voiceless he appeared to reverse the powerlessness of his audience and to provide a kind of revenge for the betrayals of the democratic promise which the class system continued to commit. At the end of the nineteenth century, the fervent Populist and dystopian writer Ignatius Donnelly in his novel Caesar’s Column (1891) asserted that “the most utterly useless, destructive and damnable crop a country can grow is – millionaires”. This populist sentiment would not register as incongruous in the hard-boiled novel, a genre in which distrust of the wealthy is a recurrent motif: in Chapter 39 of The Long Goodbye (1953) a character insists there is no honest way to make a hundred million dollars. The hostility of the hard-boiled sleuth has also been directed at both criminals and police. Living in modest circumstances (a sign of probity) and opposed to corrupt politicians, decadent plutocrats, careless industrialists, brutal policemen and slimy hoodlums, he has been the focus for populist resentment towards the power over others and over the material world exercised by authority. The contradiction is evidence of the diversity of the crime novel (often displayed through language). Meanings are not fixed and the text can be appropriated for a variety of ideological and formal purposes.

It is nevertheless legitimate to refer to a hard-boiled tradition, one which is manifest in the way its own conventions are replicated, explored and even interrogated. Pre-existent structures are invoked, thus foregrounding connections with other texts and authors, especially Chandler and Hammett, but also Hemingway and Fitzgerald. In Paul Auster’s deconstruction of the detective story, The New York Trilogy (1985-6), the references (among many) are to Hawthorne and Poe, and thus to the origins of the mystery story in the United States.

The “urban jungle”, vicious, savage, devoid of spiritual values, would become normatively the site for the detective’s quests and discoveries. Hard-boiled fiction writers have depicted the American city in modernist terms as a wasteland or desert ruled by an organized plutocracy and a criminal underworld, often in collusion with each other. The glamorous surfaces of this world are specious, concealing danger and deceit: lovers turn out to be murderers (The Maltese Falcon), friends are finally shown to be false (The Long Goodbye), a cop may turn out to be a killer (The Lady in the Lake, 1943). Corruption is general, undermining law and order, spreading without interruption to the suburbs and, in recent novels, to more remote areas such as the Florida Keys (James Hall), Montana (James Crumley), and the bayous of Louisiana (James Lee Burke), though the availability of Florida as a crime novel location was established in the Fifties by John D. MacDonald. The fiction of Hammett, Chandler and Ross Macdonald is set in California, the terminus of the frontier journey, described by Chandler in The Little Sister (1949) as the department store state. Thus the disparity between the promise and abundance of the region and the reality of its neon/plastic decadence (symbolized by Los Angeles) is continually present like a dark trace.

The detective then, from our perspective, typifies the alienated urban individual and this constitutes a significant portion of the genre’s audience appeal. The settings in city and suburb and the text’s juxtaposed stories are easily identifiable, and although the PI enjoys a freedom beyond domesticity and routine the reader is provided with a means of interpreting his/her experiences. In their speech, personal style and attitudes the detectives are recognizable American characters. Above all, the brisk, serviceable, colloquial modern style facilitates consumption, avoiding what Jim Collins in Uncommon Cultures calls “semantic imperialism” in favour of a shared language, one with which readers are familiar or which they are willing to learn. It is a language suited to the fast, aggressive modern world of the city depicted so often by movies whose representational style of apparent neutrality informs many hardboiled works of fiction.

Conventional hard-boiled language is terse, laconic, acerbic and witty. One of its enduring vernacular techniques, adopted recently by female PIs (“I gave him the invitation and drove down the driveway, which was lined by lilac bushes that were about to come into bloom and Mercedes-Benzes that already had”, Judith Van Gieson, North of the Border, 1988, p.63), is the wisecrack, a stylized demonstration of knowledge which expresses an irreverence towards authority and institutional power. Wisecracks put to use as weapons are an assertion of autonomy, a defiant refusal to be browbeaten. They introduce an unsettling element into the interplay of discourses so that the chances of the truth gradually unfolding become greater. In novels written in the last quarter of a century characters often employ a language that links detectives, criminals, lawyers and politicos. As professionals sharing wisdom and undergoing similar experiences, they need their own “business” discourse. Wisecracks and verbal toughness are the means of ordering and interpreting experience, marking the investigator out from the crowd. He/she is still an average human being – Sue Grafton’s Kinsey Millhone, who eats a lot of fast food, describes her life as ordinary, uneventful and good – but one who commands and exercises the power of language.

As Scott R. Christianson recently argued, that power is limited since the narrative represents the process of making meaning as a struggle.[3] The detective all too frequently achieves only partial understanding or local effectiveness. Congruent with the ideology of populist cinema (Ford, Capra), evil is resisted but does not entirely disappear. Modern life is fragmented and complex, so that the texts of detective fiction can be used deconstructively, undermining efforts at control and closure. Such novels tend towards the feelings of melancholy, regret and emptiness suggested by representative titles: The Big Sleep (1939), The Long Goodbye, The Last Good Kiss (1978). Christianson analyzes the language of hard-boiled fiction as unitary, and numerous texts can be cited in which the detective’s voice is monologically in control. By insisting that detective fiction is a straightforward discourse and (erroneously) that it is not parodic, Collins might appear to be lining up in the same camp. However, invoking Bakhtin’s concept of heteroglossia he adds an important qualification: “In the hard-boiled text, one does indeed find an emphasis on common street language; but one also finds that language set in conflicting relationships with other languages present within the same text.”[4]

Literature in the USA has been a clash of diverse voices. Texts as different as Parker’s A Catskill Eagle (1985), Reed’s Mumbo Jumbo (1972) and Constantine’s Always a Body to Trade (1983) show how Afro-American discourse can challenge the notion of a unitary language system. Street talk in hard-boiled fiction emerges as the privileged voice in contestation with other modes of expression: the cynical misanthropy of Potter in The Long Goodbye, the smooth, devious clubman’s rhetoric of Gutman in The Maltese Falcon, or the nervous sanctimony of Meade Alexander, undermined by Spenser (“I like it. I eat French crap a lot”) at a classy restaurant in The Widening Gyre (1983).

2. Dashiell Hammett: Taking the Lid Off Life

Hammett’s first novel, Red Harvest, (1929) includes a relevant exchange between the Continental Op and the godfather figure Elihu Willsson who “owned Personville, heart, soul, skin and guts.” Willsson recognizes the Op’s talent for straight talking but initially clings to his own colourful style: “I want a man to clean this pig-sty of a Poisonville for me, to smoke out the rats, little and big…” (p.39) The Op remains unmoved by this “poetry”: he will undertake a reasonably honest job at the right price, “but a lot of foolishness about smoking rats and pig-pens doesn’t mean anything to me.” Willsson cannot get through to the Op until he adopts the agent’s “sense”. There are other voices in Red Harvest, the hoarse whispers of Max Thaler the gangster, the educated tones of Dan Rolff, but the first-person voice of the Op dominates for both formal and ideological reasons, though Willsson’s poetry is retained in order to represent Poisonville’s inhabitants as animals (monkeys, wolves, hogs) and the Op as their hunter.

The Op’s straightforward talking has certain narrational consequences. A direct, neutral observer and an anonymous figure whose motives are vague, he provides minimal interpretation and analysis for the reader. Paradoxically, as Sinda Gregory intelligently shows, the very proliferation of resulting details, such as the names of people and places, only intensifies the situation: “This sheer bulk of facts actually works against our ability to understand and draw conclusions in the novel. We are so overwhelmed by specifics that it is difficult to separate the significant from the trivial or to gain an overview of what is going on.”[5] Understanding, knowledge and “truth” – and the chances of locating it – are major concerns in Hammett’s novels; they are linked to assumptions of cosmic and social chaos, so that Hammett’s metaphysics impinge on his ideology.

The most illuminating account of “reality” in the work of Hammett appears in Steven Marcus’s justly renowned 1975 introduction to The Continental Op. Reality is existential and nominal, created by individuals to satisfy particular needs; in response, the Op has “to deconstruct, decompose, deploy and defictionalize that ‘reality’ and to construct or reconstruct out of it a true fiction, i.e. an account of what ‘really’ happened” (p.xix). Yet the Op’s version is no more definitive or scientific than the discourses presented to him. Hammett too as author is making a fiction, more comprehensive and coherent than those of his characters but similarly restricted as a subjective view of “reality”.

Hammett, who would later serve time in prison for his communist sympathies, still manages to construct in his ambiguous texts critical images of the effects of capitalism. Personville known as “Poisonville”, Red Harvest‘s original title, is an industrial town blighted by greed, obsession and lust for power. Published in 1929 the year of the Wall Street crash and a time of social crisis, the novel is contextualized historically by references to the IWW, Prohibition and President Wilson’s peace conference in Paris. The town had been under the autocratic control of Elihu Willsson, whose power was now challenged by the gangsters he had imported to smash the Wobblies’ strike. Running the city as a “legitimate” crime syndicate, the criminals, like their counterparts in Hollywood movies of the period, offer a type of authority that ironically seems to work, unlike that of Herbert Hoover who has lost both economic control and the nation’s confidence.

Hammett’s text functions less through realism than through parody, irony and incongruity. “The West”, historically the region of frontier promise and opportunity, is now as ugly and polluted as the rest of the USA; gangsters behave as though still in Chicago, dislocated in their bizarre long black cars. As Red Harvest and other early novels make clear, Hammett is not in the business of developing moral and heroic protagonists. In Red Harvest, the Op shoots two policemen and acquiesces in the killing of the Police Chief (part of the town’s corruption). He is as guilty and morally reprehensible as the gangsters he exposes and helps to destroy. By precipitating over twenty killings, he is Hammett’s means of escalating violence, anticipating the murderous world of the spaghetti western.

The Op’s complicity must be acknowledged in assessing the extent of Hammett’s radicalism. The corruption of capitalist society is exposed in muckraking fashion, though some readers have commented on the limitation of the exposure to a localised system. (Chandler texts are open to the same charge.) However, Steven Marcus has interpreted Hammett’s novel in terms of Hobbes rather than Marx, so that the milieu represented is one of universal warfare. The result is social anarchy and ideological relativism. On the one hand Hammett’s early work was part of the popular culture of the 1920s when violent industrial conflicts occurred regularly, and taking a drink was for ordinary Americans equivalent to breaking the law. Elihu Willsson in Red Harvest, a combination of Rockefeller and Capone, demonstrated the inseparability of business, politics and crime. On the other hand Hammett’s version of the social and political fabric is universalized: corrupt wealth, along with the power it generates, and capitalist democracy are ubiquitous – and permanent. Hammett’s opposition to big business and machine politics resembles populism which typically rejected not the status quo but the corporate interests in control of it. The Op has no commitment, personal or social, beyond the accomplishment of his job. His creator offers no political solution. Politics too implies fictions and systems which can be manipulated for base motives and which are operated by individuals with at best a tenuous hold on ethical and social values.

The Dain Curse (1929), Hammett’s next novel, turned the crime novel genre and its conventional resolution inside out. A “solution” is only one rendering of reality, so that Sinda Gregory’s comment is appropriate: “final solutions are merely fictional projections of our need to impose order on an inexplicable and fundamentally mysterious universe.” (p.78) The structure consists of three sequences: successive sections are used by the writer to undermine each arrived-at conclusion. Hammett’s parodic action involves the selection of several types of mystery story – the country-house text, the gothic thriller and the rural, small town tale. But although the first section, “The Dains”, takes place in a bourgeois domestic setting with the refinements of rugs, furniture, and artworks, there are suggestions of a murky European past. Edgar Leggett, a strange scientist, is described as physically ascetic but mentally sensual, like characters from Hawthorne, Poe and Brockden Brown. The impression of literary gothic is increased by the name of the supplier of this sketch: Owen Fitzstephan.

Thus Hammett anticipates the gothic element of “The Temple”, which includes a Grand Guignol depiction of Gabrielle Leggett that appears to derive from Poe’s Madeline Usher: “She was barefooted. Her only clothing was a yellow silk nightgown that was splashed with dark stains…. There was a dab of blood on one of her cheeks. Her eyes were clear, bright and calm.”[6] The titular temple houses a weird cult, the Temple of the Holy Grail, which claims a history stretching back to King Arthur. The cult, which proves to be fraudulent, prefigures similar California groupings in, for example, Ross Macdonald’s The Moving Target (1949).

The final part of The Dain Curse, “Quesada”, casts the Op in a different role, but one which brings closer no single solution or truth. The Op’s hunt is futile since his discussions throughout the text have been with Fitzstephan (a professional novelist), who proves to be insane and the sought-after killer. Hammett’s sleuth admits that one guess at the truth is as good as any other, so he is interested in the fiction that appears to fit and which he can reasonably take to court. Fitzstephan is blown in two (surviving to face trial), suggesting the double consciousness or doppelganger device which recurs in Poe’s tales.

Sam Spade, the protagonist of The Maltese Falcon (1930), is for many Hammett’s most memorable creation and, in the company of Chandler’s Philip Marlowe, the most famous fictional investigator of the hard-boiled type. Hammett described him in the terms of a masculinity dream: “your private detective … wants to be a hard and shifty fellow, able to take care of himself in any situation, able to get the best of anybody he comes in contact with. . .”[7] Like Marlowe, Spade has a tendency to provoke cops and politicians, but the similarities have been exaggerated in the general mind by the casting of Humphrey Bogart in both roles in the movie versions of The Maltese Falcon and The Big Sleep; later, in 1980, the protagonist of The Man with Bogart’s Face would be named Sam Marlowe. Spade, like Ned Beaumont in The Glass Key (1931), is inclined to lose his poise (voluntarily, as an act, but also involuntarily), a characteristic satisfied by the choice of star. There was, in the Bogart persona, a hint of instability, of menace, of being psychologically on the edge, which emerged more powerfully in In A Lonely Place and The Treasure of the Sierra Madre. Unfortunately the film of The Maltese Falcon insisted on neutralizing the textual ending where Effie realizes what a rat her boss is. Although Spade is a near-psychopath, the movie, obeying the imperatives of Hollywood narrative and genre, closes with the scene in which he arranges the distribution of guilt, thus confirming the character as both self-sufficiently potent and romantic; asked about the black bird, Spade (Bogart) describes it as the stuff that dreams are made of.

The Continental Op was a two-dimensional figure and, although Spade is given a name and a private life, he too is a stylized character, constructed visually in modernist terms:

Samuel Spade’s jaw was long and bony, his chin a jutting V under the more flexible V of his mouth. His nostrils curved back to make another smaller V. His yellow-grey eyes were horizontal. The V motif was picked up again by thickish brows rising outward from twin creases above a hooked nose…” (p.5, Pan ed., 1975)

Through Spade, albeit using the third person mode for narrative purposes, Hammett exposes the society of which Spade is a mirror image and sets down the themes of his earlier work: the ubiquity of crime, duplicity and corruption, the manipulative but necessary operation of role-playing – and the impossibility, in a complex, predatory environment, of re-imposing order through the solution of crimes.

Spade’s actions derive from a philosophy of the universe as a random series of unrelated contingent events, and some of the features of Spade’s behaviour – impersonality, the concealment of thoughts and information, the need to maintain detachment and control (as in the famous Hemingwayesque account of rolling a cigarette) – are brought into play as he attempts to cope with his situation and his knowledge. While those attempts command respect, they fail to render the character more sympathetic. His long speech offering reasons for turning in the murderer Brigid O’Shaughnessy has been read as constituting a code of honour, morality and professionalism. Morally, however, Spade is only slightly ahead of his partner Miles Archer and the crooks against whom he pits himself. Critic James Naremore is explicit: “the speech is about nothing more than self-preservation…. Brigid obviously can’t be trusted anyway, and if he did not turn her over he would have no job or anything else.”[8] Like the Op, Sam Spade has found a job that suits a temperament which in certain circumstances would allow him to function as a criminal.

The world of The Maltese Falcon encourages its inhabitants to engage in duplicity and false appearances, thus producing a climate of dishonesty and insecurity. Shapeshiftings and masquerades abound, but in this Darwinian universe they are survival strategies. Despite the date 1930 this is no proletarian novel, nor is there any Bakhtinian sense of the carnivalesque in these masquerades, of the overturning of authority.

In the middle of The Maltese Falcon Spade relates the parable of Flitcraft, which refers to his own belief system. Flitcraft, an average family man, is alarmed by a falling beam. “He felt like somebody had taken the lid off life and let him look at the works.” In other words, life is random, unpredictable, arbitrary. He changes his life simply by moving away and taking another wife, but his new life strongly resembles the old. “He adjusted to beams falling, and then no more of them fell, and he adjusted to them not falling.” (pp.59-60) Whatever interpretation is placed on this anecdote, the narrative context is important: Spade tells the Flitcraft tale to Brigid, delivering a warning about his behaviour and intentions – he too will adjust in order to survive – though the warning is characteristically ambiguous.

Spade is also giving a verbal performance to match those of Brigid and the other thieves: a second type of warning. More frequently the pitting of Spade against his antagonists has an international dimension – tough-talking American detective takes on exotic, duplicitous representatives of the decadent Old World. Brigid, who poses as a damsel in distress, is also known as Miss Wonderly and Miss Le Blanc (the nihilism of whiteness in Melville comes to mind), while her apartment number 1001 suggests Scheherazade and her fictions. Cairo, a fin-de-siècle dandy and homosexual, is less powerful, but Gutman, “the Fat Man”, resembles Brigid in his deviousness and ruthlessness. Another visually modernist figure composed of “bulbs” and “pendant cones”, the former shifting in movement like “clustered soap-bubbles”, Gutman in his well-cut suits offers a misleading facade, one made more specious by his wit, charisma and hearty clubman banter, the battle between him and Spade being joined at the level of discourse also. The civilized exterior is a sham. Gutman is responsible for the drugging and near-death of his daughter and, despite some regrets, he agrees to the sacrifice of his “son” Wilmer. His pose of romantic adventurer decorates the murderous scrabbling for The Maltese Falcon as a quest for the equivalent of the Grail.

The falcon itself generates the action and “stands as the most central symbol for … deceptions and contradictions.” (Gregory, p. 114) The removal of the black paint surface reveals not a treasure but a leaden shape. Like the characters in the novel it is counterfeit but it does function to undermine the concept of private property. No one can claim exclusive ownership of the bird; a commodity-oriented society gives rise to anarchistic greedy struggles over loot.

Like The Maltese Falcon, The Glass Key portrays a landscape of duplicity and ambiguity, but it also represents a return to the milieu of Prohibition era big-city politics encountered earlier in Red Harvest. This is the world of machine politicians and their tame DAs, mobsters, senators, and lawless policemen, in which the city boss with an immigrant name, Madvig, seeks the respect which support from the WASP aristocracy will bring. Violence, power and corruption are so widespread that in one sense the narrative again constitutes an attack on urban politics. This is also, however, the bleak, opaque environment of the earlier novels, so that the central figure, Ned Beaumont, a gambler and Paul Madvig’s loyal hanger-on, is not provided with a motive for solving the murder of Taylor, son of the powerful Senator Henry. Practical, intelligent and efficient, Beaumont has a sense of manners and etiquette. He is vulnerable to violence and attempts suicide after being at the receiving end of a brutal attack. It is a characterization that anticipates the high society urbanity and sophistication of The Thin Man (1934). Before that, in this challenging text, Hammett’s elliptical, neutral style and third person point-of-view delivers bewildered individuals who fail to understand each other or indeed themselves. On the one hand the ordinary ties of family and friendship stay remote from them; on the other the idea of a coherent, consistent psychology remains a chimera. All the characters are too ambiguous to be explained by a single motivation: Beaumont, like Spade an actor, is ultimately inscrutable, and the novel’s end in which he participates is far from conclusive.

The Glass Key was made into two films and a radio serial, but it is in the narrative texts themselves that commentators have located the sources of film noir. Hammett’s first four novels employ several of the techniques found in noir, including first person narration and the expressionism of dream sequences. There exists, however, a persuasive argument that Hammett’s sinewy naturalism, a language which the German director Wim Wenders called “concrete, hard and sharp”, is further from the mannered expressionism of film noir than the flamboyant prose of such contemporaries as Cornell Woolrich. Despite the merit of this argument, film noir, with its metaphysical sense of alienation and doom, its sombre images of fragmentation and violence, its sterile destructive personal relationships and its pervasive pessimism, is an accurate visualisation of the Hammett universe. Even the language of film noir can replicate the mystery of human personality and the provisional nature of truth and reality. In Crossfire (1947, screenplay by John Paxton), Mitchell, an innocent murder suspect, meets a man in a girl’s apartment:

Man: You know what I told you. It was a lie. I’m not her husband. Met her the same as you did, at the joint. I can’t keep away from her. I want to marry her but she won’t have me.

Mitchell: Is that so?

Man: Do you believe that? That’s a lie too. I don’t love her and I don’t want to marry her. She makes good money though. You got any money on you?

Mitchell: No.

Man: She makes good money sometimes. Hey, do you suppose I could be a soldier? Maybe I could be in the regular army. Makes a good rating and make some dough by the next war.

Mitchell: Why not?

Man: Why not? Because I don’t want to. What would I want to be a soldier for? Aagh! I don’t know what I want to do ….

3. Raymond Chandler: The Exaggeration of the Possible

Raymond Chandler was born in Chicago of a Pennsylvanian father and an Irish emigrant mother. In the mid-50s, after the death of his wife, he lived in London and La Jolla alternately for a time. In the 40s he wrote screenplays for Hollywood, but as a young man in London (1911-1912) he wrote essays for The Academy, having been educated at Dulwich College where one of the school houses is still called Marlowe. This oscillation between American and English culture, evident in his style and fused in his admiration for Henry James, is one of several tensions in his literature and personality.

His chosen hero Philip Marlowe is a sensitive, decent man operating in a world that is violent and corrupt, so that the novels depend upon the interplay of individual idealism and social decadence. Chandler found that corruption in the politics and policing of the tarnished city of Los Angeles, which he both despised and romanticized. Part of Marlowe’s appeal is that he satisfies a populist disdain for authority while continuing to work for law and order, thereby aligning himself with its agencies. That disrespect derives from the brutality and amorality of police officers, though some of Chandler’s fictional lawmen are depicted as tired, overworked and honest – rather like the ex-cop Marlowe himself.

Chandler has often been praised for identifying and, through Marlowe, confronting the malaise consuming the body of American society. However, as the author himself announced, Marlowe exists in a fictional world of stylized characters, whereas the real-life private eye is only “a sleazy little drudge from the Burns agency”. Chandler’s PI is an idealized figure: originally named Mallory, he is a “shop-soiled Galahad”, a status hinted at in the famous opening of The Big Sleep. Marlowe’s old-fashioned chivalry is incongruous in contemporary Los Angeles, so an element of self-parody is consciously generated. Instead of concealing genre conventions Chandler draws attention to literary discourse as illusion. In The Little Sister Marlowe speaks of “all the tired cliched mannerisms of my trade”, and when a woman threatens him with a gun in The Lady in the Lake he grins and admits, “I’ve never liked this scene”. It is the combination of realism and self-referential artifice that establishes irony and parody as distinctive features of the Chandler discourse. That discourse, expressed in the form of first person narration, “encourages reader identification with the detective, and so with an illusion of coherent subjectivity, represented as moral integrity.”[9] However, Marlowe’s unitary power is subject to instability. He remains an impoverished bourgeois figure, his vulnerability shared with other alienated characters in 30s writing, and with the solitary, immobile figures of Edward Hopper’s paintings, discovered, as Chandler located Marlowe, “in a lonely street, in lonely rooms.”

The detective’s marginality and the attention given to his solitary life have the effect of distancing him from the problems and pressures of the lower classes. Ernest Mandel’s general point applies no less to Marlowe: “… the sophisticated thriller’s hero has to be a tragic figure, a petty-bourgeois (in no pejorative sense) rather than a proletarian revolutionary protagonist.”[10] Consequently, the standard hard-boiled text (especially when dialogization is minimal) cannot rupture or transcend the conventional ideology. Afro-Americans and other minority groups are often described with unexamined prejudice (see “Noon Street Nemesis”, 1936, later known as “Pick-up on Noon Street”, with its anonymous underworld “negroes”, one of them a “big gorilla”) while materialistic Jews in Hollywood are made the targets of Marlowe’s snobbery. Some big bosses are personable and friendly. Laird Brunette, for example, in Farewell, My Lovely is the gangster as businessman, an almost admirable type with “guts and brains.” The principal threats Marlowe faces emanate from dangerous, sexy femmes fatales like Eileen Wade (The Long Goodbye) or Velma Valento/Helen Grayle in Farewell, My Lovely. Such characters were represented as doubly deviant, aggressive in their (female) sexuality, aberrant in their “un-feminine” rejection of male dominance. They provided a site for the cultural practice of evil removed from class conflict.

An illuminating comparison can be made between the representation of women in The Big Sleep and Howard Hawks’ film of the novel. Vivian Sternwood is played less as spider-woman than as “Hawksian woman”, an equal half of the Bogart-Bacall romantic couple in what remains a conventional celebration of heterosexuality and all-American values. Hawks does create a range of female types including the sympathetic female taxi driver and the girl in the Acme bookshop, whose brief interlude with Marlowe is tender, witty and erotic.

Marlowe’s family and class origins are obscured. With an apartment instead of a suburban home, he is, as Chandler insists, destined to be poor. The author’s extravagant tropes endow him with a subtlety of mind, but Marlowe is a reluctant aesthete who embellishes his conversation with allusions to Hemingway, Flaubert, Shakespeare and the subjunctive mood. Reluctant, since the tough, hard-boiled populist role demands an anti-literary stance that conflates culture and decadence. In The Big Sleep Geiger’s bookshop, which deals in first editions, is a front for pornography.

The discrimination, vanity and narcissism exhibited by Marlowe are used to articulate his responses (and thus Chandler’s) to the criminal world in which the PI becomes embroiled; his role involves “maintaining a self-preserving critique … against social evils which, be they blatantly ugly or perversely beautiful, always carry with them a fatal beauty that raises his creator’s senses.”[11] In the first paragraph of The Big Sleep Marlowe may be “calling on four million dollars” but his choice of clothes – powder-blue suit, dark blue shirt, tie and handkerchief – is evidence of a wry aestheticism. Foul or steamy air, stale smells, sweat, cigarette smoke and liquor fumes all function in the text as signifiers of impurity and moral laxity, what Chandler calls nastiness.

Chandler’s own fastidiousness is glimpsed in his attitude towards Los Angeles, a city in his view with as much personality as a paper cup. He was nevertheless fascinated by its cheapness, by that ambience of tacky flamboyance and seediness located between ocean and desert. Chandler’s representations of Los Angeles appear to have varied according to the mood of the moment. At times he actively hunts down instances of stylization; ultimately he preferred its Art Deco tawdriness to most other spots on the globe. Later in the 1980s Elmore Leonard would draw upon a lifetime’s love of South Miami Beach’s Jazz Age Deco, crystallized in its ice cream coloured hotels, for the landscapes of some of the major crime novels of the decade.

By means of Los Angeles Chandler presented a disturbing vision of falsity and strangeness. Marlowe thought the man who invented neon deserved a monument; the city at night often seems composed of artificial light, dynamic but anarchic, almost beyond control like Joseph Stella’s Coney Island lights in the 1913 painting. Towards the end of Farewell, My Lovely even Bay City appears from the ocean as “a jewelled bracelet” that fades into a soft orange glow, but in The High Window (1943) a neon sign illuminates a funeral parlour. Elsewhere the surface images of Chandler’s staged urban settings fail to yield the nourishing depth that would challenge the unreality. That unreality derives to an extent from Los Angeles’ architecture and, therefore, the industrial contrivance of Art Deco and streamlining. Derivative and deceptive in their rupture of form and function, the city’s buildings seem designed to embody fantasies of kitsch. Fast food restaurants masquerade as palaces. In The Big Sleep Eddie Mars’ Cypress Club, a mouldering Victorian pile with turrets and scrolled porches, is, in its latest incarnation, a gambling den. The gaming area (once a ballroom) has no hint of the modernistic, of night-club “moderne”: Mars prefers crystal chandeliers and rose-damask walls, “a little faded by time and darkened by dust.” (Penguin ed., 1971, p.131)

Amorphous and de-centred, Los Angeles has lured the ambitious and discontented seeking new opportunities and therefore new lives, and this magnetic attraction it possesses for immigrants has generated different literary interpretations: the racial anger of Chester Himes, the savage satire of Nathanael West. The achievement of Chandler has been to map the environment of LA and demonstrate its variety. He is drawn to the great estates of the wealthy, private landscapes whose distinctive atmosphere facilitates the coherent presentation of their inhabitants. For example, in The Big Sleep the family portraits and stained glass windows of the Sternwood mansion signify tradition and respectability. Further detail imposes a sense of the Gothic: there are rats lurking behind the wainscoting. General Sternwood, crippled by debauchery, exists on heat “like a newborn spider” and his humid, malodorous orchid house grows plants whose stalks resemble “the newly washed fingers of dead men.” (p. 13)

From this ominous base Chandler proceeds to construct a dark, vicious world of closed spaces, widespread crime and guilt, mistaken journeys and actions. The Big Sleep is peopled by mobsters, small-time losers, sexually aggressive women, alienated lawmen and the omnipotent rich. Justice remains elusive despite the detective’s small, temporary successes. The limitations of Marlowe’s effectiveness are made evident: his powerful denunciation of Eddie Mars fails to win Silver-Wig, Mars’ wife (Cissy, Chandler’s wife, tinted her hair silver and wore wigs): “You think he’s just a gambler. I think he’s a pornographer, a blackmailer, a hot car broker, a killer by remote control, and a suborner of crooked cops.” (p. 187) Mars is allowed to go free. Here as in other hard-boiled texts Marlowe is unable to make full use of the truths he has exposed.

The pattern is reiterated in novel after novel. Marlowe struggles to separate reality from delusion, then is forced to discredit or suppress his own revelations …. In both The Little Sister and The Lady in the Lake the revelation of the truth provokes a pageant of bloodshed … (and) Marlowe stands powerless in the wings.[12] Permeated by irony (Marlowe re-enacts the recent life – and almost the death – of the man Regan for whom he searches), The Big Sleep concludes in “loose ends, a detective who fails, and a pervasive sense of individual despair, social chaos, and the triumph of evil.”[13]

The brutality Chandler attributes to the police in Bay City (Santa Monica) and its sleazy promenade are the physical counterparts of the area’s spiritual decadence. The small town has “a faint smell of ocean”, faint since it is mixed with the odour of hot fat and popcorn. Los Angeles in the 30s was little better. A corrupt police force, a reactionary WASP merchant class and a powerful newspaper dynasty enabled the city to profit from the theft of water from the Owens Valley. (This scenario would be at the centre of Polanski’s movie homage to Chandler, Chinatown, 1974.) Politically Los Angeles has had a long tradition of conservatism – from General Harrison Gray Otis’s union busting at the turn of the century through to Reagan’s kitchen cabinet – and one of violence and lawlessness. By the 1930s when Chandler started writing hard-boiled fiction “Murder Inc.” was in business and organized crime was on the rise. After the end of Prohibition (1933) syndicates in LA moved into drugs, prostitution, unions, the oil business, in which Chandler worked for most of the 20s, and gambling, for which the off-shore ships in Farewell, My Lovely provide a venue. Those floating casinos outside the three-mile limit were owned by an ex-bootlegger named Cornero; Chandler changed his name and those of the boats. His descriptions were accurate: “The Royal Crown … a converted sea-going freighter with scummed and rusted plates, the superstructure cut down to the boatdeck level, and above that two stumpy masts just high enough for a radio antenna.”[14]

Marlowe may have been a romantic fantasy but Chandler made the urban background to his novels so authentic that Greater LA itself took on the status of a character. The texts constitute a socio-political history of America’s biggest west coast city. Chandler is exact in his use of locales, the naming of streets and shops, the descriptions of sounds and sights including specific makes and models of cars. Los Angeles, it has been claimed, has no weather except when it rains in February. Weather and climate are crucial to Chandler’s narratives: the persistent rain in The Big Sleep, the fierce sun and burning Santa Ana winds in The High Window when “every booze party ends in a fight.” Sites and events are precisely located: Bullock’s, the magnificent department store on Wilshire, bronze-green and multi-towered; the once exclusive Bunker Hill, its gothic mansions (in The High Window) turned into apartment houses; and Central Avenue where Moose Malloy looks for Velma in a “dinge joint” in Farewell, My Lovely and which once rivalled Harlem’s 125th Street as the major thoroughfare of Black America. In novels and short stories the landscape is a feature, the foothills, beaches, mountains and flatlands all playing a part in the narrative’s unfolding drama.

This is especially the case in The Lady in the Lake which in its melancholy greyness anticipates The Long Goodbye and the work of Ross Macdonald, who used it as the model for The Zebra-Striped Hearse. Much of the novel is set in small-town California close to the Sierra Madre mountains. Little Fawn Lake is “like a drop of dew caught in a curled leaf” yet both nature and suburban society are vulnerable to individual evil. The central image of Crystal Kingsley’s body floating in the lake represents unnatural sin staining nature itself. The text produces familiar types: the lookalike blondes are promiscuous, the doctor (Almore) deals in drugs, and the Bay City police are routinely vicious. However the overall treatment of the law through its representatives is complex. Lieutenant Degarmo is exposed as a murderer while the local sheriff, Jim Patton, is identified as a rural figure associated with homesteaders and the frontier past. The town/country contrast underlines the novel’s melodramatic fixity, though the abandonment of moral relativism is yet again concealed by duplicity. Chandler’s lethal blonde Muriel Chess dresses in black and white.

Set in post-war Los Angeles, The Little Sister interrelates family connections, organized crime and the movie capital. It has the best claim to be labelled Chandler’s Hollywood novel. The motifs of illusion and deceit are squarely located in the cinema industry, so that while analogies to acting recur in Chandler’s fiction, the need for metaphor is here obviated. Nevertheless, Chandler’s Hollywood characters – Dolores Gonzalez (“I must have men, amigo”) whose life is a B movie, Jules Oppenheimer, the Mayer-like movie mogul who enjoys watching his dogs pee (anywhere…), and Sherry Ballou, the refined agent as decadent aristocrat – are self-indulgent exaggerations, grotesques providing occasions for the author’s irritation. Their artificiality is most evident in Joseph P. Toad and his nephew, based on Hammett’s Gutman and Wilmer: “We’re just a couple of bit players”, insists Toad, but the condition is widespread. For example, one cop advises another who is talking tough not to “try to steal the picture with that nineteen-thirty dialogue.” (p.168)

The New York Times Book Review attacked The Little Sister for its “scathing hatred of the human race.” Chandler admitted the book was written in a bad mood, and one of its features is a succession of diatribes aimed at modern civilization. The immediate focus, Southern California, is also a suburban microcosm of the nation:

A long time ago … Los Angeles was just a big dry sunny place with ugly homes and no style, but goodhearted and peaceful … Now we get characters like this Steelgrave [a gambler owning restaurants] …. Out in the fancy suburbs dear old Dad is reading the sports page in front of a picture window, with his shoes off, thinking he is high class because he has a three-car garage. Mom is in front of her princess dresser trying to paint the suitcases out from under her eyes.[15]

The bilious onslaughts are continued in The Long Goodbye, especially in the long speech by the ultra-rich reclusive newspaper proprietor Harlan Potter. He delivers a misanthropic tirade criticizing planned obsolescence, mass production and (generally and hypocritically) the desire for material goods. Potter is another of Chandler’s rich, baleful power men – at his holiday home he is neighbour to a Nevada mafioso – but the tone and content of his philosophy are echoed by others in the novel, including Marlowe who is romantically involved with Potter’s daughter Linda Loring. The condemnations of American culture in the 50s include: “the awful state of American television, the grotesque betrayal of medical and social ethics … and that endemic disease, the stupidity and emptiness of people who run for public office.”[16] In part Potter articulates the views of an ageing suburban Raymond Chandler, a man Billy Wilder summed up as “bad-tempered – kind of acid, sour, grouchy.”

With its ex cathedra denunciations and three self-portraits, The Long Goodbye is Chandler’s most personal and intense novel (his wife was dying at the time of composition) as well as his most ambitious. Inside the tradition it looks back to The Big Sleep, which also portrays a millionaire with two errant daughters, and to The Maltese Falcon, which also features a quest motivated by the murder or disappearance of someone close to the PI, and forward to Ross Macdonald’s explorations of marriages and family histories. Roger Wade, the alcoholic, self-pitying fictional novelist, refers in a letter to Scott Fitzgerald; as a study in friendship and loyalty The Long Goodbye bears a resemblance to The Great Gatsby. The relationship between Marlowe and Terry Lennox recalls that of Carraway and Gatsby. Its centrality is indicated by one of Lennox’s false identities, Paul Marston, which shares the PI’s initials. (John Marston, the Elizabethan dramatist, was a contemporary of Christopher Marlowe.)

Chandler worked as a Hollywood scriptwriter in the 40s and The Long Goodbye was influenced by the private eye movies of the previous decade, a connection emphasised in the 1973 film version. Robert Altman’s The Long Goodbye is both an extension of and a satire upon the hard-boiled novel. Its status as film is determined by a flood of references to Hollywood, and the “world” of the film (its diegesis), already made strange, is rendered hallucinatory by means of photography, lighting, colour and restless or circular camera movements. Sentimental and decent, Elliott Gould as the detective drifts through the plot lethargically, as Marlowe wanders dreamily through LA in the literary text. However the filmic narrative does trace the fate of the man with a strict code of honour in an unethical society. As Marlowe tells Lennox before shooting him (a radical departure from the novel) he is the only one who cares. The ending is bewildering in its referentiality (Chaplin, The Third Man and Hollywood Hotel) but in its wit and buoyancy and its optimistic sense of release, it expresses nostalgia (“Hooray for Hollywood”) for a cinema which used to insist on morality and closure, while challenging on grounds of style the artistic simplicities of that cinema.

Gatsby’s “Oggsford” connections and war service are transposed to Lennox’s career and character – he is a World War II hero (in the English army) and counts among his associates the racketeer Mendy Menendez. Both Lennox and Wade are mirrors of Marlowe’s loneliness. Marlowe quotes the line “To say goodbye is to die a little” and the novel ends on a melancholy note. Lennox, “part of the manipulating endlessly fluid world”, as the critic John Whitley puts it, is exposed as unprincipled, and the detective’s faith in this “moral defeatist” and in their friendship is shattered. Loyalty, though futile, is as necessary as social bonds, and Marlowe’s vision of urban violence and suffering at the end of Chapter 38 ends with a typically ambivalent verbal picture of LA: “A city no worse than others, a city rich and vigorous and full of pride, a city lost and beaten and full of emptiness” (Ballantine Books ed., 1974, p.224), an emptiness acknowledged in the text by Lennox tapping his chest with a lighter before murmuring: “In here … there isn’t anything.” (p.311)

4. Ross Macdonald: Unhappy Families

From the pages of the magazine Black Mask there emerged in the 1920s a new type of private eye described by Ross Macdonald as “the classless, restless man of American democracy.” In the same essay, “The Writer as Detective Hero” (1973), Macdonald refers to Hammett, whose Continental Op stories began to appear in Black Mask in 1923, as the first American to use the genre as a major novelist would, but he addresses himself at greater length to Chandler, while charging that the latter’s vision lacked the tragic dignity of Hammett’s. Macdonald was fully aware of his debt to Chandler and of the urgent need to distance himself and his creation Lew Archer from his Californian predecessor and the Marlowe books. In the 40s he wrote under his real name, Kenneth Millar. Subsequently, after a decade of absorbing the Chandler tradition, Macdonald was able to free himself for the expression of his personal perspective.

Just as novelists in the South have sought to acknowledge and evade the posthumous presence of William Faulkner, writers of hard-boiled detective fiction have responded ambivalently to Chandler’s immense influence. His achievement is substantial and visible: by turning the PI of the pulps into a near-mythic American hero, he tested the boundaries of the genre, in the process producing novels of sensibility and compassion set in a corrupt and alienating society. Emphasizing American speech idioms while retaining a formal English structure (his literary heroes included Henry James and Flaubert), he constructed a new colloquial and flexible prose, brisk, witty and evocative. Geared to the physical, the immediate and the sensuous, Chandler’s mixture of slang, wisecracks and similes was responsible for a distinctive modern style, a sort of urban poetry, its mannerisms often close to self-parody. Current writers have perceived Marlowe in terms of ritual and cliché: “He gets weepy over lost dogs and little kids, he hates authority, he hates big money. He has a witty riposte and an astute sociological observation for every situation that comes his way.”[17] James Ellroy, recalling the films rather than the books, clearly has a preference for a non-heroic and more ambiguous protagonist. As the central figures have shifted so has the urban environment. Novelist Andrew Vachss insists that the term “mean streets” (associated with Chandler) is now historical and totally inappropriate for “these anarchistic places where there’s no law.” The changes can be calibrated in Arthur Lyons’ novels in which a pornographic bookshop, like Geiger’s, now multiplied a hundredfold in LA, has been surrounded by brothels, massage parlours and sex shops. Lyons’ squalid, horrific landscape is peopled by prostitutes, addicts, paedophiles, religious cultists and, a convention of the contemporary genre, serial killers.

Harold Bloom’s theory that poets live anxiously in the shadow of a “strong” poet derives from an Oedipal model of rivalry between fathers and sons. The “son” achieves power and creativity by undermining the father. Ross Macdonald’s detective novels take place in just such a world of patriarchy, Oedipal patterns and intense psychological energy. Indeed, Terry Eagleton’s description in Literary Theory (1983) of Bloom’s poetic battles – “domestic rows, scenes of guilt, envy, anxiety and aggression” (p. 184) – suggests the atmosphere of a Macdonald narrative.

Renegotiating his literary relationship with Chandler, the younger writer saw the “detective-as-redeemer” conception (Philip Marlowe) as retrograde, sentimental and melodramatic. His own protagonist was, in contrast, a conduit, a lens:

These other people [Archer’s clients] are for me the main thing: they are often more intimately related to me and my life than Lew Archer is … his actions are largely directed to putting together the stories of other people’s lives and discovering their significance. He is less a doer than a questioner. … [18]

With the achievements of The Doomsters (1958) and The Galton Case (1959) Macdonald perceived that he was able to break away from the Chandler tradition into a more ambitious psychological and symbolic mode while remaining inside the parameters of the genre. As the Archer series developed, professional organized crime featured less and less (Black Money, 1966, is an exception) and Archer was established not as a brash, idealized hunter, but as a sombre mediator between parents and their children, a mixture of social worker and priest, seeking to protect the innocent.

Macdonald’s PI is named after Sam Spade’s murdered partner in The Maltese Falcon, and his early work Blue City (1947) is patterned upon Hammett’s Red Harvest. The Moving Target (1949) and its immediate successors, however, are set in a Chandleresque milieu of rich men, starlets, gamblers, gangsters and wisecracks. The Drowning Pool (1950) begins, as The Big Sleep does, with the detective calling on a wealthy invalid. Archer, like Marlowe, is poor and, after the departure of his wife, lonely; both men penetrate the corruption and greed which pervade Southern California and it is through their journeys and pursuits that the physical and moral landscapes of the region are explored and connected. Chandler’s plot structure – the investigative search that uncovers and generates murders – finds recurrent echoes in Macdonald.

When you slept “the big sleep”, Philip Marlowe said, oil and water were the same as wind and air; for Macdonald oil and water, especially in the form of an oil slick off the California coast, constituted a sombre image: the death of nature. A prominent member of the Sierra Club and an ardent conservationist, he expressed his ecological concerns in his novels, notably in The Underground Man (1971) and Sleeping Beauty (1973). Earlier, Macdonald’s preoccupations were clustered in one of his “breakthrough” works, The Galton Case: the sins of the past, the lust for power and money, the condition of poor and rich, fathers and sons (and the quest for the lost father) and the failure of the American family.

Already at that stage Macdonald’s pessimism was well developed. The poet Chad Bolling, standing with Archer and looking down on the city, alludes nostalgically to the nation’s origins in speaking of the creation of “a new city of man on the great hills.” However, Macdonald in his essay on The Galton Case takes a less nostalgic attitude with the reminder that a puritanical society had inflicted “the quiet punishments of despair” on the poor and fatherless. In this novel of dispossession, the historical referents are the Depression (and Prohibition) and the Civil War perpetuated, in Anthony Galton’s theory, by class inequality. Galton, the rich scion whose disappearance in 1936 is made the occasion for Archer’s search, “thought of the poor people as white Negroes, and he wanted to do for them what John Brown did for the slaves. Lead them out of bondage…” (Fontana ed., 1972, p.28) The reward for his idealism is a violent death. Ironically commercial progress – the construction of a new shopping centre – uncovers his skull-less skeleton: radicalism is severed from the body politic in the USA. The Oedipal quest is enacted as Galton’s son emerges under the name John Brown Jr.

The presence of the past is shown to be inescapable. A missing person (or object) is pursued, thus generating a multitude of discoveries. In a return of the repressed, history yields its secrets. Macdonald’s villains enact their fantasies and crimes at the expense of others. As in Chandler and Fitzgerald, characters who persistently live in the past are destroyed. In particular, the sequence of novels from The Wycherly Woman (1961) to Black Money (in which all the deaths result from past actions) shows the suppression of the truth leading inexorably to crises. The danger of exposure is both motivation and narrative; people murder to protect the duplicity of appearance. Retreating from a rigid melodramatic morality, Archer is not without sympathy for certain killers: when guilt is universal, murderers are victims too. At the end of The Doomsters the killer goes not to the death cell but to hospital. “With Macdonald, the detective story form becomes a family saga of entitlement, generational conflict, and threats to self identity.”[19] Thus the family is a dense metaphor, the site of misused power through time.

In Sleeping Beauty Elizabeth Somerville makes a connection between a World War II incident and contemporary pollution. “We do things on a grander scale in our family. We burn ships and spill oil. It’s the all-American way.” (Fontana ed., 1975, p.65) Mrs. Somerville’s comment is sardonic, but Macdonald while focusing more and more on the middle classes registers as national the lust for money, and the shame, frustration and ethical obtuseness brought about by its absence; a character in The Goodbye Look (1969) tries a criminal logic: “How can a man help breaking the law if he don’t have money to live on?” (Warner Books ed., 1992, p.99)

Through the 60s and 70s Macdonald’s vision darkens. An earlier decade was subject to deforestation and megapolitan growth, but the ocean, Archer claimed in The Drowning Pool, remained unpolluted. The more the land was despoiled, the greater the importance assumed by the ocean (especially the Pacific) as the final instance of frontier wilderness. Sleeping Beauty, however, opens with Archer’s mid-air glimpse of a huge oil slick which has apocalyptic overtones. The world has been stabbed, made to “spill black blood”. The spectators on the beach are like the stoic, futile watchers in Robert Frost’s “Neither Out Far Nor In Deep”. “They looked as if they were waiting for the end of the world, or as if the end had come and they would never move again.” (p.6) While Macdonald’s melancholy deepens through time, corrupt oilmen recur throughout, showing up in The Moving Target, The Drowning Pool, and The Instant Enemy (1968) (as well as Sleeping Beauty). Indeed his frequent use of oil pollution as a symbol of moral contamination eventually became a mannerism, a target for parody: “Out in the ocean a lone surfer was riding in on an oil slick that lay on the sea like a black stain of sin on a human soul: he was covered with oil too: oil surfing was the latest fad.”[20]

The trajectory of Archer’s career as traced by Eric Mottram is deconstructive and self-effacing. The endings of the narratives often leave Archer bewildered and exhausted. Before the climax he often moves as though in a dream or simply tired of “one-night stands in desolate places.” He is bemused when the murderer, citing the wickedness of others, undertakes a relativistic defence of his or her crime. At the conclusion of The Chill (1964) the detective is almost overwhelmed by Letitia’s refusal to accept guilt or responsibility. She is “an elemental power”, “greedy for life”. A triple murderer, she protects her fantasy of eternal youth by furtive cunning and a discourse of unflinching self-importance: “I won’t permit you to use such language to me. I’m not a criminal.” (Fontana ed., 1966, p.231) This indifference is characteristic of the surrounding society and contributes to Archer’s alienation, which he records towards the end of Black Money: “a middle-aged man lying alone in darkness while life fled by like traffic on the freeway”. (Fontana ed., 1968, p.223) The PI can neither find a nourishing community nor match up law and moral judgment. Ultimately Macdonald is incapable of penetrating the social structure he has diagnosed so incisively.

The Underground Man, in which Archer’s moral security is undermined, marks a temporary exhaustion of the genre. But there remains The Blue Hammer with its context of the art market, mining companies, the disruptions of war and (predictably) the instability of families. The final pages refer to freeway space, empty rooms and empty people; at this stage, in Eric Mottram’s phrase, “the private eye becomes pure form.” In this novel, however, a degree of optimism is injected by the love affair between Archer and the young journalist Betty Jo. The relationship is not resolved within the text, but it counters the absence of affection elsewhere in Macdonald’s work. Betty Jo is no blonde Californian goddess, encouraging dreams and fantasies. Significantly, the title alludes to the blue pulse in her temple, the evidence of life and of the separate selfhood each partner respects and loves.

The sociological and ideological critique begins in the pre-Archer period of the late 40s. Blue City, a tale of corruption and reform, exposes American rhetoric. The future is the proletariat’s “freedom” to be work-slaves and to be controlled by the media. The recent war is acknowledged as an occasion for training in violence but also as the domestic arena where the conflict between the power of business/crime and democratic ideals takes place. In later novels World War II becomes the personal or familial past of forbidden secrets; successive wars, Korea and Vietnam, are ignored. The change in Macdonald’s novels, as Paul Skenazy persuasively demonstrates, is manifested through absences: civic or police corruption, the lives of the underclasses, detailed racial and class relations, issues of wages and labour. Even the radicalism of Anthony Galton, who sees himself as a latter-day John Brown, is faintly absurd. The result is that “social factors disappear as motives for crimes” and the repeated plot “deflects class concerns into issues of psychological trauma and youthful family frustration. … Poverty is seen less as a condition of life, or a position in the social order, than as a form of behaviour.”[21]

Behaviour is determined by local as well as national culture. Macdonald’s descriptions of California have been criticized for adhering too closely to the sprawling suburbs of the white middle class. Archer’s weary aloofness spreads a grey pall over the landscape and its inhabitants, characterized in one way or another – crushed, drowned, stiff or frozen – as lifeless. California, in Macdonald’s words, is superficially a “delightful movie-like dream”, but in Black Money a key question is asked, “When you have money to live on, and a nice house, and good weather most of the time, and still your life goes wrong – well, who can you blame?” (p.77) Corpses turn up in cars and motels. Macdonald at least conveys the rootlessness of a mobile society, eager to blot out history and content to make up values along the way. Towns like Oasis in The Way Some People Die (1951), its lights “lost and little in the great nocturnal spaces”, are made up in similar fashion. “It’s the opposite of a ghost town, a town waiting to be born”, announces Keith Dalling. (Fontana ed., 1973, p.41) But Archer, reminded of a wartime army camp, refers ominously to “the skeleton town”. Dalling has invested in Oasis; an accomplice to murder, his dividend is his own death. In Black Money, modelled on Macdonald’s favourite book The Great Gatsby, the racially mixed Pedro Domingo (Afro-Indian-white) emerges from Panama and, like Jay Gatz, assumes an invented aristocratic role; for a couple of weeks at the state college in Montevista he is taught by a professor with “Scott Fitzgerald good looks.” Domingo is one of that company of “dangerous dreamers” in Macdonald who act out their fantasies. The dream is also embedded in the Californian (and American) psyche so that the writer can imagine Hollywood precisely as “our national capital”.

Macdonald’s use of California beach culture in The Zebra-Striped Hearse provides another example of the ideological difficulties recounted earlier. The liberated surfers who cruise the coast in their psychedelic car are hostile towards adults. They beg food and clothes, and gather around bonfires “like the bivouacs of nomad tribes or nuclear war survivors.” (Fontana ed., 1985, p. 186) This apocalyptic image registers Archer’s alarm; although the hippies are loyal and peaceful, their challenge to bourgeois conventions is seen as threatening and reprehensible. Their presence is a reminder of enduring conservative elements in hard-boiled narrative. Intertextuality in Macdonald’s novels, while not original, is more developed and varied than in those of his predecessors. The Wycherly Woman contains a reference to Paolo and Francesca and to the second circle of Hell in Dante’s Inferno. The Chill alludes to Waiting For Godot and describes Zeno’s theory of space (Achilles and the tortoise). Ecology in The Underground Man is traced to a disciple of Louis Agassiz and a 19th century checklist of ornithological species in “Santa Teresa”. Ross Macdonald, a Ph.D. in English, writes in the Archer novels for a sophisticated, college-educated audience. The signs pointing the way towards Murder in the English Department, The James Joyce Murders and the hard-boiled narratives of Robert B. Parker can already be dimly observed.

5. Honky-tonking and Working Out

If the investigator becomes pure form in the later works of Ross Macdonald, this knowledge has escaped a considerable number of post-Chandler detective novelists. The hero with a private code reappears, for example, in the books of Robert B. Parker, but Spenser jogs and has some of the characteristics of a New Man. Elsewhere the code becomes the shared experience that links lawmen and villains. Many elements remain – crime, violence, pain, evidence, revelations – but much becomes changed as writers endeavour to chronicle American culture in the last third of the twentieth century. Other characters fill the structural role of the private eye (cops both male and female, lawyers, professors, even drifters with a past), so the post-Chandler period demands to be examined less through the single author or code hero than through region, class, race or gender.

The multiplicity of signifiers which confronts the reader is not quarantined within mass culture. The explosion of American popular music within this period provides parallel sets of images and atmospheres: urban blues – Gar Haywood; cajun – James Lee Burke; old time jazz and brass band – Julie Smith (New Orleans Mourning); and country – who else but James Crumley.

James Crumley is one of the educated rednecks of hard-boiled fiction. “He writes books about troubled macho men … desperately romantic novels of the private eye as the last denizen of the old West.” (Williams, p.133) In Dancing Bear (1983) and his finest work, The Last Good Kiss (1978), he sends his investigators to the dusty bars, truck-stop cafés and seedy motels of Montana, Colorado and Wyoming – even, in the latter, to “the most depressing place in the West”, the Salt Lake City bus terminal. Crumley creates strong, sophisticated female characters, but both novels employ the conventional first person narrative of the (male) private eye. The personal/fictional milieu of Crumley, ex-bartender, ex-soldier, resident of Missoula, Montana, is inhabited by good old boys, rowdy divorced hellraisers who hang out in bars to drink, flirt, smoke a few joints and have a party. They are 60s survivors, “the fortysomething Vietnam generation”, authentically and heroically represented by C.W. Sughrue, the part-time PI in The Last Good Kiss. Crumley likes to fill his books with ordinary labouring folk: truck drivers, barmaids, field workers. In The Last Good Kiss he reports the arrival of construction workers at a bar near Sonoma, men who “were probably terrible people who whistled at pretty girls, treated their wives like servants, and voted for Nixon every chance they got, but as far as I was concerned, they beat hell out of a Volvo-load of liberals for hard work and good times.” (Granada ed., 1979, p.33) Later, Crumley describes a male American fantasy of good times as “unencumbered by families or steady jobs or the knave responsibility”, underlining the point by quoting “Freedom’s just another word for nothing left to lose” from Kris Kristofferson’s melancholy country song, “Me and Bobby McGee”.

Indeed the atmosphere and sensibility of the Crumley canon is that of country music, more specifically that of the Texas honky tonk where the songs express the ambivalence of the working class in the South and Southwest: hedonism and puritanism, sin and guilt, violence and sentimentality. The key scene in which Betty Sue Flowers recalls her childhood and her inability to touch “those veins like ugly worms” in the legs of her selfless Granny finds an echo in Holly Dunn’s 1980s hit, “Daddy’s Hands” (“Years of work and worry had left their mark behind”).

A paragraph in the first chapter of The Last Good Kiss alludes to Californian Okies, hot windy plains, orange groves, axe handles – and the Bible. The Steinbeck reference is quickly modified as Sughrue begins his pursuit of the Hemingwayesque alcoholic poet Abraham Trehearne, who resembles the writer Roger Wade in The Long Goodbye. There are marked differences between the PIs Sughrue and Milo (in The Last Good Kiss and Dancing Bear respectively) and Lew Archer, but their fictional experiences are far from dissimilar. Like Archer, Crumley’s detective-protagonists are continually deceived, lied to and, in the case of Sughrue, betrayed by what his creator has called the “literary arrogance” of Trehearne. Sughrue succumbs to the temptations of violence offered by the job, but remains sensitive enough to describe finding lost people as “the saddest part of the chase” and to admit that he can no longer claim to possess “moral certitude.” Exploding the myth of the ratiocinative investigator who is both brilliant and virtuous, he sums up for all defeated private eyes:

“What a case. Private detectives are supposed to find missing persons and solve crimes, so far in this one I had committed all the crimes – everything from grand theft auto to criminal stupidity – and everybody but poor old Rosie and I had known where Betty Sue Flowers was from the beginning.”[22]

Sughrue is asked by bar-owner Rosie Flowers to look for her daughter, missing for ten years in the “Sodom and Gomorrah” of San Francisco. Rosie remembers joining the sad parade of parents searching for their children, “holdin’ out their pictures to any dirty hippie that would look at it.” (p.280) As Lew Archer had discerned, parents had no idea why their daughters had run away; Galatea Lawrence in The Way Some People Die “always came home for Christmas”, but Susan Crandell in The Underground Man had been “a pampered prisoner” in the affluent cell of her room. These are midnight girls in sunset towns and the search for Betty Sue will uncover drugs, theft and pornography.

The crimes in Dancing Bear are the actions of international arms cartels and the dumping of toxic materials. There are also misdemeanours on a personal level, especially the exploitation of investigator Milo Milodragovitch by a group of manipulative and committed women: “Somehow they would save America from toxic waste and corruption, and I would be their stooge, dance to their lies, dream of love in their arms. I didn’t have the heart to be angry.” (Vintage ed., 1984, p.218)

Like Crumley’s first detective novel, The Wrong Case (1975), Dancing Bear concludes in forgiveness. Milo’s relationship with each of the women is complex and he shares in their concern for the environment. The novel begins and ends with garbage: first, Milo’s own trash can, “safe from hungry bears” and collected by automatic trucks; finally, the landfills and floating incinerators which enable Tewels, the chief criminal, to grow rich on junk, sell drugs and pollute America. Milo’s environmentalism does not take an extreme form. He is a modern cowboy with the “hardy pioneer genes” of the old-time frontiersman, one who both approves of wilderness areas and wants chain saws and snowmobiles on his own property. He is aware his land was stolen from Native Americans (“a legal theft”) and the novel chooses for its prologue a Benniwah tale: the origins of the Bear Dance, a fable of ritual, community, sweetness – and forgiveness.

The small town setting, Meriwether, has seen better days. Lumber mills are closing, the pulp mill is on half shifts and the air smells “like cat piss and rotten eggs.” The town is as alienating as Hammett’s Poisonville with its yellow smoke and grimy sky, produced by smelting stacks. To these complaints Milo adds his private laments. His favorite bars are closed or “filled with children” and his stomach can now only tolerate peppermint schnapps. He observes his decline mirrored by the destruction of the land through strip mining and pastel tract houses. Battle-weary, he has learnt one grim post-Vietnam truth: modern life is warfare without end. Eating the dead (as Chil-a-ma-cho ate Brother Bear) is sound ecology.

At the centre of Robert B. Parker’s hard-boiled narratives is another contradictory figure named Spenser. The signifiers of his toughness are evident – his .38 Smith and Wesson, boxing, weight lifting (Parker is the author of Sports Illustrated Weights Training), jogging (Parker advertises running shoes) and his gumshoe occupation. He resembles Lew Archer and Milo Milodragovitch in seeking to save the innocent from victimization. Caring and sensitive, he embraces Rachel Wallace, weeping with her, after rescuing her from kidnappers. A gourmet cook, he has a steady, semi-domestic relationship with a divorced school counsellor, Susan Silverman, one which makes him in the eyes of his friend Paul, “machismo’s captive. Honor, commitment, absolute fidelity, the whole myth.” “Love,” I said. “Love’s in there.” (The Widening Gyre, Severn House ed., 1991, p.160)

These qualities modify and challenge the traditional hard-boiled image of a patriarchal masculinity. As David Glover shows, Parker explores the meaning and limits of masculine experience; “… the Spenser books are divided between a world of men and a world of women, moving relentlessly from one to the other. This makes them far less stable than they seem at first.”[23] This instability can surface through the interrogation of Spenser’s profession and his exuberant violence by either Susan or, in Looking for Rachel Wallace (1980), by that novel’s eponymous lesbian activist. Feminist commentary on Parker has found this approach more progressive than the unexamined replication of (male) violence in the private eye texts of Sara Paretsky and Sue Grafton.

As the series develops Susan becomes Spenser’s emotional anchor and it is when she goes to Washington D.C. as a pre-doctoral intern in clinical psychology that her absence highlights the true nature of his self-sufficiency. His surrogate son, Paul Giacomin, elucidates the situation in The Widening Gyre: “You were complete, and now you’re not. It makes you doubt yourself. It makes you wonder if you were ever right. You’ve operated on instinct and the conviction that your instincts would be right. But if you were wrong, maybe your instincts were wrong.” (p.73) A discussion of the progress of Spenser’s current case, particularly if gender relations are involved, usually becomes an examination of their own relationship and their respective value systems. In The Widening Gyre Susan argues that given the alternatives of dropping out of the Senate race and permitting his wife’s participation in a sex orgy to be exposed, Meade Alexander will have to make the self-serving decision. Here Susan takes the pragmatic position since ethical judgments are not crucial in her work which is orientated towards results. For his part Spenser, according to Parker, does what he can rather than what he should. While accustomed to the use of violence in solving problems, he also finds it necessary to speculate upon the determinants of behaviour and adopts a more generous attitude towards Alexander.

Susan is aware that she is in the process of creating a self. “I never had a center, a core full of self-certainty and conviction. I’ve merely picked up the colorations of [others].” Earlier she had protested in language which articulates the problem for gender representation of first person male narration. “Always me was perceived through you – you my father, you my husband, you my friend.” (pp. 110, 109). Now missing Spenser is the price she will gladly pay to find her identity. In these circumstances Spenser too pays – for loving Susan totally and without reservation, for living too long with a single dream like Jay Gatsby.

In his work as detective he is aware that some puzzles fail to produce wholly acceptable solutions; such wisdom can be applied to his personal life. His feelings for Susan emanate from his own core and conviction. They cannot be compromised and could, he now realizes, exist without her. At the end of The Widening Gyre Spenser looks down on her in bed as Lew Archer had looked tenderly on Betty Jo’s “blue hammer” and records that he “watched her with the growing certainty that some of her would always be remote, away from me, unknowable, unobtainable, never mine. Watched her and thought these things and knew, as I could know nothing else so surely, that it didn’t matter.” (p. 183)

Elmore Leonard’s Detroit novel City Primeval, which negotiates intertextually with the movie High Noon, includes a critique of the protagonist Raymond Cruz, accused by a woman journalist of trying to “look like young Wyatt Earp … the no-bullshit Old West lawman.” Leonard’s literary career began with Westerns and in his Boston University Ph.D. Parker presented the hard-boiled private eyes of his predecessors as modern versions of the Deerslayer archetype. The Judas Goat (1978) is crammed with western references, the most significant being Richard Slotkin’s study of the hunter as American mythic hero, Regeneration Through Violence. The attainment of spiritual renewal through self-testing and serving God and nature which Slotkin proposes is denied in the urban thrillers of Parker, Macdonald and others. Violence removed from the wilderness is non-regenerative. Slotkin also alludes to the hunter’s natural humility and self-restraint. That potentiality for Spenser is embodied in his lover Susan; the opposite quality of excess, violent excess, is projected through the Afro-American enforcer Hawk who saves the PI’s life in Valediction (1984) and, possibly, in Promised Land (1976). He plays Chingachgook to Spenser’s Natty Bumppo.

Parker’s titles alone, The Widening Gyre (Yeats), A Savage Place (Coleridge), A Catskill Eagle (Melville), bear witness to the influence of the western literary tradition upon his writing. Promised Land and Ceremony (1982) allude to Thoreau; Gatsby and Prufrock are evoked in A Savage Place (1981). These references are a signal to the reader that he/she is in the presence of Serious Literature and provide the educated bourgeois audience with a manufacturer’s guarantee that a Spenser thriller does not insult the reader’s intelligence. In populist fashion, however, Spenser is cool, even disapproving, towards academics since they are part of a system of professionals which threatens the exercise of private moral judgment. Students are often portrayed negatively: the campus radicals in The Godwulf Manuscript (1974) are vicious while the college boys in The Widening Gyre who deal drugs and call their orgies with middle-aged women “granny parties” are puerile and corrupt. More recently Playmates focused on the seamy side of college sport.

“Respectable” hard-boiled fiction often recalls the literary past: Spenser and Marlowe from the English Renaissance, and the world of chivalric romance in The Big Sleep and The Godwulf Manuscript. Thus Parker and others are able through particular “high art” signifiers (Spenser like Marlowe enjoys the paintings of the Dutch realists such as Rembrandt) to make functional references to several areas of cultural practice. It is however Ross Macdonald’s books that Parker’s narratives bring to mind most readily, especially those (God Save the Child, Early Autumn, and Promised Land) which explore the breakdown of the modern family. The prising apart of affluent life is given a broader sociocultural context of dreams unfulfilled, most obviously in Promised Land, the title providing the name of Harvey Shepard’s “vacation-land” housing development temporarily sustained with embezzled money.

As the classic writers of hard-boiled fiction anatomized their special corners of California, so Parker sets his “romantic adventures” in Boston and New England whose historically resonant names (Bunker Hill, Plymouth Plantation) can be used to contrast the lost agrarian America of the Founding Fathers and the innocent young Republic with the garish commercial USA of the 1980s. Spenser’s sentimental reverence finds voice in The Widening Gyre. Upon returning from a trip to Washington D.C., the seat of government and its scheming politicians, he looks up

to the top of State Street where the old South Meeting House stood, soft red brick with, on the 2nd floor, the lion and unicorn carved and gleaming in gold leaf adorning the building as they had when the Declaration of Independence was read from its balcony and, before it, the street where Crispus Attucks had been shot. It was a little like cleansing the palate.[24]

6. Crime Comes to Harlem – and LA

Gar Haywood is the Afro-American author of a prize-winning hard-boiled novel, Fear of the Dark (1988), set in the huge ghetto of South Central Los Angeles. Interviewed by John Williams, Haywood admitted that if he made his principal character a policeman, the visible contradictions would entirely disable him as a writer. For the literary Afro-American the whole concept of law and order is problematic: “You can’t expect the cop on the beat to change the system. The changes that have to be made to make law and order mean something in this city have to come from a lot farther up.”[25]

Chester Himes had his own reasons (a spell in Ohio State penitentiary, brushes with law officers) for sharing Haywood’s attitude, but he was persuaded in the 1950s to create as the protagonists of his hard-boiled “detective tales” two black “Harlem sheriffs”, Coffin Ed Johnson and Grave Digger Jones. Like their white counterparts, they are figures of social dominance and personal strength, but paradoxically as servants of an hierarchical system, their power is severely constrained. Their condition as Afro-Americans underlines this contradiction and motivates their anger and aggression, the kind of rage kept simmering in Harlem by squalor and oppression and articulated by Violet Mills in “Mad Mama’s Blues” (“Yes, I’m gonna wreck the city, gonna blow it up tonight”).

Quintessentially an American product, the hard-boiled novel, Himes has argued, is “plain and simple violence in narrative form.” Coffin Ed and Digger are hard, at times brutal, and their faces, especially Coffin Ed’s, bear the marks and scars of life on the ghetto streets. They are intimidating, cruel and vengeful. Deliberate and detailed violence contributes to the naturalism of their portrayal but they are also, as successors to the Continental Op and Marlowe, fantasy figures. They function symbolically, enjoying the cultural role of the bad nigger of folklore like Stagolee; as men of power they demonstrate that even the weak and impoverished can instil fear and terror.

Himes’s cops adopt a melodramatic perspective, dividing the local inhabitants into good and evil, innocent and criminal. This rigid morality produces self-righteous judgments which are acted upon vigorously and – in the manner of Hammett’s Op – excessively, impinging on both the blameless and guilty, men and women alike. The excess could be interpreted as a means of challenging aspects of the genre, such as hard-boiled philosophy and the mystification of the romanticized detective. Other explanations can be argued: for example, violence is a technique of communication and survival. It is also Himes’s method of bestowing humanity and dignity on certain characters both lawmen and lawbreakers. In My Life of Absurdity (1976) he explained that he was protesting against a racism that excused all the sins and faults of soul brother criminals who were as vicious and dangerous as any other criminals. Only by bending the law (non-violently or otherwise) can Harlem’s detectives achieve the cosy resolutions which feature in many of Himes’s crime novels. In All Shot Up (1960) the stolen money ($50,000) is seized by the two officers and sent to the New York Herald Tribune Fresh Air Fund which arranges summer vacations for (New York) city kids of all races.

Coffin Ed and Digger are dispensing justice on behalf of a poverty-stricken community threatened by criminal activity and oppressed by extremes of weather. They identify strongly with the black underclass whose lives they attempt to ameliorate, though the area around Eighth Avenue and 112th Street is presented in Cotton Comes to Harlem as close to irredeemable: “the neighbourhood of the cheap addicts, whisky heads, stumblebums, the flotsam of Harlem; the end of the line for the whores, the hard squeeze for the poor honest labourers and a breeding ground for crime.” (Penguin ed., p.47) As Raymond Nelson argues, “it is one of the brilliant ironies of the Harlem Domestic stories that the detective-heroes can express their genuine love for their people, their altruistic hopes for communal peace and decency, only through the crude brutality that has become their bitter way of life.”[26]

It is not violence alone which identifies these texts as part of the hard-boiled tradition. Coffin Ed and Digger are quick-thinking professionals, incorruptibly honest, loyal to each other and brave. Driven by a central quest, the plots tend to imitate Hammett’s The Maltese Falcon, which was recommended to Himes by Marcel Duhamel who commissioned the novels for “La Serie Noire”. The resemblance is closest in the first novel, For Love of Imabelle (1957) – Himes’s preferred title was The Five-Cornered Square – where the ore in the padlocked trunk turns out to be fool’s gold. As in Hammett’s novel the lure generates capers, intrigues and savage, even spectacular, violence on both personal and public levels.

Himes does not employ the characteristic first person narrative mode. Instead he offers a number of viewpoints which, along with the exercise of time inversion, produce a text conveying the disjointedness, the messy and turbulent atmosphere of the black community. The surreal dynamism of this teeming ghetto world with its flamboyant, ribald individuals is rendered in a realistic style which draws upon laconic speech forms and Afro-American vernacular. However Himes maintained that he never really knew what it was like to be an inhabitant of Harlem, claiming in My Life of Absurdity that he was as much of a tourist there as “a white man from downtown changing his luck.” His knowledge of the criminal underworld, its styles, its spoken language, and of the lives of ghetto dwellers in other American cities, especially Cleveland, Ohio, fully compensates for any lack of direct experience of New York. The fictional Harlem is a self-contained society: the reader encounters its restaurants, foods, music, entertainment, churches, clubs and tenements. While its people are deprived, sinful and resilient, black America’s capital is, like Chandler’s LA, mysterious, exotic, intricate and enigmatic. Himes captures the sights of shop windows and graffiti, the sounds of jazz from neighbourhood bars and, in The Heat’s On (1967), the smells “of sizzling barbecue, fried hair, exhaust fumes, rotting garbage, cheap perfumes, unwashed bodies … and all the dried-up odours of poverty.” (Panther ed., 1969, p.28)

The details of a crowded, colourful city provide the anchor for the exuberance of fantasy. In his essay “City of Harlem” LeRoi Jones reported that the mythology of the ghetto supplied different images, “the pleasure-happy center of the universe” and “the gathering place for every crippling human vice.” Yet Harlem, as Jones insists, eludes definitions, changing continually, questioning the stereotypes of glamour and desperation. In that “milling population of preachers and politicians, sober matriarchs and mock religious prophets, pimps and their chippies, drug pushers and wheel thieves, transvestites and con-men, and shysters of every kind and sex” anything might happen.[27] Himes’s Harlem witnesses racketeering, drug-dealing and hustling in general, so tension, fear and violence – from acid-throwing to stabbings – suffuse the textual material.

New York’s black community has been called Southern in its memories and its culture; The Crazy Kill (1960) contains a complete soul food menu. In Cotton Comes to Harlem (1966) Himes juxtaposes two political projects, one of which is the racist movement led by the neo-Confederate Colonel Calhoun known as Back-to-the-Southland. This is symbolized by a bale of cotton carrying suggestions of slavery – which more or less describes the conditions awaiting those Harlemites who join up. On the other hand black militancy is represented by the Back-to-Africa movement of the slippery, despicable Deke O’Hara, an ironic reference to Marcus Garvey’s nationalist organization in the 20s which “doesn’t make any sense now.” The irony is reversed in the book’s conclusion: an old junk collector does travel to Ghana with the $87,000 he finds in the cotton bale – money stolen from the O’Hara group’s “Last Chance Rally” by the Colonel’s men. The Colonel, who with his nephew is guilty of murder, evades justice; a deal is struck so that the robbed Harlem families will receive $87,000 (from Back-to-the-Southland) while Calhoun and Ronald Compton return to Alabama which refuses to extradite them for the crime of killing a “Negro”.

The limits of justice, particularly as experienced by Afro-American police officers, become increasingly obvious to Himes, and this theme is developed in the ambitious hard-boiled narrative, Blind Man With a Pistol (1969), a de-centred anti-novel in which the failure of the genre is central to the meaning. Harlem, which in Plan B (published in French, 1983) experiences a black revolution, becomes completely anarchic and meaningless. The cops’ investigations are blocked because they would uncover an interracial homosexual scandal embarrassing to the white establishment. So the extreme point of absurdity is reached: crime can no longer be solved for criminality is all-pervasive, generated by urban decay. Similarly the disease of institutional racism in the ghetto is too widespread for any remedy.

Grave Digger and Coffin Ed are here ineffective comic characters with greying hair and middle-aged spread. As riots and demonstrations proliferate, they are no longer able to impose order. Parades representing different ideologies converge and collide. The Black Muslim Michael X tells them, “You don’t really count in the overall pattern.” (Panther ed., 1971, p.174) The parable of the blind man with a gun provides the appropriate epilogue, an image – as Himes’ preface warns – of the unorganized violence that connects the Middle East, Vietnam and America’s ghettoes. Himes’ black detectives adjust to impotence and irrationality, taking pot-shots at rats in a building demolished by urban renewal.

Himes’ Black Power militants reappear two decades later in Fear of the Dark. The Brothers of Volition attempt to provoke a bloody civil war but are frustrated by Haywood’s PI, the aging, balding Aaron Gunner. The lesson for Gunner (and for Afro-America) is to keep alive the dream of a society without racial bigotry and oppression, and to relinquish suicidal fantasies of vengeance. The author’s second novel, Not Long For This World (1990), describes the desperate world of LA’s street gangs: “We’re talking about kids that from day one have had no hope of anything, they have totally lost faith in their own future.” (Williams, p.102) Haywood deplores black and white indifference and politicians with their bland advice, but Gunner’s increased awareness of the dimensions of the problem only constitutes a repetition of the bewildered liberal’s announcement that attention must be paid…

Walter Mosley sets his crime novel Devil in a Blue Dress (1991) in the Los Angeles of the 1940s, when racial conflict gave rise to the Zoot Suit riots, an occasion when, as Chester Himes wrote at the time, the US army, navy and marines “contacted and defeated a handful of youths with darker skins.” Mosley chooses the post-riot date of 1948, one which can evoke the Chandler milieu of the early 40s; the opening scene in Joppy’s Bar in Watts is an interracial inversion of the first scene of Farewell, My Lovely. Historical and geographical intersections authenticate the actions and thoughts of his private eye, Easy Rawlins. The author’s interest in Albert Camus (who earlier had influenced another black writer deeply interested in crime, Richard Wright) becomes evident as Rawlins endeavours to resolve moral dilemmas and to create his identity through his experiences and through the assumption of a professional role. “It was as if for the first time in my life I was doing something on my own terms. Nobody was telling me what to do. I was acting on my own.” (Serpent’s Tail ed., p.131) Becoming a detective vanquishes fear and boredom. The right technique, with a glance at Ralph Ellison, is to act “sort of invisible”. Rawlins is a black Marlowe, tough, hard up, relentless, but without the neuroses and isolation. The denouement, however, recalls Hammett when Easy, with the backing of the rich and powerful Carter, concocts a fiction to satisfy the police.

In 1948 the Afro-American and the “Mexican” were on good terms, just “another couple of unlucky stiffs left holding the short end of the stick.” (p.182) Rawlins, who was among the GIs who evacuated the Nazi concentration camps, contemplates the historical fate of another marginalized figure – the European Jew. His knowledge helps him to ward off racial self-pity; he still needs to survive while attaining dignity, though in his circumstances he relies upon the assistance of the gun-happy, murderous Mouse, a friend/enemy from earlier days in Houston “where men would kill over a dime wager or a rash word.” (p.40) The racial theme is also developed through Daphne Monet, the mysterious woman of the title. Changeable like the chameleon, sexually generous, dangerous to know, she is the exotic femme fatale of film noir. In addition, her fake French accent, her blue dress (as worn by French girls – and by Ilsa Lund in Casablanca – in wartime Paris) and her mixed racial origins suggest the tragic octoroon of Southern literary culture.

Mosley excels in language which describes the characters of LA’s postwar black areas, and the atmosphere of hustling: “There’s no time to walk down the street or make a bar-b-q when somebody’s going to pay you real money to haul refrigerators.” (p.56) A variety of locations borrowed from Chandler fills in the panorama. The canyon roads, Hollywood Hills, and the corporate offices recur. Mosley also transports his protagonist from Joppy’s evil-smelling bar in a butcher’s warehouse to John’s place (a grocery front for an illegal nightclub), Daphne’s one-story duplex on Dinker Street, and Ricardo’s Pool Room on Slauson, “a serious kind of place peopled with jaundice-eyed bad men who smoked and drank heavily while they waited for a crime they could commit.” (p.129)

7. Miami Blues

Billy Wilder’s film Some Like It Hot associated Chicago with night, violence and swift death. Miami, by contrast, stood for sunshine, vitality and fun. Such a polarization was a fictional device as the film tacitly admitted when the plot moved the gangsters south in search of the two musicians who had witnessed the St. Valentine’s Day massacre. Miami Beach already had a history of Mafia domination with Al Capone settling there for a time during Prohibition as an “antique dealer” and pronouncing Miami the “Garden of America”. In the 30s gunfights between G-men and smugglers were common. So Miami’s assumption of the title “Murder Capital USA” in the early 80s was actually an exercise in continuity.

The prevalence of criminality, intriguingly mixed with the Florida postcard attributes of surf, sun, beach and lush tropical plants, made Miami as dangerous and exciting as Chicago. Miami, like New Orleans, was exotic by virtue of its links with other races, languages and cultures, but it was also literally colourful, capitalizing on the exuberant surfaces of its South Beach Art Deco hotels. In TV’s Miami Vice the colours are pastel and incandescent, constitutive of a city that is post-industrial, vibrant and stylistically ambitious. The artificial settings are complemented by atmospheric, natural ones. “Teeming with images of nature – parrots, flamingos, water and speed, women’s bodies, flaunting sensuality – Miami is our heart of darkness.”[28] Maurice Zola, the old hotelier in Elmore Leonard’s LaBrava (1983), repeats the rumour that the east bank of the Apalachicola River between Bristol (where Noah may have built the Ark) and Chattahoochee is the site of the original Garden of Eden. Miami is both heart of darkness and fallen world, its paradisal luxury resting on credit and drug money, the profits from which have criminalized businessmen, politicians and police officers. Moral distinctions are obliterated in a democracy of desire and consumerism.

By the 80s Miami and Florida had taken over from Los Angeles and Southern California in the American consciousness as lotus land, the site of opportunity and easy living. Still evolving and changing, Miami promoted itself as the pioneer centre of a new international arena of leisure. Dreamlike and romantic, Miami had become a place where any possibility could be accommodated. These possibilities would increasingly partake of violence and vice. A clearing house for immigrants and refugees, Miami was becoming an entrepôt for weapons and drugs as well, a city of touts and pimps and middlemen bringing in or delivering whatever the world wished to purchase. Its internationalism would embrace not only café society Europeans like Régine but Colombian cocaine cowboys, French-speaking Haitians and the Marielitos tossed out by Castro in 1980 and reckoned to be the most ruthless criminals ever seen in the USA. Many of those involuntary immigrants were drug addicts, prostitutes and homosexuals; among the rest were large numbers of petty crooks, usually brutal, and about two thousand hard-core villains. Together they would send Miami’s crime figures through the roof: “The boat-lifters and dopers come in, half the neighbourhood’s already down the toilet”, laments Maurice in LaBrava (Penguin ed., 1985, p.68).

Specific events carried their own meanings. The Liberty City riots were yet another example of the violence waiting to be triggered by Southern racism, while the Suniland shoot-out exposed the dark underside of Middle America. Matix and Platt were army veterans from the Midwest, patriotic, garden-loving, pious suburbanites who killed their wives for the insurance money before or after moving to Miami. When Platt received his money he at once took his kids to Disney World. The two men used the flamboyant cars of young Hispanic males they had murdered to commit armed robbery and were finally shot in a black and gold Chevrolet Monte Carlo. Chucky Gorman, a drugs dealer in Elmore Leonard’s Stick (1983), warns Moke, a young redneck working for a Cuban outfit, that he has seen “white boys … take on that greaseball strut, that curl to the lip and land in a federal correction facility for showing off”. (Penguin ed., 1984, p.21) Part of the fascination of the Miami crime novel is this clash of cultures and the way power depends on images and their manipulation.

Nestor, Chucky’s associate in crime, is Hispanic Cuban, but stories tag him as part Lengua Indian from Paraguay, “raised on the alkaline flats and fed spider eggs” (p.120) to make him evil. He is able to terrify Chucky through references to santeria and animal sacrifice, having known for a long time that “gods can scare the shit out of anyone” (p.125). Nestor apart, Cubans usually fill the reductive roles of small-time hoods in these texts and are the targets of racist sentiments (“that greaser goon Chavez”, “fucking Cuban hotshot”). Cundo Rey, the glitzy go-go artist in LaBrava, at least gets to say “I steal cars in darkness, I dance in lights” but needs little invitation to “act crazy”. Jesús Bernal, the incompetent letter bomber in Carl Hiaasen’s Tourist Season (1986), is a similar figure, ending up as the wild Cuban, a mixture of clown and loose cannon.

In a TV series Matix and Platt might have been cast as the exterminators, agents of law enforcement. They lived and died in Kendall, described by T.D. Allman in Miami: City of the Future (1988) as “a churchgoing, Little-League, nine-to-five kind of place.” It is in an expensive Kendall condo that Freddy “Junior” Frenger, the psychopathic protagonist of Charles Willeford’s Miami Blues (1984), holes up with his “platonic” wife Susan. The apartment complex in the novel exemplifies the contradictions and volatility of Miami life in the early 80s. Chic and tropical, it is for the most part unpainted or unfinished, as construction prices and interest rates on loans have escalated. A Cuban rides round in a jeep to prevent vandalism. The ironies contained in the Kendall Pines Terrace are repeated elsewhere. Miami, with its lush vegetation, its shimmering skyline and its aura of glamour, may seem the antithesis of the mean streets of criminality and sudden death found in film noir and the classic hard-boiled thriller. David Buxton, in From “The Avengers” to “Miami Vice”: Form and Ideology in Television Series (1990), has brilliantly analyzed the discrepancy between extravagant materialism and incorruptibility in Miami Vice up to 1988. The ambiguous Reaganite package of harsh moralism and conspicuous consumption with its cops as “style heroes” eventually collapses; it is a safe bet that Miami’s flashy cops in three-piece suits and Gucci shoes are moonlighting for drug barons. There remains, however, a degree of Thirties urban realism in Leonard and Willeford, while Stick includes a mansion sufficiently grand to rival earlier monuments belonging to the Grayles and the Sternwoods:

an assortment of low modules stuck together, open sides and walls of glass set at angles, the grounds dropping away from the house in gradual tiers, with wide steps that might front a museum leading down to the terraced patio and on to the swimming pool. A sweep of manicured lawn extended to a boat dock and a southwest view of Biscayne Bay… (pp.82-3)

What has changed is that the democracy of violence and crime has expanded. When Freddy Frenger breaks a Hare Krishna’s finger, observers break into applause and laughter. The wounded beggar, who dies of shock, turns out to be the brother of the young hooker Frenger shacks up with. Susan and Marty Waggoner were saving up for a Burger King franchise in Okeechobee: “He’ll be the day manager and I’ll manage nights. We’ll build a house on the lake, get us a speedboat and everything.” (Ballantine ed., 1985, p.31) Marty was cheating the Hare Krishnas. As a child he liked to bend back Susan’s fingers, and when puberty struck the pair he made her pregnant. So much for the American Dream. But it has appeal for Frenger too. “What I want is a regular life. I want to go to work in the morning or maybe at night, and come home to a clean house, and a decent dinner, and a loving wife like you.” (pp.143-4) In this quotation Frenger sounds like Matix and Platt. He’s the deadly suburban monster, dream and nightmare made inextricable.

Miami’s amorphousness offers space in which to hustle and rob, to maim others or yourself. “Nothing about Miami was exactly fixed or hard”, Joan Didion noted in her essay Miami (1988), an observation which touches the crux of the Miami novel. Criminality in these books is closely linked to lack of place and roots, lack of traditions except of lawbreaking itself. One consequence of Miami’s status as haven for thousands of travellers and transients is the decline of family life. Watching “Family Feud” on TV Frenger observes that there are no mothers and fathers on the show, only cousins, uncles and perhaps a kid borrowed from the neighbours. The cynical reflection of Police Officer Hoke Moseley on the Waggoners makes an epitaph for the 80s – and for Miami: “That’s some family isn’t it? Incest, prostitution, fanaticism, software.” (p.157) For reasons already acknowledged, the detective (male) of classic hard-boiled fiction is without family. His distance from “family” and the location of crime including murder within the family contribute to the construction of such fiction as literature of resistance.

Instead of family life the Miami novel produces representations of lifestyles linked somewhere by crime. Just as a palm tree, jacuzzi and red staircase seem suspended in space in Arquitectonica’s Brickell Avenue condominium, so characters in books by Leonard, James Hall and Joseph Koenig float in their individual ways. The floater of Koenig’s Floater (1986) is a corpse; the book’s major criminal dumps one, but ends up the same way in the Tamiami Canal. Miami’s topography functions as a watery cemetery. Although Miami recedes in the rear-view mirror as motorists on the Tamiami Trail approach the Big Cypress Swamp and the Gulf Coast, the wilderness area shares in the ambivalence of the city. Alice in Floater calls her family hunting lodge, with its Spanish moss in the front yard and its freshwater lagoon in the back, “the honeymoon cottage”, but her partner Norodny drowns his first victim in an Everglades guest house. If the Everglades National Park supplies a mythic image of pre-lapsarian nature, it is also a source of Southern Gothic, a miasmal, haunted setting of swampland, mosquitoes, alligators and white trash.

A discourse of the gothic can function as the expressive medium of a nostalgic, even obsessive environmentalism. Skip Wiley’s self-imposed mission in Carl Hiaasen’s Tourist Season (1986) is to wreak vengeance on the boosters, wheeler-dealers, bankers and developers who with dredgers and bulldozers have not only turned Miami into “Newark with palm trees” but have made it an environmental disaster area. Ecology is also fundamental to James Hall’s Under Cover of Daylight (1987), its action taking place in the environmentally fragile islands of the Florida Keys. Hall is interested in dealing with the issue of a human being pushed into a situation where his life and values are seriously threatened. So his central character, an existentialist loner called Thorn who lives in a remote Key Largo stilt house and ties flies for a living, bears the traces, extended to parody, of a Hemingway figure. The epigraph, however, is from Thoreau’s A Writer’s Journal, and Thorn tries to live according to the rhythms of nature, like Thoreau on the shores of Walden Pond. At the book’s conclusion Thorn is purged of guilt and exorcizes the memory of killing his lover’s father. Floating at last, he cleanses himself in Lake Surprise, becoming as Thoreau predicted “a still lake of purest crystal.”

Hiaasen’s forte is black humour: “Sparky” Harper, the President of the Greater Miami Chamber of Commerce, is choked to death with a 79c. rubber alligator – still displaying its price tag. The crusaders responsible are eager to make a point. The body, chopped off at the legs and dressed in a Jimmy Buffett shirt and Bermuda shorts, is stuffed in a suitcase – a Royal Tourister, naturally… Hiaasen’s later and more coherent novel Double Whammy (1987) (the title is the name of a lure for bass) is an exposé of fraud and murder among redneck fishermen in the South-Eastern states, especially Florida. In this case the environmentalist response to real estate development on the edge of the Everglades, where the state fish, the largemouth bass, is dangerously polluted, is divorced from the text’s imaginative gothicisms. Thomas Curl, a foul-mouthed killer, is savaged by a pit bull terrier with a “supernatural” grip. Unable to release from his arm the “demonic mandibles” of the dog (killed by the thrust of a screwdriver), Curl severs the head, still attached as though in symbiosis, and continues his murderous journey. The symbolism – Curl’s bestiality, the moral corruption made physical by infection – is crudely obvious, but is pursued with grim panache through the redneck’s explosive death and a feast for the buzzards who reduce the dog’s head to a bare yellow skull.

Chucky in Stick once killed a dog and was moved out of Georgia because of his “mental” problem. His diet of Valium and ’ludes causes him to float among the coloured lights in his head, “swimming under water only without any water.” His own violence remains latent until the narrative reaches its climax, but he belongs nevertheless to that company of psychopaths (Frenger in Miami Blues, Louden in Sideswipe, Norodny in Floater, McMann in Under Cover of Daylight) which surfaces throughout the Miami/Florida intertext. The blankness of tone in Miami Vice (what Fredric Jameson, defining postmodernism, calls the waning of affect) is, in these characters, translated into abnormal psychology so that moral vacancy replaces conscience. Committing one of his murders, Norodny has “the disinterested gaze of a man preoccupied with other things as if he might be expecting an important phone call” (Penguin ed., 1989, p.70). For this unreachable type killing is work, a business: “What I did I did to eat, to eat well. It’s not my hobby.” His self-image of average citizen precludes guilt and remorse: “I like kids, a day at the ball park. I pay my taxes on time.” (p.273)

The alternative to floating Chucky-style is a form of existentialism: establishing your identity, your personal style, or just playing a role. The performance often relies on language. For example, Latins in Stick dress up, pose and throw out TV lines like “What’s happening, man?” Chucky on the other hand uses hats to change his mood, to psych himself up with fantasies. In LaBrava, as the eponymous photographer derisively observes, Richie Nobles’ act is based upon a stereotype of masculinity, specifically the all-American boy: “Hometown boy – the hair, the toothpick, the hint of swagger in the set of silver-clad shoulders. What an asshole. How did they get so sure of themselves, these guys, without knowing anything? Like people who have read one book.” (p.133) To Cundo Rey, the “boatlifter” from a Cuban prison and Nobles’ partner in crime, his own style is more authentic; it is Nobles, the Florida redneck, the swamp creature, who is the alien, Cundo the real American, the man of the city.

In Floater Norodny uses a stolen plane ticket to Hollywood (Florida). The complete narrative is Hitchcock’s Shadow of a Doubt remade by Brian de Palma and ending with a car chase. It is in Leonard’s novels, where characters often fail to distinguish between movies or TV and life (Split Images, Get Shorty), that role playing is most frequently associated with the cinema, and Hollywood is a major influence on style as well as material, especially the cutting from one centre of consciousness to another. A fan of the movie star since the age of 12, Joe LaBrava is blocked in his attempts to “read” Jean Shaw by desire and memory and by the star’s ability to “act her way out of a safe deposit box” (p.162). Since most of her lines are from Fifties movies (in which she played Women as Destroyer), LaBrava’s talent for photography, his ability to pierce the facade of contemporary street life, is irrelevant and he can only act along in the part provided for him. Like the spider woman she used to portray, Jean Shaw is “disillusioned but knows she has to play the game” (p.183).

The intertextuality here is book/movie and literary genre/film genre. LaBrava wrenches his imagination back to the present, persuading himself Jean Shaw is the victim in a “real” drama. He misplaces the knowledge he has that the star has always played the same part, in this instance a member of a trio involved in an elaborate swindle. Leonard refuses to submit to generic determinism. LaBrava ends not with the film noir elimination of the spider woman, but with her marriage to Maurice Zola whom she sought to dupe. In Miami, anything can happen.

8. Urban Deserts and Desert Landscapes

Accounts of recent crime fiction identify a preoccupation with the darker, irrational side of “human nature”, with the morbid, the malicious and the sadistic and, correspondingly, a movement away from realism. “Realism” in an old-fashioned sense continues to flourish, often in a complex manner, in the novels of George V. Higgins and Ed McBain. However, in an era of post-structuralism and postmodernism, when “history”, “text”, “author” and “closure” have all been questioned, less confidence attends the use of such terminology and the implied transparency of its literary techniques.

The absence of certainty in society and the resultant moral confusion may be seen as part of a new kind of weary post-Vietnam, post-counterculture cynicism. “My books end in ellipses”, says the hard-boiled novelist Jonathan Valin, though irresolution and unpunished criminals go back as far as Ross Macdonald and the Chandler of The Long Goodbye. The wit and wisecracking are retained in sharp and abrasive vernaculars, shaping obscenity into rhythmic patterns, often blackly comic. Speech forms are authenticated by being set against each other: in Higgins (The Friends of Eddie Coyle, 1971) the language of the narrative argues its own limitations, its aspiration to authority checked by other language, other people’s stories. K.C. Constantine’s Always a Body to Trade sets different verbal styles side by side within the discourse of law and policing to establish levels of experience and ignorance. The police chief Balzic and the Deputy U.S. Attorney Feinstein, engaged in the trading of the novel’s title, operate pragmatically: “For God’s sake, man, what the hell do you think the law is all about? It’s trade, it’s bargain, it’s compromise; it’s negotiate, it’s deal, deal, deal.” In response the mayor Strohn can only offer the politician’s clichés of “law and order” and “a better place to live”. Ironically, he knows less about the system of law and its workings than the dope dealer Leroy, whose acceptance into the federal witness programme is being negotiated with Feinstein and Balzic. Leroy’s functional street language is admirably geared to the expression of his “pri-or-i-ties”: “I don’t care how long it takes. I do care how soon we begin. And the sooner the better. Like now! We got the righteous shit, man, and we’re lookin’ to move it. What more you want?”[29]

Heroes and villains, cops and criminals share knowledge – and specialist skills. Peter Letkemann in Crime as Work (1973) showed that criminal life shares the structure and goals (e.g. success through professionalism) of “straight” society. Leroy and Balzic start building a relationship through the discussion of restaurant food, as befits men accustomed to making journeys, getting out on the street. Detectives, thieves and con artists are linked in a world where both crime and its detection are work. Thus the reassuring and defining separation of reader (normal) from criminal (abnormal) is elided. Elmore Leonard has his heroes and villains circling each other in mutual pursuit, a process often concluded merely through chance. Simplistic ethical distinctions are rejected in such novels as Hall’s Squall Line (1989), where all the characters are morally compromised, all looking for a place in the sun, or Higgins’ A Choice of Enemies (1983), in which Benny Morgan, while thoroughly corrupt, is more honourable than his detractors. Leonard avoids “sneering ugly” villains, preferring to show their human side. Conversely he will give one of his heroes a prison record (Stick). His shifting use of the third person perspective is radical and disturbing: “In Elmore Leonard’s society, both evil and good have become equally respectable alternative points of view from which a story can be told.”[30]

Use of the same urban (or suburban) setting reassures readers and marketing managers, especially when, as in the case of Ed McBain’s 87th Precinct books, the characters fail to age or change. Historically, hard-boiled authors have frequently sought to appropriate a particular fictional territory in which streets, districts and public buildings are authentic. Jonathan Valin’s novels are mappings of his home town, Cincinnati, where the external social landscape of civic achievement and parochial respectability merely conceals sexual predators and psychopaths. In other respects also Valin is formulaic: “I don’t have a whole lot of respect for the rich and the content. I have much greater sympathy for the people who have nothing or very little.”[31] Influenced by Poe and Hawthorne, Valin individualizes his work by mixing horror and comedy, notably in Life’s Work (1986), a black comedy version of Farewell, My Lovely, and in Day of Wrath (1982), where the private eye Harry Stoner finds a severed hand in the refrigerator during a gun battle.

The latter novel features a 1960s-style commune, and the more recent Fire Lake (1987) examines Sixties types and values both as part of and from the vantage point of the 1980s. Stoner had been both a cop and a hippie; Day of Wrath is evidence of a baffled, ambiguous attitude towards the past. Annie, one of the groupies at Theo’s farm (the scene is music, drugs and sex), attempts to describe the atmosphere: “Instead of money and rules and all the crap you’re taught in school, we had love.” (Futura ed., 1986, p.152) Earlier, Harry suspects that the counterculture of the Sixties and the adolescent’s dream of freedom are the same thing. One of the bands in Day of Wrath is The Furies; the book’s structuring myth is Orpheus and Eurydice – so it ends in Hell, with naked bodies, madness and bloody carnage presided over by the mutilated body of the guru/musician Theo Clinger, tied to a chair with chicken wire and wearing a brown paper crown.

Altamont and Charles Manson displace Woodstock and Timothy Leary. Stoner, however, also wants to distance himself from the middle-class respectability represented by Eastlawn Drive (where Valin grew up), a “never-never land of hollow prosperity.” His pessimism is wide-ranging because he finds a “red, lubricious thread of selfishness” wherever his enquiries take him. It is a selfishness abetted by “a world full of love’s failures”, the inexplicable human urge to destroy the grounds of happiness. With a sentimentality redolent of Marlowe, Stoner resolves to rescue the Eastlawn Drive runaway and thus honour the dead boy who loved her: “If she hadn’t meant a thing to anyone else, she meant the world to him. And that made her worth saving.” (p.201) In Stoner, who equates justice with vengeance, the sentimentality is combined with puritanism. Like Amos Walker, Loren Estleman’s Detroit shamus, he represents the detective as brutal vigilante, his options closed down by the decadence and amorality of Reagan’s America.

Police officers Leaphorn and Chee in the novels of Tony Hillerman also bend the law in order to bring about justice, but without the bitterness of some of their Anglo counterparts. The conventional image of tension, violence and paranoia in an urban wasteland gives way in Hillerman’s fiction to scenes of matchless beauty in Arizona and New Mexico, on and near the Navajo, Hopi and Zuni reservations. This unique environment can play an active role, often by means of weather. Rain is so infrequent it is mistrusted and revered; in The Dark Wind (1983) lawman Jim Chee is saved by a rain so violent there are no words to describe it. Weather and terrain combine to create a dramatic sense of landscape, a grandeur and immensity congruent with Amerindian cultures and their deep roots in nature.

The southwest wind … made a thousand strange sounds in windows of the old Hopi villages at Shongopovi and Second Mesa. Two hundred vacant miles to the north and east, it sandblasted the stone sculptures of Monument Valley Tribal Park and whistled eastward across the maze of canyons on the Utah-Arizona border. (Listening Woman, Harper ed., 1978, p.1)

In both structure and characterization Hillerman acknowledges the tradition of hard-boiled fiction. The elements of crime, clues, evidence, investigation, danger and resolution are contained in a narrative where the “detective” achieves feats of intelligence, endurance and courage, like his predecessors in the genre. Hillerman’s indictment of the failure of the American family and his portraits of lost adolescents recall similar themes and motifs in Ross Macdonald and Valin. People of Darkness (1980) echoes Chandler as well as Macdonald. It discovers evil in the figure of a wealthy man, Benjamin Vines, who uses his economic and political power to silence those with dangerous information about an oil well explosion years before. The detective figure in the Hillerman tales, however (Leaphorn or Chee), is a member of the Navajo Tribal Police Force, and the gumshoe’s code of ethics is replaced by the tribal policeman’s Navajo code. So the term “metamorphosis of Leatherstocking” assumes an ironic meaning, one underlined by the circumstance that Hillerman’s cops professionally uphold laws unrelated to, or in conflict with, Navajo sensibility and culture. Both Leaphorn and Chee are expert trackers taught by their forbears, and in People of Darkness Chee rehearses parts of the Stalking Way (“As I, the Black God, go toward it. / As the male game of darkness comes toward me.”) before starting out on his quest. One of Leatherstocking’s names is Pathfinder, but after the recession of warfare against the Amerindians, the “Indian fighter” was replaced in story papers and dime novels by the detective, so that criminal whites and Mexicans could be (fictionally) apprehended.

Joe Leaphorn is a cross-cultural figure, a participant both in the contemporary crime-fighting world of short-wave radios and pickup trucks and in the contemporary Amerindian world of poverty, alcoholism and white racism. Although his grandfather was a singer of curing rituals, his interest in Navajo folklore is professional and academic: like his fellow officer Jim Chee, he was a student of anthropology. “He may well be a religious man,” Hillerman says of Leaphorn, “but he’s going to see mythology in a more abstract, poetic sense.”[32] Joe’s success can in part be attributed to his ability to understand and balance different cultures. He provides an interesting contrast to those rootless outcasts, in the work of Silko, Momaday and others, caught between the destructive modern values of whites and traditional Native American structures of belief that remain too distant. A Navajo librarian tells Hillerman: “We read Welch and Silko and we say, ‘That’s us, they really understand us, that’s us and it’s beautiful, but it’s so terribly sad.’ Then we read you and we say, ‘Yeah, that’s us too, and we win.’” (Williams, p.80) Hillerman’s narratives too contain characters caught between cultures, sad bewildered individuals, often adolescents, seeking a personal identity, or dehumanized irredeemable figures such as Goldrims in Listening Woman. Disloyal and beyond culture, Goldrims scorns Anglo culture, desecrates Navajo land and exploits the militant Buffalo Society.

Crime in this world is a rupture of harmony, especially when the transgressions of criminals are parodies of authentic ceremonies. Metaphorically, evildoers in Hillerman, whether Navajo or Anglo, are represented as Navajo Wolves: rejecting beauty and order, these “witches”, cousins to the psychopaths of the Miami novel, embrace wickedness, spreading the sickness and darkness of which they are agents by means of sacrilege and murder. Leaphorn acts on his knowledge of Navajo culture and its healing ceremonies to establish clues and discern motives. As the individual Navajo aspires to a state of harmony, so the lawman searches for meaning and tries to restore order. The lesson of the Navajo Way is central, both for life and policing: “Interdependency of nature. Every cause has its effect. Every action its reaction. A reason for everything. In all things, a pattern, and in this pattern, the beauty of harmony. Thus one learned to live with evil, by understanding it, by reading its cause.” (Dance Hall of the Dead, Harper ed., 1973, p.55)

Leaphorn and Chee (who is studying to be a ritual singer) accept the Navajo Origin Myth with its explanation of evil as that which is unnatural. The Navajo holds that evil can be turned against itself and thus be destroyed; Hillerman frequently dramatizes this concept, notably in the violent climax of People of Darkness. There can be only one punishment for the heinous crime of murder, heinous since death is just unrelieved horror. This eschatology differs from that of the Plains Amerindians, the Zuni or the Hopi. In Dance Hall of the Dead the murderer is beyond the reach of Anglo law, but his sacrilege is punished by Zuni tribesmen during the Shalako ritual; he simply disappears.

The two Navajo lawmen need to understand the practices and beliefs of various cultures. Chee, seeking to penetrate Hopi culture in order to pursue his enquiries, secretly visits a Hopi village during an initiation ceremony (The Dark Wind); similarly in Dance Hall of the Dead Leaphorn has to deal with his feeling that Zunis consider themselves superior to Navajos. The honouring of a custom, for instance staying outside a hogan waiting for an invitation to enter, can be practical. Ghosts would lack the patience to wait so would not be around to follow the visitor into the hogan of the host.

Hillerman’s research is in part informal, the benefit of living close to Navajo country, but it also relies on anthropology and on the University of New Mexico’s extraordinary collection of Native American materials and Western Americana. His acknowledgments – from Navajo schoolchildren to members of the US Park Service and Smithsonian curators – are extensive, creating the impression of a collective text. Aware that there is more to Navajo culture than the shamanism and mysticism embraced by New Age people, Hillerman has a certain didactic propensity but one kept under control by the imperatives of storytelling.

9. Lay That Pistol Down, Babe

Although the first feminist detective, Amelia Butterworth, appeared in the last years of the nineteenth century, the roles played by women in hard-boiled fiction have until recently been severely circumscribed. The demands of popular culture ideology have been responsible for the representation of women as attractive and desirable on the one hand, amoral and predatory on the other: the embodiment of male fantasies and fears. A conservative social order exhibits alarm and panic when female activities, such as attempts to acquire wealth and influence, are seen to be motivated by desires not in accordance with patriarchal definitions of the feminine. Such women must be controlled through assault, arrest or killing. Thus the violence of the private eye often has a misogynistic core: “the intense masculinity of the hard-boiled detective is in part a symbolic denial and protective coloration against complex sexual and status anxieties focusing on women.”[33]

As a result of the Women’s Movement of the 1960s and 1970s, the increased visibility of women in public life and the proliferation of books for, by and about female experience, American women writers have absorbed and transformed the genre’s conventions. Through the creation of young, independent, resourceful female detectives – professional and amateur, hetero and homosexual – they have offered alternatives to the aggressive macho prototype who has traditionally patronized women. This has been achieved despite an abiding masculine belief that “woman’s proper sphere” is the home and that detecting is, as the P.D. James novel ironically suggests, “an unsuitable job for a woman”.

The feminist thriller must function on a tightrope: continuing to observe characteristic devices, motifs and mannerisms while expressing a challenge to sexism and masculinist values. The best-known detectives, V.I. Warshawski and Kinsey Millhone, are urban women who drink, talk abrasively and carry a gun. Like their male counterparts they endure acute physical suffering and danger; at the end of C is for Corpse (1986), for example, Kinsey Millhone is “sick as a dog” in hospital, recovering from barbiturates injected by a murderous doctor, while in B is for Burglar (1985) she winds up with bruised cheeks, a swollen mouth and bullet wounds in the arm. Millhone’s creator Sue Grafton also acknowledges the male tradition by setting her novels in Ross Macdonald’s Santa Teresa. Maureen Reddy has shown that A is for Alibi (1982) reverses the plot of The Maltese Falcon, with Kinsey Millhone shooting the killer, a man with whom she has had an affair. The new breed of female detectives express themselves sexually, but the threat sex poses is usually to their independence. Their life-styles attract invective from men since they challenge notions of male superiority and female subordination. It is to preserve such independence that the heroines in the Grafton and Paretsky books work alone; both are orphans, both divorced, and both have cut or lost most formal family ties.

The choice of profession implies the reversal of that passivity imposed on hard-boiled fictional women. Millhone jogs, as does Warshawski, who takes her exercise with the dog Peppy, which saves her life in Toxic Shock (1988). They avoid unnecessary violence, Paretsky having campaigned within the Mystery Writers of America against the representation of sadism in crime fiction. Guns are used in self-defence or to save a friend, and female detectives are more willing to admit physical weakness and vulnerability as well as fear. Paretsky in particular, while refusing to document sexual attacks, conveys the fear experienced by women living and working in the savage American cities of the Eighties. Although there is no single pattern in the fictional behaviour of male and female characters, it is a feminist perspective (whatever their differences) which links V.I. Warshawski, Amanda Cross’s Kate Fansler and lesbian sleuths such as Mary Wings’ Emma Victor. In the Cross books the university stands in for the country home of classic English detective fiction and similarly exhibits the social and sexual attitudes of the surrounding civic landscape. Thus the investigation of a crime encompasses such issues as the significance of gender, women’s work and sexuality and – especially in The Question of Max (1976) – male elitism and sexism.

Some of the Amanda Cross mysteries have been received with caution by women readers who have found the detective’s feminism occasionally flawed and who have realized she has no strong relationships with other women. However, Death in a Tenured Position (1981), it could be argued, far from copping out by having its victimized female professor commit suicide, demonstrates that she is “murdered” by an unresponsive patriarchal society. In that respect it can be seen to share with the lesbian thriller the hypothesis that the ideological basis of crime is located in the very system that claims to enact justice and to protect women.

Like the professional PI, the amateur lesbian detective is cut off from “respectable” society (a situation ameliorated by the support of other women) and adopts a critical stance towards that society. One of the challenges posed by women, and lesbians in particular, is that “of radically altering radical power relations through a moral vision that does not assume the value of hierarchical order.”[34] The literary expression of that challenge questions the assumptions of traditional hard-boiled fiction. Barbara Wilson’s Murder in the Collective (1984) is here relevant in several ways:

1) It rejects the fast-paced action plot of hard-boiled fiction in favour of the de-centring procedures of character analysis and the investigation of relationships.

2) It emphasizes a female independence that also allows for interdependence. Thus consciousness, it is held, can be developed by means of participation in a collective enterprise.

3) By exploring ideologies of gender, race and class it interrogates the role of the detective.

In Wilson’s novel, the murdered man Jeremy is exposed as a CIA informer, a blackmailer of Filipinos in exile who has threatened them with deportation and death at the hands of Marcos’ agents. His killer is Zenaida (Zee), his Filipino ex-wife, whose deed is contextualized with revelations about the sexual exploitation of Filipinos by US armed forces. Pam Nilson “surrenders” her role as detective, impressed by the behaviour of June (Jeremy’s wife) towards Zee: “she’d confronted her like a woman and stayed to comfort her like a friend” (Women’s Press, p.179). Nilson’s decision is supported by other members of the collective (including June), and this defiance of the genre is strengthened by an additional text, an authentic list of books, periodicals and organizations giving further information on Filipinos in America and the position of women in the Third World.

The success and popularity of She Came Too Late (1986) derive from its combination of recognizable hard-boiled fiction motifs and the persuasive depiction of lesbian desire and consciousness. Emma Victor can play the tough detective, hard-drinking, laconic and witty (“Hugo had an expression like a cross between Timothy Leary and Bambi”, Women’s Press, pp.117-8). She finds her investigations taking her to the Glassman mansion, home to the rich and famous, where stained glass leaded panels adorn the back door and porch. The difference between this visit and similar scenes in Chandler is that Emma Victor, in black dress, tights and high heels, is performing “in drag”. Her lesbian perspective provides the ambivalence with which Stacy Weldemeer – a dubious, success-hungry feminist and an Eighties version of the femme fatale – is viewed. Weldemeer is boss of a women’s health clinic, and it is through her that Wings raises the question of restraints upon women in an economy controlled by men. Along with power the book explores work, sexuality, fertility and the squeezing of the unions, and has an agenda of political issues comparable to those in Paretsky’s novels, which were generated by a need to oppose the construction of gender in traditional hard-boiled texts. “I had always had trouble with the way women were treated as either tramps or helpless victims who stand around weeping. I wanted to read about a woman who could solve her own problems.”[35]

Paretsky’s problem-solving detective, with her empty fridge, whisky bottles and Bruno Magli shoes, fights in a series of novels both for justice and for her identity. The various names by which she is addressed illustrate the latter struggle. V.I. stands for Victoria Iphigenia, the Greek reference implying the possibility of death (sacrifice) and rescue (divine intervention). To her friends, such as Lotty Herschel, she is Vic; Lieutenant Bobby Mallory, her dead father’s best friend, patronizes her with the name Vicki; while her family, who share Mallory’s disapproval of V.I.’s profession (“playing police”), irritate her with “Victoria”.

The family is a recurrent theme in the Warshawski novels, V.I.’s ancestry being working-class Polish/Italian and her roots the blue-collar community of Hegewisch in Chicago, to which she returns in Toxic Shock (Blood Shot, US). The older generations (parents and grandparents) are relatively untouched by modern mass culture and cling to the belief system brought over from Europe – and the accompanying prejudices. By local standards, V.I. and Caroline Djiak, the girl she used to baby-sit, now a social worker, should have settled down and had children. V.I. recoils from the demands and cajolery of the remaining members of her original family. Dead (her cousin Boom Boom in Deadlock) or alive (Aunt Elena in Burn Marks), they also draw her into life-threatening scenarios. She has, instead, her own surrogate family in which the principals are Mr Contreras, a sympathetic neighbour, and Lotty Herschel, a doctor and, like V.I.’s mother, a refugee from fascism. The need for the warmth of human contact this group can provide is sometimes emphasized in the denouement of a novel. In Deadlock (1984) V.I.’s nightmares are assuaged by Bledsoe’s love-making, while in Toxic Shock V.I. and Caroline Djiak embrace as sisters after an estrangement. On Mallory’s sixtieth birthday, V.I. gives him her father’s old police badge (Burn Marks, 1990).

V.I.’s beat is Chicago, with its grim, desperate south and west side ghettos and its history of political and institutional corruption. Paretsky recalls a newspaper contest on whether Chicago or New Jersey had more public officials under indictment. The city then is once more a moral landscape, with each book devoted to a specific social injustice variously caused by greed and malice: hospital malpractice, securities scams, toxic waste poisoning, redevelopment fraud. In Bitter Medicine (1987) the personal becomes political when Lotty Herschel’s clinic is attacked by pro-life militants, and the plot brings together blacks, Jews and women to confront the white male medical establishment. The challenge offered to the corporate rich – to big medicine, big business, big shipping, etc. – sustains the populism found in much hard-boiled fiction and also its contradictions. The social criticisms of the Warshawski novels are constrained by the nature of the genre. As Paretsky concedes, V.I. is only capable of doing “some very small things” for individuals.

10. Conclusion

In the hard-boiled genre deconstructive procedures are especially relevant, since the detective’s quest is by analogy the attempted establishment of meaning and the re-ordering of the “real” world. Deconstruction, however, with its proposal that language is a play of signs, challenges conventional ideas of truth, certainty and reality. When all text is discursive, it is claimed, meaning and identity dissolve. Hard-boiled texts, like popular culture itself, remain a site of struggle and negotiation. Edward Said has denied the possibility of a textual universe with no connection to actuality. Such a statement supports an argument that part of the struggle derives from the status of the crime novel as mimesis – that is, as registering a definite sense of the American urban scene, salient examples being Bitter Medicine, Day of Wrath and LaBrava.

Language functions variously in that registration, bonding heroes and villains and providing the verbal component of personal roles; Leonard refers to “street-corner styles” and “wise-guy overtones”. In Higgins the repetition of “fuckin’” can create meaning separate from the relation between signifier and signified, a Derridean “surplus” extending beyond the basic connection. In Ishmael Reed’s intertextual Mumbo Jumbo, the search for ‘Jes Grew’ and its Text encompasses (and criticizes) both black and white forms and conventions. “Both a book about texts and a book of texts, a composite narrative composed of sub-texts, pre-texts, post-texts and narratives-within-narratives”, it demonstrates its intentions in its very title, a vulgarized Anglo version of Swahili: “‘Mumbo jumbo’ is the received and ethnocentric Western designation for the rituals of black religions as well as for all black languages themselves.”[36] The plot – in various senses – of Mumbo Jumbo is couched in the form of a detective novel. Its narrative materials include good and evil organizations, corpses, clues, the exploration of the past, arrests and, of course, a detective, Papa LaBas, who is also an African-American trickster. A series of mysteries create suspense in the plot, but the novel denies the conventional ending of resolution. Jes Grew’s Text (of blackness) disappears, never to be found.

Indeterminacy also characterizes Jerome Charyn’s extraordinary tetralogy The Isaac Quartet (1974-78). In the last novel, Secret Isaac, Isaac Sidel, now “Deputy Commish” of Police for New York, “both perceives the futile madness of the surrounding world, and defies it.” His vain attempt to solve a crime “becomes a profound and doomed search for coherence and meaning.”[37] The search does have a conclusion, the killing of the corrupt ex-Chief Inspector Coote McNeill in Ireland. Isaac does not triumph; he is merely “the great survivor”. His vengeance stems from his guilt at failing to protect Annie Powell, though he is already consumed by the guilt which comes with allowing “his blue-eyed angel” Coen to be killed. Like his predecessors in the works of Chandler and Macdonald, Isaac combines remorse, suffering and failure. He has Marlowe’s belief in redemption, but this extensive narrative of questing fathers and runaway children inevitably recalls Ross Macdonald; in his introduction Charyn cites The Galton Case as the stimulus for Blue Eyes, the first of the series to be published.

Mike Woolf’s ground-breaking article on these books is entitled “Exploding the Genre”. Thus Isaac is not merely a conventional gumshoe but a succession of identities: “at times the grieving father searching for his lost daughter and his dead, surrogate son, [Coen] and, at times, a wounded prophet, a protector-defender of the dispossessed, and a dark angel bringing death.”[38] Some of these descriptions have a mythic resonance, and there is in the quartet an attention to ethnicity and the spiritual that makes a link with Tony Hillerman’s narratives. Charyn however is more radical in style; he writes a poetic prose that defamiliarizes his unstable world of law, policing, desire and violence into sketches of dreamlike, surreal disconnections. It is as though George V. Higgins had been rewritten by Stephen Crane, also in his time a recorder of New York life: “‘Please, Isaac, lemme pop this Jorge once behind the ear. We’ll see what flows out, water, piss or blood.’ The Chief closed Brodsky’s face with a horrible scowl. He wasn’t looking for company. He sat at the back of the car. He could have taken off Brodsky’s lip with the heat spilling from both his eyes. ‘Esther’, he muttered. He was sick of a world of lollipops.”[39]

The Isaac Quartet, which contains references to Saul Bellow, Soldiers’ Pay and Joyce’s Dublin, is enriched by its intertextuality. It is at the same time less abstract and cerebral than Paul Auster’s The New York Trilogy, which challenges traditional elements of crime fiction and parodies the genre of romance, with which hard-boiled novels share the single quest and the restoration of order.

Quinn in City of Glass (1985) is writer, detective and protagonist. In these roles – he is “private eye”, “I” and the observer’s seeing eye in a fiction about who sees what – he endeavours to impose sense and order on his “world”, but the language games and mirror images (Quinn’s pseudonym is William Wilson) frustrate the effort to arrive at truth. Quinn vanishes from the text into the city walls. In Ghosts (1986), where Blue is lured by White to follow Black, language is pared down even further. Blue comes to feel himself merging with Black, and this instability of names extends to the names of places, which exist only provisionally in language, specifically the language of travel narratives. The travel, mental and physical, of Auster’s searchers never “ends” since language is indeterminate, lacking a single meaning.

The temporary or partial nature of resolution is also to be found in conventional crime fiction, where the “reality” is chaos, disruption and resolute criminality, the spectre that shadows the American Dream. Social dislocation, excesses of power and wealth and what Mandel calls the diffused violence of late capitalism provide the conditions for a radical mode of hard-boiled fiction, exploring tensions within bourgeois society. A writer such as Dashiell Hammett exhibits a conscious radicalism, albeit marked by pessimism and irony. Ideologically one of his successors has been the ex-student activist Gordon DeMarco, whose October Heat (1979) described the efforts of the Mob and big business to prevent the socialist author Upton Sinclair becoming governor of California. DeMarco is a rare case, and one of the features of comment on radicalism in the crime novel has been the necessary attention given to European writers: the Italians Fruttero and Lucentini (Night of the Grand Boss, 1979), Leonardo Sciascia, and Sjöwall and Wahlöö, whose analysis of mechanized, bureaucratized policing as a function of decadent capitalism reveals the political limitations of the Ed McBain novels.

Radicalism in American crime fiction is most visible currently in feminist and gay versions of the genre, which have suggested new fictional models of behaviour and sexuality. The durability and flexibility of the genre will continue to be tested as parts of urban America, what Andrew Vachss describes as the cutting edge of Darwinism, become terminal landscapes of razor wire fences, junk-yards and toxic waste dumps. Vachss’ own novels, brutal nightmare visions of garbage and graffiti, violence and drugs in the desolate wastes of New York City, have moved the genre towards the cyberpunk wing of science fiction. The indictment of American society is that the fiction of Vachss, Patricia Cornwell and others (novels of child abuse, pornography and serial killers) is based on authentic cases. Real-life materials will continue to permit a discourse of contradictions and irony: Al Goldstein, America’s best-known pornographer, is about to stand for election as a public official in Fort Lauderdale on a law and order ticket.

11. Guide to Further Reading

Classics

Despite its premature announcement of the demise of the hard-boiled dick, George Grella’s “Murder and the Mean Streets: the Hard-Boiled Detective Novel” remains essential if melancholy reading, not least for the American Studies scholar. Its most accessible location may be Robin Winks’s excellent edition of general essays, Detective Fiction (Englewood Cliffs, N.J.: Prentice-Hall, 1980). Another starting point must be the relevant sections of John G. Cawelti, Adventure, Mystery and Romance: Formula Stories as Art and Popular Culture (Chicago, London: University of Chicago Press, 1976), which explores the psychological and cultural significance of the genre. Geoffrey O’Brien, Hard-Boiled America (New York: Van Nostrand Reinhold Co., 1981) is a book for fans, as its subtitle, “The Lurid Years of Paperbacks”, suggests. More concerned with books than texts, it does fill a gap.

Contemporaries

Recent collections of essays are more specialized than Winks (1980). They include Brian Docherty ed., American Crime Fiction: Studies in the Genre (London: Macmillan, 1988) and Ian A. Bell and Graham Daldry eds., Watching the Detectives: Essays on Crime Fiction (London: Macmillan, 1990). My interest in the period after Chandler, Hammett and Macdonald was first aroused by the surveys in David Geherin, Sons of Sam Spade: The Private Eye Novel in the Seventies (New York: Ungar, 1980) and The American Private Eye: The Image in Fiction (New York: Ungar, 1985). That interest has recently been revived by John Williams’ interviews with contemporary American crime writers in his exhilarating Into the Badlands (London: Paladin, 1991), which offers as a bonus a roller-coaster tour of the USA in the late Eighties. David A. Bowman, “The Cincinnati Kid: an interview with Jonathan Valin”, Armchair Detective, Summer 1987, 20/3, 228-238 is one of those rare interviews which deserve the title “in depth”. Stephen F. Milliken, Chester Himes: A Critical Appraisal (Columbia: University of Missouri Press, 1976) provides an exploration of Coffin Ed and Gravedigger in their Harlem setting. Jim Collins’ emphasis on diversity of language and intertextuality endeavours to appropriate the form for postmodernism in Uncommon Cultures: Popular Culture and Postmodernism (New York, London: Routledge, 1989).

Chandler and Hammett

The fine essays on Hammett by James Naremore (“Dashiell Hammett and the Poetics of Hard-Boiled Detection” in Bernard Benstock, ed., Essays on Detective Fiction, London: Macmillan, 1983, 49-72) and by Steven Marcus (“Dashiell Hammett and the Continental Op”, Partisan Review XLI/3, 1974, 362-77) complement each other. Hammett criticism flourished in the Eighties, although Peter Wolfe, Beams Falling: The Art of Dashiell Hammett (Bowling Green, Ohio: Bowling Green U.P., 1980) is weak on The Maltese Falcon and is generally inferior to Sinda Gregory, Private Investigations: The Novels of Dashiell Hammett (Carbondale, Illinois: Southern Illinois U.P., 1985). The authorized biography, Diane Johnson, Dashiell Hammett: A Life (London: Chatto and Windus, 1984), is by turns sentimental and sensational. Frank MacShane’s painstaking biography, The Life of Raymond Chandler (London: Cape, 1976) and essays by Russell Davies – in Miriam Gross ed., The World of Raymond Chandler (London: Weidenfeld and Nicolson, 1977) – and Fredric Jameson (“On Raymond Chandler”, Southern Review 6, Summer 1970, 624-50) are intelligent and trenchant, but the definitive critical study of Chandler is still to be written. One candidate would be Stephen Knight: see his Form and Ideology in Crime Fiction (London: Macmillan, 1980) and “‘A Hard Cheerfulness’: An Introduction to Raymond Chandler” in Docherty ed., American Crime Fiction, 39-53 (cited under Contemporaries). The Simple Art of Murder (Boston: Houghton Mifflin, 1950) by Chandler himself, and Dorothy Gardiner and Kathrine Sorley Walker eds., Raymond Chandler Speaking (London: Hamish Hamilton, 1962) are essential reading. He remains a fertile source of parody, as Liz Lochhead’s sketch, “Phyllis Marlowe: Only Diamonds Are Forever”, True Confessions and New Clichés (Edinburgh: Polygon Books, 1985) and the comic-book version of The Waste Land (Martin Rowson), in which T.S. Eliot finds himself in Marlowe’s mean streets, testify.

Ross Macdonald

Jerry Speir’s Ross Macdonald (New York: Ungar, 1978) is highly readable. Peter Wolfe, Dreamers Who Dream Their Dreams: The World of Ross Macdonald’s Novels (Bowling Green, Ohio: Bowling Green U.P., 1976) has the scope of a reference book; it is diffuse but often perceptive. Most stimulating of all is Eric Mottram, “Ross Macdonald and the Past of a Formula” in Benstock, Essays on Detective Fiction, 97-118.

Feminists

Collections of essays on feminism and literature frequently contain an entry on contemporary female sleuths: the most impressive and persuasive work can be found in Anne Cranny-Francis, Feminist Fiction: Feminist Uses of Generic Fiction (Cambridge: Polity Press, 1990). An excellent monograph on the subject is Maureen T. Reddy, Sisters in Crime: Feminism and the Crime Novel (New York: Continuum, 1988). One of the best introductions remains Marilyn Stasio’s “Lady Gumshoes: Boiled Less Hard”, New York Times Book Review, 28 Apr. 1985, 1, 39-40.

Radicals

Its Marxist orientation has not prevented Ernest Mandel’s Delightful Murder: A Social History of the Crime Story (London, Sydney: Pluto Press, 1984) from being described as the best introduction of its kind. It is certainly personal and refreshingly demystifying, and also, at times, mistaken. For a similar perspective, Dennis Porter, The Pursuit of Crime: Art and Ideology in Detective Fiction (New Haven, London: Yale U.P., 1981) and Stephen Knight, Form and Ideology in Crime Fiction (cited under Chandler) should be consulted. Porter employs a solid American culture context, while Knight is penetrating on McBain and on the concept of justice.

12. Notes

  1. George Grella, “Murder and the Mean Streets” (1970), in Robin W. Winks, ed., Detective Fiction (Englewood Cliffs, N.J.: Prentice-Hall, 1980). Cf. Geoffrey O’Brien, Hard-Boiled America (New York: Van Nostrand Reinhold, 1981), “By the early Fifties, the private eye had pretty well run his course”, p.119. Back
  2. Raymond Chandler, Farewell, My Lovely (Harmondsworth: Penguin, 1975), p.46. Back
  3. Scott R. Christianson, “Tough Talk and Wisecracks: Language as Power in American Detective Fiction”, Journal of Popular Culture, 23/2 (Fall 1989), pp.151-162. Back
  4. Jim Collins, Uncommon Cultures: Popular Culture and Postmodernism (London, New York: Routledge, 1989), p.67. Back
  5. Sinda Gregory, Private Investigations: The Novels of Dashiell Hammett (Carbondale: Southern Illinois U.P., 1985), p.42. Back
  6. Dashiell Hammett, The Dain Curse in Dashiell Hammett: The Four Great Novels (London: Picador, 1982), p.257. Back
  7. Dashiell Hammett, The Maltese Falcon (New York: Modern Library, 1934), p.ii. Back
  8. James Naremore, “Dashiell Hammett and the Poetics of Hard-Boiled Detection” in Bernard Benstock, ed., Essays on Detective Fiction (London: Macmillan, 1983), pp.63-4. Back
  9. Anne Cranny-Francis, Feminist Fiction: Feminist Uses of Generic Fiction (Cambridge: Polity Press, 1990), p.154. Back
  10. Ernest Mandel, Delightful Murder: A Social History of the Crime Story (London, Sydney: Pluto Press, 1984), p.125. Back
  11. Russell Davies, “Omnes Me Impune Lacessunt” in Miriam Gross, ed., The World of Raymond Chandler (London: Weidenfeld and Nicolson, 1977), pp.41-2. Back
  12. Liahna K. Babener, “Raymond Chandler’s City of Lies” in David Fine, ed., Los Angeles in Fiction: A Collection of Original Essays (Albuquerque: University of New Mexico Press, 1984), p. 126. Back
  13. Peter J. Rabinowitz, “Rats Behind the Wainscoting: Politics, Convention and Chandler’s The Big Sleep“, Texas Studies in Literature and Language, 22/2 (Summer 1980), p.230. Back
  14. Chandler, Farewell, My Lovely, p.211. Back
  15. Raymond Chandler, The Little Sister (Harmondsworth: Penguin, 1973), p.181. Back
  16. John S. Whitley, Detectives and Friends: Dashiell Hammett’s The Glass Key and Raymond Chandler’s The Long Goodbye (Exeter: American Arts Documentation Centre, University of Exeter, 1981), p.23. Back
  17. James Ellroy in John Williams, Into the Badlands: a Journey through the American Dream (London: Paladin/Grafton, 1991), p.90. Vachss’ assessment is on p.229. Back
  18. Ross Macdonald, “The Writer as Detective Hero” in Winks, Detective Fiction, p. 185. Back
  19. Paul Skenazy, “Bringing It All Back Home: Ross Macdonald’s California”, South Dakota Review, 24/1 (Spring 1986), p.89. Back
  20. Richard Lingeman, “The Underground Bye-Bye”, New York Times Book Review, 6 June 1971, p.6. Back
  21. Skenazy, pp.78-80. Back
  22. James Crumley, The Last Good Kiss (London: Granada, 1981), p.171. Back
  23. David Glover, “Higgins, Leonard, Parker, Inc”, OVERhere, 8/1 (Spring 1988), p.88. Back
  24. Robert B. Parker, The Widening Gyre (Wallington: Severn House, 1983), p.153. Back
  25. Williams, p.100. Back
  26. Raymond Nelson, “Domestic Harlem: The Detective Fiction of Chester Himes”, Virginia Quarterly Review, 48/2 (Spring 1972), p.270. Back
  27. A. Robert Lee, “Harlem on my Mind: Fictions of a Black Metropolis” in Graham Clarke, ed., The American City: Literary and Cultural Perspectives (London, New York: Vision Press/St Martins Press, 1988), p.80. Back
  28. Kathleen Karlyn Rowe, “Power in Prime Time: Miami Vice and LA Law”, Jump Cut 33, p.20. Back
  29. K.C. Constantine, Always A Body to Trade (London: Allison & Busby, 1986), pp.197, 188. Back
  30. Glen W. Most, “Elmore Leonard: Splitting Images” in Barbara A. Rader & Howard G. Zettler, eds., The Sleuth and the Scholar: Origins, Evolution and Current Trends in Detective Fiction (Westport, Conn.: Greenwood Press, 1988), p.106. Back
  31. David A. Bowman, “The Cincinnati Kid: An Interview with Jonathan Valin”, Armchair Detective, 20/3 (Summer 1987), p.236. Back
  32. Alex Ward, “Navajo Cops on the Case”, New York Times Magazine, 18 May 1989, p.56. Back
  33. John G. Cawelti, Adventure, Mystery and Romance: Formula Stories as Art & Popular Culture (Chicago: University of Chicago Press, 1976), p.154. Back
  34. Maureen T. Reddy, Sisters in Crime: Feminism and the Crime Novel (New York: Continuum, 1988), pp.130-131. Back
  35. Marilyn Stasio, “Lady Gumshoes: Boiled Less Hard”, New York Times Book Review, 28 April 1985, p.39. Back
  36. Henry Louis Gates, Jr., “The blackness of blackness: a critique of the sign and the Signifying Monkey” in Gates, ed., Black Literature and Literary Theory (London, New York: Methuen, 1984), p.299. Back
  37. Mike Woolf, “Exploding the Genre: The Crime Fiction of Jerome Charyn” in Brian Doherty, ed., American Crime Fiction: Studies in the Genre (London: Macmillan, 1988), pp.138, 137. Back
  38. Ibid., p. 136. Back
  39. Jerome Charyn, Marilyn the Wild (1976) in The Isaac Quartet (London: Zomba Books, 1984), pp.77-78. Back


John Dumbrell, Vietnam: American Involvement at Home and Abroad

BAAS Pamphlet No. 22 (First Published 1992)

ISBN: 0 946488 12 6
  1. Chronology
  2. List Of Acronyms
  3. Introduction
  4. Origins Of The War
    i. From Containment to Dien Bien Phu: 1946-1954
    ii. From Geneva to the 1963 Assassinations
  5. The Tragedy Of Lyndon Johnson
    i. Escalation and Americanisation: 1964-1967
    ii. Tet and After: 1968
  6. Nixon And Kissinger
    i. Nixon’s Strategy
    ii. Widening the War: 1969-1972
    iii. The Peace Process: 1972-1975
  7. Interpreting The War
    i. Explaining American Involvement: Quagmires and Turning Points
    ii. Explaining American Failure: The Debate over Military Strategy
    iii. American Public Opinion and the Antiwar Movement
  8. Conclusion
  9. Guide To Further Reading
  10. Notes
British Association for American Studies All rights reserved. No part of this pamphlet may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without permission in writing from the publisher, except by a reviewer who may quote brief passages in a review. The publication of a pamphlet by the British Association for American Studies does not necessarily imply the Association’s official approbation of the opinions expressed therein.

1. Outline Chronology

1945-1953: Presidency of Harry Truman

1945
August: Viet Minh comes to power after Japanese surrender
1946
November/December: War begins between French and Viet Minh
1950
May: Truman Administration now clearly backing the French war effort

1953-1961: Presidency of Dwight Eisenhower

1954
7 May: French defeated at Dien Bien Phu
8 May-21 July: Geneva Conference: Vietnam divided
June: Diem forms government in South
1956
April: Final French withdrawal
1960
20 December: Establishment of National Liberation Front (NLF)

1961-1963: Presidency of John Kennedy

1961
July: Neutralisation of Laos
19-25 October: Taylor-Rostow mission
1963
November: Military coup against Diem

1963-1969: Presidency of Lyndon Johnson

1964
7 August: Gulf of Tonkin resolution
1965
24 February: Rolling Thunder bombing begins
8 March: 3,500 US Marines land at Danang
1967
April and October: Mass antiwar rallies in US
1968
30-31 January: Tet offensive begins
26 March: “Wise Men” meeting
31 March: President Johnson announces bombing limit and withdraws from Presidential race

1969-1974: Presidency of Richard Nixon

1969
Troop withdrawals
18 March: Secret bombing of Cambodia begins
4 August: Kissinger begins secret meetings with North Vietnamese leaders
October-November: Mass antiwar protest in US
1970
March: Thieu’s “land-to-the-tiller” reforms
30 April: Cambodian invasion
4 May: Kent State killings
1971
8 February: South Vietnamese army (ARVN) invade Laos
1972
30 March: Beginning of the North Vietnamese army’s Easter offensive
18 December: Christmas bombing of Hanoi and Haiphong begins
1973
27 January: Peace agreement signed

1974-1977: Presidency of Gerald Ford

1975
30 April: Fall of Saigon


2. List of Acronyms

AID Agency for International Development
ARVN Army of the Republic of Vietnam (South Vietnamese army)
CIA Central Intelligence Agency
CORDS Civil Operations and Revolutionary Development Support
MAAG (American) Military Assistance Advisory Group
MACV Military Assistance Command, Vietnam (successor to MAAG)
NLF National Liberation Front (South Vietnamese communist organisation)
NVA North Vietnamese Army
PLAF People’s Liberation Armed Forces (NLF military wing)
PRG Provisional Revolutionary Government (post-1968 name for NLF)
SEATO South East Asia Treaty Organisation


3. Introduction

The Vietnam war divided American society at every level. Received truths about the conduct of American foreign policy, about Presidential power, about America’s role in the world, about the entire national purpose, were all called into question. An identifiable “Vietnam generation”—former student radicals, yes, but also military and business leaders, academics and creative artists, representatives of “middle” and of “fringe” America, even Presidential candidates—emerged to shape and question American cultural and political values. The Presidencies of Jimmy Carter, Ronald Reagan and George Bush offered different responses to the memories of Vietnam. The need to recognise Vietnam’s “lessons”, or alternatively to exorcise the “Vietnam syndrome”, hovered above Carter’s foreign policy, Reagan’s reassertion of American power in the early 1980s, and George Bush’s handling of the 1990-91 crisis and war in the Persian Gulf.

The passions and divisions generated by the war have still to collapse into purely academic and scholarly debate. Much of the war’s history has, in fact, been written not by academic historians, but by journalists, political memoirists and polemicists of various kinds. Some basic questions have still to attain any consensual resolution. How and why did the United States become so deeply immersed in so unlikely and remote a place as Vietnam? Why did America lose? (Did America lose?) To what extent were particular patterns and structures of decision-making to blame? Was the war lost by civilian or by military leaders? What would have constituted a winning military strategy? Was the war lost through domestic dissent? Was American involvement morally justified? Perhaps, above all, there remains the question as to what kind of a war it really was. At one level, this is a military question; the debate among rival military theorists has been particularly intense. Was it essentially a conventional war or was it, after all, a guerrilla-based insurgency? Was it a civil war? In political and moral terms, was it in any sense a war for freedom? A war against international communism? Against “the people”? Against North Vietnam? Against North and South Vietnam?

We shall return to the central questions in Sections 5 and 6, after surveying the course of US-Vietnamese relations between 1946 and 1975. Before embarking upon this survey, however, it is essential, by way of introduction, to draw attention to two major features which shaped the backdrop to the war: the legacy of French colonialism and the development of the American doctrine of containment.

Vietnam did not succumb to European imperialism until the mid-19th century. French colonialism involved a familiar alliance between missionary zeal and politico-economic aggrandisement. Saigon emerged as a cosmopolitan financial centre and hub of rice exports to Japan and China. The French sought to exploit the ethnic, dynastic, religious and geographical divisions within the country. Vietnam was divided into three separate administrative units: Tonkin in the North, Annam in the centre and Cochinchina in the South. Colonial rule rested upon coercion and on the immiseration of the peasantry through land seizures and punitive taxation. Shortly before his death in 1945 President Franklin Roosevelt remarked to former Secretary of State Cordell Hull:

France has had the country for nearly one hundred years, and the people are worse off than they were in the beginning. France has milked it for one hundred years. The people of Indo-China deserve something better …[1]

The French colons were backed by elements from the Vietnamese mandarin class and by the (often Chinese) commercial sector. Yet Vietnamese nationalism grew: initially under the tutelage of scholar-leaders; but—against a background of a disaffected peasantry and social dislocations caused by urbanisation—increasingly associated with the Indochina Communist Party, founded in 1930.

As Roosevelt’s comment showed, there was considerable American sympathy in the 1930s and early 1940s for the anti-imperialist cause. The nationalist and communist opposition was allied with the US in the struggle against Japanese forces which, in concert with French collaborationists, controlled the area between 1940 and 1945. (In 1945, the “August revolution” brought Vietnamese communists temporarily to power). The American Office of Strategic Services, predecessor to the CIA, worked closely with anti-Japanese communist guerrillas in this period. FDR’s scheme for a postwar Indochina centred on the idea of “trusteeship”, a phased transfer of power to anticolonial forces. However, the unpredictability of China’s future cast a shadow over this idea and, under his successor, President Harry Truman, it was abandoned. The US was not directly involved in the re-establishment of French rule in 1945-46, although it acquiesced in it. The return of French power was facilitated in the South by the British, who were concerned about their own postwar imperial future. (British troops occupied Saigon and environs between September 1945 and March 1946). In the North the French comeback represented part of a bargain made with China.[2]

In a speech to Congress on March 12, 1947, in what became known as the Truman Doctrine, the President announced that nations were now faced with a clear choice between communism, a system “based upon the will of a minority forcibly imposed upon the majority”, and freedom.[3] Containment of communism, a doctrine originally conceived by diplomat George Kennan as a policy for Europe, was to become America’s global priority. Asian communism, despite its infinite complexity and interpenetration with anticolonialism, was seen as merely the Eastern flank of expansionary world communism. To many Republicans the “fall” of China to communism in 1949 suggested that traitors were at work in Washington. For virtually all US policy-makers the “fall” reinforced arguments for global containment. In April 1950 National Security Council Resolution 68 was forwarded to Truman. It outlined a worldwide policy of militarised containment, “a policy of calculated and gradual coercion”. In the same month the Joint Chiefs of Staff declared Southeast Asia to be “a vital segment in the line of containment of communism”.[4] Two months later, the North Korean invasion of South Korea seemed to confirm all America’s worst fears. Small countries, even those with little obvious significance for US interests, must not be allowed to “fall”. Once the row of dominoes—small countries and large—began crashing, then liberty, even American liberty, was in danger of extinction.

Detailed exposition of the developing Cold War and of containment theory is beyond the scope of this pamphlet. Suffice it to say that, between 1946 and the early 1960s, the US became in large degree a “national security state” rooted in the doctrine of anti-communist containment. The doctrine permeated public and private belief systems. Thus, a West Virginia carpenter, interviewed in 1963 as part of the first Harris survey of opinion on Vietnam, declared: “If we don’t stand up for people oppressed by Communism, we’ll soon be oppressed ourselves”.[5]

4. Origins of the War

From Containment to Dien Bien Phu: 1946-1954
Between 1946 and
1954, the containment of communism became the fundamental stated purpose of US foreign policy. Against this background, the United States slowly became implicated in the war being fought in Indochina between the French colonial power and Viet Minh nationalists led by Ho Chi Minh. A formal declaration of Vietnamese independence, consciously modelled on Thomas Jefferson’s Declaration of 1776, was promulgated by Ho on September 2, 1945. In leading the war against France, Ho stood in a long line of Vietnamese nationalists, promising to rid Vietnam of foreign interlopers and to institute the “mandate of heaven”. Ho, however, was also a communist and the Viet Minh a communist organisation. As the Pentagon Papers, the US Department of Defence study of Vietnam policy commissioned in 1967, were to note, the Viet Minh had been formed and led by the Communist Party in Indochina. This fact, however, did not prevent Ho and his organisation from being regarded in Indochina as the authentic voice of Vietnamese nationalism. The Pentagon Papers concluded that the “Viet Minh was irrefutably nationalist, popular and patriotic”.[6]

By the early 1950s, the United States was solidly backing the French cause. In May 1950, President Truman undertook to provide 750,000 dollars in economic aid and ten million dollars in military aid. A year later, the US was underwriting approximately 40 per cent of French war costs; by 1954, the figure was nearer 80.

As noted above, the period before 1950 had seen some indications of an alternative policy. In the late 1940s, American diplomats in Vietnam, along with the State Department’s Far Eastern bureau, agreed that Soviet links to Ho Chi Minh’s nationalist movement could not be detected. Ho himself in the mid-1940s felt that the US was independent Vietnam’s best hope as a backer. Ho repeatedly requested American help, even promising that an independent Indochina could be a “fertile field for American capital and enterprise”.[7] By 1950, however, the American die had been cast in favour of French colonialism. The desire of the Truman Administration (and especially of Secretary of State Dean Acheson) to be seen in the face of domestic criticism to be applying the doctrine of world communist containment to Asia clearly underpinned these decisions. The Chinese revolution of 1949 was rapidly followed by Chinese and Soviet recognition of Ho’s regime. These events hardened American attitudes. As war broke out in Korea in June 1950, Vietnam was already being incorporated as part of the Asian sector of the Cold War battlefield.

The United States was also anxious to augment French governmental authority for reasons not directly connected with events in Asia. The power of the French Communist Party (of which Ho Chi Minh, incidentally, was a founder-member) was a factor here, as was the need for French help in establishing the North Atlantic Treaty Organisation. French and American interests in Vietnam, however, by no means coincided. France was fighting to retain and re-establish its empire; any concessions it made, such as the 1948 installation of the Annamese Bao Dai as head of the “State of Vietnam”, were simply cosmetic changes designed, at least in part, to make it easier for Americans to back the war.

US policy-makers continued to favour an independent, non-communist Vietnam. Rather than to a restoration of French imperialism, they looked to an economically resurgent Japan becoming, in Acheson’s phrase, the “workshop of Asia”. The likelihood of Japan emerging as the fulcrum of a new liberal capitalist regional order was seen to depend upon access to Indochinese raw materials and markets. The immediate concern for US policy-makers was the military containment of communism. A State Department working group reported in February 1950:

… failure of the French Bao Dai “experiment” would mean the communization of Indochina. It is Bao Dai (or a similar anti-communist successor); there is no alternative.[8]

President Eisenhower and his Secretary of State John Foster Dulles sought to square the circle of anti-colonialist modernisation and anti-communism in their “liberation” doctrine, which had anti-imperialist as well as anti-communist overtones. In the short term, however, Ike and Dulles were prepared to fund the French effort, with no guarantee that such backing would entitle the US to dictate the war’s terms. Soon France’s inability to hold up the falling Vietnamese domino became apparent. The siege and impending French defeat at Dien Bien Phu in early 1954 triggered a major debate within America’s political leadership over the nature of the Indochinese commitment. Eisenhower himself agreed with General Matthew Ridgway that an open-ended US ground force commitment, especially so soon after the achievement of peace in Korea, was unacceptable. The President further dismissed the possibility of once again using nuclear weapons in Asia, and argued that unilateral American military intervention would “lay us open to the charge of imperialism or colonialism …”.[9]

The Dien Bien Phu hawks were led by Admiral Arthur Radford and Vice-President Richard Nixon, who subsequently was consistently to hold that American inaction in 1954 set the stage for all future difficulties. Congressional and allied doubts over military action also affected the eventual decision. No doubt Dulles would have liked the British to have, in Anthony Short’s phrase, signed “his shotgun permit”.[10] However, there was no concerted American enthusiasm in 1954 for rescuing the French from an embarrassing imperialist nemesis. Vietnam was to become the object of American nation-building.

From Geneva to the 1963 Assassinations
With America providing
neither air nor ground support, the French garrison surrendered on May 7, 1954. French withdrawal, and the creation of a political vacuum which the US was likely to occupy, appeared imminent. At an international conference in Geneva (April-July 1954), eventual agreement was reached on the partitioning, on a provisional basis, of Vietnam along the Seventeenth Parallel. Viet Minh and French forces were to regroup North and South of the line respectively, observing an indefinite ceasefire. The Final Declaration at Geneva (to which only France, Britain, the Soviets and China gave unequivocal support) required that free elections be held, North and South, by July 1956; and that neither sector was to join any military alliance nor receive any significant outside military support. To the Viet Minh the settlement was a disappointment. It effectively involved a retreat from areas in the South which had long been under communist/nationalist control. They were pressured into cooperating with the settlement by the Soviets and China, who wished to avoid provoking the United States, and contented themselves with the prospect of victory in the elections. The initial American reaction was to oppose the “loss” of North Vietnam. Dulles and Radford concocted another plan of intervention (July 1954) only to have it vetoed by Ike and Ridgway. Official US policy emerged in a promise not to “disturb” the settlement by threat or the use of force. Slowly, the Americans turned to the consolidation of anti-communism in the South: notably around the figure of Ngo Dinh Diem.

An autocratic, anti-French Catholic nationalist, Diem was asked by Bao Dai to form a government in June 1954. (He deposed Bao Dai the following year and proclaimed the “Republic of Vietnam”). Progressively, the Eisenhower Administration—most famously through the offices of Colonel Edward Lansdale, working through the CIA station in Saigon—insinuated American influence in the interstices of the Diem regime. Consolidation of Diem’s authority, notably against non-communist sectarian opposition, was complemented by the drawing-in of South Vietnam into SEATO, a new American-dominated military alliance. This, along with the US military presence (coordinated, from April 1956, through the Military Assistance Advisory Group (MAAG), set up to train Diem’s army) demolished any remaining American commitment to the Geneva agreements. Protesting that free elections would be impossible in the North anyway, Diem and his American sponsors abandoned any prospect of free elections in the South as well. These developments, along with increasing repression of former Viet Minh sympathisers in the South, provoked a communist response. Northern communist leaders were preoccupied with consolidating their rule above the Seventeenth Parallel; however, they also began to accede to requests from Southern party members and supporters—never an organisationally separate body—to re-prioritise the “Southern revolution”. In September 1960, the Hanoi leadership endorsed plans to set up the National Liberation Front (NLF) in the South to lead the “people’s democratic national revolution” there. (In February 1961, various armed units were combined into the People’s Liberation Armed Force (PLAF), the NLF’s military wing).

When John F. Kennedy became President in 1961, he inherited an American commitment in Vietnam which was institutionalised and even, to a degree, militarised. The Eisenhower Administration had injected about one billion dollars into Indochina—most of it into Vietnam—since 1955. Kennedy inherited the CIA and MAAG presences in Vietnam, with between 800 and 900 American “advisers”. Alternative policies had been abandoned: not only the embryonic anti-colonial, pro-Ho policy from the 1940s, but also the view advanced by Eisenhower era doubters like General J. Lawton Collins and Secretary of Defence Charles Wilson, who either distrusted Diem or feared the implications of an open-ended American commitment.

To the Vietnam problem, summarised for the new President in a pessimistic report from Colonel Lansdale, Kennedy brought considerable personal knowledge. JFK had visited Vietnam in 1951 and had concluded that America had allied itself “to the desperate efforts of the French regime to hang on to the remnants of an empire”. During the Dien Bien Phu debate in Congress, Kennedy had stated:

I am frankly of the belief that no amount of American military assistance in Indo-China can conquer … “an enemy of the people” which has the sympathy and covert support of the people.[11]

Kennedy believed that Vietnamese hearts and minds could be deflected from communism and towards the viable nationalist alternative offered by Diem. Central to this project were the doctrines of counter-insurgency and pacification. Communist-inspired “wars of national liberation” were to be met and defused by American nation-builders. During Kennedy’s Presidency, a version of this was attempted, with some success, by the elite US Green Beret forces among the montagnards of the Vietnamese Central Highlands. In the long run, however, hearts and minds could not be won without thoroughgoing, democratic political reform and economic (primarily land) reform. And this Diem would not countenance, dismissing all reform as a sign of weakness. As the war deepened, the activist, technocratic attitudes of the Kennedy Administration appeared increasingly lame. Adam Yarmolinsky, Special Assistant to Robert McNamara (Kennedy’s technocratic Secretary of Defence) later characterised them as follows:

…. all we were going to have to do was send one of our Green Berets out into the woods to do battle with one of their crack guerrilla fighters and they would have a clean fight, and the best man would win and they would get together and start curing all the villagers of smallpox.[12]

For all this adrenalin-inducing New Frontierism, JFK’s initial instincts tended to be cautious. He sought to do enough to keep Diem afloat, while maintaining pressure on the regime to reform. Diem, for his part, was anxious to protect his independence, telling Vice-President Lyndon B. Johnson in Spring 1961 that he did not want US ground troops. Better than his American backers, Diem knew the domestic danger of being seen as the agent of a new colonialism. Returning from Saigon in May 1961, Johnson told Kennedy that, if Indochina were not protected from communism, the US might as well pull its defences back to San Francisco. In October national security adviser Walt Rostow and General Maxwell Taylor reported that Diem would sink in the face of communist opposition unless US assistance were dramatically increased.[13]

Although not fully in line with the Taylor-Rostow recommendations, Kennedy’s late 1961 decision to escalate was pivotal. By 1963 over 16,000 US troops—now organised under the auspices of Military Assistance Command, Vietnam (MACV) and headed by General Paul Harkins—were in Vietnam. By November of that year there appear to have been at least 63 American combat deaths. Three of these occurred during US helicopter involvement in the January 1963 battle of Ap Bac; PLAF guerrillas (now commonly known as “Viet Cong” or, in the case of Harkins and his MACV staff, “raggedy-ass little bastards”) defeated a far larger and better equipped force from Diem’s army.[14]

Before Ap Bac, it was the complex situation in Laos—a three-way struggle between the communist Pathet Lao, CIA-backed rightists and neutralists—that commanded Kennedy’s attention. His favoured solution, “neutralisation” of Laos, formed the basis of American policy at a further Geneva conference in 1962. In General Maxwell Taylor’s words, there emerged a “tacit understanding” that the fate of Laos, the backdoor to South Vietnam, would be “resolved in Vietnam”.[15] The result of the “tacit understanding” was effectively a partitioning of Laos. Throughout the war, the North Vietnamese, despite direct and indirect American incursions, were able to keep the Laotian backdoor open. From a primitive route the Ho Chi Minh trail was developed into year-round access to South Vietnam. By the early 1970s, tanks were moving along what American soldiers nicknamed the “Averell Harriman Freeway” (after JFK’s chief negotiator at Geneva in 1962). Kennedy’s policy in Laos also had a more immediate effect on Vietnam. As Paul Nitze (Assistant Secretary of Defence under Kennedy) recalled:

After our shift in Laos, the executive branch decided that if we then appeared to give up in Vietnam, the Kremlin was likely to doubt that we intended to stand firm anywhere.[16]

Meanwhile, the relationship between Diem and the Americans began to evolve like a Jacobean tragedy. At times, and not for the last time in the war, the South Vietnamese puppet appeared to be controlling the American puppeteer. The litany of Diem’s excesses became familiar to sections of the American public now becoming sensitised to events in Vietnam: corruption and nepotism; the flamboyant posturing of his brother Ngo Dinh Nhu and Madam Nhu; the Strategic Hamlet Programme, wherein peasants were forcibly moved from their lands to “protect” them from communists; the intense and unpredictable repression, especially of Buddhists; and the 1963 Buddhist self-immolations (Madam Nhu’s “monk barbecue”). When Henry Cabot Lodge replaced the pro-Diem Frederick Nolting as US Ambassador in Saigon in August 1963, he quickly made his view known that Diem was the problem. What followed was, in the words of William Bundy (another assistant to McNamara in this period), a confused but basic American decision:

…. in effect to recognise, and to some extent help create, a situation where a military coup was entirely possible, and then to acquiesce in it.[17]

Leading American policy-makers had come to the view that Diem’s “penchant for self-destruction”[18] should be allowed to run its course. On the first day of the month that was to witness JFK’s own assassination, Diem and his brother were murdered in a military coup.

Much speculation centres on what would have happened had Kennedy lived. He inaugurated a high-level review of Vietnam policy shortly before he died. However, as William Bundy noted, “there really was no significant discussion of the option of withdrawal”.[19] Kennedy was capable of acute insight into the nature of the American dilemma in Vietnam. He was also capable of transgressing his own analysis and of basing policy on a facile optimism.

5. The Tragedy of Lyndon Johnson

Escalation and Americanisation: 1964-1967
The ousting of
Diem simply exchanged compromised nationalism for political instability and uncertainty; military and other cliques vied for control. Alarmed by CIA reports, President Lyndon Johnson set his face against Vietnamese “neutralisation”, which he saw as simply “another name for a Communist take-over”.[20] Soon after he assumed office, LBJ was told by McNamara that “the situation has … been deteriorating in the countryside … to a far greater degree than we realised …”.[21] Though not publicly criticised, the blandly optimistic Harkins was made a bureaucratic scapegoat and was replaced at MACV in June 1964 by William Westmoreland. By the end of 1964, the number of American military personnel in Vietnam had risen to 23,300.

Escalation in Vietnam contrasted sharply with Johnson’s portrayal of himself in the 1964 Presidential election campaign against Republican Barry Goldwater as the peace (though as he reasonably pointed out in his memoirs, not at “any price”) candidate. Speaking in Ohio in October 1964, LBJ famously promised: “… we are not about to send American boys … to do what Asian boys ought to be doing for themselves”.[22] By this time Johnson had acquired what he was to interpret as a Congressional carte blanche for expanding the war: the Gulf of Tonkin resolution (August 1964). The resolution represented a kind of declaration of war manqué. Opposed in the Senate only by Wayne Morse (Oregon) and Ernest Gruening (Alaska), the resolution was a response to reports of North Vietnamese torpedo attacks on American warships. The precise degree of Administration deception regarding the Gulf of Tonkin incidents remains in doubt; it certainly seems likely, for example, that the attack reported as having occurred on August 4 never actually took place. Johnson and McNamara succeeded in eliciting from Congress (with the strong support of the still-loyal Senate Foreign Relations Committee chairman, William Fulbright) permission to take “all necessary measures”. In retaliating against North Vietnamese shipping and bases, LBJ set the war’s pattern, declaring: “… our response … will be limited and fitting”.[23]

Political instability in Saigon and apparent enemy breakthroughs set the seal on the crucial 1965 escalation decisions. An attack on the US base at Pleiku in February 1965 occasioned the launching of what was to become Rolling Thunder, the air bombardment of North Vietnam which continued intermittently until 1968. As conditions in the South deteriorated, Westmoreland, in March 1965, concluded that it was time “to put our own finger in the dyke”. On March 8, Danang witnessed the first open commitment of US combat forces on the mainland of Asia since the Korean war. Almost immediately, Westmoreland asked Maxwell Taylor to support a request for more marines to protect radio relay units and airstrips at Phu Bai.

Concerned to keep commitments limited and to preserve the Great Society domestic reform programme at home, LBJ refused to accede fully to Westmoreland’s and the Joint Chiefs’ troop requests. In a speech at Johns Hopkins University in April 1965, he announced an economic investment programme, a kind of Great Society for Indochina. Troop levels were raised: to 125,000 in July and to 160,000 by the end of 1965. McNamara informed the President on July 28:

Our objectives … are quite limited in Vietnam. They are, first, to permit the South Vietnamese to be independent and make a free choice of government and second, to establish for the world that Communist externally inspired and supported wars of liberation will not work.[24]

Undoubtedly, commitments were limited. Yet insufficient attention was paid to ascertaining exactly how limited a commitment could secure McNamara’s objectives. Significant casualty levels, moreover, were now inevitable. US personnel previously restricted to patrolling were now to engage in “search and destroy” missions against enemy main forces. The first major confrontation occurred in the Central Highlands (Ia Drang valley) in November 1965. The conflict followed a soon-to-be-familiar pattern: the PLAF took heavy losses but finally retreated into Cambodia to regroup.[25]

Westmoreland’s war of attrition was no more efficacious than that waged by his predecessor, Harkins. It had simply become more thoroughly Americanised. Commenting on the 233 American deaths in four days of fighting at Ia Drang, US Marine Corps general Victor Krulak noted that Vo Nguyen Giap (the North Vietnamese military commander):

… was sure that if the cost in casualties and francs was high enough, the French would defeat themselves in Paris. He was right. It is likely that he feels the same about the U.S.A.[26]

By the end of 1967, the US was sustaining significant losses without any outstanding compensatory gains. The antiwar movement was erupting on American college campuses, whose male inhabitants now faced the prospect of being drafted; by mid-1967, some 30,000 draft calls per month were being issued. In South Vietnam, two new leaders had emerged in the persons of Nguyen Cao Ky and Nguyen Van Thieu; by late 1967 their hold on power was being guaranteed by almost half a million US troops. President Johnson found himself wedged between military demands for all-out war, involving mobilising the reserves and invading Cambodia and Laos, and increasingly doveish noises emanating from Pentagon civilians. Characteristically, LBJ continued to oppose total war while sanctioning incremental troop increases. His San Antonio speech of September 1967 indicated that some kind of mutual de-escalation, and even communist participation in a postwar South Vietnamese government, might be acceptable. The current military and political situation was not improving, however. Elections were held in September 1967, but failed to establish wide legitimacy for the Ky-Thieu regime. The annual cost of the American commitment was now approaching 30 billion dollars. In October 1967, McNamara warned LBJ that, if current strategy were maintained, the President would have to face fatality levels of between 24,000 and 30,000 by early 1969. The next month, McNamara moved quietly to the Presidency of the World Bank. The divisions in the Pentagon became apparent on 21 November when Westmoreland told the National Press Club that the end of the war was in sight. In fact, Westmoreland’s famous “crossover point”—where communists were being killed faster than they could be replaced—appeared as far away as ever. Moreover, communist forces simply refused to play along with Westmoreland’s big unit war.
American use of defoliants and napalm, and the designation of “free fire zones”, further alienated the rural population, destroying villages and leaving much of the peasantry homeless.

Tet and After: 1968
Between 1964 and 1968, Johnson undertook over 70 peace initiatives, involving 16 bombing pauses. Hanoi never indicated, however, that it would settle for anything less than domination of the entire country. As the 1968 US Presidential election season opened, world attention focused on the remote outpost of Khe Sanh, near the Seventeenth Parallel. Here, US Marines (“like antichrists at vespers”, in Michael Herr’s phrase [27]) were defending themselves under siege conditions, attempting to prevent a replay of Dien Bien Phu. The siege, which lasted until April, now appears to have been a successful diversion, drawing American attention away from the cities. During the late January New Year (Tet) holiday, communist forces launched a coordinated attack on these urban areas. (At the time, Westmoreland interpreted the offensive as an attempt to divert resources from Khe Sanh).

Just as perceptions of the Khe Sanh siege have changed, so it is now clear that at least one of Westmoreland’s military judgements of 1968 was correct: the Tet offensive was a military defeat for North Vietnam. The offensive seems to have been launched with a variety of motives. Hanoi wished to restore declining morale and to demonstrate to the US that the war was unwinnable. It also looked to the precipitation of a general uprising—a “people’s war”—in the South. This manifestly did not occur. Indeed, with approximately one million South Vietnamese made homeless by the offensive, there was even some evidence of a rally towards Thieu and the South Vietnamese government. Westmoreland, claiming that North Vietnam had now cashed all its “military chips”, proposed a strategy to end the war: a troop increase (including reserve mobilisation) of 206,000, an attack on communist sanctuaries in Laos and Cambodia, and an amphibious assault on bases in North Vietnam, all accompanied by intensified bombing.

The view from home, however, was very different. Television coverage of the war stunned an American public which had so recently been assured that all was proceeding according to plan. The communists, after all, were able to assault even the US embassy in Saigon and to hold Hue until February 24. Massive US firepower had to be deployed to recover the cities (the offensive involved some 36 provincial capitals and 64 district towns). The battle for Ben Tre provoked the most notorious of all American military pronouncements: “We had to destroy the town to save it”.[28]

Lyndon Johnson was being buffeted on all sides: by antiwar protesters, by journalists, by blacks in inner city riots, by the Eugene McCarthy and Robert Kennedy candidacies, even by his Treasury Secretary Joe Fowler, who warned him about the war’s impact on confidence in the dollar. Westmoreland’s strategic proposals were handed over for appraisal by Clark Clifford, the new Defence Secretary. Instructed to give LBJ the “lesser of evils”, Clifford and other Pentagon civilians embarked on a full-scale review of Vietnam policy. Clifford was deeply disturbed by domestic strife, elite doubts about the war, and also by the apparently poor economic prospects occasioned by the conflict.[29] His report amplified McNamara’s 1967 doubts and urged delegation of military responsibilities to the South Vietnamese themselves, as well as a move away from Westmoreland’s war of attrition. The US should direct its efforts at achieving security for the people in South Vietnam and at urging political and land reform. Total victory, conventionally defined, was seen as an impractical goal. With Eugene McCarthy gaining a 42 per cent vote in the New Hampshire primary on 12 March, there ensued a battle for the soul of Lyndon Johnson. On the one side were Westmoreland, the Joint Chiefs of Staff, Ambassador Ellsworth Bunker, Walt Rostow, and his old associate Abe Fortas. According to the latter, “unless we ‘win’ in Vietnam, our total national personality will … change”. America would succumb to “self-doubt and timidity”.[30] On the other side stood Clifford and White House aide Harry McPherson, who by now were, in effect, urging LBJ to disengage. Even now, however, Johnson was sticking to his middle ground. By 22 March he had decided against Westmoreland’s strategy on the grounds that the situation in South Vietnam was stabilising and the “middle way” was still viable. On 26 March, however, there occurred the crucial meeting of Johnson’s irregular elite advisory group, the “Wise Men”.
Dean Acheson told the President that “we can no longer do the job we set out to do in the time we have left and we must begin to disengage”.[31]

From Johnson’s vantage point, he had been undercut by “establishment bastards”.[32] His speech of 31 March gave recognition to what LBJ now saw as inescapable political reality. Johnson announced that bombing would be strictly limited to an area north of the demilitarised zone. Peace overtures would be begun and Johnson himself would withdraw from the Democratic nomination race. The “bastards” may have got him, but still Johnson was not prepared to abandon his “middle way”. The peace process initiated in Paris indicated that LBJ had departed very little from his original agenda. He envisaged a peace settlement without significant prior American troop withdrawals and with a post-settlement allied force left to defend South Vietnam.

General Creighton Abrams, who succeeded Westmoreland in mid-1968, carried on the war of attrition and the “search and destroy” missions. Not only was this strategy now effectively discredited, but it was increasingly exploited by the enemy to draw American units into prepared killing grounds.

6. Nixon and Kissinger

Nixon’s Strategy
The self-laceration of the Democratic party continued into the tumultuous Chicago convention and culminated in Hubert Humphrey’s November 1968 defeat at the hands of Richard Nixon.

In Vietnam, President Nixon and his national security adviser Henry Kissinger sought “peace with honour”, a “reasonable chance” for South Vietnam’s survival—generally thought to require the exclusion of communists from participation in government—or at least a “decent interval” (between US disengagement and communist takeover of the South). The new leadership saw itself as realist in every sense of that word; unencumbered by moralism, it would attune US policy to US interests and secure a new balance of power. The United States should not be seen to have abandoned an ally in the face of communist expansionism. Defined in these terms, Vietnam policy came to shape the whole of the Nixon-Kissinger “grand design” in foreign policy. Detente with the Soviets and with China was, at one level, designed to bring great-power pressure on North Vietnam to come to terms. The Nixon Doctrine, promulgated at Guam in July 1969, attempted to spell out to Asian countries that the United States was no longer prepared to take on the role of military defender. The Guam remarks clarified and extended Nixon’s answer to the Vietnam puzzle: the policy of Vietnamisation. Described by Ambassador Bunker as an attempt to change the colour of the corpses, Vietnamisation involved the handing over of defence responsibilities to a beefed-up South Vietnamese army (the ARVN). By the end of Nixon’s first term, a series of phased withdrawals had seen US ground troop totals decline from a peak of 543,000 (April 1969) to 25,000. Draft calls were reduced steadily from mid-1969.

Alongside Vietnamisation, Nixon committed himself, characteristically, to not one but two peace processes. Protracted meetings between the combatants occurred not only in the official Paris talks, but also in parallel clandestine talks. Nixon and Kissinger sought to influence and orchestrate such negotiations by, on the one hand, controlled and decisive displays of intense force and, on the other, threatening to unleash uncontrolled force: “mad bomber” Nixon. Disinterring the “old” Nixon, it was hoped, might help Hanoi see reason. The President instructed Kissinger in the early part of 1972 to:

… tell these sons of bitches that the President is a madman and you don’t know how to deal with him. Once re-elected I’ll be a mad bomber.[33]

In retrospect, the Nixon-Kissinger strategy on Vietnam (“walk [sic] softly and carry a big stick”[34]—as Nixon described it at the 1968 Miami convention) appears more carefully thought out than it probably was. As in other areas of Nixon’s foreign and domestic policies, there is a danger of attributing too much coherence to what was often a combination of muddled calculation, short-term reaction and (unintentional, as well as intentional) transmission of mixed signals.

Widening the War: 1969-1972
American troops in Vietnam took high casualties during the communist offensive launched in February 1969. Nixon’s response was to combine peace feelers and Vietnamisation with the drawing up of plans for what essentially amounted to all-out war. Without the knowledge of Secretary of State William Rogers or Defence Secretary Melvin Laird, a secret plan, Duck Hook, was drawn up. It would have combined a ground invasion of the North with massive bombing (notably of the Northern dyke system) and mining. The plan envisioned the destruction of the Ho Chi Minh trail, possibly through the use of nuclear weapons. It was never implemented. Kissinger received the plan (drawn up by his military aide, Alexander Haig, and the Pentagon Office of the Chief of Naval Operations) in July 1969. On October 17, Kissinger recommended against it and on November 1, Nixon himself decided to abandon it. At one level, Duck Hook represented a coherent “win-the-war” strategy; twenty years later, Nixon described his rejection of it as the worst decision he made in the White House. The reality was, however, as Nixon and Kissinger appreciated at the time, that Duck Hook threatened a disruption to American domestic peace greater even than the turmoil of 1968. The White House could not rely even on its “silent majority” (much less the elite groups whose support had already cracked) to back a course of action which would have involved high US losses and whose short-term military success could not be guaranteed.

Despite the Duck Hook confusion and the huge antiwar demonstrations in the US of October/November 1969, Washington was able to take some comfort from the way events were moving in South Vietnam. The communist overextension of 1968 was still being felt, with PLAF combat strength now in decline. In particular, MACV calculated that Southern recruitment to the PLAF was tailing off. Perhaps the war of attrition/“crossover point” strategy—in abeyance since the onset of Vietnamisation—was beginning to work after all? In addition, President Thieu, under American pressure and with US financial backing, was at last addressing the issue at the heart of rural discontent: land reform. The communists had long exploited the hostility of landless and tenant farmer-peasants to the corrupt Saigon institutions which controlled rural credit, as well as to those landlords against whom various South Vietnamese governments had refused to act. Approximately two-thirds of the rural population in South Vietnam were tenant farmers, many of whom worked extremely small holdings. The communists’ policy of distributing land under their control had long formed the basis of their appeal; it was effected by a subtle marriage of the idiom of Marxian socialism to that of Vietnamese village communal traditionalism.[35] Thieu’s “Land-to-the-Tiller” law (March 1970) promised to eliminate tenancy and to break—or at least rival—the communists’ ability to use the trump card of land reform. Pacification and rural development programmes also appeared to be making some headway at last, as did political reforms at village level.[36] At a cruder level, the Phoenix programme, a “hit” operation to eliminate civilian communist cadres, was (it is now clear) having a devastating impact. Yet the upshot of all this should not be exaggerated. Thieu’s government had not suddenly become legitimated, popular and democratic. The ARVN, in particular, was still unreliable.
However, any notion of reviving the Tet idea of a people’s rising was now dead. The Americans were henceforth fighting the North Vietnamese army (NVA).

Many American lives, and many more Asian, were lost between 1969 and 1973 in pursuing the Nixon version of the “middle course”. Henry Kissinger correctly points out that not “even the strongest critics in the mainstream of American life”[37] wished to imperil US credibility by pulling out in 1969. American public opinion also, between 1969 and 1973, broadly favoured “peace with honour” over immediate withdrawal. The fact still remains, however, that the “middle course”—in effect, troop withdrawals accompanied by tough, periodic military action—simply was never going to result in a peace settlement which amounted to anything more than an American defeat.

Nowhere was the bankruptcy of the Nixon-Kissinger approach made clearer than in the 1969-1971 extensions of the war into Cambodia and Laos. The Cambodia incursions provoked the most severe public and Congressional criticism, and domestic discord, of the entire Vietnam era.

Under LBJ, American transgressions of Laotian and Cambodian neutrality had been confined to CIA-backed anti-communist adventures and B-52 bombing of the Ho Chi Minh trail. In March 1969, Nixon ordered the more concerted (and equally illegal) “secret” (MENU) bombing of communist sanctuaries in Cambodia, which was to continue into 1973. Senior Air Force personnel were ordered to falsify information, even concealing from the pilots themselves exactly upon whom their bombs were falling. On April 30, 1970, Nixon launched an actual invasion. ARVN and US forces went into Cambodia with the intention of destroying the PLAF “headquarters” there. This action was occasioned by the overthrow, with American complicity, of the (relatively) neutralist Prince Sihanouk by the more pliantly pro-American Lon Nol. Nixon and Kissinger took their chance to wield the “big stick”. They felt, of course, that the communists had long seen Indochina as one battlefield; it was the Vietnamese communists who had first violated Cambodian neutrality, and who now threatened to topple Lon Nol. Domestic protest in the US was inevitable, and both Secretary of State Rogers and Secretary of Defence Laird opposed the invasion. For Nixon and Kissinger, however, the invasion was a test of strength against the despised antiwar movement. Kissinger “knew that … another round of domestic acrimony, protest, and perhaps even violence was possible”.[38] Though American and South Vietnamese forces did come close to the PLAF bases, the invasion was a military fiasco. No communist “headquarters” were unearthed, while enemy forces were able to evade the American advance. It is true that the shifting of communist sanctuaries to more remote areas in Cambodia did reduce the PLAF threat to the heavily populated Mekong delta region. However, the Ho Chi Minh trail remained intact. Moreover, the invasion pointed up the limitations of the ARVN.
After 1970, the real threat was clearly from the North, over the demilitarised zone. But the locally-supported, family-oriented ARVN simply could not successfully be redeployed from the Mekong delta to defend the Northern provinces.

At every level, the Nixon-Kissinger decision to expand the war stood condemned. Domestic violence came as students, recently described by Nixon as “bums”, were shot dead at Kent State University in Ohio. The destabilisation of Cambodia, of course, was soon to usher in the (increasingly anti-Hanoi and eventually US-assisted) Khmer Rouge regime, which effectively declared murderous war on the Cambodian people. On June 30, 1970, the Senate passed the Cooper-Church amendment, barring US military forces from further fighting in Cambodia. By this time, troops were already withdrawing, having failed to “search and destroy”. Nonetheless, Cooper-Church, along with the full Congressional repeal of the Gulf of Tonkin resolution in December 1970, served notice that the legislature was no longer willing to defer to the President’s war-making prerogative.

The ARVN’s invasion of Laos in February 1971 completed the undermining of the White House strategy. The massive military and non-military spending associated with Vietnamisation was having a positive, albeit often grotesquely distorted, effect on the South Vietnamese economy. At the level of the ARVN, however, it was simply not working. The ineptness of the Laotian invasion convinced North Vietnam’s Foreign Minister, Nguyen Co Thach, that here “was the defeat of Vietnamization”.[39]

When the NVA launched its massive offensive against the Northern provinces at Easter 1972, the ARVN did better and the invasion was turned back. However, the crucial allied victory at Kontum (in the Central Highlands) was achieved only after America’s “civilian general” John Paul Vann had substituted US for ARVN leadership. A similar picture emerged from the battle at An Loc, where US Air Force fighter-bombers and Army attack helicopters rescued the ARVN.

The Peace Process: 1972-1975
The peace talks between the US and North Vietnam were among the most tortuous in the history of warfare. By 1969, a succession of disputes deadlocked negotiations: wrangles over who could participate, over the postwar role of the PRG (the Provisional Revolutionary Government, as the NLF was now called), the US and South Vietnamese refusal to accept anything smacking of coalition government for the South, disagreements relating to prisoners-of-war, bombing pauses and American attempts to secure promises from Hanoi regarding its postwar behaviour. In public, the most obvious dispute revolved around political-military linkage. In essence, the North Vietnamese were willing to cooperate with what they saw as the ineffectual policy of Vietnamisation. They would not, however, remove NVA troops from the South, nor (in any meaningful sense) guarantee their conduct subsequent to the American withdrawal. As Kissinger later wrote:

The North Vietnamese considered themselves in a life-and-death struggle; they did not treat negotiations as an enterprise separate from the struggle; they were a form of it.[40]

Yet the Americans also were fighting while negotiating. Nixon’s response to the Easter 1972 NVA offensive was not simply defensive. With a considerable degree of public support, he ordered the mining of Haiphong harbour and the intensive bombing (long urged by General Abrams) of Northern cities. If this meant endangering detente—a summit meeting with Soviet leader Brezhnev was in the offing—then so be it. In fact, the summit went ahead and Moscow made little objection even when it lost a ship in Haiphong harbour. Moscow wanted the North Vietnamese to come to terms, but the truth was that Hanoi was relatively immune from great-power pressure.

In private, Kissinger was prepared to confide to the North Vietnamese that America was sick of the war, but that its hands were tied by the need to win Thieu’s support for any settlement, as well as by US concern for its international credibility. By the Autumn of 1972, the White House had decided to take a different tack. Some things appeared to be going reasonably well. After all, the Spring invasion had been held back and the post-1968 security, pacification and spending programmes were undoubtedly having an effect. Yet many heavy NVA divisions had simply withdrawn back across the demilitarised zone, or into Laos or Cambodia—the 1970 and 1971 invasions had actually made this easier—to await American withdrawal. Moreover, the Presidential election was imminent (the Democrat George McGovern was, in effect, an antiwar candidate) and the peace talks were deadlocked. US policy thus entered a new phase: intense, destructive bombing of Northern targets (Linebacker I), the offer of significant reconstruction aid to post-settlement North Vietnam, and moves towards the achievement of joint concessions over the political future for the South. Against the wishes of Thieu, Kissinger accepted the idea of a three-part electoral commission (involving Saigon, the PRG and neutralists). Hanoi did indeed seem prepared to accept an arrangement which, although leaving Thieu in power, granted political status to the PRG. On October 26, 1972, Kissinger announced: “peace is at hand”. The massive victory over McGovern was now only a formality. Yet peace was not at hand. Thieu’s opposition to the agreement was an important factor in the post-election collapse of the October settlement. Too much has been made of Thieu’s opposition, however. Nixon was not above threatening him with the fate of Diem if he became too obstructive.[41] New American military aid (Enhance Plus) also took the sting out of Thieu’s opposition.
Kissinger’s “peace is at hand” statement was part bureaucratic gaffe, part election ruse. Nixon himself had grave doubts about the settlement, with its overtones of coalition government and implications (as a Kissinger aide put it) of having “flushed Thieu down the election drain”.[42]

The demise of the October settlement had been the responsibility of the United States and South Vietnam. Now, however, North Vietnam began to backtrack on previous concessions regarding prisoner-of-war exchange. This further deadlock stimulated what was to become, after the Cambodia invasion, the most controversial act of Nixon’s policy in Vietnam: the 1972 Christmas Linebacker II bombings. Following his election victory, Nixon sought solitude at Camp David. Privately disturbed by early signs that the Watergate scandal at home might veer out of control, he now sought to end the war. He turned to the “mad bomber” strategy. Between December 18 and 30, over 2,000 sorties were flown. Nixon berated Admiral Thomas Moorer:

I don’t want any more of this crap about the fact that we couldn’t hit this target or that one. This is your chance to use military power effectively to win this war … [43]

“Smart” bombs, of the type later employed in the 1991 Gulf war, were used to obliterate military and communications systems. Inevitably, there were civilian casualties: most famously the Bach Mai hospital in Hanoi.

The final Paris agreement was signed in the wake of the bombing on January 28, 1973. US troops were to leave within sixty days; NVA troops in the South (at least 150,000) were to remain. To the South Vietnamese leaders the agreement’s asymmetry amounted to a betrayal. Thieu took comfort in promises, contained in letters that he concealed in his bedroom, that the US would not simply leave Saigon to its fate. The cease-fire was soon violated by all parties. As one South Vietnamese official put it: “The only provision of the Paris Agreements that was observed was the removal of foreign troops from Vietnam, namely American troops”.[44]

As US air bombardment of Cambodia and of NVA forces in the South continued, Congress moved to cut off funds for the war. On July 1, 1973, President Nixon, unable to battle it out with Congress on Vietnam as well as over Watergate, signed a bill which effectively ended American military involvement in Indochina. The end-game was a mismatch, distinguished by spectacular reversals of strategy on Thieu’s part. For the ARVN, Operation Enhance Plus had come too late and too abruptly; the South Vietnamese army lacked the expertise to use much of the equipment. As a highly disciplined communist force surrounded and (on April 30, 1975) occupied Saigon, Kissinger declared: “The Vietnam debate has run its course”.[45] The post-Watergate Congress was in no mood to redeem promises made by Nixon and President Ford. Journalist John Pilger’s last memory of Vietnam was of a US admiral’s voice anticipating his ship’s return to the Philippines:

Well folks, that just about wraps up Vietnam. So let’s all have a party and get outta here, so we can mosey on back to Subic Bay and get ourselves a genuine Budweiser beer.[46]

7. Interpreting the War

Explaining American Involvement: Quagmires and Turning Points
Vietnam had little obvious or direct economic or strategic significance for the United States. The importance of South East Asian raw materials was, it is true, considered by officials within the Truman, Eisenhower and Kennedy Administrations.[47] Yet any imputation to the United States of direct economic interest in Vietnam is mistaken. As Harry McPherson, aide to President Johnson, declared in 1985, his boss “did not go in to save iron ore”. He “went in to try to prevent Asia from being rolled up by the Chinese Communists”.[48]

The Vietnam war was fought within the ideological parameters of containment theory. By the late 1950s, the United States stood at the apex of a hegemonic world system, based on liberal free trade doctrines and American-guaranteed mutual security pacts. Successful protection of this system in the face of expansionary communism was held to depend on the maintenance of American “credibility”. As Thomas McCormick has put it, the US was “acting for the whole system rather than [for] immediate American interests”.[49]

Especially after 1960, American preoccupation with “credibility” was evident in both the private and public pronouncements of decision-makers. In November 1961, national security adviser Walt Rostow identified the “gut issue” in Vietnam for John Kennedy’s benefit. This was “not whether Diem is or is not a good ruler”. Rather it was:

… whether we shall continue to accept the systematic infiltration of men from outside and the operation from outside of a guerrilla war against him …. The whole world is asking a simple question: what will the U.S. do about it?[50]

In July 1965, Johnson replied as follows to George Ball’s case against escalation:

But George, wouldn’t all these countries say that Uncle Sam was a paper tiger, wouldn’t we lose credibility breaking the word of three presidents, if we did as you have proposed?[51]

Nixon, according to Kissinger, was convinced that precipitate withdrawal would “dishearten allies who depended on us and embolden adversaries to undertake new adventures”.[52]

Policy-makers during the Vietnam era were instinctive globalists. True, perceptions of Vietnam were situated within concern for the regional SEATO security system. Yet this was but part of the non-communist world system, defined by the theorists of globalised containment as a seamless web where a threat to one part constituted a threat to the whole. The problem with post-Kennan containment theory was that it offered no criteria whereby commitments could be prioritised, or American interests ranked. It rested on the fallacy that the United States could and would intervene to protect any threatened facet of the system, however remote and peripheral to traditional US security priorities.

In a sense, of course, the United States was fighting world communism in Vietnam (despite the existence of significant non-communist opposition to the various Saigon regimes). The poor human rights record of the revolutionary Vietnamese regime has, along with the collapse of bureaucratised command communism in Europe, lessened the temptation to romanticise America’s adversaries in the war. Vietnamese communist sources have also shed light on this issue. Such testimony needs to be treated with caution, and read in the light of the Vietnamese victors’ immediate political concerns. Nevertheless, it now seems clear that insurgent forces in the South were directed from Hanoi. The idea, dear to liberal antiwar opinion, that there was an autonomous, “democratic” insurgency in the South, now stands in need of extreme qualification, if not outright abandonment.[53] Hanoi received perhaps 85 per cent of its oil and almost all its sophisticated military hardware from the Soviet Union. Chinese aid may have amounted to somewhere in the region of ten billion dollars.[54] Even the Sino-Soviet split (which US policymakers undoubtedly underestimated) to some degree worked to Hanoi’s advantage, as Moscow and Peking competed to help North Vietnam’s cause. Nevertheless, the great-power support for Hanoi was in no way equivalent to US support for Saigon. Chinese-Vietnamese relations were permeated by a venerable tradition of mutual suspicion, while the limits to Moscow’s influence were clearly demonstrated during the peace negotiations. North Vietnam was not merely the creature of Moscow and Peking. Hanoi’s nationalist credentials had a profound appeal throughout Vietnam, while the primarily communist-directed revolution appealed to both the idealism and the self-interest of the South Vietnamese people. Writing in May 1965, John Paul Vann (then employed by the US Agency for International Development) captured this well in a letter to General Robert York:

I am convinced that, even though the National Liberation Front is Communist-dominated, that the great majority of the people supporting it are doing so because it is their only hope to change and improve their living conditions and opportunities.[55]

The abstract injunctions of globalised containment caused US policymakers to ignore Vann’s concern, or to marginalise it in the sideshow of pacification. Attitudes towards Vietnam were also shaped by a peculiar kind of cultural conditioning which inclined American decision-makers against serious engagement with the issues raised in Vann’s letter. At one level, there was the unreflecting “can-do” optimism of the Kennedy and Johnson advisers. This was allied (as Patrick Lloyd Hatcher has demonstrated) to the misapplication by America’s internationalist elite of concepts of collective security and economic intervention designed for European conditions.[56] Cultural misunderstandings proliferated. Pacification officers in the late 1960s had to complete Hamlet Evaluation System questionnaires designed to elicit how many televisions were in each village.[57] The US effort was driven by the ideology of American democratic liberalism and the assumption that American democratic capitalist history provided a model for the world. To quote Loren Baritz:

… our national myth showed us that we were good, our technology made us strong, and our bureaucracy gave us standard operating procedures. It was not a winning combination.[58]

Early (usually disaffected liberal) critiques of the war advocated a “quagmire” interpretation of Vietnam. US policymakers, locked in cultural misunderstanding and incremental decision-making, were seen as having stumbled unwittingly into the Indochinese swamp. To some extent, American involvement was the result of a series of small, incremental decisions which failed to address fundamental questions. Yet there were clearly major “turning point” decisions: for example, the 1950 decision to aid the French, to back Diem in 1954, to increase the number of advisers in the early 1960s, the decision not to oppose Diem’s overthrow, the 1965 escalation and “limited war” decisions, Johnson’s 1968 refusal of Westmoreland’s troop increase requests, Nixon’s Vietnamisation and Cambodia invasion decisions. These “turning point” decisions were not taken in isolation: either from prior Indochina commitments, or from concerns not directly related to Vietnam (such as LBJ’s desire to save the Great Society or Nixon’s calculations about detente). Some “big” decisions, such as to commence bombing, were probably seen initially as little more than tentative experiments. Even allowing for all this, the “quagmire” paradigm seems inappropriate. American leaders were aware of alternatives to the courses of action they took, although they obviously did not foresee the dire consequences of the preferred alternatives. At least six close advisers—not just George Ball—alerted Johnson to the dangers of graduated escalation during the first half of 1965. Sycophancy and “groupthink” do operate at high decision-making levels. However, it would be quite wrong to imagine that Presidents were not exposed to conflicting views and fundamental questioning. Chester Bowles, Kennedy’s Ambassador at Large, provided a thoroughgoing review for the President in April 1962. 
He outlined various scenarios and forecast that current policy would lead to “an uneasy fluid stalemate”.[59] In 1965, Vice-President Hubert Humphrey warned LBJ that he was taking “big historic gambles” in Vietnam without any clear, “politically understandable” rationale.[60] As Leslie Gelb and Richard Betts have argued, Kennedy and Johnson opted for limited objective alternatives in Vietnam wittingly. They decided on measures to “hurt the enemy but not destroy him” in order to escape the political costs either of “losing” Vietnam or of full-scale national mobilisation.[61] Under Nixon, the decision-making structure was exceptionally and dangerously elitist. Alternative views were insufficiently aired at the highest level. Rogers and Laird were excluded from key discussions. Yet there can be no question that Nixon and Kissinger were aware of the possible consequences of their policies, and of alternatives to them.

Examination of Vietnam decision-making also reveals the extent to which the Cold War consensus had emasculated the US Congress. Some individuals played roles of importance. Senator Mike Mansfield furnished his friend Lyndon Johnson with dissenting views. Senator William Fulbright, after breaking with the Administration in 1965, provided elite critics with a platform in the 1966 and 1968 Senate Foreign Relations Committee hearings. Congressional investigations illuminated some dark areas: for example, war crimes (especially the 1968 massacre at My Lai), and the plight of refugees in South Vietnam. In the last resort, in the Spring of 1973, Congress actually ended the war. However, the abdication of responsibility enshrined in the 1964 Gulf of Tonkin resolution stands as a warning against passive legislative acquiescence in Presidential war-making.

Explaining American Failure: The Debate over Military Strategy

Many of the reasons for American failure are implicit in the preceding narrative: underestimation of the enemy and its ability to sustain massive losses, the mismatch between US “limited” and communist “total” war, the failure or inability to generate popular support for the Saigon regime, the opening and maintenance of the Ho Chi Minh trail and so on; perhaps, above all, the failure to develop, and sell to the American public, a coherent “win-the-war” strategy. Such explanations tend to assume, as does the entire debate over military strategy to be considered below, that victory was possible had different decisions been taken. Before discussing the military issues, it is important to examine this assumption.

Essentially, “victory” would have consisted in the military defeat of North Vietnam (and the Chinese if they had intervened), combined with the generation of popular support for the non-communist government in the South. Political and public opinion in the US would have had to have been mobilised behind the effort, and have been prepared to support (physically and financially) security arrangements agreed with the post-war South. Such a “victory” is not inconceivable. The gains made after 1969 in generating at least some popular support for Thieu’s regime indicate that some progress along these lines was possible. There was nothing foreordained about the communists’ triumph. Given their nationalist credentials, however, it was always very likely (as the population of South Vietnam well appreciated); and its likelihood was compounded by the way the Americans understood and fought the war. America was not simply defending South Vietnam from an attack from the North. In contrast to the situation in Korea in the early 1950s, the US in Vietnam was placing itself in the path of an ongoing national revolution that had already achieved power in the North. This was the case even in the early 1970s when the military picture was much more one of a conventional war against the NVA. The American effort and Thieu’s reforms did dent the communists’ revolutionary and nationalist legitimacy. However, they failed to replace it with anything resembling popular confidence in and support for the Saigon regime.

Should the United States simply have acquiesced in a communist victory some time during the Eisenhower, Kennedy or Johnson presidencies, not so much letting the domino fall as recognising the national legitimacy of Ho’s movement? Such an outcome would not have guaranteed peace and liberty to the people of South Vietnam. It would, however, have represented a mature, flexible interpretation of containment theory, and would have spared the US the misery of the war. After all, in the long term, acquiescence in a communist victory was actually all that Nixon and Kissinger achieved in 1973, with only a “decent interval” sparing America’s blushes. (The above interpretation is open to the objection that post-Kennan containment theory demanded that the US draw a line somewhere in Asia: that some kind of “Vietnam war” be fought (and maybe lost) at some point. Such reasoning, however, simply leads us into an impenetrable labyrinth of counter-factual speculation).

In actuality, their interpretation of containment theory and concern for US credibility prevented American leaders from acquiescing in a communist takeover before 1973. This being the case, American “victory” would have required fundamental, and culturally appropriate, political and economic reform in the South, combined with the extension of non-coercive security to the Southern population. The South Vietnamese had to come to see Saigon and its American sponsors as the guarantor of reform and security, rather than as an alien force whose rule guaranteed disruption. If this analysis is at all correct, it follows that even those “pacification” measures which fell short of thoroughgoing political and economic reform, in a context of non-coercive security, were unlikely to produce the desired results. Nevertheless, they were, from the viewpoint of feasible American “victory” as sketched above, at least on the right lines. In point of fact, the American pacification efforts were fractionalised and marginalised. The various agencies (AID, CORDS and the marines’ Combined Action Platoons, for example) were not subjected to even formalised central coordination until the early 1970s. Pacification never formed a priority for MACV, which remained wedded to firepower and “search-and-destroy”. Westmoreland was openly hostile to the Combined Action Platoons. To a degree, pacification did attempt to deal with the problem of peasant disaffection. But it was also seen as part of the process which was creating dislocation. Pacification officers were, in the words of one ARVN officer, “rich soldiers with luxury stereo musics, C-Rations, all types equipments”.[62]

Not only pacification, but the entire US effort was characterised by inter-service rivalries, bureaucratic battles, poor tactics and byzantine command structures.[63] Again, some improvement was made in the early 1970s with General Abrams’ “one war strategy”; but, as Andrew Krepinevich has argued, the extent to which this effected real changes on the ground is debatable. Institutional fractionation reinforced strategic confusion.[64]

Within LBJ’s general “limited war” constraints—no mass mobilisation, no ground offensive outside South Vietnam—the ground war generals were largely allowed to develop their own strategies and tactics. The resulting, misconceived war of attrition was kept afloat by an almost liturgical recitation of enemy “body counts”. Estimations of enemy strength and casualties were immensely difficult in a war where the enemy was so often invisible. “Good” intelligence (for example, on the futility of Rolling Thunder) was often ignored. The extent to which the military may have deliberately falsified “body count” and order of battle figures has probably been exaggerated. What is certain, however, is that the whole “numbers mill” had, by 1968, become dangerously misleading.[65]

A host of alternative, “winning” strategies have been advocated in the postwar years. Many of these embrace some kind of enclave strategy.[66] Attempting to protect the whole of South Vietnam, American ground forces did become desperately overextended, almost aimlessly deployed across the whole country. The battles for Hue and Khe Sanh, at the very least, could have been avoided. Enemy-influenced rural areas of South Vietnam were devastated by mass bombing and herbicide attacks. Concentration on providing physical security for the wide Mekong delta (III and IV Corps) region might have mitigated some of this suffering, as well as the attendant alienation of so much of the South Vietnamese peasantry. Further strategic disputes centre on the question of whether the war was insufficiently or excessively Americanised. In retrospect, however, a major failure appears to have been the neglect (until too late) of developing the ARVN as a disciplined, quasi-independent force capable of defending South Vietnam. Most military figures agree that the prior ruling out of open intervention beyond South Vietnam’s borders sent dangerous signals to Hanoi. Others have argued that counter-insurgency, far from being underprioritised, actually diverted attention from the need to provide physical security for the South Vietnamese population.[67] Colonel Harry Summers has gone still further, claiming that the war should have been conceptualised in conventional terms—an attack from the North—from the outset. This writing-out of any significant guerrilla/insurgency dimension appears bizarre and is persuasive only in relation to the war’s final phase. It is probably best understood as a contribution to postwar military rehabilitation.[68]

The war in the air was micro-managed by civilian leaders to a far greater degree than its ground counterpart. It was equally misconceived. Johnson’s Rolling Thunder bombing was confused as to objectives and counter-productive in its effects. The Johnson Administration seems to have conceived of the campaign as a means primarily of communicating with the enemy—convincing Hanoi that the US meant business—while simultaneously lifting South Vietnamese morale. Graduated “ouch warfare”[69] proved impossible to orchestrate successfully. The effect on the citizenry of North Vietnam was, if anything, a hardening of resolve. Despite target limitation, collateral civilian damage was inevitable and world opinion was alerted.

As the bombing developed, so did the US Air Force move to demonstrate its capacity to unleash “technowar”;[70] the scientific application of force would not only break Hanoi’s will, but would also destroy North Vietnam’s capacity to support Southern insurgents. In fact, the North Vietnamese infrastructure was undeveloped. Interdiction and infrastructural destruction strategies conceived for the European theatre were simply inappropriate. As noted above, the massive bombing of South Vietnam was entirely unproductive. Any case for the efficacy of airpower in Vietnam rests not on Rolling Thunder, but on Nixon’s air campaigns: Freedom Train (April 1972), Linebacker I (May-October 1972) and Linebacker II (the 1972 Christmas bombing). Freedom Train, designed to break civilian morale, failed. A case can be made for the effectiveness of the Linebacker attacks: accurate bombing primarily directed at military targets. Any such effectiveness derived from the fact that North Vietnam was by this time essentially fighting a conventional war and hence was more vulnerable to air attack. Yet even with Linebacker II, the point must be made that the agreement which it “forced” upon Hanoi was one very similar to that made in October 1972 and also one which Hanoi had no intention of honouring.[71]

American Public Opinion and the Antiwar Movement

It is misleading to suggest that it was the unsticking of US public and Congressional opinion which snatched defeat from the jaws of victory in the period 1972-1975. Thieu’s post-1969 reforms were too little, too late. However, it is the case that American public anxiety about the war effectively ruled out the only course of action which might have secured the gains made after 1969: an expanded American underwriting of the radical economic and political transformation of South Vietnam.

The Vietnam war demonstrated the unwillingness of US public opinion indefinitely to tolerate a conflict which appeared to lack clear objectives and which was producing high casualty rates. This is not to deny that, at times, public opinion was extraordinarily hawkish. In June 1965, for example, the weight of poll evidence indicated a clear public preference for sending in more troops. With the exception of the 1972 Christmas bombing (opposed 51-37 per cent), the major escalatory decisions were supported. Even the 1970 Cambodian invasion initially elicited a narrow majority in favour. Yet the public were continually frustrated by lack of progress and appear to have been deeply disturbed by media coverage of the Tet offensive. Only 23 per cent agreed with Westmoreland’s view that Tet constituted an American victory. Gallup data showed almost one person in five switching from a “hawk” to a “dove” position between early February and mid-March 1968.[72]

Despite his private opposition to the conflict, Hubert Humphrey’s greatest handicap in the 1968 Presidential election was his link to Johnson’s failed war policy. In 1968, 82 per cent of Americans saw Nixon as truly committed to a Vietnamese peace. Nixon appreciated the complex structure of public opinion, in which a “silent majority” was prepared to support Presidential initiatives. By late 1971, polls recorded a record low level of confidence in political leaders and a generalised opposition to the war. But such opposition tended to be diffuse and ill-informed. Nixon was able to buy time by his troop withdrawal and draft decisions, as well as by his exploitation of the prisoner-of-war issue. Above all, the McGovern candidacy in 1972 was a gift to Nixon. Convinced that peace would come anyway, voters would not support a candidate widely perceived and portrayed as “extreme”. For many Americans, moreover, Vietnam was not the most important issue facing the nation. A December 1971 poll revealed that only 15 per cent thought that it was. Nevertheless, by this time, White House perceptions of both potential and actual public opposition were clearly constraining policy.[73]

One of the trickiest areas of interpretation concerns the interaction between public opinion, media coverage and the activities of the antiwar movement. The myth of the oppositional media destroying Presidents and altering the course of the war remains strong. LBJ gave it credence in his attack on the National Association of Broadcasters on the day after his March 31, 1968 speech. However, the postwar notion that public opinion swung away from the war because of crusading journalism and vivid photography is largely myth. Some newspapers—notably the Los Angeles Times and New York Times—were schizophrenic in their coverage; the latter’s 1971 serialisation of the Pentagon Papers (leaked by Daniel Ellsberg, a Pentagon civilian) represented a key stage in the unravelling of consensus. Generally, however, the media were instinctively pro-war and only shifted when sharp elite divisions had already become apparent. Undoubtedly, the famous photographs and film footage of napalm and bomb damage did have an impact. Analysis of television and press coverage, however, does not support Nixon’s charges of antiwar bias. White House communication failures—especially Johnson’s failure to create an effective “rhetoric of limited war”[74]—were more damaging to the Administration cause than any activities of crusading journalists. Coverage of the 1970 Kent State killings certainly affected public opinion quite deeply; however, media treatment of the antiwar movement was almost entirely hostile. In April 1967, for example, Time poked fun at “Vietniks and Peaceniks, Trotskyites and potskyites”.

Time’s depiction of the antiwar movement was, of course, grossly unfair. Charles Chatfield’s description sets the record straight:

Liberals and leftists, men and women, blacks and whites, students and established intellectuals, clergy and laity: countless citizens passed in and out of the antiwar movement. Its core was indelibly middle class and well educated. It was a typically American reform effort—a voluntary crusade attracting adherents and impelling them to act out of a felt personal responsibility for social wrongs.[75]

The movement had its roots in the American civil disobedience tradition and in the middle class activism that had mobilised for disarmament causes prior to the 1963 nuclear test ban treaty. Deepening involvement and the imposition of the draft coalesced with generational changes to produce the variegated movement described by Chatfield. Familiar American institutions and movements became polarised between pro- and antiwar opinion. Trade unions, universities, professions, political parties, churches, even—to an extent—the army itself, were all affected in this way. Many African-Americans supported Martin Luther King Jr.’s criticism of the war (and of the draft process which discriminated against blacks); more conservative civil rights leaders, however, were cautious about breaking with Johnson. A body of elite dissidents—Senators, writers, entertainers, disaffected bureaucrats—constituted a kind of disembodied superstratum. Different groups and individuals, of course, opposed the war for different reasons: pacifism, opposition to American imperialism, realist calculation that American interests were no longer being served in Vietnam, and so on. Such divisions were reflected within the movement, especially after 1965 when it began to attract New Left radicals and counter-culturists. Between 1965 and the splintering of Students for a Democratic Society in 1969, the movement witnessed extended debates between advocates of electoralism such as Allard Lowenstein and radical direct action supporters like David Dellinger. The shambles of the 1967 National Conference for New Politics, and also the antagonism between the Vietnam Moratorium Committee and the more radical “New Mobe” in 1969, illustrated these divisions.
The New Left was weak on theory; it had enormous difficulty in giving re-birth to American leftism; and undoubtedly it did exhibit, to a destructive degree, what Todd Gitlin has called the histrionic “politics of spurious amplification”.[76] However, despite the excesses of the New Left and the intramovement divisions, impressive mass marches and demonstrations were organised. The movement’s impact on public opinion was complex and to a certain degree counter-productive. It undoubtedly contributed to the developing bunker mentality in the Johnson and Nixon Administrations. This mentality was evidenced in the vindictive and illegal harassment visited by the Johnson and Nixon Administrations on the movement. To an extent, antiwar activities contributed to Nixon’s “silent majority” backlash. Yet, marches, petitions, occupations and teach-ins did shake complacency and slowly alter the decisional climate. In various walks of life, influential opinion leaders were alerting sections of the population to antiwar ideas.[77] In helping turn Nixon against Duck Hook in 1969, the movement achieved perhaps its most specific (though at the time unnoticed) success. From about 1967, war managers were confronted not only by a public whose unquestioning acceptance of official versions of the war could no longer automatically be assumed; but also by a potentially alienated generation of young, well educated Americans, who appeared increasingly reluctant to serve the recruitment needs of corporate capitalism.

8. Conclusion

“Limited” war in Vietnam was devastating in its impact on all parties. American deaths totalled over 58,000; over half of these died after 1968. Over 150,000 were wounded. Compared with their numerical strength in the population as a whole, black and Hispanic death and casualty rates were disproportionately high. Possibly two million Vietnamese were killed (between 850,000 and 950,000 of them Viet Cong or NVA personnel). By the war’s end there were about 21 million bomb craters in South Vietnam. Some ten and a half million South Vietnamese people were turned into refugees by the war. By the war’s end South Vietnam’s natural forest-cover had been reduced by about 60 per cent. Forces supporting the Americans—especially the ARVN and the South Koreans, but also Australian, New Zealand and Thai forces—suffered significant losses. US Vietnam veterans have been prone to suicide and psychic disorder. When Congress set up readjustment counselling centres in 1979, some 200,000 Vietnam-era veterans sought help in the programme’s first three and a half years. The direct financial cost of the war to the US was between 112 and 155 billion dollars, with indirect costs estimated at 925 billion. Both in political and economic terms, the war marked the beginning of the end of the (short) “long cycle” of American international domination dating from World War II. Within the US, the war stimulated a loss of confidence in governmental institutions, a new public scepticism about the entire “national security state” and a collapse of the consensus that had dominated US foreign policy since 1946.[78]

We are now in a position to return to the questions raised in the Introduction. The United States became involved in Indochina knowingly and consciously, and essentially for reasons connected with what was perceived to be necessary to contain communism and protect America’s hegemonic status. The notion of protecting American credibility underscored pivotal decisions on the war. The United States lost the war; Nixon’s attempts to resurrect 1973 as the year of victory rest on unrealistic assessments both of the Paris agreement and of the success of Vietnamisation. American decision-making structures performed poorly, but they did serve the immediate political purposes of successive Presidents. Washington’s secretive, politicised, elite direction of the undeclared war compounded difficulties, rather than facilitating their resolution. The war managers were forced, eventually, to take the threat of domestic protest seriously. Post-1970 public unwillingness to sanction a further expansion of commitments constituted an important constraint upon elite decisions. The military performed poorly and adapted neither its procedure nor its doctrine in any coordinated fashion to the requirements of what was part-guerrilla insurgency, part-conventional war. But the war was lost essentially through civilian, rather than military, misjudgement. The reluctance of US policymakers directly to tackle the question—especially before 1969—of the Saigon regime’s unpopularity was crucial.

American involvement was morally ambivalent. US credibility was the dominant concern, yet American decision-makers certainly did see the containment of communism as in the interests of the South Vietnamese people. As noted in the Introduction, events since 1975 have done nothing to burnish the reputation of neo-Stalinist communism, whether in the Third World or elsewhere. Nevertheless, this should not obscure the fact that the US was (even in the early 1970s) defending a reactionary, unrepresentative and undemocratic elite in South Vietnam. In a profoundly disquieting sense, America’s war was one against the people of South, as well as North, Vietnam—a war waged as part of a futile and misconceived attempt to force them to be free.

9. Guide to Further Reading

Anyone approaching the subject anew would be well advised to begin not with a conventional history, but with the following highly personal accounts: Frances Fitzgerald, Fire In The Lake: The Vietnamese and the Americans in Vietnam (Boston: Little, Brown, 1972); Tim O’Brien, If I Die In A Combat Zone (London: Paladin, 1989); John Balaban, Remembering Heaven’s Face: A Moral Witness in Vietnam (New York: Poseidon, 1991); and the works by Sheehan [14] and Herr [27] cited in the notes.

Among the best single volume histories are those by Herring [7]; Kolko [2]; Lewy [36]; Palmer [19]; Turley [53]; Brown [2]; Lomperis [36]; and Wintle [18], as well as: Marilyn Young, The Vietnam Wars, 1945-1990 (New York: Harper Collins, 1991); and James S. Olson and Randy Roberts, Where the Domino Fell: America and Vietnam 1945 to 1990 (New York: St. Martin’s, 1991). These volumes cover just about the full range of possible interpretations. R.B. Smith’s An International History of the Vietnam War (3 vols., London: Macmillan, 1983, 1985 and 1991) should also be consulted. Many “first generation” Vietnam books are still valuable, for example B.B. Fall, The Two Viet-Nams (New York: Praeger, 1967). However, especially on the years before Kennedy and on LBJ, there are some outstanding recent histories. On the war’s origins, see Rotter [8]; Short; George McT. Kahin, Intervention (New York: Knopf, 1986); and Lloyd C. Gardner, Approaching Vietnam: From World War II Through Dienbienphu (New York: Norton, 1988). On the Eisenhower-Dulles period, see David L. Anderson, Trapped By Success: The Eisenhower Administration and Vietnam, 1953-1961 (New York: Columbia University Press, 1991); G.C. Herring, “‘A Good Stout Effort’: John Foster Dulles and the Indochina Crisis”, in R.H. Immerman, ed., John Foster Dulles and the Diplomacy of the Cold War (Princeton: Princeton University Press, 1990); and R.H. Immerman, “The United States and the Geneva Conference of 1954: A New Look”, Diplomatic History, 14 (1990), pp.43-66. On Kennedy, see Timothy P. Maga, John F. Kennedy and the New Pacific Community, 1961-1963 (Basingstoke: Macmillan, 1990); and L.J. Bassett and S.E. Pelz, “The Failed Search for Victory: Vietnam and the Politics of War”, in T.G. Paterson, ed., Kennedy’s Quest for Victory (New York: Oxford University Press, 1989). On Johnson, see the two Berman volumes [24],[30]; Barrett [60]; VanDeMark [13]; Cable [20]; and Turner [74]; also H.Y. Schandler, The Unmaking of a President: Lyndon Johnson and Vietnam (Princeton: Princeton University Press, 1977). Good secondary sources on Nixon and Kissinger are: Hersh; Morris; Robert D. Schulzinger, Henry Kissinger: Doctor of Diplomacy (New York: Columbia University Press, 1989); and William Shawcross, Sideshow: Kissinger, Nixon and the Destruction of Cambodia (New York: Simon and Schuster, 1979). See also Frank Snepp, A Decent Interval (New York: Random House, 1977); and G.C. Herring, “The Nixon Strategy in Vietnam”, in P. Braestrup, ed., Vietnam As History (Washington DC: University Press of America, 1984).

Among many outstanding memoirs are: Nixon [43]; Kissinger [52],[37]; Taylor [15]; Colby [67]; Cooper [65]; Clifford [29]; Frederick Nolting, From Trust to Tragedy (New York: Praeger, 1988); William C. Westmoreland, A Soldier Reports (New York: Dell, 1980); Harry McPherson, A Political Education (Boston: Houghton Mifflin, 1988); and Dean Rusk (as told to R. Rusk, ed., D.S. Papp), As I Saw It (New York: Norton, 1990). See also Eugene J. McCarthy, The Year of the People (Garden City: Doubleday, 1969) and George McGovern, Grassroots: The Autobiography of George McGovern (New York: Random House, 1977).

Debates on strategy and the role of the US military may be followed in Krepinevich [14]; Petersen [62]; Clarke [64]; Kinnard [64]; Summers [68]; Matthews and Brown [66]; Thayer [63]; Grinter and Dunn [68]; and Clodfelter [71]. See also P.B. Davidson, Vietnam at War (Novato: Presidio, 1988) and John Schlight, The War in South Vietnam: The Years of the Offensive, 1965-1968 (Washington DC: Office of Air Force History, 1988). Mueller [12] is indispensable on public opinion. See also Harris [5] and H.S. Foster, Activism Replaces Isolationism: US Public Attitudes, 1940-1975 (Washington DC: Foxhall Press, 1983). On the antiwar movement, see DeBenedetti [39]; Small [77]; and Nancy Zaroulis and Gerald Sullivan, Who Spoke Up? American Protest Against the War in Vietnam, 1963-1975 (Garden City: Doubleday, 1984). See also Todd Gitlin, The Sixties: Years of Hope, Days of Rage (New York: Bantam, 1987); and Edward P. Morgan, The 60s Experience: Hard Lessons About Modern America (Philadelphia: Temple University Press, 1991). On the war’s domestic impact, see McPherson [18]; D.C. Hallin, The “Uncensored War” (New York: Oxford University Press, 1986); Peter Braestrup, Big Story (Vol. I) (Boulder: Westview, 1977); and L.M. Baskir and W.A. Strauss, Chance and Circumstance (New York: Random House, 1978).

US policy-making is discussed in: David Halberstam, The Best and the Brightest (London: Barrie and Jenkins, 1972); Komer [63]; Gelb and Betts [61]; J.P. Burke and F.I. Greenstein, How Presidents Test Reality: Decisions on Vietnam, 1954 and 1965 (New York: Russell Sage, 1989); and G.K. Osborn, et al, eds., Democracy, Strategy and Vietnam: Implications for American Policy Making (Lexington: Heath, 1987). Among many excellent studies of Vietnam at war are: Jeffrey Race, War Comes to Long An (Berkeley: University of California Press, 1971); James W. Trullinger, Village at War (New York: Longman, 1980); and Eric M. Bergerud, The Dynamics of Defeat: The Vietnam War in Hau Nghia Province (Boulder: Westview, 1991). The war has produced many fine oral histories, for example: Wallace Terry, ed., Bloods: An Oral History of the War by Black Veterans (New York: Random House, 1984).

Lastly, on the cultural background and impact of the war, see: Baritz [58]; John Hellman, American Myth and the Legacy of Vietnam (New York: Columbia University Press, 1986); and Jeffrey Walsh and James Aulich, eds., Vietnam Images: War and Representation (Basingstoke: Macmillan, 1989).

10. Notes

  1. Cited in Christopher Hitchens, Blood, Class and Nostalgia: Anglo-American Ironies (London: Vintage, 1991), pp.253-254.
  2. See T. Louise Brown, War and Aftermath in Vietnam (London: Routledge, 1991), pp.1-23; P.M. Dunn, The First Vietnam War (London: Hurst, 1985), pp.167-183; Gabriel Kolko, Vietnam: Anatomy of a War 1945-1975 (London: Unwin Hyman, 1987), pp.13-61.
  3. Cited in Stephen E. Ambrose, Rise to Globalism (5th ed., Harmondsworth: Penguin, 1988), p.78.
  4. Cited in Norman A. Graebner, “Introduction” to N.A. Graebner, ed., The National Security (New York: Oxford University Press, 1986), pp.3-36, 32. See also Richard Crockatt, The United States and the Cold War 1941-53 (BAAS Pamphlet No. 18, 1989), p.34.
  5. Cited in Louis Harris, The Anguish of Change (New York: Norton, 1973), pp.53-54. See generally Crockatt, The United States and the Cold War and Daniel Yergin, Shattered Peace: The Origins of the Cold War and the American National Security State (Harmondsworth: Penguin, 1980).
  6. The Pentagon Papers (Senator Gravel Edition), Vol. I (Boston: Beacon Press, 1971), p.47. See also Anthony Short, The Origins of the Vietnam War (London: Longman, 1989), Ch.1.
  7. Cited in George C. Herring, America’s Longest War: The United States and Vietnam 1950-1975 (New York: Knopf, 1986), p.10. See also Scott L. Bills, Empire and Cold War: The Roots of US-Third World Antagonism, 1945-47 (London: Macmillan, 1990), pp.144-5.
  8. Gareth Porter, ed., Vietnam: The Definitive Documentation of Human Decisions, Vol. I (London: Heyden, 1979), p.227. See also Andrew J. Rotter, The Path to Vietnam: Origins of the American Commitment to Southeast Asia (Ithaca: Cornell University Press, 1987).
  9. Cited in Stephen E. Ambrose, Eisenhower: The President (London: Allen and Unwin, 1984), p.182.
  10. The Origins of the Vietnam War, p.148.
  11. William C. Gibbons, The US Government and the Vietnam War: Executive and Legislative Roles and Relationships, Part 1, 1945-1960 (Princeton: Princeton University Press, 1986), pp.93, 204.
  12. Michael Charlton and Anthony Moncrieff, Many Reasons Why: The American Involvement in Vietnam (London: Scolar, 1978), pp.60-61.
  13. See Brian VanDeMark, Into the Quagmire: Lyndon Johnson and the Escalation of the Vietnam War (New York: Oxford University Press, 1990), pp.7-10.
  14. See Andrew F. Krepinevich, The Army and Vietnam (Baltimore: Johns Hopkins University Press, 1988), p.62; Neil Sheehan, A Bright Shining Lie: John Paul Vann and America in Vietnam (London: Picador, 1990), pp.201-67.
  15. See Maxwell Taylor, Swords and Ploughshares (New York: Norton, 1972), pp.218-219; also, Norman B. Hannah, The Key to Failure: Laos and the Vietnam War (Lanham: Madison Books, 1987), p.60.
  16. Paul H. Nitze, From Hiroshima to Glasnost (London: Weidenfeld and Nicolson, 1989), p.255.
  17. Kenneth W. Thompson, ed., The Kennedy Presidency: Intimate Perspectives (Lanham: University Press of America, 1985), pp.261-2.
  18. G.C. Herring, “‘Peoples Quite Apart’: Americans, South Vietnamese, and the War in Vietnam”, Diplomatic History 8 (1989), p.2.
  19. Thompson (note 17), p.262. See also Bruce Palmer, The 25 Year War: America’s Military Role in Vietnam (New York: Da Capo, 1984), pp.24-25.
  20. See Larry Cable, Unholy Grail: The US and the Wars in Vietnam 1965-8 (London: Routledge, 1991), pp.11-14.
  21. Sheehan, A Bright Shining Lie, p.376.
  22. Lyndon B. Johnson, The Vantage Point: Perspectives of the Presidency (New York: Holt, Rinehart and Winston, 1971), p.68; Gibbons, The US Government and the Vietnam War, Part 2: 1961-1964, p.357.
  23. L.A. Sobel, ed., South Vietnam, Vol. I: 1961-65 (New York: Facts on File, 1973), p.116.
  24. Cited in Larry Berman, Planning A Tragedy: The Americanization of the War in Vietnam (New York: Norton, 1982), p.142.
  25. See Robert Mason, Chickenhawk (London: Corgi, 1989), Ch.5.
  26. Cited in Sheehan, A Bright Shining Lie, p.630.
  27. Dispatches (London: Picador, 1979), p.89.
  28. Cited in Kolko, Vietnam, p.309.
  29. See Clark Clifford, Counsel to the President (New York: Random House, 1991), Ch.28.
  30. Cited in Larry Berman, Lyndon Johnson’s War: The Road to Stalemate in Vietnam (New York: Norton, 1989), p.185.
  31. Ibid., p.196.
  32. Roger Morris, Uncertain Greatness: Henry Kissinger and American Foreign Policy (New York: Harper and Row, 1977), p.44.
  33. Seymour M. Hersh, Kissinger: The Price of Power (New York: Summit Books, 1983), p.568. See also H.R. Haldeman, The Ends of Power (New York: NYT Books, 1978), p.83.
  34. Cited in S.E. Ambrose, Nixon: The Triumph of a Politician 1962-1972 (London: Simon and Schuster, 1989), p.224; also, Raymond L. Garthoff, Detente and Confrontation (Washington DC: Brookings, 1985), p.70.
  35. See Nancy Wiegersma, Vietnam: Peasant Land, Peasant Revolution (London: Macmillan, 1988), pp.4-10.
  36. See Guenter Lewy, America in Vietnam (New York: Oxford University Press, 1978), pp.186-189; Samuel L. Popkin, The Rational Peasant (Berkeley: University of California Press, 1979); Timothy J. Lomperis, The War Everyone Lost—and Won (Baton Rouge: Louisiana State University Press, 1984), p.104.
  37. See H.M. Kissinger, White House Years (Boston: Little, Brown, 1979), pp.288, 286.
  38. Ibid., p.481.
  39. Cited in Hersh, Kissinger, p.311. See also Charles DeBenedetti (with Charles Chatfield), An American Ordeal: The Antiwar Movement of the Vietnam Era (Syracuse: Syracuse University Press, 1990), Ch.10.
  40. White House Years, p.260.
  41. Nguyen Tien Hung and Jerrold L. Schecter, The Palace File (New York: Harper and Row, 1986), pp.73-74.
  42. Allan E. Goodman, The Lost Peace (Stanford: Hoover Institution Press, 1978), p.145.
  43. Richard M. Nixon, RN: The Memoirs of Richard Nixon (London: Sidgwick and Jackson, 1978), p.734.
  44. Stephen T. Hosmer et al, The Fall of South Vietnam (New York: Crane, Russak, 1980), p.30. See also Bui Diem (with D. Chanoff), In the Jaws of History (Boston: Houghton Mifflin, 1987).
  45. Herring, America’s Longest War, p.266; David Butler, The Fall of Saigon (London: Abacus, 1985), p.514.
  46. John Pilger, Heroes (London: Pan, 1989), p.229.
  47. See, e.g., William J. Miller, Henry Cabot Lodge: A Biography (New York: Heineman, 1967), p.372; R.B. Smith, An International History of the Vietnam War: The Kennedy Strategy (New York: Macmillan, 1985), p.143.
  48. Cited in Kenneth W. Thompson, “The Johnson Presidency and Foreign Policy”, in Bernard J. Firestone and Robert C. Vogt, eds., Lyndon Baines Johnson and the Uses of Power (New York: Greenwood, 1988), p.291. See also David DiLeo, George Ball, Vietnam, and the Rethinking of Containment (Chapel Hill: University of North Carolina Press, 1991).
  49. T.J. McCormick, “Every System needs a Center Sometime”, in Lloyd Gardner, ed., Redefining the Past: Essays in Diplomatic History in Honor of William Appleman Williams (Corvallis: Oregon State University Press, 1986), p.215.
  50. Foreign Relations of the United States, 1961-1963, Vol. I: Vietnam (Washington DC: US Government Printing Office, 1988), p.602.
  51. Cited in Yuen Foong Khong, “Credibility and the Trauma of Vietnam”, in L. Carl Brown, ed., Centerstage: American Diplomacy since World War II (New York: Holmes and Meier, 1990), p.243.
  52. H.M. Kissinger, Years of Upheaval (Boston: Little, Brown, 1982), p.88.
  53. See, e.g., Truong Nhu Tang, “The Myth of a Liberation”, New York Review of Books, 21 Oct. 1982, pp.31-36; William S. Turley, The Second Indochina War (Boulder: Westview Press, 1986), pp.40-44; Carlyle A. Thayer, War By Other Means: National Liberation and Revolution in Viet-Nam 1954-60 (London: Allen and Unwin, 1989), pp.190-196.
  54. Lomperis, The War Everyone Lost—and Won, p.75.
  55. Sheehan, A Bright Shining Lie, p.524.
  56. P.L. Hatcher, The Suicide of an Elite: American Internationalists and Vietnam (Stanford: Stanford University Press, 1990), e.g., p.296.
  57. David Donovan, Once a Warrior King (London: Corgi, 1990), p.188.
  58. Loren Baritz, Backfire (New York: Morrow, 1985), p.54.
  59. Foreign Relations of the United States, 1961-1963, Vol. II: Vietnam (Washington DC: US Government Printing Office, 1990), p.301.
  60. David M. Barrett, “The Mythology Surrounding Lyndon Johnson, his Advisers and the 1965 Decision to Escalate the Vietnam War”, Political Science Quarterly, 103 (1988), pp.637-663, 646.
  61. L.H. Gelb and R.K. Betts, The Irony of Vietnam: The System Worked (Washington DC: Brookings, 1979).
  62. Cited in Michael E. Petersen, The Combined Action Platoons: The US Marines’ Other War in Vietnam (New York: Praeger, 1989), pp.93-4.
  63. See Thomas C. Thayer, War Without Fronts: The American Experience in Vietnam (Boulder: Westview, 1985); Robert W. Komer, Bureaucracy at War: US Performance in the Vietnam Conflict (Boulder: Westview, 1986); and David H. Hackworth, About Face (London: Pan, 1991), p.614.
  64. See Krepinevich, The Army and Vietnam, pp.252-257; but see also Jeffrey J. Clarke, Advice and Support: The Final Years, 1965-1973 (Washington DC: US Army Center of Military History, 1988), pp.303-308; also, Douglas Kinnard, “The ‘Strategy’ of the War in South Vietnam”, in J.F. Veninga and H.A. Wilmer, eds., Vietnam in Remission (Salado: Texas A&M University Press, 1985), p.23.
  65. See Berman, Lyndon Johnson’s War, pp.20, 163-4; Chester Cooper, The Lost Crusade: America in Vietnam (New York: Dodd, Mead, 1970), p.427; James J. Wirtz, “Intelligence to Please?”, Political Science Quarterly, 106 (1991), pp.239-63; Cable, Unholy Grail, Ch.7.
  66. See, e.g., Hung P. Nguyen, “Communist Offensive Strategy and the Defense of South Vietnam”, in Lloyd D. Matthews and D.E. Brown, eds., Assessing the Vietnam War (Washington DC: Pergamon-Brassey’s, 1987), pp.101-121; A.J. Joes, The War for South Vietnam, 1954-1975 (New York: Praeger, 1989), p.113. For a strategy based on massive US deployment south of the demilitarised zone and interdiction of the Ho Chi Minh trail, see Palmer, The 25 Year War, pp.182-88 and Richard M. Nixon, No More Vietnams (London: W.H. Allen, 1986), pp.80-82.
  67. See Hatcher, The Suicide of an Elite; William Colby, Lost Victory (New York: Contemporary Books, 1989).
  68. Harry G. Summers, On Strategy (San Rafael: Presidio, 1982). See also N.C. Eggleston, “On Lessons”, in L.E. Grinter and P.M. Dunn, eds., The American War in Vietnam (New York: Greenwood, 1987), pp.111-123.
  69. Ambrose, Rise to Globalism, p.214.
  70. See James W. Gibson, The Perfect War: Technowar in Vietnam (Boston: Atlantic Monthly Press, 1986).
  71. See R.A. Pape, “Coercive Air Power in the Vietnam War”, International Security, 15 (1990), pp.103-46; Mark Clodfelter, The Limits of Air Power: The American Bombing of North Vietnam (New York: Free Press, 1989); Lewy, America in Vietnam, pp.412-14; and E.H. Tilford, “Air Power in Vietnam”, in Grinter and Dunn, eds., The American War in Vietnam, pp.69-83.
  72. See J. Dumbrell, “Congress and the Antiwar Movement”, in John Dumbrell, ed., Vietnam and the Antiwar Movement (Aldershot: Avebury, 1989), p.108; DeBenedetti, An American Ordeal, p.298; Lewy, America in Vietnam, p.434; Harris, The Anguish of Change, pp.71, 75; VanDeMark, Into the Quagmire, p.163; John E. Mueller, War, Presidents and Public Opinion (New York: Wiley, 1973).
  73. DeBenedetti, An American Ordeal, p.322; Harris, The Anguish of Change, p.69.
  74. Kathleen J. Turner, Lyndon Johnson’s Dual War: Vietnam and the Press (Chicago: University of Chicago Press, 1985), p.6.
  75. DeBenedetti, An American Ordeal, p.390 (also p.177).
  76. T. Gitlin, The Whole World is Watching (Berkeley: University of California Press, 1980), p.285.
  77. See Melvin Small, Johnson, Nixon and the Doves (New Brunswick: Rutgers University Press, 1988).
  78. Statistics from: Turley, The Second Indochina War, pp.193-95; Lewy, America in Vietnam, pp.445-50; Gibson, The Perfect War, p.225; W. LaFeber, The American Age (London: Norton, 1989), p.608; Robert W. Stevens, Vain Hopes, Grim Realities: The Economic Consequences of the Vietnam War (New York: New Viewpoints, 1976), p.187; Myra McPherson, Long Time Passing: Vietnam and the Haunted Generation (Garden City: Doubleday, 1984), p.230; Justin Wintle, The Vietnam Wars (London: Weidenfeld and Nicolson, 1991), p.187. See also Richard A. Melanson, Reconstructing Consensus (New York: St. Martin’s, 1991).


John White, Martin Luther King, Jr., and the Civil Rights Movement in America

BAAS Pamphlet No. 21 (First Published 1991)

ISBN: 0 946488 11 8
  1. Chronology
  2. The Civil Rights Impulse, 1932-1954
  3. Martin Luther King And Civil Rights In The South 1955-1965
    i. King and “Non-Violence”
    ii. Sit-Ins and Freedom Rides
    iii. Albany and Birmingham
    iv. St. Augustine and Selma
  4. Beyond Civil Rights, 1966-1968
    i. Civil Rights and Civil Disorders
    ii. Chicago and Black Power
    iii. Vietnam and Poor People
  5. The Man And The Movement
  6. Guide to Further Reading
  7. Notes
British Association for American Studies All rights reserved. No part of this pamphlet may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without permission in writing from the publisher, except by a reviewer who may quote brief passages in a review. The publication of a pamphlet by the British Association for American Studies does not necessarily imply the Association’s official approbation of the opinions expressed therein.

1. Chronology

1929
Martin Luther King, Jr., born in Atlanta, Georgia, 15 January.

1933
Inauguration of Franklin D. Roosevelt.
Implementation of the New Deal.

1941
A. Philip Randolph calls for a March on Washington.
Franklin D. Roosevelt issues Executive Order 8802, banning discrimination in the defence industries. Japanese attack U.S. naval base at Pearl Harbor. U.S. Congress declares war on Japan. Germany and Italy declare war on the U.S.; Congress adopts war resolutions.

1942
James Farmer, secretary of the Quaker/Pacifist Fellowship of Reconciliation, initiates the formation of the Congress of Racial Equality (CORE).

1946
Creation by Harry Truman of the Presidential Committee on Civil Rights.

1947
Truman addresses NAACP rally in Washington, D.C.

1948
Martin Luther King ordained as a Baptist minister.
King graduates with a BA in sociology from Morehouse College.
Truman issues Executive Order 9981, desegregating the armed forces.
Formation of the States’ Rights (Dixiecrat) Party, pledged to uphold racial segregation, with Strom Thurmond of South Carolina as its presidential candidate.

1953
Blacks boycott buses in Baton Rouge, Louisiana.
King marries Coretta Scott in Marion, Alabama.
Supreme Court, in Terry v. Adams, rules that segregated primary elections violate the fifteenth amendment.

1954
Supreme Court decision in Brown v. Board of Education of Topeka, Kansas, rules that racial segregation in public schools is unconstitutional. Organization of white “Citizens Council” in Mississippi to oppose desegregation.
King takes up pastorship at Dexter Avenue Baptist Church, Montgomery.

1955
Supreme Court orders “prompt and reasonable start” toward school desegregation.
King receives Ph.D. in Systematic Theology from Boston University.
Mrs Rosa Parks, a black seamstress, is arrested on 1 December in Montgomery, Alabama, when she refuses to relinquish her seat to a white passenger.
Montgomery boycott begins. King elected president of the Montgomery Improvement Association (MIA).

1956
Bombing of King’s home in Montgomery.
King arrested on speeding charge.
MIA files suit against city’s bus segregation laws. Boycott leaders indicted by grand jury. 100 senators and congressmen issue the “Southern Manifesto” pledging to use “all lawful means” to overturn the Brown decision.
King convicted of leading an illegal boycott.
Federal district court rules in Browder v. Gayle that Alabama’s bus segregation laws are unconstitutional; decision upheld by Supreme Court in November.
End of Montgomery boycott, 20 December.

1957
Founding of the Southern Christian Leadership Conference (SCLC) with King as president.
Eisenhower uses paratroops during Little Rock school desegregation crisis.
1957 Civil Rights Act, guaranteeing blacks the right to vote.

1958
King arrested in Montgomery.
Publication of King’s account of the Montgomery bus boycott, Stride Toward Freedom.
King survives a stabbing in New York City.

1959
King visits India.

1960
Beginning of the “sit-in” movement, Greensboro, N.C.
Founding of the Student Nonviolent Coordinating Committee (SNCC) at Shaw University, Raleigh, North Carolina, addressed by King.
Civil Rights Act of 1960 provides for court enforcement of voting rights.
King arrested and jailed after a sit-in at an Atlanta store; freed after intervention by the Kennedy brothers.
Election of John F. Kennedy.

1961
CORE-sponsored “Freedom Rides” met with white violence in the South.
Arrest of King in Albany, Georgia.

1962
Federal troops sent to University of Mississippi after riots over admission of James Meredith.
FBI begins “Communist infiltration” surveillance of SCLC.

1963
SCLC demonstrations in Birmingham, Alabama.
King writes “Letter from Birmingham Jail”.
The March on Washington and King’s “I Have a Dream” oration.
Voter registration drives in the South.
Assassination of John F. Kennedy in Dallas, Texas.
Bomb kills four black girls attending Sunday school in Birmingham, Alabama.

1964
Three civil rights workers, James Chaney (black), Andrew Goodman and Michael Schwerner (white), abducted and murdered in Philadelphia, Mississippi.
SCLC campaign in St. Augustine, Florida.
Passage of 1964 Civil Rights Act.
King receives the Nobel Peace Prize in Norway.
J. Edgar Hoover publicly condemns King.
Riots in Harlem, Jersey City and Philadelphia.

1965
Assassination of Malcolm X.
SCLC voting rights campaign in Selma, Alabama.
“Bloody Sunday” on the Edmund Pettus bridge, 7 March.
Lyndon Johnson addresses the nation and declares “We Shall Overcome”.
Selma-to-Montgomery march.
Passage of 1965 Voting Rights Act.
Racial riot in Watts, Los Angeles.

1966
SNCC condemns U.S. involvement in Vietnam.
SCLC and King in Chicago.
Shooting of James Meredith on his “March Against Fear” from Memphis, Tennessee to Jackson, Mississippi.
Emergence of the “Black Power” slogan during the SCLC, SNCC and CORE continuation of the Meredith march.
Major “civil disorders” in Cleveland and Chicago.

1967
King opposes the Vietnam war and joins the peace demonstrations.
Further “civil disorders” in Newark and Detroit.
King’s announcement of the projected “Poor People’s Campaign”.

1968
Lyndon Johnson declares that he will not seek re-election.
King leads sanitation workers’ march in Memphis, Tennessee. King returns to Memphis and is assassinated by James Earl Ray on the balcony of the Lorraine Motel, 4 April.
Ralph D. Abernathy succeeds him as SCLC president.
Riots across America, resulting in thirty-nine deaths and 14,000 arrests.
Civil Rights Act of 1968, prohibiting discrimination in housing. National Advisory Commission on Civil Disorders reports “white racism” as the cause of urban riots.
“Resurrection City” erected and dismantled in Washington, D.C. End of Poor People’s Campaign.

1977
Presidential Medal of Freedom awarded posthumously to Dr. King.

1986
Martin Luther King’s birthday observed as an American national holiday.

2. The Civil Rights Impulse, 1932-1954

I
The civil rights movement—the concerted effort to gain greater social, political and economic equality for black Americans—was one of the greatest reform impulses of the twentieth century.[1] Among its seminal victories were the Supreme Court decision of 1954, which declared segregated public schools unconstitutional, the Montgomery, Alabama bus boycott of 1955-56, subsequent demonstrations throughout the South against persisting racial discrimination, and the passage of the Civil Rights Act of 1964 and the Voting Rights Act of 1965. The civil rights coalition included the National Association for the Advancement of Colored People (NAACP), the National Urban League (NUL), the Congress of Racial Equality (CORE), the Southern Christian Leadership Conference (SCLC) and the Student Nonviolent Coordinating Committee (SNCC). Operating at the local and national levels, these organizations (and their affiliates) employed various strategies—litigation and lobbying, the registration and mobilization of black voters, and various forms of direct action—aimed at effecting social change. During the 1960s, as civil rights strategy shifted from the legalism and pressure group tactics of the NAACP and the NUL to the direct action campaign of CORE, SCLC and SNCC, serious rivalries developed among what were, in effect, competing elements of an uneasy alliance of disparate groups, united only in their concern for racial advancement. The movement had its greatest impact in the South, where under the inspirational and unifying leadership of Dr. Martin Luther King, Jr., African-Americans (and their white allies) employed the tactics of “non-violent” confrontation against the enforcers of racial segregation and white supremacy.
Initially concerned to achieve civil (and constitutionally sanctioned) rights for Negroes that were regarded as legitimate by many white Americans, some elements of the movement came to demand less readily attainable (and more resolutely resisted) measures involving housing, welfare and employment policies. Black militants also began to formulate incisive—and unpopular—critiques of American capitalism and militarism.

The career of Martin Luther King, Jr. exemplifies this shift from reformism to radicalism in the dynamic of a crusade which directly and indirectly inspired other notable protest movements in the 1960s and 1970s: the campaigns for Indian, Chicano, Gay and Women’s rights; the anti-poverty and anti-war movements. The civil rights movement, often dated from the Montgomery, Alabama bus boycott of 1955-56, was, in fact, a consequence of the New Deal administrations of Franklin D. Roosevelt, and events during and immediately after World War II.

New Deal legislation and the proliferation of relief and welfare agencies—improvised but innovative federal responses to the crisis of the Great Depression—brought tangible benefits to blacks in the North and the South, secured their future loyalties to the Democratic party, and prompted them to demand such additional measures as improved recreational and educational facilities, federal legislation to outlaw lynching, the elimination of discrimination in the civil service and armed forces, and the unrestricted use of the suffrage. On the eve of American involvement in World War II, black protest organizations united in demanding full and equal participation in the military, and an end to discriminatory practices in the defence industries. Early in 1941, the black socialist leader and labour organizer, A. Philip Randolph, called for an all-black march to Washington, D.C., to exert mass pressure on the administration for an end to all forms of racial discrimination. Roosevelt, faced by this prospect, issued Executive Order 8802 in June 1941, which stipulated only that there should be no discrimination in the defence industries “because of race, creed, or national origin”. But the March on Washington Movement, an index of black militancy during the war years, anticipated later forms of black protest, as did the Detroit race riot of 1943, a portent of the urban “civil disorders” of the 1960s. Again, CORE, an interracial pacifist organization, founded in 1942, pioneered later forms of non-violent direct action when in 1943 it staged a “sit-in” at a Chicago restaurant which had refused to serve blacks. In 1947, CORE sponsored a “Journey of Reconciliation”—a forerunner of the 1961 Freedom Rides—through the Upper South, to test compliance with the Supreme Court’s 1946 ruling in Morgan v. Virginia, prohibiting segregated interstate bus transportation.

Within the South itself there were also signs of black assertiveness during the war years. A report by the Bureau of Agricultural Economics observed that in one Mississippi county, there was “a feeling of discontent and a growing consciousness of exclusion from social, economic, and political participation.” Black preachers were taking the lead and their churches had “become the means here and there for encouraging Negroes to resist the controls which the landlord has held over them. Ideas about ‘rights’ are being introduced in a few instances to Negro sharecroppers through Negro preachers and their educated white and Negro friends.”[2] Demographic shifts of the New Deal and World War II years underwrote and intensified African-American demands for change. The migration of blacks out of the rural South to the urbanized-industrialized Northern states increased their economic opportunities, and strengthened the more progressive wing of the Democratic party in sympathy with Negro aspirations. During the war, the black press in America adopted the “Double V” slogan—calling for victory abroad over the Axis powers, and victory at home over racial discrimination.[3] Most importantly, after 1945, demobilized black veterans (over 1 million served in the segregated armed forces between 1941 and 1945) were not prepared to return to the racial status quo, and took full advantage of the educational and welfare provisions of the Servicemen’s Readjustment Act of 1944—the “G.I. Bill of Rights”. But in the South, schools and most educational facilities were rigidly segregated. As Constance Baker Motley, an attorney for the NAACP Legal Defense and Educational Fund, recalled:

The issue of segregation loomed large during the war and the war effort. Here we were as a nation involved in a war to make the world safe for democracy, and one of the embarrassing features was that blacks were segregated in our armed forces, and they resented it . . . After World War II, as a result of the activity of black servicemen, really, the whole attitude in the country about the race relations problem changed. The NAACP’s strategy for attacking segregation through the Legal Defense Fund was revitalized and extended after World War II. . . Eventually it led to the Supreme Court decision in the Brown case in 1954.[4]

II
World War II created a climate in which blacks (and some whites) perceived possibilities for decisive changes in the pattern of American race relations that had remained virtually frozen since the late nineteenth century. From the 1890s until the 1940s, southern blacks were, for all practical purposes, powerless and frighteningly vulnerable to white aggression. Excluded from decision-making positions in all institutions and agencies which served the white community, they were segregated in all public facilities, deprived of the franchise by intimidation and fraud, and became the victims of lynch mobs. (Between 1882 and 1953, at least 3,275 blacks were lynched in the South). Whatever influence black spokesmen and women exercised was channeled through white intermediaries, who offered only minimal concessions to requests for better facilities and services. After 1945, black aspirations—which had risen higher and faster than actual black advances—began to embrace goals which ran directly counter to accepted southern white mores and customs.

Increasingly, southern blacks (and their northern allies) protested against segregation and the grosser forms of racial etiquette which relegated all blacks (regardless of age, sex or occupation) to a permanently subordinate position within a rigidly defined caste system. Returning veterans supported voting campaigns in the South, and between 1940 and 1947 the number of blacks registered to vote increased from 2% to 12% (although black voting was primarily an urban phenomenon). Outside the South, the black vote had become an important factor in national elections, and in 1940, the Democratic party platform contained a plank which addressed the issues of due process and equal protection under the law for Negroes. Postwar executive leadership was provided by Roosevelt’s successor, Harry Truman, who, out of expediency rather than personal conviction, supported measures to abolish the poll tax, the passage of anti-lynching legislation and the creation of a Committee on Civil Rights. In its report, the Committee denounced all forms of segregation and discrimination, and proposed laws to protect the rights of qualified voters and the creation of a Fair Employment Practices Commission. Truman—the first president to address the NAACP—endorsed these proposals but in the 1948 Presidential Election tried to appease southern white Democrats on racial issues. He also campaigned in Harlem, and won his (surprise) election with decisive black support in California, Illinois and Ohio.[5]

Truman’s commitment to civil rights was actually very thin. Despite his issuance of two executive orders in 1948—calling for an end to segregation in the military, and creating a fair employment board to eliminate segregation in the civil service—little was achieved in the short term. Despite southern influence in Congress (due to the seniority system, southern congressmen were regularly returned by their white constituents and chaired the most important committees), more could have been done under Truman to bring about the rapid desegregation of the armed forces and, through the use of Justice Department attorneys, to prosecute violators of black civil rights in the South.

But, during the same period, the United States Supreme Court was beginning to take steps on behalf of blacks. In 1938, the Court had made an initial move against the doctrine of “separate but equal” (enshrined in its 1896 Plessy v. Ferguson decision), when it ruled that, in failing to provide a law school for blacks, the state of Missouri was in violation of the fourteenth amendment to the Constitution; in a 1950 decision, the Court ordered the state of Texas to admit blacks to the university law school. NAACP attorneys, together with sympathetic (black and white) historians and sociologists, pushed the Court increasingly on the “separate but equal” principle. In 1954, Earl Warren, appointed Chief Justice by President Eisenhower, handed down the historic Brown decision, which held that separate educational facilities for whites and blacks were “inherently unequal” and, in the following year, ordered school desegregation to proceed with “all deliberate speed”. These rulings provoked determined and well-organized opposition in the South, where Virginia took the lead in devising a programme of “massive resistance” to school desegregation. Southern conservatives opposed to racial change correctly saw the significance of Brown, a decision more sweeping in its implications than earlier Court rulings on desegregation in higher education. As the southern white historian C. Vann Woodward—who assisted the NAACP attorneys in the case—comments, Brown “appeared to remove the constitutional underpinnings of the whole segregation system and strike at the foundations of Jim Crow law. It was the most momentous and far-reaching decision of the century in civil rights.”[6]

Unfortunately, President Dwight D. Eisenhower was himself largely indifferent, if not actively hostile, to civil rights for blacks. In 1956, he campaigned in the South in an attempt to win segregationist votes in his (successful) bid for re-election. Ironically, given his lack of enthusiasm for the Brown decision, Eisenhower was compelled to use executive power to implement the ruling when, in September 1957, Governor Orval Faubus of Arkansas carried defiance of the Court to the point of using state militia to halt token integration at Little Rock Central High School. Faubus withdrew the troops on court order, but when hysterical white mobs forced the removal of nine black children, Eisenhower ordered in federal troops to enforce the law. Yet, despite the use of force, Little Rock’s high schools were closed in 1958-59, and blacks were not admitted until August 1959. During the last three years of Eisenhower’s administration, the number of southern school districts desegregating even in token ways fell sharply, as state governors employed a variety of means (providing state funds to enable any white student “threatened” with integration to attend a private school, allowing any school district to close its doors if integration occurred against community wishes) to circumvent and effectively nullify the Brown rulings.[7]

III
In contrast, blacks welcomed Brown and were encouraged to press not only for its full implementation but for other civil rights demands. Bayard Rustin, the first field secretary of CORE, and later adviser to Martin Luther King, Jr., believed that:

when the Supreme Court came out with the Brown decision in ’54, things began rapidly to move . . . What made ’54 so unusual was that the Supreme Court . . . decision established black people as being citizens with the rights of all other citizens. Once that happened, then it was very easy for the militancy, which had been building up, to express itself in the Montgomery bus boycott of ’55-’56.[8]

During the early 1950s, public opinion polls revealed that blacks were generally optimistic that their condition would improve markedly within a short time. From 1947 to 1954, according to a United States Census study, the median income of black families more than doubled, while increasing numbers of blacks attended college. Yet those gains were largely confined to the black middle class, who were also the main supporters of such established and “traditional” civil rights organizations as the NUL and the NAACP. (Some critics remarked that the latter acronym stood for the “National Association for the Advancement of Certain People”.) Within the South, however, there were signs that a more militant black leadership was emerging that was also beginning to attract mass support.

Sociologists, historians and political scientists have offered differing interpretations of the underlying impulse behind the civil rights movement which accelerated (and became increasingly visible) after 1954. Richard King, for example, accepts the conventional explanations for the emergence of the civil rights movement after Brown: continuing out-migration of southern blacks, increasing prosperity and growing urbanization within the South itself, the greater involvement of black churches and colleges in civil rights issues, the militancy of a younger generation of student activists, the continuing pressures of the NAACP and CORE. However, he has argued persuasively that “what was unique about the civil rights movement was not just that it sought to destroy segregation and disfranchisement in the South. Rather, the uniqueness of the movement lay in its attempt to establish a new sense of individual and collective self among Southern black people through political mobilization and participation.” Psychological “freedom”—from fear and oppression, and release from the “invisibility” forced on African-Americans by their separate but decidedly unequal status in the eyes of most whites—was “the animating impulse behind and within the actions of the movement.”[9] Other commentators have suggested that because of its ability to mobilize external resources—those of philanthropic foundations, organized labour, existing black organizations (particularly churches and colleges), political elites, the courts and, ultimately, the federal government—a civil rights coalition, employing a variety of strategies, emerged during the mid-1950s. Another view, premised on classical theories of collective behaviour, asserts that the civil rights movement was the consequence of strains and tensions within the existing socio-political system, which produced relatively spontaneous and disorganized action in response to particular situations.[10]

These explanations are not mutually exclusive, and each needs to be applied to particular situations. Other scholars have demonstrated that at the local level, black movements developed and operated independently of the national civil rights organizations, producing their own self-reliant, indigenous leaders who pursued goals and formulated strategies often at variance with those of the nationally known black leadership.[11] Again, the inception of the Montgomery bus boycott of 1955-56, which brought Martin Luther King to international attention, has frequently been credited entirely to his initiative. In fact, Mrs. Jo Ann Robinson, the assertive and active head of the Women’s Political Council in Montgomery, together with E. D. Nixon, president of the local chapter of the Brotherhood of Sleeping Car Porters, were the prime movers in calling for a one-day boycott of the city’s bus lines following the arrest of Mrs. Rosa Parks (herself a civil rights activist) on 1 December, 1955, when she refused the driver’s order to vacate her seat to a white man. A group of local ministers then formed the Montgomery Improvement Association (MIA) to direct and coordinate what became a 382-day boycott of the City Lines bus company, owned by the Chicago-based National City Lines. Robinson and Nixon shrewdly recognized that Montgomery’s blacks could be more effectively organized for mass protest through the indigenous black church (which bridged social classes and political factions, and provided meeting places and fund-raising facilities) than through a purely secular movement. But, in effect, Montgomery’s black clergy were presented with a fait accompli—a mimeographed leaflet calling for a boycott of the bus company had been distributed among the black community—and as Mrs. Robinson later reflected after her circular was made public:

It was then that the ministers decided it was time for them, the leaders, to catch up with the masses . . . Had they not done so, they might have alienated themselves from their congregations.[12]

After some discussion, Martin Luther King, Jr., a twenty-six-year-old minister of the black middle-class Dexter Avenue Baptist Church, who had arrived in Montgomery only a year before, was unanimously elected to preside over the MIA. A month before this momentous nomination, King (who was anxious to complete his doctoral dissertation at Boston University) had refused the presidency of the city chapter of the NAACP, had not engaged in any civil rights activity, and had not even met Mrs. Parks. Yet on several counts, as E. D. Nixon recognized, the young minister was an ideal choice: as a relative newcomer, he was not involved in the factionalism of local black politics, and had therefore not been compromised by any dealings with the white community. Again, he was seen to possess personal and educational qualities essential in a leader who would have to conduct negotiations with the white establishment. In other respects, however, King was an unknown quantity, and was certainly surprised to be chosen as the MIA’s chief officer. As he later wrote, the election “caught me unawares. It happened so quickly I did not even have time to think it through. It is probable that if I had, I would have declined the nomination”.[13]

King’s inspirational direction of the Montgomery bus boycott, and the events which followed from it, made him the most famous, and increasingly controversial, black leader of the civil rights movement in America.

3. Martin Luther King and Civil Rights in the South, 1955-1965

King and “Non-Violence”
Throughout the Montgomery bus boycott, King, as leader of the MIA—whose initial requests, reflecting those of the boycott two years earlier by blacks in Baton Rouge, were for the improvement rather than the abolition of segregated seating arrangements on the city’s buses—stressed that the protest, whatever the provocations, must be peaceful. Many commentators have observed that King fashioned his concept of non-violent resistance to unjust laws from his undergraduate reading of Henry David Thoreau’s classic essay Civil Disobedience and his later awareness of Mohandas K. Gandhi’s campaigns against British rule in India. These ideas and examples, it is suggested, were grafted on to King’s fundamental belief (partly based on the writings of the Social Gospel theologian Walter Rauschenbusch and the teachings of Benjamin E. Mays at Morehouse College in Atlanta) that the church should concern itself with social conditions, as well as with the salvation of souls. Other authorities argue that King’s philosophy of non-violent resistance derived essentially from his African-American Baptist heritage and his belief that, through collective and redemptive suffering, blacks would demonstrate the morality of their cause and convert their oppressors.[14] Although non-violence was to prove effective as a strategy in limiting white violence against civil rights demonstrators, King’s insistence on the need to love one’s oppressor was misunderstood—or rejected—by many blacks. In the event, Montgomery’s buses were desegregated by the Supreme Court’s affirmation of a decision by the United States District Court that Alabama’s local and state laws requiring segregation on buses were unconstitutional.
King viewed the decision as a victory for the strategy of non-violent protest, and in 1957, along with other black clergymen and with the advice of Ella Baker, Bayard Rustin and Stanley Levison (a white New York attorney), formed the Southern Christian Leadership Conference (SCLC) to spread and coordinate nonviolent civil rights protest across the South.[15] Rooted firmly in the black church, the SCLC was the institutional embodiment of King’s belief in non-violent protest, while the organization itself deliberately capitalized on his growing fame and prestige.

Sit-Ins and Freedom Rides
The SCLC’s original aim of spreading the Montgomery example by supporting similar boycotts in other cities met with little success, while in Montgomery itself, the MIA was largely ineffective in challenging other forms of discrimination. It was the “sit-in” movement, pioneered at Greensboro, North Carolina in February 1960, when four black students sat down at a Woolworth’s lunch counter and demanded service, that inaugurated a new (and more aggressive) phase of civil rights struggle. The Greensboro strategy quickly spread, and there were “wade-ins” at municipal swimming pools and segregated beaches, “pray-ins” at segregated churches and “stand-ins” at theatres which refused admission to blacks. Many of the young activists had been inspired by the example of Montgomery’s blacks in sustaining their 382-day boycott, and had read King’s account of the protest, Stride Toward Freedom. The SCLC itself was unprepared for the sit-in protests, but their attendant publicity convinced King that other forms of direct action could be used to defeat segregation in the South. The founding of the Student Nonviolent Coordinating Committee (SNCC), formed by leaders of the student protest after consultation with King and SCLC leaders in April 1960, added another element to the movement and, for a time, appeared to indicate that King’s pacifist approach was endorsed by the younger generation of activists. The SCLC was again taken by surprise when in 1961 CORE sponsored and directed a series of “Freedom Rides” into the South—to test compliance with the Supreme Court’s decision in Boynton v. Virginia, which extended its earlier ruling against segregation on interstate transportation to cover terminal accommodations and facilities. The Freedom Riders met with white violence in Anniston, Birmingham and Montgomery, Alabama, and, in the last instance, prompted a reluctant Kennedy administration to mobilize 600 U.S. marshals to protect the demonstrators. James Colaiaco observes that:

The Freedom Rides supplied an important strategic lesson for King and the SCLC: in order to arouse public sympathy sufficient to pressure the federal government to enforce civil rights in the states and localities, white racists had to be provoked to use violence against non-violent protestors.[16]

The Freedom Rides also exacerbated growing tensions between SCLC and SNCC concerning the viability and efficacy of non-violent resistance in the face of white aggression.

Albany and Birmingham
During 1961-62, SCLC (already engaged in an ineffective voter registration drive under the leadership of its temporary executive director, Ella Baker) was invited to Albany, Georgia by Dr. William G. Anderson, the leader of the “Albany Movement”, to assist a local protest campaign aimed at ending segregation in all the city’s public facilities and securing fair employment opportunities for blacks. SNCC field workers, already active in Albany, did not welcome the SCLC presence and were openly critical of King’s “charismatic” leadership style. Uninformed about the situation in Albany, King and the SCLC were shrewdly outmanoeuvred by police chief Laurie Pritchett, who exercised restraint in dealing with the protestors and, on several occasions, arranged for King’s release from jail, thus depriving the protest of vital publicity, and precluding any prospect of federal intervention. David Lewis notes that: “In Laurie Pritchett, King met a travestied image of himself—a nonviolent segregationist law officer.” As Andrew Young, who had recently joined the SCLC staff, later conceded:

The weakness of the Albany Movement was that it was totally unplanned and [SCLC] were totally unprepared. It was a miscalculation on the part of a number of people that a spontaneous appearance by Martin Luther King could bring change . . . It was the planning, the organizing, the strategy that he brought with him that brought change.[17]

The lessons of Albany were well learned, and carefully implemented during the SCLC’s 1962-63 campaign in Birmingham, Alabama, the most rigidly segregated city in the South (and its largest industrial centre). When the Reverend Fred Shuttlesworth, leader of the Alabama Christian Movement for Human Rights, invited King to the city, the SCLC leadership, now under the able direction of Wyatt T. Walker, planned to create a crisis that would compel the city fathers to negotiate four basic demands: desegregation of lunch counters and store facilities, amnesty for protestors already jailed, the hiring of blacks in local government and businesses, and the formation of a biracial committee to devise a timetable for the complete desegregation of remaining segregated facilities. Boycotts and sit-ins of downtown stores were to be combined with disruptive marches, and Walker collected the names and addresses of over 300 Birmingham residents prepared to go to jail. A. G. Gaston, a local black entrepreneur, provided the campaign with rent-free accommodation at his hotel, nightly mass meetings were held in the city’s black churches, and outside support was organized by the actor Harry Belafonte. What the Birmingham campaign needed, however, was publicity and media attention. Eugene “Bull” Connor, the commissioner of public safety (who had earlier closed the city’s parks to blacks in order to prevent their integration), obliged with arrests of hundreds of demonstrators (including King) and vicious attacks with police dogs and fire hoses on the black schoolchildren recruited by SCLC’s James Bevel. American public opinion was outraged by the events in Birmingham depicted on television and, with the arrival of Burke Marshall, head of the Civil Rights Division of the Justice Department, a compromise settlement (opposed by Shuttlesworth but endorsed by King) was reached.
The agreement fell short of the original SCLC demands; but there is strong evidence to suggest that Birmingham businessmen, fearful of the disruptive effects of continuing demonstrations, persuaded city leaders to agree to the gradual desegregation of facilities in shops and stores. Again, the SCLC’s campaign persuaded the Kennedy administration of the need for civil rights legislation. Lyndon Johnson, Kennedy’s successor, ultimately secured passage of the landmark Civil Rights Act of 1964, provisions of which gave the executive powers to withdraw federal funds from state and local governments practicing discrimination.

It was during the Birmingham demonstrations that King, in his famous “Letter from Birmingham Jail”, castigated eight of the city’s white clergymen who had published a statement attacking the SCLC’s campaign as unnecessary and ill-timed. A classic explication of civil rights and nonviolence, King’s “Letter” reiterated his belief in resistance to unjust laws, denied that non-violence was synonymous with extremism, and warned that black disaffection had already produced such separatist groups as the Black Muslims [the Nation of Islam], composed “of people who have lost faith in America, who have absolutely repudiated Christianity, and who have concluded that the white man is an incurable ‘devil’”. In contrast, King presented himself as a responsible moderate, standing between “these two forces saying that we need not follow the ‘do-nothingism’ of the complacent or the hatred and despair of the black nationalist.”[18] The Birmingham campaign increased King’s stature in the eyes of both whites and blacks. A Newsweek opinion poll revealed that 95% of blacks now regarded King as their most successful spokesman. King’s presence and his oratorical skills were both dramatically demonstrated during the March on Washington of August 1963, the climax (to its critics, the nadir) of the civil rights movement, when a quarter of a million people converged on the capital to pressure Congress to pass the Civil Rights bill. (Originally conceived by A. Philip Randolph as a two-day mass protest to dramatize black unemployment, the projected “March” became a one-day rally which passed off without incident.) On the steps of the Lincoln Memorial, King delivered his “I Have a Dream” oration—one of the great speeches of the twentieth century. Although he had made use of the “dream” imagery in a speech in Detroit two months earlier, the cadences of King’s delivery and the repetition of its central image (rather like that of a skilled jazz improviser) visibly moved his audience. William H. Johnson, a World War II veteran and New York City policeman, who acted as a security guard during the March, remembered:

I was enthralled by Dr. King’s speech . . . It made me reflect upon my army service. It made me angry about what I had suffered overseas at the hands of my white compatriots . . . But Dr. King brought to life the hope that . . . one day we could smooth out our differences.[19]

Others disagreed: Julius Lester, a field secretary of SNCC, later scathingly observed that the March on Washington and King’s keynote address were:

a great inspiration to those who think something is being accomplished by having black bodies next to white ones. The March was nothing but a giant therapy session that allowed Dr. King to orate about his dreams of a nigger eating at the same table with some Georgia cracker, while most black folks just dreamed about eating.[20]

In 1964 King appeared on the cover of Time magazine, and in the accompanying article was credited with “an indescribable empathy that is the touchstone of leadership”—an opinion which the Luce publication, in the light of King’s later critique of American involvement in Vietnam, would severely modify. In the same year, King was awarded the Nobel Peace Prize.

Significantly, in his Nobel acceptance speech King linked the American civil rights movement with the larger causes of world peace and human rights. Ironically, he was already under investigation by the FBI following J. Edgar Hoover’s directive to keep the SCLC under close surveillance because of its alleged infiltration by Communists (and because of King’s extra-marital activities). Attorney General Robert Kennedy, convinced that Stanley Levison, King’s associate and advisor, was an active member of the American Communist party, authorized FBI wiretaps on King.

St. Augustine and Selma
King’s “dream” of racial harmony received a rude awakening with the defeat of the proposed civil rights bill by a southern filibuster, and with the bombing of the Sixteenth Street Baptist Church in Birmingham on 15 September, 1963, which killed four young black girls. But following the assassination in Dallas of President Kennedy on 22 November, his successor, Lyndon Johnson, asked Congress in January 1964 to pass the stalled civil rights bill as a memorial to the late President. The civil rights coalition resolved to put added pressure on Congress to adopt the measure by further demonstrations in the South. The SCLC selected as its target St. Augustine, Florida, the oldest community in America and already in the news because of its approaching 400th anniversary celebrations. As in Birmingham, SCLC’s strategy was to apply economic pressure that would force the business community and local authorities to desegregate the city’s public accommodations and institute fair employment policies, including the hiring of blacks by the police and fire departments. But the collusion of local law enforcement officers (some of whom were members of the Ku Klux Klan) with white mobs who attacked King and the Reverend Andrew Young as they led the protests, and the refusal of the Florida state governor to obey a federal court injunction protecting the right of peaceful protest in St. Augustine, effectively weakened the SCLC campaign. Again, despite his announced sympathies for civil rights goals, Lyndon Johnson feared that federal intervention in St. Augustine would hurt the Democratic party in the forthcoming national election at a time when the ultra-conservative Republican candidate, Barry Goldwater, was gaining the support of Democrats in the Deep South.

Although the SCLC’s St. Augustine campaign did not achieve its immediate goals, it kept black protest at the forefront of national (and international) attention (Izvestia, the Soviet newspaper, featured a photograph of the violence in St. Augustine) and further dramatized the need for passage of civil rights legislation. King himself witnessed the signing of the historic Civil Rights Act by President Johnson in the White House on 2 July, 1964. Covering all areas of life and concerned with de facto as well as de jure segregation, the 1964 act—the public accommodations sections of which were confirmed as constitutional by a Supreme Court decision of 14 December—was regarded by the SCLC as a victory for direct action protest. It quickly became evident, however, that such devices as the poll tax, literacy tests and actual intimidation were making it difficult (and dangerous) for blacks attempting to register to vote in the South. Early in 1963, the various elements in the civil rights coalition joined with the National Council of Churches to form the Council of Federated Organizations (COFO) to help blacks register in Mississippi. The murders of three young civil rights workers in Philadelphia, Mississippi, served notice that many southern whites remained implacably opposed to political equality for African-Americans. They also increased tensions between SCLC and the more radical elements—SNCC and CORE—involved in voter registration campaigns, tensions that were to become increasingly evident during SCLC’s campaign in Selma, Alabama in 1965.

Already ridiculed by SNCC activists for what was regarded as his excessive religiosity, his insistence on non-violent protest and his tendency to compromise, King was also regarded as remote from the local, grassroots protests that were escalating across the South. Again, the SCLC was alleged to provoke confrontations between civil rights protestors and southern authorities, gain valuable publicity in the media, and then leave the scene with local demands unresolved.

In Selma, King and the SCLC hoped to provoke local law enforcement officials—notably the brutal and avowedly racist sheriff Jim Clark—into attacking and arresting demonstrators. After armed possemen and Alabama state troopers charged marchers as they tried to cross the Edmund Pettus Bridge on “Bloody Sunday”, King issued an appeal for help, and hundreds of clergymen and lay persons from all over the country joined the protest. When King complied with a federal court injunction against a proposed march from Selma to Montgomery, SNCC field workers were appalled. When the injunction was lifted, and after President Johnson federalized the Alabama National Guard, King led the historic march to Montgomery, where on 25 March he spoke to 25,000 people from the steps of the state capitol. The violent events of the Selma campaign—including the fatal beating by whites of a Unitarian minister, James Reeb, a participant in the march, and the murder by four Klansmen of Mrs. Viola Liuzzo, a Detroit housewife and mother who had driven some marchers back to Selma—caused national outrage, and prompted President Johnson to address a joint session of Congress and announce his intention of sending it a voting rights bill. His televised assertion that “it is not just Negroes, but really all of us who must overcome the crippling legacy of bigotry and injustice. And we shall overcome!” thus appeared to vindicate and endorse the strategy and objectives of militant non-violent direct action in Selma. With the passage of the Voting Rights Act of 1965, which eliminated literacy and other “tests”, southern blacks—95 years after the ratification of the 15th Amendment to the Constitution—were finally guaranteed the fundamental right to register and vote. The Selma campaign, in retrospect, was King’s finest hour, and as one of his biographers observes:

The march from Selma had brought the Negro protest movement full circle, since it had all begun with the Montgomery bus boycott a decade before.[21]

The legality, morality and justice of the movement’s struggle for civil rights and liberties, personified by the example of King’s leadership, appeared to many Americans as incontrovertible. Moreover, the unremitting hostility of southern whites to even minimal changes in race relations had brought the powerful support of the Lyndon Johnson administration to the cause of civil rights for African-Americans. But once their constitutional rights had been largely secured, blacks came to demand more fundamental changes which challenged both American domestic and foreign policies, and threatened the continued existence of the civil rights coalition itself. By 1965, some members of the SCLC appeared uncertain of what direction they should now take. James Bevel summed up this feeling with his observation that: “There is no more civil rights movement. President Johnson signed it out of existence when he signed the voting-rights bill.”[22] Public opinion polls indicated that increasing numbers of whites believed that blacks ought to be satisfied with their recent gains. Aware of these trends, King commented that: “The paths of Negro-white unity that have been converging crossed at Selma, and like a giant X began to diverge.”[23]

4. Beyond Civil Rights, 1966-1968

Civil Rights and Civil Disorders
By 1966, what Milton Viorst has called the “reformist phase” of the civil rights movement was over.[24] The formal practices of segregation had been ended by the Civil Rights Act, and the Voting Rights Act appeared to guarantee that blacks could no longer be denied the ballot in the South. But King was to complain that the Voting Rights Act did not receive adequate federal support and enforcement, while the SNCC became committed to independent black political organizations like the Mississippi Freedom Democrats. During the Selma protest, following the murder of a black youth, Jimmie Lee Jackson, in Marion, Alabama, King made a cutting reference to the “timidity of the federal government that is willing to spend millions of dollars a day to defend freedom in Vietnam but cannot protect the rights of its citizens at home.”[25] SNCC and CORE were also to denounce American policy in Vietnam; the older members of the coalition—the NAACP and NUL—regarded the war as irrelevant to the movement and (correctly) argued that opposition to it would alienate President Johnson’s support for civil rights. At the same time, King was also expressing concern about the poverty, despair and anger of urban blacks, who cited federal indifference, exploitation by white merchants and police brutality as major grievances.

In August 1965, following an incident in which a policeman attacked a bystander on the edge of a crowd which had gathered to protest a traffic violation arrest, the black ghetto of Watts, a district of Los Angeles, erupted in a racial riot. Thirty-four people were killed, 900 injured, and damage to property was estimated at over $40 million.

(When King visited the scene of the rioting, he discovered that many of Watts’ residents had never heard of him and regarded his attempts at mediation with hostility). The following year, forty-three more urban riots occurred, with the most serious disturbances in Cleveland and Chicago. In 1967, there were eight major riots, the worst of which were in Detroit, Michigan, and Newark, New Jersey. President Johnson’s National Advisory Commission on Civil Disorders, chaired by Otto Kerner, a former governor of Illinois, reported that the riots were not the result of organized conspiracy but of black unemployment, inadequate housing and educational facilities and oppressive police tactics. It recommended massive federal spending to improve the conditions and quality of life for ghetto residents, and asserted somberly:

What white Americans have never fully understood—but what the Negro can never forget—is that white society is deeply implicated in the ghetto. White institutions created it, white institutions maintain it, and white society condones it. Our nation is moving toward two societies, one black, one white—separate and unequal.[26]

For the three remaining years of his life, King concerned himself with the issues of persistent racism, “civil disorders”, urban decay, poverty and American militarism, all of which, he believed, were intimately related.

Chicago and Black Power
In 1966, despite the disapproval of many of his advisers, King decided to take the SCLC into Chicago to stage a nonviolent campaign against segregated slum housing, de facto school segregation, black unemployment and job discrimination. (Three years earlier, the Coordinating Council of Community Organizations, a coalition of civic, religious and civil rights groups, had challenged persisting segregation in Chicago’s public schools.) But, on all counts, the SCLC was unprepared for the formidable problems it would face in Chicago: inadequately briefed, its workers did not even possess appropriate clothing for the severe Chicago winter. Richard J. Daley, the city’s autocratic mayor, was a consummate politician, and a major figure in the national Democratic party. As the Reverend Arthur Brazier, leader of the South Side Woodlawn Association, observed:

King decided to come to Chicago because he thought Chicago was unique in that there was one man, one source of power, who you had to deal with. He knew this wasn’t the case in New York or in any other city. He thought if Daley could be persuaded on the rightness of open housing and integrated schools that things would be done.[27]

King’s perception of Daley’s power was correct; his faith that Daley was amenable to reason was misplaced. Although he treated King with every outward sign of respect and cordiality, Daley was not prepared to see his city disrupted by SCLC demonstrations, with all their attendant publicity. Andrew Young later reflected that initially SCLC did not regard Daley as an enemy, but quickly began to realize that his interests were fundamentally opposed to those of the Chicago Movement:

Mayor Daley was trying to keep together a political machine. We were trying to get more registered voters. He saw too many registered [black] voters as being more than he could control. He saw the movement as a direct threat to his machine. We saw the machine as the basis of the slums, of the poverty, of the exploitation of black folk.[28]

When King, in an attempt to dramatize Chicago’s housing crisis, moved into a dilapidated and rat-infested apartment block, Daley sent in building inspectors who handed out slum violation notices to landlords, regardless of their political affiliations. When King led a march into Chicago’s blue-collar suburbs and encountered violent opposition from white residents, Daley charged SCLC with encouraging rioting. If King was unprepared for Daley’s obfuscating tactics, he was also unprepared for the intensity of racial animosity which SCLC demonstrations provoked. His courage here, as on other occasions, did not falter. Ralph Abernathy, who would succeed King as head of the SCLC, later recalled:

In Chicago, where we encountered the largest and most hostile crowd in our long experience, it was Martin who overrode the fears of the other staff members and moved to the head of the line to lead the march into the suburb of Gage Park.[29]

Again, despite some successes by SCLC’s Jesse Jackson in forming “Operation Breadbasket” (consumer boycotts, organized with the help of local black ministers, against Chicago employers practicing racial discrimination), SCLC discovered that urban preachers lacked the influence and prestige they traditionally enjoyed in the South, and that the black church was less effective as a command and organizing centre for protest movements. Above all, the sheer scale and complexity of urban problems eclipsed the resources, human and financial, of the SCLC. Although the Chicago Freedom Movement ostensibly succeeded in persuading Daley to concede an open housing agreement with the city’s real estate and banking interests, it achieved little in practice, and was disavowed by Daley after his election to a fourth term of office in 1967.[30]

King’s apparent failure in Chicago caused him to reconsider his beliefs about American capitalism and the processes of change. It also brought him renewed criticism from younger elements of the civil rights coalition, including James Farmer of CORE and Stokely Carmichael of SNCC. The SCLC’s strategy of passive resistance was contemptuously dismissed as myopic, and both CORE and SNCC were beginning to move away from purely civil rights issues to demands for economic rights, self-determination for African-Americans and rejection of alliances with white liberals. In this they were influenced by former Black Muslim minister Malcolm X’s newly-formulated programme of black nationalism and political activism and its rejection of integration as either an attainable or even desirable goal. Persistent violence against blacks intensified the disaffection of many younger activists with what they saw as the conservatism—if not the cowardice—of the movement’s older, established leadership in general, and of King in particular.

The growing division between SCLC and SNCC became vividly apparent during James Meredith’s one-man “March Against Fear” in June 1966. In 1962, Meredith had become the first black student to enroll at the University of Mississippi, but only after riots on the campus which resulted in the loss of two lives, the injury of 375 people, and President Kennedy’s use of 600 United States marshals and 15,000 federalized national guardsmen to restore order. In 1966, Meredith (always a lone figure) decided to walk from Memphis, Tennessee to Jackson, Mississippi to demonstrate that, if he could cover this distance safely, black southerners need not fear walking much shorter distances to the polling booths during what was a primary election week. After Meredith was shot and wounded by a white sniper, King joined with Stokely Carmichael and Floyd McKissick in continuing Meredith’s march. Carmichael’s use of the slogan “Black Power” to characterize the emphasis on independent political action by the movement’s militants produced an enthusiastic response when he informed a crowd at Greenwood, Mississippi:

The only way we gonna stop white men from whuppin us is to take over … We been sayin’ “Freedom” for six years and we ain’t got nothin’. What we gonna start sayin’ now is . . . BLACK POWER.[31]

King declared himself opposed to the new slogan because of its connotations of racial separatism and apparent acceptance of violence. As he later wrote:

I pleaded with the group to abandon the Black Power slogan. It was my contention that a leader has to be concerned about problems of semantics. Each word, I said, has a denotative meaning (its explicit and recognized sense) and a connotative meaning (its suggestive sense). While the concept of Black Power might be denotatively sound, the slogan “Black Power” carried the wrong connotations. I mentioned the implications of violence that the press had already attached to the phrase.[32]

Although agreement was reached between King, Carmichael and McKissick not to invoke the opposing slogans of “Freedom Now” and “Black Power” for the remainder of the march, the dispute was further indication of serious internal dissensions within the civil rights coalition.

Vietnam and Poor People
Those dissensions became even more acute in 1965 as the United States became increasingly involved in Vietnam. Student protests against the war began with a “teach-in” at the University of Michigan (where, ironically, a year earlier Lyndon Johnson had delivered his “Great Society” address pledging his administration to a programme of massive domestic reform) and quickly spread to other college campuses. White students who had been active in civil rights protests now directed their energies into the growing anti-war movement. CORE and SNCC condemned U.S. involvement in Vietnam as diverting funds and attention from America’s domestic problems, and as a colonial war against people of colour, and they encouraged draft evasion. Although King had spoken out against the war in 1965, he was also aware that there was opposition within SCLC to any identification with or support for the peace movement. But as the American presence in Vietnam escalated and he became aware of the sufferings of civilians in the fighting, King (encouraged by his receipt of the Nobel Peace Prize) joined anti-war demonstrations. On 4 April, 1967, he delivered an address at the Riverside Church in New York City, in which he expressed sympathy for the Vietcong and for Third World revolutionary movements, denounced the costs of the war in human and economic resources, and compared American practices in Vietnam with those of the Nazis during World War II:

We have destroyed their two most cherished institutions: the family and the village. We have destroyed their land and crops . . . We have supported the enemies of the peasants of Saigon. We have corrupted their women and children and killed their men . . . What do they think as we test our latest weapons on them, just as the Germans tested out new medicine and new tortures in the concentration camps of Europe?[33]

Later the same month, King was the principal speaker for the “Spring Mobilization to End the War in Vietnam,” organized by James Bevel of SCLC, when over 125,000 protestors marched from Central Park to the U.N. Plaza in New York. Other notable participants included Dr. Benjamin Spock, Stokely Carmichael, Floyd McKissick and Harry Belafonte. But King, now more cautious in his new militancy, refused to sanction the burning of draft cards, and was not a signatory of the Spring Mobilization manifesto, which charged the American government with genocide. If King’s opposition to the Vietnam war aligned him with the younger elements—CORE and SNCC—in the movement, it also brought condemnation from Whitney Young of the Urban League and Roy Wilkins of the NAACP. Young cautioned that Lyndon Johnson would not tolerate criticism of his foreign policy from blacks: “If we are not with him on Vietnam, then he is not going to be with us on civil rights.” Wilkins asserted that civil rights spokesmen did not “have enough information on Vietnam, or on foreign policy, to make it their cause.”[34] The news magazines and the daily press attacked King’s anti-war pronouncements. Carl Rowan, a black journalist who had met King during the Montgomery boycott, argued in the Reader’s Digest that King’s intercession in a conflict between the United States and a Communist power would raise suspicions concerning his loyalties, and endanger future civil rights legislation. King’s response to such criticism was to argue that there was a causal relationship between poverty and racism and American militarism and imperialism, a conviction that hardened after the urban riots in Newark and Detroit. Kwame Ture (Stokely Carmichael) later reflected on King’s opposition to the Vietnam War: “It was clear that his philosophy made it impossible for him not to take a stand against the war in Vietnam.”[35] The FBI, under J. Edgar Hoover, kept President Johnson informed of King’s anti-war activities, and further intensified its surveillance of the SCLC.

King’s conviction that American society needed fundamental redistributions of its priorities, wealth and economic power was embodied in his concept of a “Poor People’s Campaign”: an interracial alliance of the dispossessed, which would engage in an orchestrated act of civil disobedience designed to paralyse the functioning of the nation’s capital. Such a movement, he also hoped, would bridge divisions within the civil rights movement and vindicate the efficacy of non-violent protest which had achieved such positive results in the South. Michael Harrington, author of The Other America (1962), an exposé of poverty in the United States, who advised King on a projected march on Washington by the poor of all races, believes that the Poor People’s Campaign—the immediate objective of which was to pressure Congress into enacting King’s proposed Bill of Rights for the Disadvantaged, a massive federally-funded anti-poverty programme—was:

certainly no repudiation by Dr. King of his opposition to the war . . . it was an attempt to . . . go back and refocus on basics, and perhaps more importantly, to mobilize a mass movement.[36]

In February 1968, 1,300 sanitation workers (nearly all of whom were black) went on strike in Memphis, Tennessee, to win union recognition and improved wages and working conditions, and gained the support of the local black community, including its ministers. When James Lawson, a member of SCLC and pastor of Centenary Methodist Church in Memphis, invited King to address a rally in support of the strikers, he accepted. One commentator suggests that:

As described by James Lawson . . . the strike had all the classic features of the supposedly moribund civil rights movement: packed mass meetings, church-based leadership, and a spirit of non-violence. The issue in dispute [the dignity of labour] … was one of the questions King sought to dramatize in the Poor People’s Campaign.[37]

But the violence that erupted during King’s participation in a demonstration in Memphis brought predictions of its repetition on a larger scale in Washington, D.C. A Memphis newspaper commented sharply that:

Dr King’s pose as a leader of a non-violent movement has been shattered. He now has the entire nation doubting his word when he insists that his April project can be peaceful.[38]

Deeply disturbed by events in Memphis and FBI-inspired media comment on his own alleged culpability, King (who conceded that he had gone there inadequately briefed) was encouraged by Lyndon Johnson’s surprise announcement (prompted by domestic turmoil over Vietnam) that he would not seek re-election in 1968. King hoped that the anti-war and anti-poverty movements might coalesce if the Democrats were successful in the election. But the assassination of Robert F. Kennedy, and the defeat of Hubert Humphrey, both of whom supported civil rights, brought the Republican, Richard Nixon, to the White House, pledged to the restoration of “law and order” and opposed to further social reforms.

On his return to Memphis, King, after delivering an address in which he referred to the increasing number of threats on his life, was killed by a white sniper—James Earl Ray—as he stood on the balcony of his motel. King’s assassination touched off a wave of violence in more than 130 cities in 29 states, resulting in 46 deaths, over 7,000 injuries and 20,000 arrests, with damage to property estimated at $100 million. Ironically, King’s assassination may have assisted the resolution of the Memphis strike. Following his murder, Memphis businessmen began to press for a settlement of the dispute. And a week after the assassination, Congress in a wave of sympathy passed a civil rights bill which incorporated fair housing proposals (impossible to implement effectively) absent from the earlier legislation.

But King’s violent death also induced a mood of deep pessimism among whites and blacks. Newsweek magazine (which, like its competitors, had earlier been critical of King’s perceived radicalism on the issues of poverty and American involvement in Vietnam), editorialized:

King’s martyrdom on a motel balcony did far more than rob Negroes of their most compelling spokesman, and whites of their most effective bridge to black America. His murder, for too many blacks, could only be read as a judgement upon his non-violent philosophy, and a license for retaliatory violence.[39]

Staff Sergeant Don F. Browne, serving in Vietnam, remembered:

When I heard that Martin Luther King was assassinated, my first inclination was to run out and punch the first white guy I saw. I was very hurt. All I wanted to do was to go home. I even wrote Lyndon Johnson a letter. I said I didn’t understand how I could be trying to protect foreigners in their country with the possibility of losing my life wherein in my own country people who are my hero [es] like Martin Luther King, can’t even walk the streets in a safe manner.[40]

The SCLC, riven by factionalism following King’s murder but under the nominal leadership of Ralph Abernathy, decided to stage the Poor People’s Campaign. “Resurrection City”, a canvas and plywood encampment, was erected near the Washington Monument. Poorly organized and inadequately funded, it quickly disintegrated in a sea of mud and mutual recriminations (the National Parks Service served the SCLC with a bill for $71,000) and failed to arouse mass support or participation. At its peak, Resurrection City housed 2,500 protestors, the majority of whom were black. It was dismantled on 24 June, 1968, and Jesse Jackson, who had served as the city’s unofficial “mayor”, later recalled:

When Resurrection City closed down there was a sense of betrayal, a sense of abandonment. The dreamer had been killed in Memphis and there was an attempt now to kill the dream itself, which was to feed the hungry . . . to bring the people together, and rather than come forth with a plan to wipe out malnutrition, they were wiping out the malnourished . . . They drove us out with tear gas . . . They shot Dr. King. Now they were gassing us . . . I left there with an awful sense of betrayal and abandonment.[41]

In 1971, Jackson—who regarded himself as King’s successor—resigned from SCLC and began to pursue an independent political career, contending (unsuccessfully) for the Democratic party’s presidential nomination in 1984 and again in 1988.

5. The Man and the Movement

Many civil rights campaigns of the 1950s and 1960s began at the grass roots level, initiated by local leaders determined to destroy (or at least to modify) the customs and practices which relegated them to a subordinate socio-economic status. The ultimate success or failure of such protests depended, to a large extent, on their attracting national attention and positive responses from one (or all three) branches of the federal government: the judiciary, Congress and the executive. From his emergence during the Montgomery bus boycott in 1955 to his death in Tennessee thirteen years later, Martin Luther King, Jr. was widely regarded as the leader of the civil rights movement, the individual best able to dramatize a situation by his words and actions, and to communicate black aspirations to sympathetic whites. In effect, King was a catalyst, able to focus attention and support on campaigns usually begun by others. As Ella Baker, a former staff member of SCLC, has argued persuasively: “The movement made Martin rather than Martin making the movement.”[42]

Certainly King was at first reluctant to expend his energies in the service of civil rights. Commentators differ as to when and why King was prepared to assume a leading role in the movement. Andrew Young, a former executive director of the SCLC, believes that King “never wanted to be a leader”:

. . . everything he did he was pushed into. He went to Montgomery in the first place because . . . he wanted a nice quiet town where he could finish his doctoral dissertation . . . and got trapped into the Montgomery Improvement Association . . . He never would get involved in the Freedom Rides . . . He just did not want to assume leadership of the entire Southern struggle or of the entire national struggle . . . it wasn’t until the time of Birmingham [1963] that he kinda decided that he wasn’t going to be able to escape that, that he was going on.[43]

On the other hand, one of King’s biographers believes that he underwent a moment of spiritual illumination in 1956 during the Montgomery campaign and at a time when he felt inadequate to continue the struggle: “I heard the voice of Jesus saying still to fight on. He promised never to leave me . . . never to leave me alone.”[44] Other commentators agree that King’s motivation and inspiration were essentially religious, and derived principally “from his deep faith in the Christian God as defined by the black Baptist and liberal Protestant traditions.”[45] That he was able to communicate these beliefs to blacks and whites is no small part of King’s achievement as the preacher and practitioner of a new social gospel attuned to the post-World War II era. During its southern phase, the civil rights movement effected profound changes in that section’s race relations: the overturning of the daily humiliations for African-Americans in a society pledged to the maintenance of the customs, laws and symbols of white supremacy. The Voting Rights Act of 1965 marked a new stage in southern black/white relations, as former champions of racial segregation like Governor George Wallace of Alabama began, if only reluctantly, to respond to the needs and demands of their newly-enfranchised constituents. (In 1985, Wallace received Jesse Jackson in the governor’s mansion at the conclusion of the 20th anniversary march from Selma to Montgomery).

It was the signal achievement of the civil rights movement to exorcize and expiate the evils of institutionalized racism in the South. Black southerners, by their courage, tenacity and the undeniable morality of their cause, brought a measure of racial reconciliation (and increasing prosperity) to a section that, in the 1990s, no longer seems wholly obsessed with the precepts and practices of white supremacy. Again, Martin Luther King, Jr. should be accorded unqualified recognition for his role in reconciling southern whites to the claims of their black fellow citizens for equal treatment before the law. Adam Fairclough observes that King “was the first black leader of any stature deliberately to invite arrest while seeking out and confronting the most vicious racists, risking death as a way of life.”[46]

Judged only by his example and his oratory, his articulation of Christian and democratic principles, King will be remembered as the greatest black visionary leader of the twentieth century. That the majority of Americans, black and white, were unable to endorse his critiques of the failings of American free enterprise, and his support for human rights throughout the world, suggests that he was, at the end of his life, a prophet largely without honour in his own country.

It was, therefore, entirely appropriate that Benjamin E. Mays, King’s former teacher and mentor, should remind the congregation at his funeral service in Atlanta, Georgia on 9 April, 1968 that Martin Luther King’s example had:

contributed largely to the success of the student sit-in movements in abolishing segregation in downtown establishments . . . that his activities contributed mightily to the passage of the Civil Rights legislation of 1964 and 1965 . . . He died striving to desegregate and integrate America . . . non-violence to King was total commitment not only in solving the problems of race in the United States but in solving the problems of the world.[47]

For King, the movement he helped to inspire at home was always something more than a struggle for “civil rights”. It was, as he often declared, a struggle “to redeem the soul of America”, to bring its republican and democratic principles into greater congruence with its human and power relationships. King’s strength and vision came from his Christian faith rather than from any systematic study of philosophy and ethics. As James H. Cone has observed, “Black people followed King, because he embodied in word and deed the faith of the black church which has always claimed that oppression and the Gospel of Jesus do not go together.”[48]

But King’s most fitting tribute was the citation which accompanied his posthumous award of the Presidential Medal of Freedom on 4 July, 1977:

Martin Luther King, Jr., was the conscience of his generation. A Southerner, a black man, he gazed upon the great wall of segregation and saw that the power of love could bring it down. From the pain and exhaustion of his fight to free all people from the bondage of separation and injustice, he wrung his eloquent statement of his dream of what America could be . . . He spoke out against a war he felt was unjust as he had spoken out against laws that were unfair . . . His life informed us, his dreams sustain us yet.[49]

As Steven F. Lawson comments, King’s great strength lay “in his ability to adapt old ideals to changing situations.” His critique of American involvement in Vietnam and of neocolonialism after 1965 had been anticipated in a sermon which he delivered during the second year of the Montgomery bus boycott, when he predicted “the birth of a new age” for oppressed peoples of colour throughout the world who had “lived for centuries under the yoke of foreign power.”[50] If, as Ella Baker believes, the civil rights movement in America was the making of Martin Luther King, it was King, more than any other leader, who fused the concepts of civil, economic and human rights, and so transformed the movement itself.

6. Guide To Further Reading

The impact of the New Deal on blacks is discussed in the following: Raymond Wolters, “The New Deal and the Negro,” in John Braeman, Robert H. Bremner and David Brody, eds., The New Deal: The National Level (Columbus: Ohio State University Press, 1975), pp. 170-217; Nancy J. Weiss, Farewell to the Party of Lincoln: Black Politics in the Age of FDR (Princeton, New Jersey: Princeton University Press, 1983); and Harvard Sitkoff, A New Deal for Blacks: The Emergence of Civil Rights as a National Issue, Volume 1, The Depression Decade (New York: Oxford University Press, 1978).

Black participation in and responses to World War II have been extensively treated. See especially: Neil A. Wynn, The Afro-American and the Second World War (London: Paul Elek, 1976); A. Russell Buchanan, Black Americans in World War II (Santa Barbara, California: ABC-Clio Press, 1977); Phillip McGuire, Taps for a Jim Crow Army: Letters From Black Soldiers in World War II (Santa Barbara, California: ABC-Clio Press, 1983); and the following articles: Richard M. Dalfiume, “The ‘Forgotten Years’ of the Negro Revolution,” Journal of American History, 55 (1968), pp. 90-106; Clayton R. Koppes and Gregory D. Black, “Blacks, Loyalty, and Motion Picture Propaganda in World War II,” Journal of American History, 73 (1986), pp. 383-406; and Robert Korstad and Nelson Lichtenstein, “Opportunities Found and Lost: Labor, Radicals, and the Early Civil Rights Movement,” Journal of American History, 75 (1988), pp. 786-811. The March on Washington Movement and its chief architect are considered in Paula A. Pfeffer, A. Philip Randolph: Pioneer of the Civil Rights Movement (Baton Rouge: Louisiana State University Press, 1990). On the Congress of Racial Equality see: August Meier and Elliott Rudwick, CORE: A Study in the Civil Rights Movement, 1942-1968.[22] There has been an enormous body of writing—as well as audio-visual presentations—on all aspects of the post-World War II civil rights movement. Many of these sources also treat Martin Luther King, Jr., his relationship to the Southern Christian Leadership Conference (SCLC) and the other elements of the civil rights coalition. Convenient starting points are provided by three recent review articles: George Rehin, “Of Marshalls, Myrdals and Kings: Some Recent Books about the Second Reconstruction,” Journal of American Studies, 22 (1988), pp. 87-103; Adam Fairclough, “Historians and the Civil Rights Movement,” Journal of American Studies, 24 (1990), pp. 387-398; and Steven F. 
Lawson, “Freedom Then, Freedom Now: The Historiography of the Civil Rights Movement,” American Historical Review, 96 (1991), pp. 456-471. These can be supplemented by an interesting collection of essays edited by Charles W. Eagles, The Civil Rights Movement in America (Jackson and London: University Press of Mississippi, 1986); but see also: William H. Chafe’s essay “The Civil Rights Revolution, 1945-1960: The Gods Bring Threads to Webs Begun,” in Robert H. Bremner and Gary W. Reichard, eds., Reshaping America: Society and Institutions (Columbus: Ohio State University Press, 1982), pp. 68-100, an extremely perceptive piece, which appears in revised form in Chafe’s excellent text, The Unfinished Journey: America Since World War II, 2nd ed. (New York and Oxford: Oxford University Press, 1991). Manning Marable, Race, Reform and Rebellion: The Second Reconstruction in Black America, 1945-1982 (London: Macmillan Press, 1984), offers a trenchant analysis of the civil rights movement in the context of wider American domestic politics. Harvard Sitkoff’s The Struggle for Black Equality, 1954-1980 (New York: Hill and Wang, 1981), is a useful account and assessment. The complexities and diversity of the civil rights movement are ably conveyed in Robert Weisbrot’s Freedom Bound: A History of America’s Civil Rights Movement (New York and London: W. W. Norton, 1990).

C. Vann Woodward’s The Strange Career of Jim Crow (1955; 3rd rev. ed., New York: Oxford University Press, 1974), contains perceptive comments on the aims and objectives of (and the growing tensions within) the civil rights movement during the 1950s and 1960s. The latter topic is also the organizing principle of a collection of essays edited by John H. Bracey, Jr., August Meier, and Elliot Rudwick, Conflict and Competition: Studies in the Recent Black Protest Movement (Belmont, California: Wadsworth Publishing, 1971). The Supreme Court’s seminal 1954 Brown decision is examined most comprehensively in Richard Kluger’s Simple Justice: The History of Brown v. Board of Education and Black America’s Struggle for Equality, 2 vols. (New York: Knopf, 1975, 1976); and more succinctly by Daniel M. Berman, It Is So Ordained: The Supreme Court Rules on School Segregation (New York: W. W. Norton, 1966). The Court’s decision is viewed retrospectively in Raymond Wolters’ The Burden of Brown: Thirty Years of School Desegregation (Knoxville: University of Tennessee Press, 1984). Eisenhower’s (limited) perceptions of civil rights are discussed in Robert Frederick Burk, The Eisenhower Administration and Black Civil Rights (Knoxville: University of Tennessee Press, 1985); and by Michael S. Mayer, “With Much Deliberation and Some Speed: Eisenhower and the Brown Decision,” Journal of Southern History, 52 (1986), pp. 43-76. The NAACP’s long campaign for equal educational opportunities is ably charted by Mark V. Tushnet, The NAACP’s Legal Strategy Against Segregated Education, 1925-1950 (Chapel Hill: University of North Carolina Press, 1987). Catherine A. 
Barnes, Journey From Jim Crow: The Desegregation of Southern Transit (New York: Columbia University Press, 1983), provides a detailed account of the struggle to end segregated transportation in the South, and conclusively demonstrates that “federal action came in response to black protest and pressure.” Southern opposition to the civil rights movement is treated by Numan V. Bartley, The Rise of Massive Resistance: Race and Politics in the South During the 1950s (Baton Rouge: Louisiana State University Press, 1969); and Neil R. McMillen, The Citizens’ Council: Organized Resistance to the Second Reconstruction, 1954-1964 (Urbana: University of Illinois Press, 1971). See also: David Alan Horowitz, “White Southerners’ Alienation and Civil Rights: The Response to Corporate Liberalism, 1956-1965,” Journal of Southern History, 54 (1988), pp. 173-200. The varying responses of entrepreneurs to the southern phase of the movement are analyzed in Elizabeth Jacoway and David R. Colburn, eds., Southern Businessmen and Desegregation (Baton Rouge: Louisiana State University Press, 1982). As the editors observe, collectively these essays “suggest that the response of the southern [white] leadership to the desegregation challenge was an accommodation to what was perceived as inevitable change.” See also Anthony J. Badger’s review essay, “Segregation and the Southern Business Elite,” Journal of American Studies, 18 (1984), pp. 105-109.

That the civil rights movement did effect fundamental changes in the South is the persuasive thesis of David R. Goldfield, Black, White, and Southern: Race Relations and Southern Culture, 1940 to the Present (Baton Rouge and London: Louisiana State University Press, 1990). Goldfield’s concern is not simply to relate the history of the civil rights movement, but rather to chart the grotesqueries of “racial etiquette,” and the “redemption” of the section from “the sin of white supremacy”, to substantiate his conviction that the great achievement of the struggle “was its restorative effect on [southern] culture.” Clayborne Carson’s In Struggle: SNCC and the Black Awakening of the 1960s (Cambridge, Massachusetts: Harvard University Press, 1981), deals perceptively with the most “radical” of civil rights organizations, and there is useful material in an earlier study: Howard Zinn, SNCC: The New Abolitionists (2nd ed., Boston: Beacon Press, 1965). Nancy J. Weiss, Whitney M. Young, Jr. and the Struggle for Civil Rights (Princeton: Princeton University Press, 1989), is a sympathetic account of one of King’s notable contemporaries, and leader of the National Urban League.

The movement at the grass roots, community level has only recently begun to be studied. Particularly recommended are: William H. Chafe’s Civilities and Civil Rights: Greensboro, North Carolina, and the Black Struggle for Freedom (1980);[11] Robert J. Norrell, Reaping the Whirlwind: The Civil Rights Movement in Tuskegee (1985);[11] David R. Colburn, Racial Change and Community Crisis: St. Augustine, Florida, 1877-1980 (New York: Columbia University Press, 1985); and James W. Button, Blacks and Social Change: Impact of the Civil Rights Movement in Southern Communities (1989),[10] a comparative study of several Florida towns. A major civil rights confrontation in the North, the SCLC and Coordinating Council of Community Organizations (CCCO) campaign of 1966 in Chicago, receives comprehensive treatment and analysis in: Alan B. Anderson and George W. Pickering, Confronting the Color Line: The Broken Promise of the Civil Rights Movement in Chicago (1986).[30]

Sociologically-derived analyses of the civil rights movement – stressing its localized nature – are provided by Doug McAdam, Political Process and the Development of Black Insurgency, 1930-1970 (1982);[10] and Aldon D. Morris, The Origins of the Civil Rights Movement: Black Communities Organize for Change (1984).[10] Two more recent books treat the composition and dynamics of the civil rights coalition: Jack M. Bloom, Class, Race, and The Civil Rights Movement (Bloomington: Indiana University Press, 1987); and Herbert H. Haines, Black Radicals and the Civil Rights Mainstream (Knoxville: University of Tennessee Press, 1988); both devote considerable attention to Martin Luther King. Valuable “eye-witness” accounts by participants in the civil rights struggle (with frequent references to and anecdotes about King) can be found in: Sheyann Webb and Rachel West Nelson, Selma, Lord, Selma: Girlhood Memories of the Civil Rights Days as Told to Frank Sikora (Tuscaloosa: University of Alabama Press, 1980); Howell Raines, ed., My Soul Is Rested: Movement Days in the Deep South Remembered (1977, 1983);[13] David J. Garrow, ed., The Montgomery Bus Boycott and the Women Who Started It: The Memoir of Jo Ann Gibson Robinson (1987);[12] Cynthia Stokes Brown, ed., Ready From Within: Septima Clark and the Civil Rights Movement (Navarro, California: Wild Tree Press, 1986); Alice Walker, In Search of Our Mothers’ Gardens: Womanist Prose (New York: Harcourt Brace Jovanovich, 1984); and in the two companion volumes to the award-winning television series, Eyes on the Prize: Juan Williams, ed., Eyes on the Prize: America’s Civil Rights Years 1954-1965 (New York: Viking Penguin, 1987); and Henry Hampton, Steve Fayer and Sarah Flynn, eds., Voices of Freedom: An Oral History of the Civil Rights Movement from the 1950s through the 1980s (1990).[4] On the value (and location) of oral history collections see: Kim Lacy Rogers, “Oral History and the History of the Civil Rights Movement,” Journal of American History, 75 (1988), pp. 567-76.

A neglected aspect of the civil rights struggle has belatedly begun to receive serious attention. Vicki L. Crawford, Jacqueline Rouse, and Barbara Woods, eds., Women in the Civil Rights Movement: Trailblazers and Torchbearers, 1941-1965 (New York: Carlson Publishing, 1990), is a valuable collection of essays treating the roles (and problems) of black women – as organizers, activists and churchgoers – “in leading and sustaining the movement in local communities throughout the South.” In addition to profiles of such notable black women as Septima Clark, Fannie Lou Hamer, Jo Ann Robinson and Ella Baker, this collection includes valuable pieces on lesser-known female participants, and an overview essay by Anne Standley, “The Role of Black Women in the Civil Rights Movement,” which demonstrates convincingly that “Black women directed voter registration drives, taught in freedom schools, and provided food and housing for movement volunteers [and] were responsible for the movement’s success in generating popular support for the movement among rural blacks.” Again, Standley’s quotations from the accounts of black women activists strongly support her contention that “the movement gave women as well as men a sense of empowerment.”

MARTIN LUTHER KING, Jr.

There is now a considerable – but uneven – literature on King and his role(s) in the civil rights movement. Biographical and semi-biographical studies provide one exploratory avenue. David Levering Lewis, King: A Biography (2nd rev. ed., Urbana: University of Illinois Press, 1978), treats the major stages in King’s life, and has some astringent comments on his personality, but adopts a rather patronizing tone, and is poorly written. Lewis is more effective in shorter compass. See his essay, “Martin Luther King, Jr., and the Promise of Nonviolent Populism,” in John Hope Franklin and August Meier, eds., Black Leaders of the Twentieth Century (1982),[17] pp. 277-303. Stephen B. Oates, Let the Trumpet Sound: The Life of Martin Luther King, Jr. (1982),[21] covers similar ground, but lacks any serious analysis of King’s function and stature, and frequently lapses into saccharine prose. Like Lewis, Oates is more effective as an essayist. See his article, “The Intellectual Odyssey of Martin Luther King,” Massachusetts Review, 22 (1981), pp. 301-320. David J. Garrow has established himself as one of the leading authorities on King, and has written three important books: Protest at Selma: Martin Luther King, Jr. and the Voting Rights Act of 1965 (New Haven: Yale University Press, 1978), is a close analysis of the notable SCLC campaign and its consequences; The FBI and Martin Luther King: From ‘Solo’ to Memphis (New York: W. W. Norton, 1981), concludes that by the last years of his life King had become a radical figure, and a perceived threat to the established order; Bearing the Cross: Martin Luther King, Jr., and the Southern Christian Leadership Conference (1986),[42] offers a massively detailed narrative account of the man and the movement. King’s private life (and the FBI’s prurient interest in his alleged extra-marital activities), tensions within the SCLC and its major campaigns all receive exhaustive treatment. 
Garrow is particularly concerned to prove that King was motivated more by his African-American Baptist faith than by any reading of Walter Rauschenbusch or Mahatma Gandhi.

Garrow is also the editor of the recently published 18-volume set Martin Luther King, Jr., and the Civil Rights Movement (New York: Carlson Publishing, 1989). The cost of this series will be beyond the means of most individuals and libraries, but the first three volumes are particularly recommended: Martin Luther King, Jr.: Civil Rights Leader, Theologian, Orator, an invaluable collection of (previously published) essays and articles, drawn from an impressive range of periodicals. Aside from Garrow himself, contributors include Allan Boesak, James Colaiaco, August Meier, and Adam Fairclough, the most perceptive (and prolific) British commentator on King, who contributes three articles: “Was Martin Luther King a Marxist?” “Martin Luther King and the War in Vietnam,” and “Martin Luther King, Jr., and the Quest for Nonviolent Social Change.” Steven F. Lawson’s historiographical essay (cited earlier) includes an assessment of all 18 volumes comprising Martin Luther King, Jr. and the Civil Rights Movement.

Fairclough’s monograph To Redeem the Soul of America: The Southern Christian Leadership Conference and Martin Luther King, Jr. (1987),[25] devotes little attention to King’s religious motivation and, instead, stresses the SCLC’s achievements as a flexible and loosely organized/disorganized protest movement, together with considered estimates of King’s strengths and weaknesses. Fairclough’s major conclusions concerning King and the SCLC are summarised in two articles: “The SCLC and the Second Reconstruction, 1957-1963,” South Atlantic Quarterly, 80 (1981), pp. 177-94, and “The Preachers and the People: The Origins and Early Years of the SCLC, 1955-1959,” Journal of Southern History.[15] See also Fairclough’s recent brief and judicious biography, Martin Luther King (1990).[46]

Taylor Branch, Parting the Waters: America in the King Years 1954-63 (New York: Simon and Schuster, 1988), the first of a projected two-volume study, places King firmly in the forefront of the civil rights movement, and argues persuasively that “King’s life is the best and most important metaphor for American history in the watershed postwar years.” It is an impressive and valuable addition to King historiography, and includes incisive profiles of participants in (and critics of) the movement. James A. Colaiaco, Martin Luther King, Jr.: Apostle of Militant Nonviolence (1988),[16] is a brief but informative biography. Colaiaco, like other commentators, believes that toward the end of his life King became a radical figure, in sympathy with both Marxist and Black Power critiques of American militarism and capitalism.

King’s intellectual and spiritual development receives careful attention in Hanes Walton, Jr., The Political Philosophy of Martin Luther King, Jr. (1971),[14] and in John J. Ansbro, Martin Luther King, Jr.: The Making of a Mind (1982),[14] which is especially informative on King’s attitudes to American involvement in Vietnam. King’s intellectual biography is also treated in the following articles: John W. Rathbun, “Martin Luther King: The Theology of Social Action,” American Quarterly, 20 (1968), pp. 38-53; Warren E. Steinkraus, “Martin Luther King’s Personalism and Nonviolence,” Journal of the History of Ideas, 34 (1973), pp. 97-111; and Mohan Lal Sharma, “Martin Luther King: Modern America’s Greatest Theologian of Social Action,” Journal of Negro History, 53 (1968), pp. 257-63. On King’s relationship to the black messianic tradition, see: Wilson J. Moses, Black Messiahs and Uncle Toms: Social and Literary Manipulations of a Religious Myth (University Park and London: Pennsylvania State University Press, 1982), pp. 178-82.

King’s function as a “mutable symbol” and his depiction in the three leading American news magazines – Time, Newsweek, and U.S. News & World Report – are examined by Richard Lentz in Symbols, The News Magazines, and Martin Luther King (1990).[33] The obsessive concern of J. Edgar Hoover with the allegedly subversive nature of the black protest movement is ably treated by Kenneth O’Reilly in “Racial Matters”: The FBI’s Secret File on Black America, 1960-1972 (New York: The Free Press, 1989), which includes a serio-comic chapter, “Black Dream, Red Menace: The Pursuit of Martin Luther King, Jr.” See also two articles by Gerald D. McKnight: “The 1968 Memphis Sanitation Strike and the FBI: A Case Study in Urban Surveillance,” South Atlantic Quarterly, 83 (1984), pp. 138-56; and “A Harvest of Hate: The FBI’s War Against Black Youth – Domestic Intelligence in Memphis, Tennessee,” ibid., 86 (1987), pp. 1-21.

August Meier’s influential essay of 1965, “On the Role of Martin Luther King,” typifying him as a “Conservative Militant,” is reprinted in the Bracey, Meier, Rudwick anthology already cited, pp. 84-92. David J. Garrow, Clayborne Carson, James H. Cone, Vincent G. Harding and Nathan I. Huggins were the participants in a rewarding symposium, “A Round Table: Martin Luther King, Jr.,” published in the Journal of American History, 74 (1987), pp. 436-81. An earlier but still useful symposium, edited by C. Eric Lincoln, Martin Luther King, Jr.: A Profile (New York: Hill and Wang, 1970), has contributions from James Baldwin, August Meier, L. D. Reddick, Carl T. Rowan and Ralph D. Abernathy. Shortly before his death, Abernathy published And the Walls Came Tumbling Down: An Autobiography (1989),[29] which devotes two chapters to the Montgomery bus boycott and King’s subsequent career; but see also Coretta Scott King, My Life With Martin Luther King, Jr. (New York: Holt, Rinehart & Winston, 1969). King’s celebrated 1965 Playboy interview with Alex Haley is reprinted in G. Barry Golson, ed., The Playboy Interview (New York: Playboy Press, 1981), pp. 112-135. King is also one of the subjects treated in John White, Black Leadership in America: From Booker T. Washington to Jesse Jackson (2nd ed., London and New York: Longman, 1990), pp. 109-144.

Finally, interested students should read King’s own works, which reveal a great deal about his concerns. His major writings are: Stride Toward Freedom: The Montgomery Story (1958);[13] Why We Can’t Wait (1964);[18] Where Do We Go From Here: Chaos or Community? (1967);[23] and The Trumpet of Conscience (New York: Harper and Row, 1968). Clayborne Carson is editor of the Martin Luther King Papers, a projected 14-volume series of King’s writings, the first two volumes of which are scheduled for publication in 1992. Controversy already surrounds the enterprise, following the discovery and revelation that part of King’s doctoral thesis and other academic papers contain instances of definite plagiarism. The Reverend Joseph E. Lowery, currently president of the Southern Christian Leadership Conference, reportedly commented on the affair: “Dr. King as a young fellow may have overlooked some footnotes, but history is caught up in his footprints, and will be hardly disturbed by the absence of some footnotes.” That King’s conscious departures from the agreed rules of scholarship need to be both acknowledged and placed in context is the focus of an important recent symposium: “Becoming Martin Luther King, Jr. – Plagiarism and Originality: A Round Table,” Journal of American History, 78 (1991), pp. 11-123. David J. Garrow, one of the contributors, concedes that although King’s “extensive plagiarism” is “a crucial issue” for his biographers, “it will amount to only a brief footnote in the expanding historiography of the freedom struggle of the 1950s and 1960s.” Like other sympathetic commentators, Garrow also believes that King saw a Ph.D. thesis as only a means to an end, and “was far more deeply and extensively shaped by the black church tradition in which he grew up than by the readings and instructors he encountered in seminary and graduate school.” (David J. Garrow, “King’s Plagiarism: Imitation, Insecurity, and Transformation,” ibid., p. 86.)

7. Notes

  1. The terms “Black,” “Negro” and “African-American” are used interchangeably in this pamphlet.
  2. Pete Daniel, “Going Among Strangers: Southern Reactions to World War II,” Journal of American History, 77 (1990), p. 893.
  3. Differing estimates of the “Double-V” slogan and the extent of black assertiveness on the home front can be found in Neil A. Wynn, “Black Attitudes Toward Participation in the American War Effort,” Afro-American Studies, 3 (1972), pp. 13-19; and Lee Finkle, “The Conservative Aims of Militant Rhetoric: Black Protest During World War II,” Journal of American History, 60 (1973), pp. 692-713.
  4. Henry Hampton, Steve Fayer and Sarah Flynn, eds., Voices of Freedom: An Oral History of the Civil Rights Movement From the 1950s Through the 1980s (New York: Bantam Books, 1990), pp. xxv-xxvi.
  5. See Harvard Sitkoff, “Harry Truman and the Election of 1948: The Coming of Age of Civil Rights in American Politics,” Journal of Southern History, 37 (1971), pp. 597-616; and Monroe Billington, “Civil Rights, President Truman and the South,” Journal of Negro History, 58 (1973), pp. 127-39.
  6. C. Vann Woodward, Thinking Back: The Perils of Writing History (Baton Rouge: Louisiana State University Press, 1986), p. 84.
  7. See, for example, Bob Smith, They Closed Their Doors: Prince Edward County, Virginia, 1951-1964 (Chapel Hill: University of North Carolina Press, 1965).
  8. Voices of Freedom, p. xxvii.
  9. Richard H. King, “Citizenship and Self-Respect: The Experience of Politics in the Civil Rights Movement,” Journal of American Studies, 22 (1988), pp. 8-9.
  10. The motivation of civil rights protests is treated in the following sociologically-based studies: Doug McAdam, Political Process and the Development of Black Insurgency, 1930-1970 (Chicago: University of Chicago Press, 1982); Aldon D. Morris, The Origins of the Civil Rights Movement (New York: The Free Press, 1984); and James W. Button, Blacks and Social Change: Impact of the Civil Rights Movement in Southern Communities (Princeton, New Jersey: Princeton University Press, 1989).
  11. See Robert J. Norrell, Reaping the Whirlwind: The Civil Rights Movement in Tuskegee (Tuscaloosa: University of Alabama Press, 1983); and William H. Chafe, Civilities and Civil Rights: Greensboro, North Carolina and the Black Struggle for Freedom (New York: Oxford University Press, 1980).
  12. David J. Garrow, The Montgomery Bus Boycott and the Women Who Started It (Knoxville: University of Tennessee Press, 1987), pp. 45-6. The best analysis of the boycott is J. Mills Thornton III, “Challenge and Response in the Montgomery Bus Boycott of 1955-56,” in Sarah W. Wiggins, ed., From Civil War to Civil Rights: Alabama 1860-1960 (Tuscaloosa: University of Alabama Press, 1987), pp. 463-519.
  13. Martin Luther King, Jr., Stride Toward Freedom: The Montgomery Story (London: Gollancz, 1959), p. 54.
  14. Hanes Walton, Jr., The Political Philosophy of Martin Luther King, Jr. (Westport, Connecticut: Greenwood Publishing Company, 1971); John J. Ansbro, Martin Luther King, Jr., The Making of a Mind (New York: Orbis Books, 1982).
  15. Adam Fairclough, “The Preachers and the People: The Origins and Early Years of the SCLC, 1955-1959,” Journal of Southern History, 52 (1986), pp. 403-40.
  16. James A. Colaiaco, Martin Luther King, Jr.: Apostle of Militant Nonviolence (New York: St. Martin’s Press, 1988), p. 39.
  17. David L. Lewis, “Martin Luther King, Jr., and the Promise of Nonviolent Populism,” in John Hope Franklin and August Meier, eds., Black Leaders of the Twentieth Century (Urbana: University of Illinois Press, 1982), p. 282; Voices of Freedom, p. 113.
  18. “Letter From Birmingham Jail,” in Martin Luther King, Jr., Why We Can’t Wait (New York: Signet Books, 1964), pp. 86-7.
  19. Voices of Freedom, p. 168.
  20. Julius Lester, Look Out, Whitey! Black Power’s Gon’ Get Your Mama (New York: The Dial Press, 1968), p. 104.
  21. Stephen B. Oates, Let the Trumpet Sound: The Life of Martin Luther King, Jr. (New York: Harper and Row, 1982), p. 362.
  22. August Meier and Elliot Rudwick, CORE: A Study in the Civil Rights Movement, 1942-1968 (Urbana: University of Illinois Press, 1975), p. 330.
  23. Martin Luther King, Jr., Where Do We Go From Here: Chaos or Community? (New York: Harper and Row, 1967), p. 4.
  24. Milton Viorst, Fire in the Streets: America in the 1960s (New York: Simon and Schuster, 1979), p. 345.
  25. Adam Fairclough, To Redeem the Soul of America: The Southern Christian Leadership Conference and Martin Luther King, Jr. (Athens and London: University of Georgia Press, 1987), p. 241.
  26. Report of the National Advisory Commission on Civil Disorders (New York: E. P. Putnam, 1968), p. 1.
  27. Mike Royko, Boss: Richard J. Daley of Chicago (New York: Dutton, 1971), p. 141.
  28. Voices of Freedom, p. 302.
  29. Ralph D. Abernathy, And the Walls Came Tumbling Down: An Autobiography (New York: Harper and Row, 1989), p. 492.
  30. On SCLC’s (and King’s) Chicago experiences, see Alan B. Anderson and George W. Pickering, Confronting the Color Line: The Broken Promise of the Civil Rights Movement in Chicago (Athens and London: University of Georgia Press, 1986), pp. 172-88; 182-88; 203-7.
  31. Fire in the Streets, p. 374.
  32. Where Do We Go From Here, p. 30. For other critiques of Black Power, see Report of the National Advisory Commission on Civil Disorders, pp. 233-35, and Harold Cruse, The Crisis of the Negro Intellectual (New York: William Morrow, 1967), pp. 426-7; 544-5; 547-53.
  33. Richard Lentz, Symbols, The News Magazines, and Martin Luther King (Baton Rouge: Louisiana State University Press, 1990), p. 237. For other reactions to the speech, see: Colaiaco, Martin Luther King, Jr., pp. 179-82.
  34. Adam Fairclough, “Martin Luther King, Jr. and the War in Vietnam,” Phylon, 45 (1984), p. 25.
  35. Voices of Freedom, p. 340.
  36. Ibid., p. 448.
  37. To Redeem the Soul of America, p. 371.
  38. Ibid., p. 377.
  39. Newsweek, 15 April, 1968.
  40. Wallace Terry, Bloods: An Oral History of the Vietnam War By Black Veterans (New York: Random House, 1984), p. 172.
  41. Voices of Freedom, p. 481.
  42. David J. Garrow, Bearing the Cross: Martin Luther King, Jr., and the Southern Christian Leadership Conference (New York: Morrow, 1986), p. 625.
  43. Howell Raines, My Soul Is Rested: Movement Days in the Deep South Remembered (New York: Viking Penguin, 1983), p. 426.
  44. Bearing the Cross, pp. 57-8.
  45. James H. Cone, “Martin Luther King, Jr., and the Third World,” Journal of American History, 74 (1987), p. 458.
  46. Adam Fairclough, Martin Luther King (London: Sphere Books, 1990), p. 127.
  47. Benjamin E. Mays, “Eulogy of Dr. Martin Luther King, Jr.,” The Morehouse College Bulletin, 36 (1968), pp. 8-12.
  48. James H. Cone, “Martin Luther King, Jr., Black Theology-Black Church,” in David J. Garrow, ed., Martin Luther King, Jr.: Civil Rights Leader, Theologian, Orator, Vol. 1 (New York: Carlson Publishing, 1989), p. 207.
  49. Martin Luther King, Jr.: The Making of a Mind, p. v.
  50. Steven F. Lawson, “Freedom Then, Freedom Now: The Historiography of the Civil Rights Movement,” American Historical Review, 96 (1991), pp. 462-63.


Jay Kleinberg, Women in American Society 1820-1920

BAAS Pamphlet No. 20 (First Published 1990)

ISBN: 0 946488 10 X
  1. Overview
  2. Women in the Preindustrial United States
  3. The Industrial Revolution
  4. Education
  5. Spirituality
  6. Ante-Bellum Reform Movements
  7. The Changing Nature of Women’s Education and Employment
  8. The Woman Movement
  9. Guide to Further Reading
  10. Notes

1. Overview

Until the 1960s and 1970s, analysts of American life and culture were strangely silent as to how the grand forces of the age affected women and how women had an impact upon their society. History had been conceived of as the study of elites, of the men who governed, of laws, battles, and treaties – events which, while they affected women, were mostly not of them or by them.[1] There is an ongoing historical debate over the extent to which writing women back into history implies, as Berenice A. Carroll suggested, “not only a new history of women but also a new history.”[2] What does the inclusion of women in the historical record mean, for example, for the traditional periodisation of history – a question raised by Gerda Lerner in 1975? Did events affect women in the same way as men or have the same meaning for them?[3]

There has been a proliferation, an explosion, in the number of subjects about which historians research and write. Traditional history has been broadened by the inclusion of questions of class, race, gender, ethnicity, and place. Carl Degler, himself a perceptive analyst of gender and family relations, described this “fragmentation” and “splintering” of historical discourse, and proposed that the theme of “national identity” should become the focus of American history. But as Joan Wallach Scott points out, history consists of many irreconcilable stories. “Any master narrative – the single story of the rise of American democracy or Western civilisation – is shown to be not only incomplete but impossible of completion in the terms it has been written. For those master narratives have been based on the forcible exclusion of Others’ stories.”[4]

The past is no more homogeneous than the present. As with the present, it is vastly complicated and rewriting women or any excluded group back into the historical narrative is a complex exercise, particularly when the nuances and interrelationships between groups are considered. Women are a sex, but they belong to different class, ethnic, racial, cultural, and religious groups. Thus writing women back into the historical discourse entails reconsidering the place of many groups simultaneously, itself an intricate task, and exploring the conflicts between those groups. While no short history can hope to do justice to what Scott termed “the Others’ stories,” this pamphlet will outline the major economic, social, and political shifts, the images with which women were portrayed and the materials they created, in order to explore the changing nature of women’s roles in the era between women’s entrance into the labour force and their acquisition of the vote.

The interplay of social and economic forces led to a number of disjunctures in women’s lives in the nineteenth century, sweeping away the continuity and similarity in the lives of preindustrial married and unmarried women. Younger women moved from household care tasks within their parents’ homes, where they worked with their mothers producing goods within the home and caring for younger siblings, to their husbands’ homes after marriage. As children they were part of one family enterprise, as adults another. But whether single or married, women engaged upon household manufacture and productive labour. Industrialisation broke the similarity between married and single women’s activities as young women undertook jobs for wages outside the home, while most married women (slaves excepted) stayed at home, devoting themselves to family and childcare. They produced a narrower range of goods and bought more. Industrialisation thus resulted in significant discontinuities in the lives of women based upon their marital status although it had no such effect upon men.[5]

Nancy Cott has suggested that in the early national period women’s political and legal disadvantages became more conspicuous as suffrage evolved from the province of the elite (in which women occasionally shared) to universal, white manhood suffrage from which women, regardless of wealth, were excluded by their sex. The egalitarianism of the Age of Jackson highlighted female exclusion from politics, the professions, and advanced education.[6] It also underscored married women’s legal disabilities with regard to property.

Women’s economic and social roles altered as more goods were manufactured outside the home, giving mothers more time to devote to their children. There was also a decrease in the amount of work done by older daughters within the home as the tasks they previously helped their mothers perform such as carding wool, spinning, and weaving became factory operations. These young women became the first factory workers. Many became involved in the waves of religious enthusiasm which swept over the United States during the Second Great Awakening (1790-1840). This religious revival led to a surge in devotion among women and to women’s taking a more public role in religious and charitable affairs. Women as well as men viewed these new movements as a contentious departure from accepted behaviour patterns which might upset the gendered basis of social interaction.[7]

It is against a background of industrialisation and religious enthusiasm that many American historians would set the marked increase in women’s public participation in the nineteenth century and the growing tension between women’s private and public activities. Individualism flourished in the new economic order, but women were seen not as individuals in the market place, but as members of the family, as wives and mothers. The emphasis on individual moral responsibility had many manifestations in the ante-bellum era including the temperance movement and the anti-slavery crusade. Women’s growing restlessness with their limited roles led a small minority to articulate a public role for women in a woman’s rights movement from the 1840s onwards. Their participation in organized religion, temperance or abolitionism challenged the established order, yet it reflected the special contributions they believed they could make as women.[8]

New concepts of womanhood emerged in an attempt to adjust social convention and reality. As men turned to commerce, women shouldered the burden of educating the very young into the new political morality and providing a suitable environment in which republican values could flourish. The Republican Mother, a legacy of the American Revolution, became responsible for the inculcation of civic morality. As Linda Kerber describes it, Republican Motherhood integrated political values into domestic life, while the education of the young filled some of the time previously occupied by domestic manufactures.[9]

Barbara Welter labelled the early nineteenth century emphasis on domesticity and women’s domestic and parental responsibilities “the Cult of True Womanhood.” As enunciated in women’s magazines and religious tracts, the Cult of True Womanhood viewed women as hostages in the home to provide stability in a rapidly changing world. The pious, pure, submissive, and domestic woman built a private world to which men could return from their day’s labours in the countinghouse secure in the knowledge that religion, the home, and the children were being looked after.[10] The world outside the home became a male preserve, while the home became the circumscribed sphere in which women cared for their families, educated their children (in republican virtues), and made their influence felt indirectly.[11]

The dominant paradigm in the historical analysis of women’s social and political activities has been that of separate spheres. Women of all political persuasions believed in the unique female responsibility for domestic affairs and sought to protect those interests in society generally.[12] According to Ellen DuBois, reformers believed domestic activities were as naturally feminine as childbirth, but should not prevent female political participation. One of the crucial issues of the nineteenth century was the extent to which women should express their opinions outside the family circle or before mixed audiences.[13]

Increasingly women were associated with motherhood and domesticity, rather than the production of commodities within the home. Although birth rates fell in the nineteenth century, motherhood became a full time occupation for women, rather than one task amongst many for both parents. Women remained largely outside political life, but enjoyed wider access to education and developed their own institutions and a distinctive female discourse. Over the course of the century many women used the private sphere to which they had been relegated as a springboard to public participation in a variety of voluntary and church associations, moving in ever-widening circles which touched upon the political and social concerns of the day. They accepted women’s responsibility for domestic concerns and made those concerns the cornerstone of their public activities. They sought a public voice in order to protect the home. Motherhood became central to women’s endeavours and provided the vocabulary and conceptual framework for their efforts.

In 1820, women had no public voice and rarely had jobs. They could not, if married, exercise sole control over their property. By 1920, women formed an important part of the Progressive movement, had received the vote, held property rights and worked in unprecedented numbers in a variety of employments. Yet it would be a mistake, I think, to read these changes as confirming a Whiggish view of history as progress. The content of women’s lives diversified in these hundred years, but the extent to which women’s fundamental place in society altered is open to question. Did their lives become any less controlled by gender, that is by social conventions about women’s roles? Did women gain meaningful control over their own behaviour and destinies in this century? To what extent did the sharp divisions between public and private spheres for women break down in this era? And to what extent did women themselves wish to see their roles change or support altered relations between the sexes?

2. Women in the Preindustrial United States

The new nation was founded on the premise that “all men are created equal”, but this equality extended neither to women, blacks, nor Native Americans. The American Revolution had not been a revolution for women, although Abigail Adams requested that her husband John’s new code of laws “would remember the ladies and be more generous and favourable to them than were your ancestors. Do not put such unlimited power into the hands of husbands. Remember all men would be tyrants if they could.”[14] John Adams’ response indicated the limits of rebellion. “As to your extraordinary Code of Laws, I cannot but laugh . . . Depend upon it, we know better than to repeal our Masculine systems.”[15] The new republic thus did not change women’s political status, although it may have established the terms of the debate upon which women later were to incorporate themselves into the polity.

Mary Beth Norton argues persuasively that the Revolution itself provided women with a vocabulary which was to frame their arguments for political rights in the mid-nineteenth century. Nevertheless, there is little evidence that women were conscious of themselves as a group in this era. Elaine F. Crane describes the Revolution as having little meaning for women as women. To the extent that they participated in the events of the era, they did so as members of their families and communities, not as members of their sex.[16]

Yet modifications in women’s social and economic roles paralleled those of men’s and the nation’s as a whole, even if women’s political status lagged behind men’s. During this century the United States developed from a rural, agricultural nation to an urban, industrial one, and family sizes began to decline. In 1820 most people still lived and worked on farms or in small workshops where the production of goods and daily family life blended together. This meant that family members were drawn into the family enterprise and sustenance at an early age, with tasks allocated by gender. Boys from a rather young age worked with their fathers on the land or in craft manufacturing, providing a major source of labour particularly on farms and in smaller enterprises. Girls helped their mothers, but few had jobs outside the domestic sphere. Before the coming of the factories, if girls were employed at all, they hired out as “help” in other women’s households, assisting with the dairying, spinning, weaving, sewing, and childcare.[17]

Women made important contributions to the family’s survival through their reproductive and productive endeavours, although historians debate both the social significance of those economic contributions and the extent to which they participated in their families’ economic activities.[18] Women bore and raised the children who were an important source of labour for family farms and enterprises. In the early American economy women in their own homes did all the spinning, weaving, knitting and sewing, typically making all the clothes worn by the family. They also helped in the cobbling of shoes. They grew and processed the family’s food, baked bread and cake, pickled fruits, vegetables, and meats with either store bought or homemade vinegar, made candles, soap, butter, and cheese. Women looked after the chickens and collected the eggs. Men butchered large animals, but women sorted out the remains, cleaned the entrails for sausage casing, rendered the lard, prepared the meat, made headcheese, and smoked or salted any meat which would not be used immediately.

After finishing the winter butchering a Minnesota farmwife commented “it is a good job over with.”[19] Suzanne Lebsock, writing of the women of Petersburg, Virginia, suggests that “for the majority of housewives, women who had no slave help or very little, the satisfaction derived from productive tasks was offset, even cancelled by the tremendous energy and time they exacted; all too often the tasks melted into a blur of drudgery.”[20] One nineteenth century household advice manual underscored the toil inherent in housekeeping when it opined that “there is no romance or poetry in making boiled soap, only patient hard work.”[21] Thus, housework and household manufacturing can be seen either as rewarding or debilitating, depending upon the author’s point of view. Regardless, these activities were necessary for the household’s survival and prosperity.

Although the Homestead Act of 1862 made it possible for women household heads to acquire land under its provisions, most women who moved to the western frontier in the nineteenth century journeyed with their families. The initial venturers to the ever-westering frontier were men, but they were quickly joined by wives and daughters and some women who sought opportunities for themselves. It has been suggested that women were reluctant pioneers who did not wish to leave the comforts of eastern civilisation, yet many contemporary diaries imply that women as well as men welcomed the challenge and opportunities of the unsettled areas. For women missionaries the frontier represented an opportunity to put their faith into practice, while many women married to army personnel followed their husbands’ postings west. Eveline M. Alexander’s husband served in the Third Cavalry in the Southwest in the 1860s. It saddened her to leave her family back east, but she used her time in the west to found Sunday schools, raise funds to support mission churches, and make notes upon the western scene.[22]

The novels of Willa Cather, among other western authors, depict women’s lives in the frontier farming regions, their adaptation to the natural environment, and their contributions to their families and communities. My Antonia, O Pioneers!, and The Song of the Lark all have strong heroines of immigrant stock who succeed even when they move beyond the conventional boundaries of women’s roles. Westering women and the writers who captured their experience saw the new continent as a land to be adapted to rather than conquered.[23]

Frontier women bore a wide range of responsibilities. The women of the wagon trains made substantial preparations for the journey. They wove the cloth top for the wagon, prepared food, medicines and clothing for the trip and managed the cooking and cleaning en route. Once they arrived at their destinations, women frequently helped to build their new homes, break the soil, and harvest the crops. Their own cash crops, butter, eggs, or lard, could be vital to sustaining their families in the early days. As agriculture became more mechanised and specialised, women’s physical work on the land decreased, but they could still be counted on as an auxiliary labour force in case of emergencies. They fed the harvest crews in the wheat belt and churned much of the butter on dairy farms.[24] Commercial agriculture marginalised women’s economic contributions among white westerners by the end of the century. Native American women and black southern women still constituted a large share of the agricultural labour force.

Some of the work done by farm and preindustrial women blended beauty with functionality. The quilts made by rural women well into the nineteenth century combined necessity with an opportunity for self-expression. Women told stories, remembered important events, and brought beauty into their workaday lives in these bits of cloth, sewn into elaborate arrangements. Complex designs such as the Star and Tree of Life were pieced together from tiny scraps of material then sewn with complicated stitchery patterns onto backing fabric. Quilting bees provided an opportunity for sociability as well as making light work of this time-consuming operation. Frances Trollope in Domestic Manners of the Americans wrote that “the ladies of the Union are great workers, and, among other enterprises of ingenious industry, they frequently fabricate patchwork quilts. When the external composition of one of these is completed, it is usual to call together their neighbours and friends to witness, and assist at the quilting, which is the completion of this elaborate work. These assemblings are called ‘quilting frolics,’ and they are always solemnised with much good cheer and festivity.”[25] Groups of women made Friendship Quilts, where each woman contributed several squares of her own composition.

The quilts frequently followed established designs, but could incorporate current events. The buff and blue of Washington’s ragtag army uniforms was emulated by Revolutionary quilters in bedquilts of home-dyed indigo on one side and saffron yellow on the other. Patriotic quilters even changed the name of some of their patterns to indicate their independence from the Mother Country. Queen Charlotte’s Crown became Indian Spring; Burgoyne Surrounded commemorated the general’s defeat; and the Tea Leaf celebrated the famous affair in Boston’s harbour. Despite having no formal say in the affairs of the new nation, women’s crafts indicated their awareness of the political issues of their day. Women took sides in them, using domestic art to express their feelings.

Old Tippecanoe, Whig Rose Quilt, Lincoln’s Platform, and Union Star memorialised political opinions. Westward expansion of settlement across the continent and beyond gave rise to regional variations of the star pattern in Virginia, South Carolina, North Carolina, Tennessee, Ohio, Missouri, California, Iowa, and Kansas. Many quilts depicted the natural environment and its inhabitants. Goose Tracks, Flying Geese, Peony, Bear’s Tracks, Cactus Flower, Sweet Gum Leaf, Blazing Star, Delectable Mountains, Blazing Sun, Sunflower and Kansas Sunflower celebrated women’s surroundings as they joined the migration and settled in new homes. The Rocky Road to Kansas showed how women felt about the trek while Log Cabin was a coloured representation of their early dwellings. Some quilters used designs from the man’s world, suggesting it was more familiar to them than has been thought. Mill Wheel, Chips and Whetstone, and Circular Saw represented in scraps of cloth the tools men used and women observed. Slave quilt makers sometimes incorporated African motifs as well as scenes from rural life.[26]

3. The Industrial Revolution

The industrial revolution had a profound effect on the work performed by women, its location and content. It transformed how and where most items were made. When production moved into the factories, work and home became sharply differentiated. Work now meant employment outside the home and paid wage labour, while the household itself gradually turned into a place of consumption. Married women became the agents who purchased goods and services, while young women became the first industrial labour force in the United States. The inhospitable New England geography encouraged early diversification away from agriculture. Alexander Hamilton, George Washington’s Secretary of the Treasury, stated in his Report on Manufactures that the farmer would benefit “from the increased industry of his wife and daughters” if they were employed in the nascent textile industry. Cotton mills proliferated in the region north of Boston in the decades following the War of 1812. Women comprised between 85 and 95 per cent of the operatives of some of these mills in the 1820s.[27]

The use of machinery and the shift from home to factory-based manufacturing nearly eliminated married white women as producers of goods and meant that they could not combine domestic and money-earning activities. In the shoe industry, for example, the introduction of the sewing machine and the movement of production into factories meant married women no longer stitched uppers to soles in between other chores. Their displacement by factory workers made them dependent upon their husbands for support and reinforced the growing assumption that married women’s place was in the home, although the rest of the world moved into workplaces outside it. Mary Blewett’s study of the shoemaking industry in New England explores the relationship between married women outworkers who bound shoes in their kitchens and single factory workers who undertook the same task in mechanised factories. She found that their interests were not the same. The female homeworkers, most of them married to shoemakers, identified with their husband’s interests rather than those of the women in the factories. Blewett concludes that “the patriarchal ideology of artisan culture and the sex structuring of labouring in the New England shoe industry worked together to prevent women workers from contributing to the most vital tradition of collective protest among the workers of early nineteenth-century New England.”[28]

Many of the operatives in the textile mills and shoe factories were children or but a few years older. Some, like Lucy Larcom, went into the mills because their families could not make ends meet otherwise. Her newly widowed mother moved to Lowell in order to run a boarding house for mill-girls. At the age of eleven Lucy started as a bobbin doffer on the spinning frames, resolving to earn enough money to get sufficient education to become a teacher. Like many of her contemporaries she alternated stints at work (in the mills or teaching) with periods of study. Larcom’s poverty set her apart from her contemporaries in the mills, most of whose families were of average wealth. But, like Larcom, the first generation of textile workers desired independence, the stimulation of an urban setting, and wages which were higher than other female employments such as domestic service.[29]

Women and men enjoyed different prospects in the mills. Men’s wages reflected the rates they were paid elsewhere, but women’s alternative employments paid poorly. Their jobs were part of a family economy and not recompensed directly (as in shoemaking), or were regarded as an extension of women’s natural role and training for motherhood and wifehood (as in domestic service). Servants lived in their employers’ households, with room and board forming part of their remuneration, which lowered their cash wages and depressed women’s wages generally.

Conditions in the mills deteriorated following the depression of 1837 as employers speeded up the machines and assigned each worker more of them to tend. Women protested against the deterioration in working conditions, lowered piece rates, and long hours of increasingly intensive labour in the 1830s and 1840s, but to little avail. Militant action on the part of mill workers failed for a complicated set of reasons. Most female operatives believed they were only temporarily in the labour force; 85 per cent of the Lowell operatives married after they left the mills. Others questioned whether women should express themselves in public, believing in a traditional interpretation of women’s roles. As the ethnic composition of the mills diversified following the Irish Potato Famine, labour organisations also had to transcend ethnic and religious boundaries.[30]

Carole Turbin found that older women workers and widowed women generally had more success organising, both because they were relatively independent of men and because they were more experienced as workers. Female labour activists tended to be either married women (a small minority of the female workforce), widowed, or self-supporting women. She also found that women were more successful in their organising efforts where they did not compete with men for jobs. Cities such as Troy, New York, which contained completely separate labour markets for women and men (the shirt collar and iron industries, respectively) were hotbeds of labour activism, partially because families could depend upon women’s earnings when the iron industry struck and upon men’s when unrest hit the collar workers.[31]

Even though women comprised a significant portion of the industrial labour force in the decades before the Civil War, most women workers toiled in other women’s kitchens or in the cotton fields. The single largest group of women working on the land were black slaves in the ante-bellum south. There was little sexual differentiation in the field work done by slaves. One former slave reminisced that “I had to do everythin’ dey was to do on de outside. Work in de field, chop wood, hoe corn, till sometime I feels like my back surely break. I done everythin’ ‘cept split rails. I never split no rails.” Others told similar tales. “I done ever thing on a farm what a man done ‘cept cut wheat.”[32] Sojourner Truth described women’s lives under slavery poignantly and passionately when she challenged a Woman’s Rights convention in Akron, Ohio in 1851 to “look at my arm. I have ploughed and planted and gathered into barns, and no man could head me. And ain’t I a woman? I could work as much and eat as much as any man—when I could get it—and bear the lash as well. And ain’t I a woman? I have borne children and seen them sold into slavery, and when I cried out with a mother’s grief, none but Jesus heard me. And ain’t I a woman?”[33] The lot of Truth and other slave women was one of unremitting toil at others’ command.

Most enslaved women were field hands or domestics, but a few specialised in spinning, dyeing, weaving and sewing, nursing and midwifery. While there were a number of skilled positions available to black men on the plantations, including those of slave driver, carpenter, cooper, mason, and smith, slave women rarely acquired positions of authority. Black women had domestic and family duties in addition to being workhorses in the field and, frequently, sexual objects for their masters.

White women also had domestic and family duties, typically made their own and their slaves’ clothing, nursed the plantation ailing, provided food and housing, and looked after the physical and spiritual needs of both their white and black families. Slavery extended household management, the number of animals to be slaughtered and food to be grown and preserved against the winter, but it did not give the plantation mistress power. The patriarchal plantation society reserved that for white men.

The issue of slavery illuminates the complexity of racial and gender interactions. As Elizabeth Fox-Genovese points out, southern slaveholding women accepted their position of dependence within the household more readily than did slave women. The patriarchal plantation system may have oppressed white women to some extent, but they benefited from slavery. Hence race divided black and white women far more than their common sex united them. White women complained about specific features of slavery, but did not identify a common, gender-based oppression with slave women. Indeed, they frequently resented both their female slaves and their unfaithful husbands. Mary Boykin Chesnut’s diaries suggest the extent to which she benefited from slavery as the wife of a wealthy plantation owner. She described southern society as male dominated and condemned the prevailing racial/sexual ethos which permitted slave owners to take slave women as concubines. Nevertheless she did not advocate equality between the black and white women of her society. Affluent, educated southern white women, according to Fox-Genovese, defended their class privileges and superiority over lower-class whites as well as black women.[34] For these groups of women, class and race divided them more than gender united them.

After emancipation, domestic service became the province of blacks and immigrants as native born white women moved into the factories and (after the Civil War) shops and offices. Farm women continued to work along side their hired help, but urban women detailed the most arduous tasks to them, while reserving the more pleasant aspects of house and childcare for themselves. As the century wore on, more items could be purchased and fewer had to be fabricated at home. Nevertheless, domestic servants endured long hours of hard work as they fed the fires with coal or wood, carried and heated water, removed human wastes, washed, ironed, cooked, and baked. Domestic service was a residual occupation into which women went if they could find no other work.[35]


4. Education

Throughout the nineteenth century women fought for the right to an education and to occupations which enabled them to use their literacy. A few women, using religious justification, echoed Abigail Adams’ appeal to improve women’s situation. Writing under the pen name of “Constantia”, Judith Sargent Murray published an essay entitled “Equality of the Sexes” in 1790. She asked whether it was reasonable “that a candidate for immortality, for the joys of heaven, an intelligent being, who is to spend an eternity in contemplating the works of Deity, should at present be so degraded, as to be allowed no other ideas than those which are suggested by the mechanisms of a pudding or the sewing [of] the seams of a garment?”[36] Constantia recognised and railed against the limited educational opportunities for women in her native New England, as did Abigail Adams. By the time of the Revolution only 50 per cent of all New England women could sign their names, although 80 per cent of men had achieved this modest token of literacy.[37]

Linda Kerber dates the growth in educational opportunities to the period between 1790 and 1830 when political and economic needs coincided. At that time it was argued that since mothers were the educators of young children, the interests of the Republic were best served by women who could educate their children in the rudiments of literacy as well as moral values and republican virtues.[38] As the economy shifted from subsistence agriculture to an industrial and commercial base the nation needed a literate population. Education became a mass rather than elite requirement, for females as well as males, even though the justifications for educating the sexes and the expected outcomes were different.

Girls attended dame schools and state funded schools at this time, but had no access to the academies and colleges which offered advanced education to boys. Lucy Larcom’s first school was kept by a neighbour whom everybody called “Aunt Hannah”. It took in all the children from her village, no matter how young they were, provided they could walk and talk and were considered capable of learning their letters. The mothers of large families used school as a means of keeping their little ones out of mischief while they attended their domestic duties. Little Lucy went off to school at the age of two. Aunt Hannah used her kitchen or her sitting room for a schoolroom, as best suited her convenience, combining the education of the young with her own domestic chores. Boys then went on to academies, but girls’ formal education ended at ten or twelve.[39]

Spurred in part by the Second Great Awakening which emphasised woman’s importance as wife, mother, and teacher, academies for young women proliferated in the early nineteenth century, especially in the northeast. Benjamin Rush opened his Young Ladies Academy in Philadelphia in 1787 for the express purpose of educating girls to be republican mothers able to impart the rudiments of learning and patriotism to their own children. Four years later, Sarah Pierce stressed that females as well as males needed “the discipline of the mind.” Emma Willard’s Troy Seminary, opened in 1821, followed a liberal arts curriculum similar to that of men’s colleges. In the next few years Catherine Beecher, Mary Lyon, Zilpah Grant, Almira Phelps and countless others opened educational institutions for women, training and inspiring thousands of pupils. Some of these institutions still survive, for example Mary Lyon’s Mount Holyoke College. Their importance was fourfold: they educated women, provided teacher training at a time when the nation’s educational needs expanded rapidly, inspired belief in the ability of women to achieve, and employed women in a professional capacity.[40]

In the 1830s and 1840s, Catherine Beecher advocated moral education for women in order that they might have hegemony in the home. Women’s education was not an end in itself but a service to their families. Beecher believed that “the proper education of a man decides the welfare of an individual; but educate a woman, and the interests of a whole family are secured.”[41] Women gained power within their home but at the expense of participation in the world outside it. Beecher’s very successful Treatise on Domestic Economy reconciled women’s civil inequality with men by eulogising women’s domestic role. At the same time it provided sound advice on home management, diet, meal preparation, household equipment and kitchen organisation.[42] Although the justification for the expansion of education was essentially conservative (service to family), the education itself expanded women’s horizons, for the knowledge required for household management, childrearing, and companionship could be quite wide-ranging.

Few women’s academies received public funds. This meant that their pupils either depended upon indulgent and affluent parents or worked and attended school alternately, as had many of the first generation of mill workers. Lucy Stone, the abolitionist, taught school in order to earn her tuition at Oberlin College (the first coeducational institution of higher learning in the United States). Her father paid for all of her brothers’ schooling, but made her pay first for her own school books and then for her own higher education. Other women encountered opposition from educational establishments which refused to admit them. Elizabeth Blackwell was rejected by a number of medical schools in her quest to become a doctor, matriculating at Geneva College in 1847 when the students there voted to admit her. Blackwell had the support of her family in her application to medical colleges, but little financial backing. She taught school in order to accumulate her tuition fees. Despite her unconventional goal, gender circumscribed her behaviour. Blackwell refused to walk in her own graduation procession because she considered it unladylike. Because no hospital would hire her she set up her own practice in an immigrant quarter of the city, founding the New York Infirmary for Women and Children in 1857.[43]

Women encountered less resistance in their desire to become nurses. Nursing began its slow trek towards professionalisation and professional recognition in the second half of the nineteenth century. Both the Civil War shortage of male nurses and the proliferation of hospitals (with their need for inexpensive labour) led to an increase in the number of female nurses. With daughters needed less at home, young women were available for employment. Nursing provided a safe way for rural women to move to the city, since hospitals provided both a job and living accommodation. As with so many nineteenth century women’s occupations, the profession of nursing reflected the transposition of a domestic task into the labour force. Previously nursing had been a private rather than a public concern which women undertook as part of their caring for family or neighbours. Women also had primary responsibility for the safe delivery of babies. Until their virtual ouster by physicians at the beginning of the twentieth century, midwives brought babies into the world, relying upon training received from older midwives rather than schooling.

The experience of nursing during the Civil War led middle and upper class reformers to found hospital affiliated training schools following the example set by English nursing pioneer Florence Nightingale. The Nightingale model emphasized character development and rigid discipline. The work performed by nurses and doctors was kept completely separate. The nurses had their own hierarchy but deferred to doctors. They were expected to stand up when a doctor entered the room, to carry out orders faithfully, and provide a cheap labour force for the hospitals which trained them. They achieved autonomy within their own ranks, but always remained subservient to doctors and hospital boards.

At the turn of the century student nurses received nominal wages, room, and board as remuneration for working an eight-hour day and attending classes for four hours a day. They lived in hospital residences, with a 10 p.m. curfew. They cleaned the wards to withstand Matron’s white glove inspections, served meals, and dispensed medicine. The prejudices of the training schools were such that almost all nurses were white, the majority native born.[44]

Before the Civil War few black women in the north and virtually none in the south received any education at all. Attempts to educate black and white girls together had met with hostility. Mob violence forced Prudence Crandall to close her school in Canterbury, Connecticut after she admitted a black girl in 1831. Charlotte Forten, daughter of a prosperous family of Philadelphia sailmakers, was sent to school in Massachusetts in 1854 when Philadelphia high schools refused to accept her because of her colour. Steeped in the abolitionist sentiments of her parents and grandparents she went south during the Civil War to teach the newly emancipated slaves in the South Carolina Sea Islands. She wrote in her diary that “part of my scholars are very tiny,—babies I call them—and it is hard to keep them quiet and interested while I am hearing the larger ones.” She taught adult members of the community in the evenings.[45] Forten also nursed for a short time during the Civil War in Beaufort, South Carolina, while working for the Port Royal Relief Association. Following the Civil War, Forten returned to Philadelphia where she settled into a peaceful existence, writing articles on the Port Royal experiment for the Atlantic Monthly. Other articles appeared in various New England magazines. She thus joined that group of women stigmatised by Nathaniel Hawthorne as “that damned mob of scribbling women.”

With the commercialisation of the press in the 1830s American women took to their pens to edit and write for new periodicals such as Godey’s Lady’s Book, as well as authoring novels, poems, and essays. Sarah Hale, editor of Godey’s Lady’s Book, Harriet Beecher Stowe, author of Uncle Tom’s Cabin, Catherine Beecher, founder of Hartford Female Seminary and author of domestic advice manuals, and novelists Catherine Sedgwick, Caroline Hentz, and Caroline Howard Gilman all idealised domesticity. Their writing stressed women’s special qualities of piety, virtue, submissiveness, and, of course, domesticity.[46]

These authors depicted women as civilising, refining, and Christianising influences upon the men for whom they maintained homes as a refuge from the industrial and commercial world. Mary Ryan describes the domestic fiction of this era as having three main canons: women must marry; husbands must be superior to their wives; women must find power and action in their apparently inferior position.[47]

House and family became women’s vocation when the factory production of goods undermined household manufacturing functions. The cult of domesticity and true womanhood which the literary domestics upheld in the nineteenth century substituted this vocation for the work women previously performed at home. These authors emphasised that women were the guardians of the home and the family and the repositories of moral virtue. They laid the foundations for much of the reform activity which followed, believing that women could save men from the impurities of the workaday world and educate their children in the ways of righteousness. Dominion over the home gave women both rights and responsibilities; it justified activism outside the home, but only in the name of the home and women’s special mission. Thus the dichotomy between public and private gave women a stake in external activities as the legitimate voice of domesticity and the private world. Daniel Scott Smith believes that “women asserted themselves within the family much as their husbands were attempting to assert themselves outside the home.” In doing so they extended their autonomy first within and then beyond the family circle, in a movement Smith labelled “domestic feminism.”[48]

Where earlier writers viewed motherhood as but one of women’s tasks, the propounders and publicists of the Cult of Domesticity saw motherhood as the most important female responsibility. Nineteenth century motherhood necessitated immolation upon the altar of domesticity. Puritans had viewed children as little devils who were innately corrupt and could only be saved by joining the Church. But in the nineteenth century, they were viewed as innocents over whom eternal vigilance should be exercised, even at the cost of great self-denial. Lydia Maria Child wrote in The Mother’s Book that the care of children required a great many sacrifices and much self-abnegation. “The woman who is not willing to sacrifice a good deal in such a cause does not deserve to be a mother.”[49] Previously if a child did not turn out satisfactorily his or her inner devils and lack of faith were deemed responsible. In the nineteenth century, the mother bore this onus because maternality replaced conversion as the salvation of the world. Motherhood provided scope for women to act from their position of inferiority to achieve domestic bliss.

5. Spirituality

Powerful forces of gender socialisation kept men’s and women’s roles distinct, whether on the frontier or in the more settled parts of the country. Worthy women received encomiums for their housekeeping and maternal skills and also for being deferential and devout. Barbara Welter describes the True Woman as one who did not let her interests waver from her family. Church work was particularly favoured as it did not make her less domestic or submissive. As a member of the Missionary Society of Tuscaloosa, Alabama, wrote, “No sensible woman will suffer her intellectual pursuits to clash with her domestic duties” and so she will concentrate on religious work “which promotes these very duties.”[50]

The relationship of women to organised religion in North America reflected the complexities of the theological controversies instrumental in the European colonisation of North America. The Puritan Divines who expelled Anne Hutchinson and her followers from the Massachusetts Bay Colony for heretical and unseemly behaviour were dismayed as much by her gender and public speaking on religious issues as by her theology. Although a few groups such as the Society of Friends (the Quakers) and the Shakers accepted women as religious equals, most denominations believed they should be submissive to men in religious as well as household affairs.[51]

The religious fervour which swept over the United States during the Second Great Awakening powerfully affected women. This revival movement struck a responsive chord in the collective female bosom of New England and the Mid-Atlantic states when young women, excluded from formal theology and church affairs, listened to the message preached by Charles Grandison Finney and other revivalists that faith and Christian behaviour brought salvation. Female converts outnumbered males by a ratio of three to two, the single largest group being young, unmarried women. Harriet Martineau described American women of this era pursuing religion as an occupation. It gave meaning to their lives and brought them together with like-minded contemporaries from a similar social background, the emerging middle class.[52]

The very diversity of American religions enabled women to choose those which they found most amenable to their interests. According to Jill Conway, this enabled women to use religious affiliation to question male authority. Many young female participants in Methodist camp meetings and revivals challenged traditional sources of religion and asserted their individuality in their search for God. But while such women may have been moved by the fervour of the Second Great Awakening, they generally remained within the acceptable gendered precepts of organised religion, listening to male preachers, supporting them rather than supplanting them. Evangelical theology reinforced submissiveness in women, according to Anne Firor Scott who found that southern women “sought diligently to live up to the prescriptions, to attain the perfection and the submissiveness demanded of them by God and man.”[53]

Nevertheless, the emphasis within evangelical Protestant churches upon women’s moral superiority sometimes led women to transgress accepted social and religious roles. There were a few women preachers among those sects where visible manifestations of the Spirit mattered more than clerical education credentials. But as the Methodist, Freewill and Christian churches became more institutionalised they, too, emphasised social respectability and professional ministerial training, and women were relegated to the female missionary societies and benevolent associations. The number of women preachers or exhorters had never been large. Even at their peak, Louis Billington estimates that between one and five per cent of the Freewill Baptist and Christian ministers were women. By the 1840s the number of itinerant female evangelists fell sharply.[54]

Some of the alternative religions and utopian communities which developed in the early and middle decades of the nineteenth century questioned traditional gender roles as they attempted to reform the relationship between the individual and the community. These attracted large numbers of female members, but varied in their attitudes towards women’s roles. Some, like the Oneida Community, believed in free sexual liaisons between members. The Shakers, founded by Ann Lee, had parallel lines of authority for women and men and frequently had women leaders. While women still performed household tasks within the Shaker communities, celibacy freed them from maternity and enabled them to work on an equivalent basis with men. By no means all new religions in the United States accorded women a measure of equality in their society. Women formed a sizeable proportion of the Mormon Church, but although they had central roles in the welfare of their communities, they were rigidly excluded from the hierarchy of the Church of Jesus Christ of Latter Day Saints, the only American church to practise polygamy.[55]

Although New England ministers preached against women assuming the pulpit, they increasingly relied upon women for their congregations and directed their participation into activities appropriate for females. Maternal associations, moral reform societies, and female mission societies flourished on the rough ground of New England and the richer pastures of the Burned Over District of western New York State and the Western Reserve. Ministers encouraged women to wield their influence to raise the standard of male behaviour to protect domestic purity and religion, and to support religion in public and private. Encouraged by clergymen anxious to increase their flocks, women founded and joined church missionary societies and other religious organisations.[56]

6. Ante-Bellum Reform Movements

Women’s social reform, voluntary, and political activities grew out of the reform movements of the nineteenth century and the developing Cult of True Womanhood which held that women were uniquely endowed with kindness, virtue, and religious devotion. While contemporary society questioned the legitimacy of women’s speaking out in public and any extra-household activity could contravene social norms, church and charity-based activities were generally deemed to be socially acceptable. But the farther women moved from traditional domestic roles (often challenging authority in public), the more controversial their activities became. Although hampered by gender, that is, by the social constructs of women’s roles, women saw themselves as moral agents. This moral agency, the desire to speak out and do good, led women to participate in temperance, anti-slavery, and social reform crusades. It ultimately resulted in a women’s rights movement which reflected the frustration of many reformers (particularly after the Civil War) with the limitations placed upon women’s public expression of, and action upon, their beliefs.

Female reform activities initially took a religious approach. In 1817, the Reverend Matthew LaRue Perrine told the first annual meeting of the Female Missionary Society for the Poor of the City of New York that it would become them “to exhibit a pattern of Christian modesty, meekness, and submission” which was the ornament of their sex.[57] Women were active in the Sunday School movement. Here the Cult of Domesticity helped women to expand the scope of their activities, for the emphasis on women’s special suitability for the instruction of the young helped them to bypass the Pauline doctrine that women should keep silent in the churches. As it spread across the United States the Sunday School movement provided opportunities for women to teach, study the Bible, and become intimately involved with the workings of their churches.

Some women’s piety inspired them to cross class or racial boundaries, as did Anne Clay, who organised a Sunday School for slave children on her brother’s Georgia plantation. Women formed mite societies to buy Bibles and religious tracts for distribution to the unconverted abroad, in remote districts, and in the increasingly heterogeneous urban areas. Others formed female missionary societies, mostly to raise funds to support male missionaries; although some undertook proselytising work themselves, preaching to the poor and unconverted.

Religiously inspired philanthropy might lead to departures from traditional female roles in several significant ways. Initially women only assisted other women who conformed to socially accepted roles: indigent widows, aged spinsters, and orphaned children. Eventually, the genteel reformers in urban areas sought to reform the behaviour of strangers from diverse ethnocultural backgrounds rather than just lend a helping hand to neighbours down on their luck or members of their own church. Members of moral reform societies took their activities a step further by visiting the wayward and fallen, singing hymns and praying outside brothels, and petitioning state legislatures for reform legislation. Female reformers justified their unconventional activities through reference to domestic piety, women’s special abilities and responsibilities. They posited a kind of social motherhood in which they mothered society as a whole, not just their own children. As the moral guardians of society, women went beyond the domestic circle, thus subverting notions of “True Womanhood” and domesticity in order to carry out their moral duty to reform society.[58]

Many women began their pursuit of moral reform by advocating restrictions upon alcohol consumption. Female involvement in temperance reform also grew out of the evangelical impulses of the Second Great Awakening. Temperance societies proliferated in the 1820s and 1830s, prompted partially by revivalist ministers such as Lyman Beecher who advocated abstinence along with other roads to salvation. Temperance was also a nativist reaction to the higher levels of alcohol consumption prevalent among recent Irish and German immigrants. The American Temperance Society garnered a million members in some 5,000 branches by the mid-1830s.

Inspired by the same concerns as their male counterparts, some women joined men’s temperance societies, but this did not permit them an active role. Their frustration at being kept on the sidelines led them to hold small gatherings in their own homes, forming exclusively female temperance societies in which they could set policy and hold office. Women formed Martha Washington societies to complement the largely male Washington Temperance groups. In 1848, schoolteacher and temperance advocate Susan B. Anthony called a Daughters of Temperance meeting in Albany, New York because the newly founded Sons of Temperance excluded women. The meeting was poorly attended since public opinion frowned upon women speaking in open gatherings. Using another method, reform-minded women in the Burned Over District began a newspaper to spread the temperance message.

The career of Amelia Jenks Bloomer, the editor of that temperance newspaper, The Lily, illustrates the way that a religiously based moral fervour could lead the adherent into broader reform issues. The Lily was a pioneering venture in a conservative cause. Bloomer had penned occasional articles for local temperance papers such as The Water Bucket and Temperance Star, but it was unheard of for a woman to run a newspaper. The Seneca Falls Ladies’ Temperance Society justified their activities in The Lily’s first editorial in 1848:

It is woman that speaks through The Lily. It is upon an important subject, too, that she comes before the public to be heard. Intemperance is the great foe to her peace and happiness. It is that above all which has made her home desolate and beggared her offspring. It is that above all which has filled to its brim her cup of sorrow and sent her moaning to the grave. Surely she has a right to wield the pen for its suppression. Surely she may, without throwing aside the modest retirement which so much becomes her sex, use her influence to lead her fellow-mortals away from the destroyer’s path.[59]

The ladies of Seneca Falls saw their temperance advocacy in terms which initially accorded well with the Cult of True Womanhood. They engaged in public writing in defence of their homes and their maternal duties, emphasising modesty and purity. The Lily evolved into a forum for women’s writings, not only on temperance, but also on abolition, women’s rights, and dress reform. Amongst these significant moral issues, dress reform seems an anomaly. Yet Bloomer, her frequent correspondent Elizabeth Cady Stanton, and Susan B. Anthony all believed that women’s garments were so impractical and cumbersome that improvements were needed. Women’s fashions in the 1840s and 1850s featured trailing skirts, multiple petticoats, and tightly laced whalebone corsets. They were heavy, impeded breathing, and were inconvenient for active lives. The Lily carried patterns for Turkish pantaloons with an overdress which subsequently became known as Bloomers, although Elizabeth Cady Stanton’s cousin, Elizabeth Smith Miller, wore them first. The ridicule which greeted this costume caused Bloomer, Stanton, and Anthony all to abandon it after a few years so that their innovative dressing did not detract from their reform work. Bloomers passed out of sight, except for occasional usage by farm women, until the turn of the century, when a heightened emphasis upon health and physical well-being led many women to take exercise, riding bicycles or playing tennis, and to dress in a less restrictive fashion when doing so.[60]

Some of the women involved in temperance reform also became involved in the antislavery movement. The abolitionist crusade attracted many women, blending a crusade for the rights of the weak with a concern for family life. In the greatest of the anti-slavery novels, Uncle Tom’s Cabin, Harriet Beecher Stowe uses an essentially domestic plot to highlight the evils of slavery. The pain of slavery came from its abuse of patriarchal authority, the separation of loving families, and the corruption of power. Freedom in Stowe’s words was the right of a man “to call the wife of his bosom his wife, to protect her from lawless violence; the right to protect and educate his child; the right to have a home of his own, a character of his own, unsubject to the will of another.” The novel also challenged slavery by elevating motherhood and domesticity, regardless of race, to a higher sphere. Thus when Marie St. Clare denies Mammy access to her own children she is condemned for being an unnatural mother.[61] Stowe condemned slavery because it threatened the primacy of the family, coming between mother and child and husband and wife.

Women who participated in the abolitionist movement encountered hostility from male anti-slavery workers. The World Anti-Slavery Convention in London in 1840 refused to seat American women delegates, which caused consternation and raised both women’s and men’s awareness of how gender roles limited women’s action in worthy causes. Because she was a woman, the convention forced delegate Lucretia Mott of Philadelphia to sit in the gallery, along with Elizabeth Cady Stanton, wife of a New York State delegate. Blanche Hersh’s study of feminist abolitionists highlights the extent to which anti-slavery sentiment acted as a catalyst to feminism in the mid-nineteenth century. She found that women prominent in the movement came from New England families with a tradition of radicalism, both in religion and in politics. They were committed to social reform and had a sense of their own special mission. They were also frustrated by the lack of scope women had for acting upon their deeply held beliefs, which in turn led them to question the strictures placed upon their sex.[62]

It is not surprising, then, that many of the leaders of the women’s rights movement of the mid and late nineteenth century had been active in both the temperance and abolitionist movements. Elizabeth Cady Stanton and Lucretia Mott organised the Seneca Falls Women’s Rights Convention in 1848. They used the Declaration of Independence as their model, calling for equality of the sexes before the law, an end to male oppression of women, better educational facilities, greater employment opportunities, and women’s suffrage. The only resolution which did not pass unanimously was the one which requested female suffrage. Many women at the convention opposed women’s suffrage, believing it unseemly. The convention galvanised women’s rights campaigns in the north and west, but not in the south, where the close connection between abolitionism and women’s rights made the latter anathema to white male southerners.[63]

The Civil War ended slavery and also ripped apart the feminist abolitionist alliance. Many women who had struggled valiantly to end slavery opposed the Fourteenth Amendment, which would grant the franchise to all male citizens. For the first time the Constitution specifically provided a privilege to one sex which it denied the other. Abolitionist men and many women did not believe they could overcome the opposition to enfranchising black men if, at the same time, they sought the vote for all women. This led to the development of two post-war organisations to fight for women’s suffrage, the National Woman Suffrage Association, formed in 1869 by Elizabeth Cady Stanton and Susan B. Anthony, and the American Woman Suffrage Association. The NWSA was the more radical of the two groups, seeking to force Congress to enfranchise women, while the AWSA campaigned on a state by state basis.

7. The Changing Nature of Women’s Education and Employment

By the end of the nineteenth century, women’s educational levels had risen as more and more women attended high school, providing them with new opportunities to take white collar jobs in the offices and shops which were an increasingly important sector of the economy. Women’s college attendance also began to increase at the turn of the century. In 1870, there were 582 colleges and universities in the United States; 59 per cent of these admitted men only and 12 per cent were women’s colleges. In 1890, the proportion of men’s colleges fell to 37 per cent while the number of higher educational institutions increased to 1082. The proportion of women’s colleges peaked at this time at 20 per cent. By 1910, only 27 per cent of American colleges barred women, 15 per cent took women only and 58 per cent were coeducational.[64] Many state universities funded under the provisions of the Morrill Land Grant Act resisted women’s attempts to matriculate, but found it difficult to maintain this stance in the face of concerted protests against public funds being used only for men’s benefit. By the beginning of the twentieth century, almost all public colleges and universities admitted women, leaving single sex education primarily in the private sector.

Coeducation could work to the disadvantage of women, as Mary Roth Walsh has argued in Doctors Wanted, No Women Need Apply: Sexual Barriers in the Medical Profession, 1835-1975. The admission of women to men’s medical colleges such as Johns Hopkins University led paradoxically to a decline in opportunity for women since once the universities opened their doors to women, almost all separate medical colleges for women went out of business. Public and private universities such as Johns Hopkins, Boston University and the University of Michigan imposed quotas of about 5 per cent upon the number of women they would accept. Thus women’s enrolments fell from about 10 per cent of all medical students in the 1880s and 1890s to about 5 per cent after the turn of the century. Women encountered hostility from the male medical profession in employment even after they managed to qualify as doctors. Fewer than 10 per cent of all hospitals surveyed in 1920 by the American Medical Association would hire female physicians. Women interested in medicine were routinely advised to become nurses where they could combine healing with traditional female virtues. Doctors were (male) authority figures while nurses were their (female) handmaidens. Nursing schools proliferated at the same time that opportunities for women as physicians remained static or declined. In 1920, about 96 per cent of all nurses were women, compared with about 5 per cent of all doctors.[65]

Between 1870 and 1920, more women joined the labour force and the variety of jobs they undertook expanded, but as this review of women in the medical professions suggests, certain fundamental characteristics of female employment and female employees remained the same. Most women workers were segregated into occupational ghettoes and remained in the labour force for a relatively brief time. The interaction of these factors coupled with hostility from male workers and employers kept women’s wages low and working conditions usually poor.

The proportion of employed women rose from 14 per cent in 1870 to 23 per cent in 1920. Different groups in the population varied dramatically in the proportions of women who held jobs. In 1890, about 15 per cent of all native born white women were in the labour force compared with 20 per cent of women born abroad and 40 per cent of all black women. By 1920, the proportion of foreign born women in the labour force remained static, but that of white women born in the United States increased to 23 per cent, while black women’s rose to 44 per cent.[66]

The tendency of white women not to work after marriage changed marginally during this era, while black women were far more likely to stay in the labour force regardless of marital status. Working for pay remained overwhelmingly the province of single women in this era among whites. Most families preferred women to contribute their domestic labours to their own households rather than continue to work at poorly paid jobs after marriage. The containment of women’s employment to one phase of the life cycle reflected the interaction between the objective conditions of their work (poorly paid jobs with few promotion prospects) and the gender ideologies of the era that women’s special place was the household and men’s was to support it. At the turn of the century about 3 per cent of all married white women had jobs, compared with 26 per cent of their black counterparts, while in 1920 7 per cent of the white and 32 per cent of the black married women were employed. Some of the white women who continued to work after marriage had professional careers, but many of the others worked in shops and offices.

Sociologists Roslyn L. Feldberg and Evelyn Nakano Glenn have objected that the employment models and explanations for male and female behaviour used by most historians and sociologists have a differential basis. Men’s work is explained through a job model which examines the actual nature of the work. In contrast, women’s work is explained either through a gender model which examines correct social roles or through a family economy model which looks at female employment through the prism of the family rather than the dynamics of the available jobs.[67] In evaluating which model best explains the level and nature of women’s employment, one needs to consider the extent to which gender and job opportunities interacted, as well as matters of race and ethnicity which powerfully affected female opportunities.

According to Elizabeth H. Pleck, poverty alone did not explain the higher rates of black women’s labour force participation in this era. She found that black women’s employment levels were higher than those of Italian immigrants of comparable income levels. Italian families relied upon the labour contributions of their children, particularly their sons, to offset the low incomes earned by the men. Black families had a different survival strategy, utilising women’s labour in the face of uncertain employment for men and the narrow range of opportunities for young black workers. Virginia Yans-McLaughlin similarly attributed the low labour force participation levels of Italian women in Buffalo, New York to cultural preferences and ascribed gender roles within the Italian community. However, the actual structure of employment opportunity within any community also had a strong influence on women’s work rates. As I have argued elsewhere, women’s employment levels were higher amongst all ethnic and cultural groups when female employment opportunities were plentiful. Thus Italian women in New York City had higher employment levels than their sisters in Buffalo because the larger metropolis had more numerous openings for women than the smaller one.[68]

The nature of opportunity shifted as the century ended and that shift disadvantaged immigrant and non-white women who had less access to education. This in turn limited their ability to develop their full potential and move freely within the job market. Women with lower educational levels competed unfavourably for the white collar occupations which comprised a growing segment of women’s employment. In 1870, over half of all women workers were domestic servants, about one-fourth were agricultural labourers, one-fifth were factory hands, and the rest worked in trade and transportation (typically as sales clerks) and in the professions (primarily teaching). By 1920, the proportion of women working as domestics and field hands had dropped by about half while that of industrial workers remained about the same. In 1870, these three categories accounted for 94 per cent of all women workers, declining to about 60 per cent by 1920. In that year about 16 per cent of the working women had sales and office jobs and 10 per cent were professionals (primarily teachers and nurses).

Women’s jobs were evaluated by workers themselves and society generally on a standard that had little to do with wages. A domestic servant who lived in her employer’s household might earn more than a factory hand or a sales clerk (assuming that the cash value of the servant’s room and board is included in the equation), but her job had less status because of its servile, nonmodern connotations and because it was done for other women. The comparison between most white collar jobs and factory work shows that clerking was more prestigious than operating a sewing machine but generally worse paid. Notions about appropriate locations for women’s employment led to an emphasis on “genteel” work for women, characterised by clean surroundings, white blouses, and the absence of foreign born or nonwhite workmates.

Employers anxious to keep wage levels down manipulated women’s notions of gentility to their own advantage. Department store owners refused to hire foreign born or black women. Elizabeth Butler found that more than four-fifths of the women working in Baltimore’s retail trade at the turn of the century were born in the United States as were a comparable level of Pittsburgh’s department store workers. The demand for such jobs was high enough that wages were lower than in the factories or homes of these cities. As Susan Porter Benson notes in her investigation of women’s employment in the burgeoning retail sector of the economy, employers used notions of gender to depress women’s wages and segregate them into lower paying jobs within stores.[69]

Moreover, the women who took these jobs had little chance for advancement. Women rarely held supervisory positions. They tended to work in occupations in which work was subdivided into small tasks where proficiency might bring lower piece rates rather than promotion to more advanced tasks. Despite the expansion of the range of jobs undertaken by women, domestic service was still the single largest job category for women in this era. Given that most households employed only one servant and that wages were set by custom rather than individual merit, the prospects for advancement were virtually nonexistent. In other settings women were discouraged by gendered assumptions about hierarchies and the motives ascribed to women workers. Although women workers knew they were in the labour force because they and their families needed the money, employers used the pernicious notion that they worked only for “pin money” as an excuse to pay women poorly. There were few occupations in which the sexes did the same work, teaching being one of them. Yet, women teachers received lower wages than men, were rarely promoted, and were concentrated in the more poorly paid elementary schools. Sex role conventions thus limited the type of work women could do, their remuneration, and their prospects for promotion. The net result was that most women in this era still viewed work as an interlude rather than a career.

Women’s lack of bargaining power hampered their struggle to improve their wages, working conditions, and promotion prospects. They found it more difficult than men to organise into trade unions. The fragmented and personalised nature of domestic service, the conflation of racial and ethnic hierarchies and the poverty of the workers all inhibited unionising efforts. In industry, male co-workers viewed women with suspicion. Their lower wages meant that they had smaller savings upon which to rely in times of industrial turmoil. Many male union members wanted to be paid a higher wage which would enable them to support their families without help from women or children. To them women’s employment, associated as it frequently was with the introduction of machinery, undermined the position of male workers. They favoured women’s removal from the labour force rather than their organisation. Women did achieve some signal successes in forming protective organisations, particularly in the garment industry where so many employees were female. Nevertheless, even where women were active in the unions they were underrepresented among the leadership.

Alice Kessler-Harris attributes women trade unionists’ indifferent successes to ambivalent cultural patterns and antagonism from union men. Leslie Woodcock Tentler emphasises women workers’ own characteristics, their youth and acceptance of gender roles, in her explanation of women’s underrepresentation in the union movement. Roger Waldinger examines the changing nature of the garment trade and finds within its increasing reliance upon small outside contractors an explanation for the failure to organise women. The ephemeral nature of such firms made employment unstable. This in turn lowered women’s attachment to their jobs and ability to organise.[70] Another reason for women’s relatively low level of unionisation is that the areas in which women’s employment expanded most rapidly, white collar and professional occupations, proved difficult to organise.

Although trade unions achieved some successes during this era, they protected a minority of male industrial workers and very few women. In order to provide some protection for the large majority of unorganised working women, a few women formed an alliance across the social classes. Formed in 1903, the Women’s Trade Union League pressured employers into improving working conditions and wages while sustaining strike efforts through the support of affluent women. The WTUL moved away from direct support of women’s organising activities towards an emphasis on legislative reform within a decade of its founding. It thus embodied the new tendency towards legislative rather than union protection of women workers’ interests.[71]

Attempts had been made throughout the nineteenth century to enlist the power of the state on the side of labour, but the courts threw out such legislation, claiming that it violated the rights of workers to freely contract their labour. At the end of the nineteenth century, however, state legislatures moved from an emphasis on protecting all workers to a gender and age-based protection, arguing that the state had a special interest in preserving the health and well-being of women and children. Many states forbade women to take jobs which would supposedly endanger their moral purity (working in saloons), reproductive capacity (working 12-hour days), or bring them into danger through direct contact with strangers (delivering telegrams, reading electricity and gas meters, or driving taxis).[72]

In 1908, the Supreme Court accepted that states could legitimately protect women, but not men, in Muller v Oregon. In doing so it enshrined the perception of women as mothers first and individual wage earners second. In that landmark case the Court differentiated between men’s right to contract the hours of their labour, with which it was unconstitutional to interfere, and women’s, where the state had an obligation to protect them.

Woman’s physical structure and the performance of maternal functions place her at a disadvantage in the struggle for subsistence … Differentiated by these matters from the other sex, she is properly placed in a class by herself, and legislation designed for her protection may be sustained, even when like legislation is not necessary for men and could not be sustained. It is impossible to close one’s eyes to the fact that she still looks to her brother and depends upon him … Her physical structure and a proper discharge of her maternal functions – having in view not merely her own health, but the well-being of the race – justify legislation to protect her from the greed as well as the passion of man.[73]

The Supreme Court used assumptions about gender roles, women’s supposed weakness, and the vested interest of the state in preserving the vigour and strength of the race to differentiate between the sexes. It sought to protect (potential) motherhood at the expense of individual interests, needs, or capabilities. It thus made sex a valid basis of classification which subsequently kept women off juries, out of some state supported colleges and some state licensed occupations. By viewing women primarily as mothers, the Supreme Court foreclosed job opportunities and limited women’s citizenship rights.

8. The Woman Movement

The term “Woman Movement” has been used by historians to describe the upsurge of female activism in the late nineteenth century. It was a complex, multilayered movement replete with shifting alliances and internal contradictions, not the least of which was brought about by the tension between arguing that women should have rights because they were citizens and claiming that they should have rights because of their special feminine talents and insights. As Nancy Cott has noted in her incisive history, The Grounding of Modern Feminism, by the end of the nineteenth century the woman movement wished both to eliminate sex-specific restrictions and preserve those special feminine qualities embodied in the notion of a separate women’s sphere.[74]

Many, but not all, of the concerns around which women organised in the decades between the Civil War and World War One were outgrowths of antebellum issues. The campaign for women’s suffrage is the most obvious of these, but agitation for temperance also continued under the auspices of the Women’s Christian Temperance Union, founded in 1874. The WCTU was composed primarily of married, middle-class women from the towns and smaller cities. Most members were native-born whites who attended evangelical churches, especially Baptist, Methodist, and Presbyterian denominations. Many members were heavily involved in church choirs, home and foreign missionary societies, needlework guilds, sewing circles and mothers’ groups. Their activities were an outgrowth of their church work and family orientation. The main focus was temperance, but they also devoted themselves to a range of charitable activities.

WCTU members distributed flowers to aged ladies and sick persons, and clothing and food to the poor. They supported missionaries, organised a children’s temperance group, and pressed for temperance instruction in the schools. The WCTU campaigned at the local level for more rigorous Sabbath observance and strict fidelity to existing limitations on the sale of alcohol and cigarettes. At the national level their main concern was the passage of a constitutional amendment prohibiting the sale of alcohol. At both the state and national level, the crusade for temperance gave the women involved experience in organising political campaigns. They circulated petitions, picketed saloons, published magazines, and lobbied legislatures and town councils. Increasingly they supported women’s suffrage as a means to accomplish temperance ends.[75]

Rural women fought for the rights of farmers and country dwellers in the Grange, Farmers Alliance, and Populist movements. For women living on isolated farms these movements provided social contact as well as serving the economic and political interests of farm families. Among the more notable Populist campaigners, Mary Lease lectured in western districts, telling farmers they needed to raise less corn and more hell. The Populist Party supported women’s suffrage as well as other measures which accorded well with its doctrine of popular rule including the direct election of senators, a system of referenda and recall, and the “Australian” or secret ballot.

Women’s organisations proliferated at the turn of the century. Not all had the same direct political content of Mary Lease’s populism. A number of women’s organisations which favoured suffrage did so instrumentally; that is, for what the vote could accomplish, rather than having enfranchisement as their primary aim. Others were reluctant to support women’s suffrage for fear of alienating conservative members. This was true of the General Federation of Women’s Clubs formed in 1890 to provide a national organisation for women’s literary clubs scattered throughout the cities and towns. Initially the GFWC opposed women’s suffrage at the same time that it urged its members to be active in their communities. It also excluded black women in order not to offend white sensibilities.[76] Shut out by prejudice, black women formed their own National Association of Colored Women in 1896. According to Eleanor Flexner’s pioneering history of the Women’s Rights Movement, these clubs had an importance and content which set them apart from white women’s literary and educational societies. The NACW reflected the economic and social realities of black society. It crusaded against lynching and “for the benefit of all humanity,” in the words of Josephine St. Pierre Ruffin, a founding member.[77]

Women activists of this era increasingly relied upon arguments about women’s special virtues rather than their natural rights in order to justify their endeavours and enlarge the scope of their activities. The merger of the two suffrage associations into the National American Woman Suffrage Association in 1890 reinvigorated the national campaign for the vote, but with different intellectual underpinnings. This shift occurred because many of the specific legal disabilities suffered by women at the middle of the nineteenth century had been corrected by its end. Married women could now own property and make binding contracts, for example. Women’s educational levels rose. More women had entered the labour force and the nature of that labour force had changed as middle class women took jobs, for example, as settlement house and social workers. The generation of suffragists led by Harriot Stanton Blatch (Elizabeth Cady Stanton’s daughter) found in women’s new economic roles both a justification and a need for the vote.[78] Better educated and more comfortable in the world outside the home, these women were less willing to be silent in public.

The changing nature of the argument can be seen both in the Progressive movement and the campaign for the vote. The women of the Progressive movement spearheaded the battle for the reform of municipal government, the improvement of urban living conditions, and the protection of those members of society less able to defend themselves, including children, working women, and the poor. Like Progressives generally, they were concerned by the unequal relations between labour and capital, and they formed the National Consumers’ League in order to lobby affluent Americans into using their buying power to improve the working conditions of shop assistants and textile workers. The National Consumers’ League fused economic and legislative pressure, publishing lists of stores which conformed to their standards of treatment for employees and pressing for reform legislation. They recognised that their legislative efforts would be more effective if they had the vote to back up their campaigns.

The Settlement House movement embodied many of the strains and tensions of the turn of the century Woman Movement. Kathryn Kish Sklar’s analysis of Chicago’s Hull House in the 1890s found that the power of women settlement workers came both from their having a separate female institution and from their access to male spheres of influence.[79] Jane Addams, the founder of the Hull House Settlement, believed that women should have the vote so that they could legislate a more just world. In her 1910 essay, “Why Women Should Vote”, published in the popular women’s magazine The Ladies Home Journal, Addams wrote that women’s first duty was to their own households. In the complex urban world emerging at the turn of the century, they failed in that duty unless they broadened their sense of responsibilities to the world outside the households, to the environment in which they lived and its social conditions.

Women’s purview extended to the education and welfare of their children. They had to ensure that their children were provided with good schools, kept free from the vicious influences of the street, and were protected by adequate child labour legislation if they worked. “More than once,” Jane Addams wrote,

Woman has been convinced of the need of the ballot by the futility of her efforts in persuading a business man that young children need nurture in something besides the three r’s. Perhaps, too, only women realise the influence which the school might exert upon the home if a proper adaptation to actual needs were considered. An Italian girl who has lessons in cooking at the public school will help her mother to connect the entire family with American food and household habits. That the mother has never baked bread in Italy—only mixed it in her own house and then taken it out to the village oven—makes it all the more necessary that her daughter should understand the complications of a cooking stove.[80]

Her argument justified women’s suffrage in the most socially conventional terms: those of motherhood and patriotism. It played upon fears of unassimilable foreigners and broadened women’s remit from their own families to all families. As children attended school at younger ages and more goods were manufactured outside the home, women’s interests should follow their children into the schools and the goods they purchased into the factories where they were made and the shops in which they were sold. Echoing the Supreme Court decision of Muller v Oregon written two years earlier, Addams believed that older women should see to it that younger ones were not incapacitated for family life because they were forced to work exhausting hours under unsanitary conditions. As with many reformers of her era, Addams believed that women needed the vote in order to preserve their homes.

A few women wanted the vote but nevertheless rejected the maternalist argument for suffrage. Charlotte Perkins Gilman favoured redesigning housing to lessen the burden of housework and childcare upon women. In two of her major books, Women and Economics and The Home, she advocated the removal of domesticity as women’s central concern. Most female activists and women generally accepted its crucial place in women’s lives however much they differed on the political consequences of maternal devotion. Gilman, on the other hand, discarded sentimentalised views of motherhood. She claimed that all advances in children’s education and health were due to professional intervention rather than maternal instinct. She supported women’s suffrage, arguing that communal kitchens, cleaning services, and creches would free women from their domestic burdens and enable them to participate fully in politics. Social progress depended upon “the smooth development of personal character, the happy fulfilment of special function. The home, in its ceaseless and inexorable demands, stops this great process of specialisation in women, and checks it cruelly in men.”[81] In order for women in particular and humanity in general to advance, women had to be freed from the burdens of the home and become full citizens.

The problem suffragists faced was how to persuade men to extend the vote to them. Two strategies emerged: a campaign to amend the voting legislation in each state and another for a federal constitutional amendment, which would need to obtain two-thirds approval of both houses of Congress and three-quarters of the state legislatures. Wyoming Territory applied for statehood in 1889 with a constitution which contained a clause providing for women’s suffrage. By the time of its admission, nineteen states granted limited voting rights to women, usually to widows with school age children, in school board elections and on tax and bond issues.

There were nearly 500 campaigns in 33 states to get women’s suffrage before the voters between 1870 and 1910. Of these, only 17 resulted in referenda and only two resulted in men voting to share the ballot with women. By the turn of the century, four western states (Wyoming, Colorado, Utah, and Idaho) had women’s suffrage. By 1913, five more states had been added to the suffrage column. Attempts to gain the ballot elsewhere ran into a stubborn combination of hostility from the liquor interests (who connected women’s suffrage with temperance and concluded that both were bad for business) and political and social conservatives.

Inspired by the militancy of British suffrage advocates and stymied by the intransigence of state legislatures, both the National American Woman Suffrage Association and the newly formed Congressional Union (under the leadership of Alice Paul) began a fresh campaign for an amendment to the federal constitution. The Congressional Union developed new tactics, including holding the party in power responsible for suffrage. It worked for the defeat of Democratic candidates in those states where women had the vote. The herculean efforts to obtain the vote in the large industrial states of Massachusetts, New York, New Jersey, and Pennsylvania met a resounding defeat in 1915. After these defeats Carrie Chapman Catt rallied the NAWSA to change its tactics and work wholeheartedly for a constitutional amendment.

The Congressional Union, now formed into the National Women’s Party, continued its pressure on the Democratic Party and President Wilson by picketing the White House with banners querying “How Long Must Women Wait for Liberty?”[82] The pickets endured attack, arrest, and force feeding in jail, which made martyrs of them and attracted sympathy to the suffrage cause. A year after the picketing began the House of Representatives passed a suffrage amendment to the Constitution, but the Senate rejected it. The NWP then started picketing on Capitol Hill.

When the United States entered World War One, the NAWSA offered its services to the President, while the NWP (with many Quaker members), continued its singleminded dedication to the suffrage amendment. The majority of American women did not emulate the pacifist stance of the Women’s Party. Many worked for the war effort in foundries, blast furnaces, and railroad yards. The brief nature of the United States’ participation in the war limited the consequences of these innovations. When the men returned women lost their jobs as bus conductors and on the railroads. Employers used the state legislative provisions against female night work to lay off the women they had hired during the war in manufacturing jobs. Women did retain their jobs in the expanding sectors of the economy, in office jobs, shops, and communications.[83] The war thus had little impact on the highly gendered basis of women’s employment. Women still worked with other women and for men.

It is possible to view the Nineteenth Amendment which finally gave women the vote in 1920 as the most lasting legacy of the war. At the start of the war women enjoyed full suffrage in eleven states, all but one of these west of the Mississippi. Except for the primary vote in Arkansas, there was no women’s suffrage in the south, where white supremacists opposed enfranchisement because they feared the possibility of black women’s voting. Carrie Chapman Catt, president of the National Association, and Nettie Rogers Shuler, its corresponding secretary, believed that many big business interests objected to the Nineteenth Amendment because they feared women’s penchant for reform activities.[84] Before the proposed federal amendment came to a vote President Wilson spoke to the Senate urging its passage because “we have made partners of women in this war; shall we admit them only to a partnership of suffering and sacrifice and toil and not to a partnership of privilege and right?”[85]

According to Eleanor Flexner, Wilson’s intervention had little positive effect. The grounds of the argument had shifted from rights to expediency. Few opponents still shared the concerns of Senator McCumber from North Dakota who wished to preserve mothers from excitement and strife. They now argued that the issue was one of states’ rights and that the decision to enfranchise women should be taken at the local level rather than by the federal government. In 1918, three more states passed suffrage amendments, but the Senate still voted against sending the constitutional amendment to the states. Only in 1919 did it finally join the House of Representatives in passing the Nineteenth Amendment by the required two-thirds majority. The requisite thirty-six states ratified the amendment but with several cliff hanging votes so that the outcome was in doubt until the last.[86]

The ratification of the suffrage amendment dissipated the feminist energies which had been narrowly focused upon this single goal for the past two decades. The NAWSA became the League of Women Voters, dedicated to voter education, while the Woman’s Party began its long and apparently futile drive to further alter the constitution with an equal rights amendment. At least initially, Progressive supporters of women’s suffrage were disappointed: women did not use their votes very differently from men.

Proponents of women’s suffrage had made few specific claims as to the outcome of female enfranchisement, although there was an underlying assumption that women’s suffrage would somehow lead to a better, more just world. Most women themselves perceived their lives in terms of their class, religious, racial, or ethnic interests, and frequently, as with men, as a combination of those interests. They tended not to differentiate between their needs as women and as members of their respective groups; thus they voted in much the same way as the men. Moreover, voter participation generally declined in the 1920s both because of more complex voter registration procedures and the rise of administrative rather than elective government, particularly at the local level. In order to succeed in electoral politics, women had to work through the same political parties which had for so long denied them the vote and which prized loyalty as the cardinal political virtue. As a result party politics also repressed the articulation and action upon separate women’s issues.[87]

The new order emerging in the 1920s owed less to women’s presence in the voting booths than to consumerism, mass communications, falling birth rates, and rising employment levels. In Popcorn Venus Marjorie Rosen argues that “the birth of the movies coincided with—and hastened—the genesis of the modern woman.”[88] But the modern woman, as portrayed by Hollywood, rarely differed in her essential characteristics from the old fashioned one. The film industry created and consolidated a vivid image of women which limited them to a few roles even as it endowed some of them with a hitherto unknown glamour. The silent movies dealt in readily recognisable icons rather than complex political treatises. As Victorian morality plays they presented women in a few roles: the vamp, the virgin, and the mother protecting her young. Minority women’s roles were limited to servant parts. Career women were a rarity although spunky women abounded. Early movie heroines such as Mary Pickford were unthreatening child women rather than complex characters.

Before Hollywood’s own conventions rigidified there were some women scriptwriters and a few directors, but the expansion of women’s roles was, according to Molly Haskell, more social than political. The new morality of these films consisted of “a vicarious splurge for women who wanted to look and feel daring without actually doing anything, who wanted to shock the world by coming home after midnight—but no later.”[89] Few films showed any awareness of women in a wide range of roles or, despite the battles raging over suffrage, as political, social, or economic equals. Even when women starred in the movies they rarely had control over them. Thus the mass media constrained rather than expanded women’s sphere. Like the women of this era, Hollywood heroines had star billing but decidedly circumscribed roles. They moved to the fore of one part of the stage, but gender circumscribed their parts in the movies, at home, in the labour force, in politics, and in society generally.

This era closed with more women in the labour force, but still severely constrained by gendered assumptions about what jobs women could do. Family sizes had fallen, but home and domesticity continued to be women’s primary focus. Women achieved suffrage, but were not fully incorporated into politics. After 100 years of political and social activism women had acquired a legitimate public voice, but did not yet know how to use it to their fullest advantage. For all the campaigns of the nineteenth and early twentieth centuries women’s lives were still strongly controlled by social assumptions about appropriate activities rather than by individual talents or interests. They had obtained the vote, but equality was yet to come.

10. Guide to Further Reading

Given the incredible proliferation of scholarship on women in the last decade it is impossible to do more than scratch the surface of the available material. I have tried to avoid repeating the works cited in the footnotes in order to cover more material.

General histories of women in the United States between 1820 and 1920 include Catherine Clinton, The Other Civil War. American Women in the Nineteenth Century (New York: Hill and Wang, 1984), Carl Degler, At Odds. Women and the Family in America from the Revolution to the Present (New York: Oxford University Press, 1980), Gerda Lerner, The Majority Finds its Past. Placing Women in History (New York: Oxford University Press, 1979), Anne Firor Scott, Making the Invisible Woman Visible (Urbana: University of Illinois Press, 1984) and Carroll Smith-Rosenberg, Disorderly Conduct. Visions of Gender in Victorian America (New York: Alfred A. Knopf, 1985).

Biographical and bibliographical sources include: Edward T. James, Janet Wilson James, and Paul S. Boyer, Notable American Women, 1607-1950 (Cambridge, Mass.: Harvard University Press, 1971); and Cynthia E. Harrison, Anne Firor Scott, Pamela R. Byrne, Women in American History: A Bibliography (Santa Barbara, California: ABC-Clio Press, 1979). Individual biographies illuminate the way in which particular women constructed their lives with regard to or in spite of gender constraints. In addition to biographies of feminist leaders and reformers, those of women pathbreakers in the professions provide insights into daily life, obstacles, defeat, triumph, and coping.

Studies limited chronologically to the beginning of the era covered here include Linda K. Kerber, Women of the Republic. Intellect and Ideology in Revolutionary America (Chapel Hill: University of North Carolina Press, 1980), Nancy F. Cott, The Bonds of Womanhood. “Women’s Sphere” in New England, 1780-1835 (New Haven, Conn.: Yale University Press, 1977), Linda Grant DePauw, Founding Mothers: Women in the Revolutionary Era (Boston: Houghton Mifflin, 1975), Mary Beth Norton, Liberty’s Daughters: The Revolutionary Experience of American Women, 1750-1800 (Boston: Little Brown, 1980), and Marylynn Salmon, Women and the Law of Property in Early America (Chapel Hill: University of North Carolina Press, 1986). Mary P. Ryan, Cradle of the Middle Class: The Family in Oneida County, New York, 1790-1865 (Cambridge: Cambridge University Press, 1981) examines a slightly later period.

Writings about domesticity and feminine culture include Ann Douglas, The Feminization of American Culture (New York: Alfred A. Knopf, 1977), Barbara Leslie Epstein, The Politics of Domesticity. Women, Evangelism and Temperance in Nineteenth Century America (Middletown, Conn.: Wesleyan University Press, 1981), and Mary P. Ryan, The Empire of the Mother: American Writing about Domesticity, 1800 to 1860 (New York: Institute for Historical Research and Haworth Press, 1982). Ellen Carol Dubois, Mari Jo Buhle, Temma Kaplan, Gerda Lerner and Carroll Smith-Rosenberg, “Politics and Culture in Women’s History: A Symposium”, Feminist Studies 6, 1 (Spring 1980), pp. 26-64, explores the connection between domesticity and politics. Mary Kelley, Private Lives, Public Stage: Literary Domesticity in Nineteenth Century America (New York: Oxford University Press, 1984) also discusses the ideology and expression of domesticity.

General works on women’s writing include Nina Baym, Women’s Fiction. A Guide to Novels by and about Women in America, 1820-1870 (Ithaca, N.Y.: Cornell University Press, 1978), Patricia Stubbs, Women and Fiction. Feminism and the Novel, 1880-1920 (Brighton: Harvester Press, 1979), Judith Fryer, The Faces of Eve. Women in the Nineteenth Century American Novel (New York: Oxford University Press, 1976) and Sally Allen McNall, Who Is In the House. A Psychological Study of Two Centuries of Women’s Fiction in America, 1795 to the Present (New York: Elsevier, 1981). Carole McAlpine Watson, Prologue. The Novels of Black American Women, 1891-1965 (Westport, Conn.: Greenwood Press, 1985), Kristin Herzog, Women, Ethnics, and Exotics. Images of Power in Mid-Nineteenth Century Fiction (Knoxville: University of Tennessee Press, 1983), Susan J. Rosowski and Helen Winter Stauffer, Women and Western American Literature (Troy, N.Y.: Whitson Publishing Co., 1982), and Anne Goodwyn Jones, Tomorrow is Another Day. The Woman Writer in the South, 1859-1936 (Baton Rouge: Louisiana State University Press, 1981) all present specialised regional and ethnic studies of women writers.

Books on women in the South include Suzanne Lebsock, The Free Women of Petersburg. Status and Culture in a Southern Town, 1784-1860 (New York: Norton, 1984), Jean Friedman, et al., Sex, Race and the Role of Women in the South (Jackson: University Press of Mississippi, 1983), Anne Firor Scott, The Southern Lady from Pedestal to Politics, 1830-1930 (Chicago: University of Chicago Press, 1970), Elizabeth Massey, Bonnet Brigades (New York: Alfred A. Knopf, 1966), Bell Irvin Wiley, Confederate Women (Westport, Conn.: Greenwood Press, 1975), and Catherine Clinton, The Plantation Mistress: Women’s World in the Old South (New York: Pantheon, 1982).

Many of the books on black women also examine women’s lives in the south since most black women until the end of the nineteenth century lived south of the Mason Dixon Line. Elizabeth Fox-Genovese, Within the Plantation Household: Black and White Women of the Old South (Chapel Hill, North Carolina: University of North Carolina Press, 1988) illuminates class, gender, and race relations primarily in the south, but also in the ante-bellum United States generally. Other works include: Dorothy Sterling, We Are Your Sisters: Black Women in the Nineteenth Century (New York: Norton, 1984), Paula Giddings, When and Where I Enter. The Impact of Black Women on Race and Sex in America (New York: Morrow, 1984), Jacqueline Jones, Labor of Love, Labor of Sorrow: Black Women, Work and the Family from Slavery to the Present (New York: Basic Books, 1985), and Gloria T. Hull, Patricia Bell Scott, and Barbara Smith, All the Women Are White and All the Blacks Are Men, but Some of Us Are Brave. Black Women’s Studies (Old Westbury, N.Y.: Feminist Press, 1982). Gerda Lerner edited a wide-ranging collection of black women’s writings in Black Women in White America (New York: Pantheon, 1972). Bert James Lowenberg and Ruth Bogin, Black Women in Nineteenth Century American Life. Their Words, Their Thoughts, Their Feelings (University Park: Pennsylvania State University Press, 1976) covers a narrower chronological span. Trudier Harris, From Mammies to Militants. Domestics in Black American Literature (Philadelphia: Temple University Press, 1982) and Judith Rollins, Between Women. Domestics and Their Employers (Philadelphia: Temple University Press, 1985) overlap somewhat in their focus.

Studies of women in the west tend to focus on how women responded to the westward journey and pioneer life. Typical of this approach are Julie Roy Jeffrey, Frontier Women. The Trans-Mississippi West, 1840-1880 (New York: Hill and Wang, 1979), John Mack Faragher, Women and Men on the Overland Trail (New Haven, Conn.: Yale University Press, 1979) and Sandra L. Myers, Westering Women and the Frontier Experience, 1800-1915 (Albuquerque: University of New Mexico Press, 1982). Faragher’s Sugar Creek. Life on the Illinois Prairie (New Haven, Conn.: 1986) is a finely textured study of Native and Anglo-Americans and the community which developed when the whites moved into Central Illinois. He examines women’s work, and political and cultural roles. Lillian Schlissel has edited women’s diaries in Women’s Diaries of the Westward Journey (New York: Schocken Books, 1982) which lets readers determine for themselves how women felt about moving west. Christine Fischer (ed.), Let Them Speak for Themselves. Women in the American West 1849-1900 (Hamden, Conn.: Archon Books, 1977) takes a similar approach. Joan Jensen in Loosening the Bonds. Mid-Atlantic Farm Women, 1750-1850 (New Haven, Conn.: 1986) and With These Hands. Women Working on the Land (Old Westbury, N.Y.: Feminist Press, 1981) considers the work done by rural women for their families and for market. Susan Armitage and Elizabeth Jameson, The Women’s West (University of Oklahoma Press, 1987) and Glenda Riley, Women and Indians on the Frontier, 1825-1915 (Albuquerque: University of New Mexico Press, 1984) examine an underexplored area of scholarship.

There are many studies of the origins of feminism. Nancy Cott, The Grounding of Modern Feminism (New Haven, Conn.: Yale University Press, 1987), Barbara J. Berg, The Remembered Gate. The Origins of American Feminism (New York: Oxford University Press, 1978), Bell Hooks, “Ain’t I a Woman?” Black Women and Feminism (Boston: South End Press, 1981), and Keith Melder, Beginnings of Sisterhood. American Woman’s Rights Movement, 1800-1850 (New York: Schocken Books, 1977) all examine feminism and women’s rights. Ellen Carol DuBois, Feminism and Suffrage. The Emergence of an Independent Women’s Movement in America, 1848-1869 (Ithaca, N.Y.: Cornell University Press, 1978), Eleanor Flexner, Century of Struggle. The Woman’s Rights Movement in the United States (Cambridge, Mass.: Harvard University Press, 1959), Aileen S. Kraditor, The Ideas of the Woman Suffrage Movement, 1890-1920 (New York: Columbia University Press, 1965), and Anne Firor Scott and Andrew M. Scott, One Half the People. The Fight for Woman Suffrage (Philadelphia: Lippincott, 1975) all focus upon the development of the women’s rights movement and the fight for the vote. Abigail Scott Duniway, Pathbreaking. An Autobiographical History of the Equal Suffrage Movement in the Pacific Coast States (New York: Source Book Press, 1970) and Carrie Chapman Catt and Nettie Rogers Shuler, Woman Suffrage and Politics (New York: Charles Scribner’s Sons, 1926) provide eyewitness accounts of the fight for the vote.

Other studies of women political crusaders include Blanche Glassman Hersh, The Slavery of Sex. Feminist Abolitionists in America (Urbana: University of Illinois Press, 1978), Alma Lutz, Crusade for Freedom. Women and the Antislavery Movement (Boston: Beacon Press, 1968), Alan P. Grimes, The Puritan Ethic and Woman Suffrage (New York: Oxford University Press, 1967), Jack S. Blocker, “Give to the Winds Thy Fears”. The Women’s Temperance Crusade (Westport, Conn.: Greenwood Press, 1985), and Ruth Bordin, Woman and Temperance. The Quest for Power and Liberty, 1873-1900 (Philadelphia: Temple University Press, 1981).

The general topic of women and reform includes sexual, social, economic, and educational reform. Readers are directed to the following investigations: Estelle B. Freedman, Their Sisters’ Keepers: Women’s Prison Reform in America, 1830-1930 (Ann Arbor: University of Michigan Press, 1981), Barbara Kuhn Campbell, The “Liberated” Woman of 1914. Prominent Women in the Progressive Era (Ann Arbor, Mich.: UMI Research Press, 1979), Kathryn Kish Sklar, Catherine Beecher. A Study in American Domesticity (New York: W. W. Norton, 1973), Linda Gordon, Woman’s Body, Woman’s Right. A Social History of Birth Control in America (New York: 1976), Mari Jo Buhle, Women and American Socialism, 1870-1920 (Urbana: University of Illinois Press, 1981), Karen J. Blair, The Clubwoman as Feminist: True Womanhood Redefined, 1868-1914 (New York: Holmes and Meier, 1980), and Barbara Solomon, In the Company of Educated Women. A History of Women and Higher Education in America (New Haven, Conn.: Yale University Press, 1985).

The relationship between women and their families is the subject of Carl Degler, At Odds (New York: Oxford University Press, 1980), Herbert Gutman, The Black Family in Slavery and Freedom (New York: Pantheon, 1976), S. J. Kleinberg, The Shadow of the Mills. Working Class Families in Pittsburgh, 1870-1907 (Pittsburgh: University of Pittsburgh Press, 1989), Virginia Yans McLaughlin, Family and Community: Italian Immigrants in Buffalo, 1880-1930 (Ithaca, N.Y.: Cornell University Press, 1977), and Jacqueline Jones, Labor of Love, Labor of Sorrow. Black Women, Work, and the Family from Slavery to the Present (New York: Basic Books, 1985). Elizabeth Pleck, Domestic Tyranny: The Making of American Social Policy against Family Violence from Colonial Times to the Present (Oxford: Oxford University Press, 1987) and Linda Gordon, Heroes of Their Own Lives: The Politics and History of Family Violence (London: Virago Press, 1989) discuss the difficult subject of wife and child abuse.

Almost all of the titles mentioned in the preceding paragraph also contain material on women’s work inside and outside the home. There are many specialised analyses of women in the labour force, one of the best being Alice Kessler-Harris, Out to Work. A History of Wage-Earning Women in the United States (New York: Oxford University Press, 1982). Milton Cantor and Bruce Laurie (eds.), Class, Sex, and the Woman Worker (Westport, Conn.: Greenwood Press, 1977) contains valuable articles. Mary H. Blewett, Men, Women and Work: Class, Gender, and Protest in the New England Shoe Industry, 1780-1910 (Urbana: University of Illinois Press, 1988); Susan Porter Benson, Counter Cultures: Saleswomen, Managers, and Customers in American Department Stores, 1890-1940 (Urbana: University of Illinois Press, 1988); Joanne J. Meyerowitz, Women Adrift: Independent Wage Earners in Chicago, 1880-1930 (Chicago: University of Chicago Press, 1988); Christine Stansell, City of Women: Sex and Class in New York, 1789-1860 (New York: Alfred Knopf, 1986); Margery N. Davies, Woman’s Place Is at the Typewriter. Office Work and Office Workers, 1870-1930 (Philadelphia: Temple University Press, 1982) and David M. Katzman, Seven Days a Week. Women and Domestic Service in Industrialising America (New York: Oxford University Press, 1978) explore modern and nonmodern occupations in this era. Susan Estabrook Kennedy, If All We Did Was to Weep at Home. A History of White Working Class Women in America (Bloomington: Indiana University Press, 1979), Sarah Eisenstein, Give Us Bread but Give Us Roses, Too. Working Women’s Consciousness in the United States, 1890 to the First World War (London: Routledge & Kegan Paul, 1983), and Maurine Weiner Greenwald, Women, War and Work. The Impact of World War I on Women Workers in the United States (Westport, Conn.: Greenwood Press, 1980) look at the way in which women’s employment and women’s consciousness developed.

Hollywood has been the subject of numerous studies. Two are particularly relevant: Marjorie Rosen, Popcorn Venus (London: 1975) and Molly Haskell, From Reverence to Rape. The Treatment of Women in the Movies (London: Penguin Books, 1974). Marguerite Ickis, The Standard Book of Quilt Making and Collecting (New York: Dover Publications, 1959) and Patricia Mainardi, “Quilts: The Great American Art” (Radical America, Vol. 7, No. 1, 1973) both discuss this women’s art form, but from very different perspectives. Gladys-Marie Fry, Stitched from the Soul: Quilting in the Ante-Bellum South (New York: Dutton, 1989) discusses the quilts made by slaves.

9. Notes

  1. For an overview of the debate over what constitutes history and women’s place within it see “AHR Forum” in American Historical Review, 94 (June, 1989), pp.581-698.
  2. Berenice A. Carroll, Liberating Women’s History (Urbana: University of Illinois Press, 1976), p.89.
  3. Gerda Lerner, “Placing Women in History: Definitions and Challenges”, Feminist Studies, 3 (1975), pp.5-15.
  4. Carl Degler, “In Pursuit of an American Dream”, American Historical Review, 92 (1987), pp.1-2. Joan Wallach Scott, “History in Crisis? The Others’ Side of the Story”, American Historical Review, 94 (1989), pp.689-90.
  5. For an overview of economic change as it affected women in this era see S. J. Kleinberg, “Women in the Economy of the United States from the American Revolution to 1920” in S. J. Kleinberg (ed.), Retrieving Women’s History: Changing Perceptions of the Role of Women in Politics and Society (Oxford: Berg/Unesco, 1988).
  6. Nancy Cott, Bonds of Womanhood: Woman’s Sphere in New England, 1780-1835 (New Haven, Conn.: Yale University Press, 1977), pp.5-6. Paula Baker, “The Domestication of Politics: Women and American Political Society, 1780-1920”, American Historical Review, 89 (1984), pp.620-647.
  7. Mary Beth Norton, “The Evolution of White Women’s Experience in Early America”, American Historical Review, 89 (1984), pp.593-619.
  8. For contemporary women’s own words on the subject see Aileen S. Kraditor, Up From the Pedestal: Landmark Writings in the American Woman’s Struggle for Equality (Chicago: Quadrangle Books, 1968).
  9. Linda K. Kerber, Women of the Republic: Intellect and Ideology in Revolutionary America (Chapel Hill, North Carolina: University of North Carolina Press, 1980), pp.11-12.
  10. Barbara Welter, “The Cult of True Womanhood”, American Quarterly XVIII (1966), pp.151-76.
  11. Ruth H. Bloch, “American Feminine Ideals in Transition: The Rise of the Moral Mother, 1785-1815”, Feminist Studies 4 (1978), pp.101-126.
  12. Elizabeth Fox-Genovese, Within the Plantation Household (Chapel Hill, North Carolina: University of North Carolina Press, 1988), pp.78-79.
  13. Ellen DuBois, “The Radicalization of the Woman Suffrage Movement”, Feminist Studies 3 (1975), p.65.
  14. Ruth H. Bloch, “The Gendered Meaning of Virtue in Revolutionary America”, Signs 13 (1987), p.57.
  15. Charles W. Akers, Abigail Adams. An American Woman (Boston: Little, Brown, 1980), pp.43-45.
  16. Mary Beth Norton, Liberty’s Daughters: The Revolutionary Experience of American Women, 1750-1800 (Boston: Little, Brown, 1980). Elaine F. Crane, “Dependence in the Era of Independence: The Role of Women in a Republican Society” in Jack P. Greene (ed.), The American Revolution: Its Character and Its Limits (New York: New York University Press, 1987).
  17. Laurel T. Ulrich, “A Friendly Neighbor: Social Dimensions of Daily Work in Northern New England”, Feminist Studies 6 (1980), pp.392-405. Claudia A. Goldin, “The Economic Status of Women in the Early Republic: Quantitative Evidence”, Journal of Interdisciplinary History 16 (1986), pp.375-404.
  18. Daniel Blake Smith, “The Study of the Family in Early America: Trends, Problems, and Prospects”, William and Mary Quarterly 3rd Ser. 39 (1982), pp.3-28 provides an excellent overview of woman’s status within the family and family economy.
  19. Baird Diary quoted in Marjorie Kreidberg, Food on the Frontier. Minnesota Cooking from 1850-1900 with Selected Recipes (St. Paul, Minn.: Minnesota Historical Society Press, 1975), p.121.
  20. Suzanne Lebsock, The Free Women of Petersburg: Status and Culture in a Southern Town, 1784-1860 (New York: Norton, 1983), p.153.
  21. Buckeye Cookery and Practical Housekeeping: Tried and Approved, Original Recipes (Marysville, Ohio, 1881), p.454.
  22. Julie Roy Jeffrey, Frontier Women: The Trans-Mississippi West, 1840-1880 (New York: Hill and Wang, 1979); John Mack Faragher, Women and Men on the Overland Trail (New Haven, Conn.: Yale University Press, 1979), and John Mack Faragher, Sugar Creek (New Haven, Conn.: Yale University Press, 1987) discuss women’s willingness to migrate. Eveline M. Alexander, Cavalry Wife, edited with an introduction by Sandra L. Myres (College Station, Texas: Texas A & M University Press, 1977).
  23. See Edward A. Abramson, The Immigrant Experience in American Literature (British Association for American Studies, 1982), pp.14-16, for a discussion of Cather’s writing. Vera Norwood and Janice Monk (eds.), The Desert Is No Lady: Southwestern Landscapes in Women’s Writing and Art (New Haven, Conn.: Yale University Press, 1987).
  24. Nancy Grey Osterud, “‘She Helped Me Hay It as Good as a Man’: Relations among Women and Men in an Agricultural Community” in Carol Groneman and Mary Beth Norton, To Toil the Livelong Day: America’s Women at Work, 1780-1980 (Ithaca, New York: Cornell University Press, 1987), pp.87-97.
  25. Frances Trollope, Domestic Manners of the Americans, edited with an introduction by Donald Smalley (New York: Vintage, 1949), p.416.
  26. Patricia Mainardi, “Quilts: The Great American Art”, Radical America, 7 (1973), pp.36-68. Mainardi also discusses Navajo women’s blankets and pays particular attention to black women’s quilting. Gladys-Marie Fry, Stitched from the Soul: Quilting in the Ante-Bellum South (New York: Dutton, 1989).
  27. Edith Abbott, Women in Industry (New York: D. Appleton and Company, 1910), p.90.
  28. Mary Blewett, “The Sexual Division of Labor and the Artisan Tradition in Early Industrial Capitalism: The Case of New England Shoemaking, 1780-1860” in Groneman and Norton, p.34.
  29. Lucy Larcom, A New England Girlhood (New York: Corinth Books, 1961).
  30. Thomas Dublin, Women at Work: The Transformation of Work and Community in Lowell, Massachusetts 1826-1860 (New York: Columbia University Press, 1979), pp.108-131.
  31. Carole Turbin, “Beyond Conventional Wisdom: Women’s Wage Work, Household Economic Contribution, and Labour Activism in a Mid-Nineteenth-Century Community” in Groneman and Norton, pp.47-67.
  32. Dorothy Sterling, We Are Your Sisters. Black Women in the Nineteenth Century (New York: Norton, 1984), p.13.
  33. Jacqueline Bernard, Journey Toward Freedom. The Story of Sojourner Truth (New York: Dell, 1967), p.178.
  34. Fox-Genovese, p.97. Ben Ames Williams, A Diary from Dixie by Mary Boykin Chesnut (Boston: Houghton, Mifflin Company, 1949, orig. 1905). Bell Irvin Wiley, Confederate Women (Westport, Conn.: Greenwood Press, 1975), pp.6, 31.
  35. David Katzman, Seven Days a Week: Women and Domestic Service in Industrializing America (Oxford: Oxford University Press, 1978).
  36. Judith Sargent Murray, “Equality of the Sexes”, Massachusetts Magazine, March-April, 1790, pp.132 ff.
  37. Kenneth Lockridge, Literacy in Colonial New England: An Inquiry into the Social Context in the Early Modern West (New York, 1974), pp.38-44.
  38. Linda K. Kerber, Women of the Republic. Intellect and Ideology in Revolutionary America (Chapel Hill, N.C.: University of North Carolina Press, 1982), pp.199-200.
  39. Larcom, pp.42-44.
  40. Barbara Miller Solomon, In the Company of Educated Women. A History of Women and Higher Education in America (New Haven, Conn., 1985).
  41. Quoted in Nancy Cott, The Bonds of Womanhood, p.91.
  42. Kathryn Kish Sklar, Catherine Beecher. A Study in American Domesticity (New York: Norton, 1973), pp.113, 137. Also see Jeanne Boydston, Mary Kelley, and Anne Margolis, The Limits of Sisterhood: The Beecher Sisters on Women’s Rights and Woman’s Sphere (Chapel Hill, North Carolina: University of North Carolina Press, 1988).
  43. Leslie Wheeler, Loving Warriors. Selected Letters of Lucy Stone and Henry B. Blackwell, 1853 to 1893 (New York: Dial Press, 1981), p.10.
  44. Susan M. Reverby, Ordered to Care. The Dilemma of American Nursing, 1850-1945 (Cambridge: Cambridge University Press, 1987), p.43. Interview with Frances Krantz Kleinberg, Registered Nurse. My mother trained as a nurse at the Hartford Hospital Training School in the early 1930s, when such restrictive conditions still applied.
  45. The Journal of Charlotte L. Forten, edited with an introduction and notes by Ray Allen Billington (London: Collier, 1969), p.148.
  46. Sally Allen McNall, Who Is In the House? A Psychological Study of Two Centuries of Women’s Fiction in America (New York: Elsevier, 1981). Mary Kelley, Private Women, Public Stage. Literary Domesticity in Nineteenth-Century America (Oxford: Oxford University Press, 1984).
  47. Mary P. Ryan, The Empire of the Mother. American Writing About Domesticity (New York: Institute for Research in History and Haworth Press, 1982), p.120.
  48. Daniel Scott Smith, “Family Limitation, Sexual Control and Domestic Feminism” in Nancy F. Cott and Elizabeth H. Pleck, A Heritage of Her Own (New York: Simon and Schuster, 1979), pp.238, 239.
  49. Quoted in Cott, Bonds of Womanhood, p.91.
  50. Quoted in Welter, “The Cult of True Womanhood”, p.153.
  51. Nancy Hewitt, “Feminist Friends: Agrarian Quakers and the Emergence of Women’s Rights in America”, Feminist Studies 12 (1986), pp.27-50.
  52. Barbara Epstein, The Politics of Domesticity: Women, Evangelism and Temperance in Nineteenth Century America (Middletown, Conn.: Wesleyan University Press, 1981), pp.45-59. Cott, Bonds of Womanhood, pp.126-159.
  53. Jill Conway, The Female Experience in Eighteenth and Nineteenth Century America: A Guide to the History of American Women (New York: Garland Press, 1982), p.165. Anne Firor Scott, The Southern Lady: From Pedestal to Politics, 1830-1930 (Chicago: University of Chicago Press, 1970), p.8.
  54. Louis Billington, “Female Labourers in the Church: Women Preachers in the Northeastern United States, 1740-1840”, Journal of American Studies, 19 (1985), pp.369-394.
  55. Maureen Ursenbach Beecher and Lavina Fielding Anderson (eds.), Sisters in Spirit: Mormon Women in Historical and Cultural Perspective (Urbana, Illinois: University of Illinois Press, 1987).
  56. Ryan, Empire of the Mother, pp.56, 72.
  57. Barbara Berg, The Remembered Gate: The Origins of American Feminism (Oxford: Oxford University Press, 1978), p.151. Page Putnam Miller, “Women in the Vanguard of the Sunday School Movement”, Journal of Presbyterian History 58 (1980), pp.311-25.
  58. Keith Melder, “Ladies Bountiful: Organized Women’s Benevolence in Nineteenth Century America”, New York Historian 32 (1970), pp.210-227.
  59. Epstein, pp.89-90, discusses the origins of women’s temperance activities. D. C. Bloomer, The Life and Writings of Amelia Bloomer, edited with a new introduction by S. J. Kleinberg (New York: Schocken Press, 1975), p.42.
  60. Robert Riegel, “Women’s Clothes and Women’s Rights”, American Quarterly 15 (1963), pp.391-399.
  61. Harriet Beecher Stowe, Uncle Tom’s Cabin (Boston, 1852), p.44. Minrose C. Gwin, Black and White Women of the Old South. The Peculiar Sisterhood in American Literature (Knoxville, Tenn.: University of Tennessee Press, 1985), p.32. Ryan, The Empire of the Mother, pp.132-139.
  62. Blanche Hersh, The Slavery of Sex. Feminist Abolitionists in America (Urbana, Illinois: University of Illinois Press, 1978), Ch. 4.
  63. For the entire text of the Declaration of Sentiments and Resolutions, Seneca Falls Convention, 1848, see Kraditor, Up from the Pedestal, pp.183-188. Gerda Lerner, The Grimké Sisters from South Carolina. Rebels Against Slavery (Boston: Houghton, Mifflin Company, 1967); Anne Firor Scott, “Women’s Perspective on the Patriarchy in the 1850s”, Journal of American History LXI (June, 1974), p.55.
  64. Mabel Newcomer, A Century of Higher Education for American Women (New York: Harper, 1959), p.37.
  65. Mary Roth Walsh, Doctors Wanted: No Women Need Apply (New Haven, Conn.: Yale University Press, 1977), pp.191-193.
  66. Joseph A. Hill, Statistics of Women at Work, 1900 (Washington, D.C.: Government Printing Office, 1906) and Women in Gainful Occupations, 1870-1920 (Washington, D.C.: Government Printing Office, 1929). All statistical data are drawn from these sources.
  67. Roslyn L. Feldberg and Evelyn Nakano Glenn, “Male and Female: Job versus Gender Models in the Sociology of Work” in Rachel Kahn-Hut, Arlene Kaplan Daniels, and Richard Colvard (eds.), Women and Work: Problems and Perspectives (New York, 1982), pp.65-80.
  68. Elizabeth Pleck, “A Mother’s Wage: Income Earning Among Married Italian and Black Women, 1896-1911” in Cott and Pleck, A Heritage of Her Own. Virginia Yans McLaughlin, Family and Community: Italian Immigrants in Buffalo, 1880-1930 (Ithaca, N.Y.: Cornell University Press, 1977). S. J. Kleinberg, The Shadow of the Mills. Working Class Families in Pittsburgh, 1870-1907 (Pittsburgh: University of Pittsburgh Press, 1989) discusses the impact of economic structures on women’s employment.
  69. Susan Porter Benson, Counter Cultures: Saleswomen, Managers, and Customers in American Department Stores, 1890-1940 (Urbana: University of Illinois Press, 1988).
  70. Alice Kessler-Harris, Out to Work. A History of Wage-Earning Women in the United States (Oxford: Oxford University Press, 1982). Leslie Woodcock Tentler, Wage-Earning Women: Industrial Work and Family Life in the United States, 1900-1930 (Oxford: Oxford University Press, 1979). Roger Waldinger, “Another Look at the International Ladies’ Garment Workers’ Union” in Ruth Milkman (ed.), Women, Work and Protest (London: Routledge and Kegan Paul, 1985). For an illuminating examination of working class women’s attitudes towards female employment see Maurine Weiner Greenwald, “Working Class Feminism and the Family Wage Ideal: The Seattle Debate on Married Women’s Right to Work, 1914-1920”, Journal of American History 76 (1989), pp.118-149.
  71. Nancy Schrom Dye, As Equals and As Sisters: Feminism, Unionism, and the Women’s Trade Union League of New York (Columbia, Missouri: University of Missouri Press, 1980).
  72. Susan Lehrer, Origins of Protective Labor Legislation for Women, 1905-1925 (Albany, New York: State University of New York Press, 1987). Leo Kanowitz, Sex Roles in Law and Society. Cases and Materials (Albuquerque, New Mexico: University of New Mexico Press, 1973), pp.47, 467. Also see Judith A. Baer, The Chains of Protection: The Judicial Response to Women’s Labor Legislation (Westport, Conn.: Greenwood Press, 1978).
  73. Nancy Cott, The Grounding of Modern Feminism (New Haven, Conn.: Yale University Press, 1987). Also see Ellen Carol DuBois, Feminism and Suffrage. The Emergence of an Independent Women’s Movement in America, 1848-1869 (Ithaca, New York: Cornell University Press, 1978).
  74. This account of the WCTU is drawn largely from Epstein, The Politics of Domesticity, pp.99-120; Jack S. Blocker, “Give to the Winds Thy Fears”: The Women’s Temperance Crusade (Westport, Conn.: Greenwood Press, 1985) and S. J. Kleinberg, “The Women’s Christian Temperance Union of Wilkinsburg, Pennsylvania” (Unpublished paper, University of Pittsburgh Department of History, 1970).
  75. On the club movement see Karen J. Blair, The Clubwoman as Feminist: True Womanhood Redefined, 1868-1914 (New York: Holmes and Meier, 1980).
  76. Eleanor Flexner, Century of Struggle: The Woman’s Rights Movement in the United States (New York, 1974), p.190.
  77. For a complete review of the changing nature of family law see Michael Grossberg, Governing the Hearth. Law and the Family in Nineteenth Century America (Chapel Hill, North Carolina: University of North Carolina Press, 1985). Nancy Cott, The Grounding of Modern Feminism, p.24.
  78. Kathryn Kish Sklar, “Hull House in the 1890s: A Community of Women Reformers”, Signs 10 (1985), pp.658-677.
  79. Jane Addams, “Why Women Should Vote”, Ladies’ Home Journal XXVII (January, 1910), pp.21-22.
  80. Charlotte Perkins Gilman, The Home, Its Work and Influence (New York: McClure, Philips and Co., 1903), p.319.
  81. Eleanor Flexner, Century of Struggle, p.282.
  82. Maurine Greenwald, Women, War and Work: The Impact of World War I on Women Workers in the United States (Westport, Conn.: Greenwood Press, 1980).
  83. Carrie Chapman Catt and Nettie Rogers Shuler, Woman Suffrage and Politics. The Inner Story of the Suffrage Movement (Seattle: University of Washington Press, 1970, orig. 1923), p.446.
  84. Public Papers of Woodrow Wilson: War and Peace I, pp.263-7.
  85. Flexner, pp.306-24.
  86. Cott, Grounding of Modern Feminism, pp.100-111. Anne Firor Scott, “After Suffrage: Southern Women in the Twenties”, The Journal of Southern History 30 (1964), pp.298-318 takes a more sanguine view of the impact of the vote on women’s political participation.
  87. Marjorie Rosen, Popcorn Venus: Women, Movies and the American Dream (New York, 1974), p.23.
  88. Molly Haskell, From Reverence to Rape: The Treatment of Women in the Movies (London: Penguin Books, 1974), p.76.


Michael Woodiwiss, Organized Crime, USA: Changing Perceptions from Prohibition to the Present Day

BAAS Pamphlet No. 19 (First Published 1990)

ISBN: 0 946488 09 6
  1. Liquor and Antecedents
  2. Enforcement Exploits
  3. Enter the Mafia
  4. Static Response – Dynamic Industry
  5. Guide to Further Reading
  6. Notes
British Association for American Studies. All rights reserved. No part of this pamphlet may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without permission in writing from the publisher, except by a reviewer who may quote brief passages in a review. The publication of a pamphlet by the British Association for American Studies does not necessarily imply the Association’s official approbation of the opinions expressed therein.

1: Anti-liquor and Antecedents

There were plenty of places to buy alcohol during Prohibition. New York’s revellers could go to the famous Cotton Club, where the best black entertainers played before strictly white audiences. Otherwise there were less expensive night clubs or, failing that, clip-joints or one of the city’s 32,000 speakeasies. Clip-joints were dives where the promise was a good time but the reality was often a beating and always an exorbitant bill. Night clubs, clip-joints and speakeasies replaced the legal saloon during Prohibition. Selling alcohol was illegal; so was betting on horses and dice; so were taking drugs and selling sex. All illegal, but all available for a price. There were profits to be made.

One businessman stood above all others in the illegal economy of 1920s New York – Arnold Rothstein. Rothstein had a piece of much of the above action: newspapermen called him “The Big Bankroll.” He was shot dead in 1928, but he had already made his mark by pioneering many of the ways that have made crime pay on a long-term basis in twentieth-century America.

Rothstein’s parents were rich and respected; his father owned a dress business. They were pillars of the Upper West Side’s Orthodox Jewish community, but Arnold went his own way; a distinctly American way.

As a young man at the turn of the century he worked as a collector for a bookmaker. By 1909 he was taking bets himself and owned a gambling house which paid protection to Tim Sullivan, a prominent Tammany Hall politician. In 1912 there was a police crackdown, and for a while Rothstein had to make his money by running “floating crap games” as a substitute for stationary gambling houses. (Readers of Damon Runyon will be familiar with these games, and Runyon based a character called “The Brain” on Rothstein. Rothstein was also the model for Wolfsheim the gambler in F. Scott Fitzgerald’s The Great Gatsby.) Police crackdowns always come to an end, and Rothstein was soon back to the covert operation of plush casinos catering to a wealthy clientele. Rothstein also profited by providing a service that was essential to bookmakers: those who felt themselves dangerously overextended could, through Rothstein, lay off bets with other bookmakers in different parts of the country.

Rothstein was also one of the first to spot the potential of the Eighteenth Amendment and have the capital and connections to exploit it. He arranged for associates in Britain to buy up quantities of Scotch whisky. This was then shipped to points beyond US territorial waters and transferred to small, fast boats to avoid customs and coastguard patrols. The liquor was then distributed to restaurants, nightclubs, speakeasies and dives in which Rothstein had an interest. Rothstein ended his direct involvement as the business got more bloody and competitive but he did continue to “bankroll” or finance the operations of others. The most notorious of these was Jack “Legs” Diamond, who preferred hijacking the contraband of others to troubling with the more complex operations of bootlegging.

Rothstein also began to traffic in drugs in the early 1920s. The business was not as crowded as liquor and yielded a better and quicker return on capital invested. Two Rothstein associates, Yasha Katzenberg and Dan Collins, bought drugs in Europe and Asia; these were smuggled into the United States and then sold to retailers in New York, Chicago, St Louis and Kansas City by more Rothstein associates.

Connections with gangs of thugs enabled Rothstein to provide strong-arm services in industrial disputes, particularly in New York’s garment industry. At one time employers had the services of Legs Diamond and his men as strikebreakers, while the unions countered with the services of Jacob “Little Augie” Orgen to protect pickets and beat up “scabs.” Both sets of gangsters received their payment from Rothstein. By 1926 both employers and unions had been severely weakened by years of conflict in the needle trades. In this situation racketeers moved in to become much more than mere hirelings, most notably Louis “Lepke” Buchalter. By using violence and intimidation Buchalter’s gang had established a virtual stranglehold over the Manhattan garment industry by the end of the 1920s, extorting protection money from both unions and management.

Rothstein had legitimate and profitable covers for all his illegal activities. Apart from restaurants and nightclubs, he owned real estate, an export-import firm, and a bailbond firm that provided bail of $14 million in liquor prosecutions alone before 1924. And he had several of the best lawyers to handle the legal aspects of his transactions. By the twenties he had built up unrivalled political connections and this gave him leverage in the city’s criminal justice system. He could therefore provide not only capital but protection for numerous illegal enterprises; cases could be “fixed”, prison sentences could be shortened. This is the reason why on the police records of the big names of the future, Frank Costello, Charles “Lucky” Luciano, Benjamin “Bugsy” Siegel, there are so many cases marked “dismissed.” When they stepped in front of the magistrate, prosecutors found they had “insufficient evidence” or witnesses failed to show up, or police officers admitted they had overstepped themselves.

Rothstein was the business organizer of New York crime. He realised that criminal success depended on the complicity of the “upperworld”, police, lawyers, judges, and outwardly “respectable” business people. The essential ingredient of Rothstein’s success was, according to his biographer Leo Katcher, knowing the price of every man, whether politician or killer, and having the money to pay for it. Rothstein financed bootlegging, gambling, drug trafficking and industrial racketeering. These activities were to be the chief sources of profit for organized crime in the twentieth century.[1]

Organized crime has thrived by providing Americans with illegal goods and services, notably alcohol during Prohibition, gambling and drugs, and has played a significant role in the nation’s business and industrial life. It has consisted of countless deals and arrangements between Americans who stand to gain by breaking or failing to enforce the law. Gangsters have never operated in a vacuum: criminal networks can consist of representatives of every level of the political, economic, criminal justice and law enforcement systems. Politicians, judges, prosecutors, lawyers, businessmen, union officials and police have all, at one time or another, shared in the proceeds of organized crime with career criminals. The one thing that all these groups have in common is the fact that they are American. Yet most people still start thinking in Italian when organized crime in America is mentioned. The mass media and popular culture have had a profound effect on people’s perceptions of organized crime; in a word association test the first word to come into many minds if confronted with “organized crime, USA” would most likely be “Mafia.” The object of this essay is to trace the way people’s understanding of organized crime changed during the twentieth century, and to what effect. An alternative perspective will also be offered.

Organized and profitable crime existed in the Americas long before the British colonies became the United States at the end of the eighteenth century. Piracy and smuggling were notable features of the colonial experience. In the new nation, city government nurtured organized crime from before the Civil War. In New York and Chicago, for example, political machines used gangs to organize election frauds for a large part of the nineteenth century and well into the twentieth. Techniques ranged from altering ballots and records, through multiple voting, to kidnapping opposition party workers and even election officials. In return gangsters were allowed to develop various gambling, prostitution and extortion rackets. As the system became established, the boundary between politician and mobster became obscured. Political bosses were in a good position to engage in racketeering, and racketeers and their nominees could move into political positions.

Organized crime, however, has never been restricted to the cities. This is illustrated by the Johnson County ‘war’ for a large slice of Wyoming between 1879 and 1892. On one side were small settlers and homesteaders, on the other were the big cattlemen, many of whom had made their fortunes and grabbed the best land during the Civil War. The small ranchers were subject to constant harassment and several were hanged as rustlers. In 1891, the cattle barons organized themselves into the Wyoming Stock Growers’ Association and decreed that any cattle found in possession of any non-member would be considered stolen unless the rancher could produce a bill of sale from the association itself. The small ranchers responded by organizing a self-defence association to fight back. In 1892, the stock growers finished off this resistance with a large mercenary army of mainly Texan gunmen who disposed of the homesteaders’ leaders. Legal immunity for the cattle barons was ensured by the state government.[2]

The term ‘organized crime’ implies a willingness to use bribery and violence to further entrepreneurial interests, and the founders of America’s industrial and commercial dynasties showed little hesitation in using both. Private armies were employed by the likes of Andrew Carnegie, Cornelius Vanderbilt and Edward H. Harriman to wreck unions; bribery was employed by the same as well as John D. Rockefeller, J. Pierpont Morgan and the Du Ponts to wreck competition and establish monopolies. By the twentieth century the centre of the economy was dominated by representatives of the group of nineteenth-century capitalists collectively known as the ‘Robber Barons’.[3] ‘The spirit of graft and lawlessness,’ as the muckraker Lincoln Steffens put it in 1902, ‘is the American spirit.’[4] But this kind of analysis was unacceptable in a country which liked to think of itself as the model for the rest of mankind to follow. Attention was diverted from the crimes of big business and native-born Americans to the personal behaviour of ordinary people and the criminal activities of members of ethnic groups.

The United States experienced its most significant moral crusade in the first two decades of the twentieth century. As millions of migrants and immigrants struggled to make new lives for themselves in the cities, many native-born Americans saw a threat to Protestant values. Many reacted by forming or joining anti-vice or temperance societies, and lobbying intensely in state capitals and city halls for laws to eradicate gambling, prostitution, drugtaking and drinking throughout the entire country.

The crusaders found evidence of moral decline everywhere. New styles of clothing, “suggestive” dances, “titillating” movies and “salacious” stage productions were all examples of the “deadly moral poison” sapping America’s strength or contaminating national morality. Gambling, prostitution, and the use of alcohol and other drugs were said to have reached “epidemic” proportions. Numerous books and articles predicted degradation and disgrace for the country’s youth if exposed to liquor, in particular. Boys were doomed to be profligates and degenerates and girls would inevitably meet with seduction and the “white slavery” of forced prostitution. The crusade’s propagandists were not interested in appalling working and living conditions; their only concern was the threat to Protestant values of thrift and self-denial. They also provided scapegoats for America’s alleged slide into degeneracy.[5]

Ethnic conspiracy theories began to proliferate and ethnic stereotypes became established in people’s minds. “The Jew,” according to Henry Adams, was the central actor in the “irremedial, radical rottenness of our whole system.” In 1909 McClure’s magazine informed its readers that “the acute and often unscrupulous Jewish type of mind” was behind the liquor business and that the “Jewish dealer in women” had done most to erode “the moral life of the great cities of America.” Italians were described by The Outlook in 1913 as an “aggregation of assassins, blackmailers, kidnappers, and thieves that have piled up a record of crime in the United States unparalleled in a civilized country in time of peace.” Other publications talked of Italians as “vermin” and “desperadoes” and circulated stories about how they had transplanted secret criminal societies such as the Camorra and the Mafia into their adopted country.[6] Undoubtedly some immigrants had been criminals in the old country and collaborated in joint ventures in their new home, but the conspiracy theories conveniently ignored the fact that Jewish and Italian criminals were very subordinate to native-born and Irish networks before the 1920s.

But the imagery aided the cause of the moral crusaders. Tens of thousands of federal, state and local laws were added to the statute books in an attempt to enforce morality by prohibitions on alcohol, gambling, prostitution and drugs, plus strict censorship and a host of more trivial restrictions. Control and regulation of such behaviour was considered to be as unthinkable as licensing murder, robbery and other crimes. The intention was to end all behaviour that a Protestant culture defined as sinful and non-productive. Americans had to be coerced by law into a virtuous and healthy way of life. America’s moral crusade reached a new peak in 1919 when the Eighteenth Amendment was added to the Constitution. The Volstead Act was passed to provide for enforcement of this attempt to prohibit the manufacture, transportation, sale or importation of intoxicating liquor within the United States, and a vast new market for illegal goods and services was created.

Bootlegging, the illegal trafficking in alcohol, had been restricted to dry areas, mainly in the South. Now, it became a national industry. Thousands of new jobs and executive positions were available in the cities. Men graduated straight from the juvenile gangs. Prohibition gave second generation Jews, Italians, Sicilians, Poles, Slavs, and others the opportunity to climb up the criminal hierarchy and challenge the pre-Prohibition dominance of native-born or Irish networks.

It was during the 1920s that the term ‘organized crime’ first came into common usage, and that the first attempts were made to analyse it as a distinct social, economic and political problem, most thoroughly by John Landesco of the University of Chicago.

Landesco produced a report on organized crime in Chicago for the Illinois Crime Survey which appeared in 1929. The criminal, according to Landesco, is “the natural product of his environment,” that is, of the slums of our large American cities. Throughout his survey, Landesco emphasized the importance of political corruption: “Organized crime and organized political corruption have formed a partnership to exploit for profit the enormous revenues to be derived from law-breaking.” He concluded that an understanding of organized crime “should make possible a constructive program that will not content itself with punishing individual gangsters and their allies, but will reach out into a frontal attack upon basic causes of crime in Chicago.”[7]

Landesco’s analysis of local organized crime’s partnership with organized political corruption was confirmed by a national commission, named after its chairman George Wickersham, which issued a report on the workings of Prohibition in 1931. The commission found, for example, that: “When conspiracies are discovered from time to time, they disclose combinations of illicit distributors, illicit producers, local politicians, corrupt police and other enforcement agencies, making lavish payments for protection and conducting an elaborate system of individual producers and distributors.” Corruption, added the report, sometimes involved “the police, prosecuting and administrative organizations of whole communities,” and it pointed to evidence “of connection between corrupt local politics and gangs and the organized unlawful liquor traffic, and of systematic collection of tribute from that traffic for corrupt political purposes.”[8] After piling up evidence of the disastrous effects of Prohibition, the commission decided against recommending repeal. This apparent contradiction was memorably pointed out by a poem published in the New York World:

Prohibition is an awful flop,
We like it
It can’t stop what it’s meant to stop,
We like it.
It’s left a trail of graft and slime,
It don’t prohibit worth a dime,
It’s filled our land with vice and crime,
Nevertheless, we’re for it.[9]

The “Wet” newspaper publishers who opposed Prohibition had only to report the news about “graft and slime,” and “vice and crime,” to win the propaganda battle. Organized crime, which had previously been discreet and localised, now held people’s interest across the nation. The lurid details of bootleggers’ gang wars made for exciting reading. There seemed to be so many ways to kill people. Gangsters were “taken for rides” in cars, then riddled with bullets, then dumped in lonely spots. Others were lined up and shot by firing squads, or packed in cement then dropped in lakes or rivers. Some gangsters achieved national fame: Legs Diamond, Arnold Rothstein and Dutch Schultz from New York and Dion O’Banion, John Torrio and Al Capone from Chicago.

Capone’s notoriety exceeded all others’. He encouraged the press, often feeding reporters with a quotable remark and justification for his operations. To begin with, Jake Lingle of the Chicago Tribune was most favoured, but Lingle became too closely involved with the rackets himself and was assassinated in 1930. Capone would talk about anything, including his weight and the need for women to stay at home, but only his remarks about crime are worth repeating. Bootleg liquor was obviously in such great demand during the 1920s that few could argue when he said, “Somebody had to throw some liquor on that thirst. Why not me?”[10] And people’s disillusion with the system was probably only confirmed when they read Capone’s remark to Genevieve Forbes, again of the Chicago Tribune: “Lady,” he said, “Nobody’s on the legit.”[11]

It was Capone’s cultivation of crime reporters rather than his criminal success that made him into the world’s most famous gangster. He ranked with Henry Ford, Will Rogers, Babe Ruth and Charles Lindbergh as an American institution during the 1920s, and, as his biographer F. D. Pasley put it, “The hoodlum of 1920 had become page one news, copy for the magazines, material for talkie plots and vaudeville gags.”[12]

But “Scarface” Al Capone has been much overrated; his reign at the top of the Chicago rackets was hard-fought, short-lived and wholly dependent on the complicity of the local authorities. And he was foolish to encourage publicity. He made himself too notorious for the authorities in Washington to tolerate and a target for federal law enforcers such as Eliot Ness of the Prohibition Bureau and Elmer Irey of the Internal Revenue Service. In 1931 Capone was convicted of tax evasion and sentenced to eleven years in a federal penitentiary. He was finished as a criminal power. The trial was one of the news events of the year and it looked like a triumph for law enforcement. But Chicago syndicate operations were left intact, and most of his partners and rivals had learnt not to draw attention to themselves.

Even the incarceration of the world’s most famous bootlegging gangster could not help the cause of the ‘Dry’ supporters of Prohibition in their despairing efforts to slow the momentum towards repeal as the 1930s began: a momentum achieved by the growing antipathy of most businessmen to the Eighteenth Amendment. The vast resources of publisher William Randolph Hearst, plus those of several dozen millionaires, ensured that the “Wets” won the propaganda battle. The “Drys” were reduced to arguing that the “noble experiment” was not working because foreign rumrunners and conspirators were attacking the global prohibition revolution by trying to wreck it in the country of its birth.[13] Few people took them very seriously.

During Prohibition, ethnic stereotyping and alien conspiracy theories temporarily went out of fashion as explanations for crime. In part this was because bootlegging was the main criminal enterprise and not many thirsty Americans could be persuaded that bootleggers were un-American. Al Capone and the rest were simply gangsters, who exploited a corrupt system, never alien intruders or members of ethnically exclusive conspiracies.

Hollywood films are probably the best guide to the perceptions most people had of contemporary organized crime, and in the early gangster talkies the main characters were American individuals first; ethnicity was not emphasized, and neither was organisation. Little Caesar (1930), Public Enemy (1931) and Scarface (1932) starred dynamic actors playing criminals. Edward G. Robinson, James Cagney and Paul Muni represented American outlaws in ways which reinforced some of the country’s most deeply held myths about individual, entrepreneurial success. The way to make it in the land of free, competitive capitalism was the way Edward G. Robinson made it in Little Caesar, with ruthless dedication, determination, and daring. The gangsters may have finished up dead in these films but not before they had made their mark on the world. They had achieved a lifestyle that contradicted the official dictum, “Crime Does Not Pay.”

The extent and success of organized crime during Prohibition was seen by most commentators as an American problem that involved society and government as much as the individual criminal or criminal syndicate. In 1931 Walter Lippmann argued this most lucidly in an article entitled “The Underworld as Servant.” The underworld for him was significant because it serviced the outlawed desires of the American people: drink, sex, gambling and drugs. “The high level of lawlessness is maintained by the fact that Americans desire to do so many things that they also desire to prohibit.” He concluded with a dilemma:

Sooner or later the American people will have to make up their minds either to bring their legislative ideals down to the point where they square with human nature or they will have to establish an administrative despotism strong enough to start enforcing their moral ideals. They cannot much longer defy the devil with a wooden sword.

The repeal of Prohibition in 1933 squared one law with human nature, but the effective enforcement of the remaining morality laws still required, in Lippmann’s words, “the establishment of the most despotic and efficient government ever seen on earth,” “thousands and thousands of resolute and incorruptible inspectors, policemen, prosecutors, and judges,” “the expenditure of enormous sums of money,” and finally “the suspension of most civil rights.”[14] In the years after repeal the tendency has been towards the undermining of such civil rights as the right to privacy, and enormous sums of money have been spent convicting and incarcerating criminals. American government did become more despotic, but no more efficient in enforcing morality. Part of the problem was, of course, finding so many resolute and incorruptible public servants.

America’s moral crusade did not end with the repeal of the Eighteenth Amendment. The institutions which molded public opinion, such as newspapers, churches, chambers of commerce and civic associations, were set against any more tampering with the morality legislation. Gambling, in particular, stayed illegal in every state except Nevada. Repeal had cut off an immense source of illegal income, but corrupt networks, consisting of gangsters, businessmen and public officials, continued to supply the demand for illegal goods and services. At the same time there were efforts to professionalize federal and local police and prosecutors to make law enforcement more effective. Attention began to be focused on the good guys who promised to make law enforcement work while criminals, many of them police and politicians, found new ways to make sure it did not.

2. Enforcement Exploits

Bootleggers were not the only criminals to attract significant media attention during the early 1930s. In fact, after Capone’s incarceration, the nation was much more interested in the deeds of kidnappers and bankrobbers. On 1 March 1932 the baby son of Charles Lindbergh was abducted from his New Jersey home. He was only one of nearly 300 kidnap victims that year, but his father’s place in American mythology made this crime touch a particularly sensitive nerve. In 1927, Charles Lindbergh had been the first to fly the Atlantic non-stop and single-handed. News of this feat was greeted with unprecedented mass excitement and the adulation lasted long after the 1,800 tons of ticker tape had been swept up off the streets of New York. His picture still hung in countless schoolrooms and homes when his baby was snatched. This was seen as more than a crime against an individual: according to the New York Herald Tribune, the kidnapping was “a challenge to the whole order of the nation … The truth must be faced that the army of desperate criminals which has been recruited in the last decade is winning its battle against society.”[15]

The crime took place during a time of economic disaster. The Great Depression had caused a national crisis of confidence. Richard Gid Powers has argued that kidnapping was the most highly publicized crime of the time because “it was a direct attack on the home, the country’s grassroots symbol of security and traditional values at a time when both were threatened.”[16]

Film makers soon exploited a national desire for vengeance. A cycle of vigilante movies began in 1933, the most notable of which was Gabriel over the White House. This film was produced by William Randolph Hearst’s Cosmopolitan Pictures and it probably reflected the publisher’s ideal solution to the crime problem. In it, martial law is declared in response to a crime wave. Gangsters are rounded up, court-martialled, then stood before a firing squad. In this world constitutional rights are not allowed to interfere with the processes of law and order.

The Lindbergh baby was later found dead. Bruno Richard Hauptmann was electrocuted for the crime in 1936. Doubts remain as to his guilt, but the authorities had given Americans what they wanted: retribution.

Kidnapping is a crime where the victim takes centre stage; not so bank robbery. In these early depression years, few Americans were talked or written about more than Bonnie Parker and Clyde Barrow, “Baby Face” Nelson, “Pretty Boy” Floyd, and John Dillinger: outlaws from the American south and west who robbed banks and, for a short while, got away with it. Their methods were similar. They tended to be well-armed, often with sub-machine guns, and were willing to take part in shoot-outs with the authorities. They made their escape at speed in cars, and, if necessary, crossed state lines where the jurisdiction of their pursuers usually ended.

Federal law enforcement had done little to impress during the administration of President Herbert Hoover. Even the conviction of Capone seemed inadequate: a sentence of eleven years for tax evasion would in normal circumstances appear harsh but the man was widely believed to have been a mass murderer. Meanwhile racketeering remained rife, kidnap victims died and the series of spectacular bank robberies continued. The Saturday Evening Post was not the only publication to demand a decision on “who is the Big Shot in the United States – the criminal or the Government.”[17] The President responded by lecturing about states’ rights and the constitutional limits on federal jurisdiction and in one more area failed to catch the disenchanted mood of the nation. The Democrats and Franklin Delano Roosevelt took office in 1933 and promised government action on the nation’s ills.

A war on crime was declared and orchestrated by Attorney General Homer S. Cummings. The country was, he said, “confronted with real warfare which an armed underground is waging upon organized society. It is a real war which confronts us all – a war that must be successfully fought if life and property are to be secure in our country … Organized crime is an open challenge to our civilization, and the manner in which we meet it will be a test of our capacity for self-government.”[18]

Much of this war was rhetoric accompanied by symbolic action. For example, on 12 October 1933 Cummings announced on the radio that the federal government now had a new type of prison for “our most dangerous, intractable criminals.” This, he continued, was “Alcatraz Prison, located on a precipitous island in San Francisco Bay, more than a mile from shore. The current is swift and escapes are practically impossible… Here may be isolated the criminals of the vicious and irredeemable type so that their influence may not be extended to other prisoners who are disposed to rehabilitate themselves.”[19] One magazine felt it was too close to shore and called for it to be located on a remote Pacific atoll. “America,” according to Real Detective, “needs an isolated penal colony if it is ever to shake off the tentacles of the crime octopus.”[20] But Alcatraz held unlimited potential for the writers of popular fact and fiction. Al Capone was an early inmate. It almost immediately became part of American folklore.

The agency chosen to represent the New Deal’s commitment to enforcement was the Federal Bureau of Investigation (FBI). In 1934 the FBI was given additional jurisdiction over a variety of inter-state felonies, such as kidnapping and auto-theft. Its director, J. Edgar Hoover, immediately exploited the publicity value of his new powers by directing his agents against the bankrobbers, who had been avoiding capture by crossing state lines. In rapid succession “Baby Face” Nelson, “Pretty Boy” Floyd, and John Dillinger were shot down by Mr Hoover’s agents.

Also in 1934 a new censorship code put an abrupt end to the Little Caesar, Public Enemy, Scarface type of gangster film. These had outraged moral crusaders by “glorifying” criminals. Instead, in 1935 there was a new Hollywood campaign to glorify G-Men – the press name for FBI agents. That year there were seven G-Man movies sold by Hollywood as its contribution to the war on crime. “SEE UNCLE SAM DRAW HIS GUNS TO HALT THE MARCH OF CRIME,” ran the ads for one of these films. In the original G Men (1935) James Cagney played an FBI agent as forcefully as he had played gangster Tommy Powers in Public Enemy. As Andrew Bergman put it, exciting and benevolent law was in the hands of the US Government, “and in fact was the US Government.”[21]

Hoover not only helped to create the new pro-police mythology but also became a prominent part of it. Before long his publicists had created an image for the FBI agent that lasted for decades. G-Men were dedicated, clean-cut, familiar with the most up-to-date, scientific techniques of crime detection and totally incorruptible. Books, magazines, even bubble-gum cards echoed the same theme as the G-Men films: “Crime Does Not Pay” so long as the elite federal policemen were around.

It has been argued that Hoover’s news management skill could even turn a tragic mistake into a triumph for law enforcement. The journalist, Hank Messick, has analysed the FBI shooting of Kate Barker, mother of the kidnapper and bankrobber, Fred Barker, and suggested that the killing of an innocent, unarmed old woman was justified by an extraordinary story circulated by J. Edgar Hoover and his publicists.

Mother and son were shot on 16 January 1935. News stories at the time described this as another G-Man success and suggested that “Ma” Barker was the brains behind a gang of desperadoes. In 1938 Hoover made this claim himself, stating that she was “the most vicious, dangerous and resourceful criminal brain of the last decade,” and that the criminal careers of her four sons were directly traceable to their mother. “This woman,” he concluded, was “a monument to the evils of parental indulgence.”

Messick argued that the idea of Kate Barker as a crime supremo is far-fetched. In fact she had never been convicted of any crime, and it is very unlikely that a female hillbilly from the Blue Ridge mountain area would have been allowed to interfere with what was considered to be men’s business in a male-dominated society.[22]

But Hoover had given the producers of popular culture an idea that could be endlessly recycled. An Englishman, James Hadley Chase, was first with a novel called No Orchids for Miss Blandish in 1939. In this “Ma Grisson” was “physically powerful and a hideous old woman; she was also the brains who determined the future of the gang…. Ma died in her office with a Thompson submachine gun in her hands, taking four cops with her.” Scriptwriters began adding “Ma” Barker characters to the plots of films, most notably in White Heat (1949) and Bloody Mama (1970). Hoover’s interpretation was not, of course, questioned at the time. Few people had any doubts about the integrity of his agents; the idea that they may have killed an innocent old woman was unthinkable.

For more than three decades Hoover managed to ensure that the FBI was viewed as infallible, constantly vigilant for crooks and communists who might threaten the security of American citizens. The FBI’s failures and, in particular, its avoidance of significant organized crime until the mid-1960s did not reach the public’s attention. Most people believed that the FBI was unrelenting in its war on crime but the real fight against organized crime was left to the local authorities with their limited resources, overlapping jurisdictions and general lack of commitment to interfere with any crime business that had established a legitimate “front.” Thanks largely to Prohibition, there were many more professional criminals with their own bankrolls, organizations and local political protection in the tradition of Arnold Rothstein. Few, however, survived long unless they could distance themselves from actual criminal activity and establish bases for themselves in the mainstream of American economic life. Gangsters had two instruments for this: the infiltration of the labour movement and the ownership of legitimate businesses. The first led to and often complemented the second and both were profitable in their own right.

Local police forces followed the FBI’s lead and began to improve their images, if not their behaviour, during these years. Corruption was still endemic but public relations units were set up to cultivate the goodwill of newspaper and magazine publishers as well as radio and movie producers. These units gave handouts to reporters and editors, supplied brochures and pamphlets to citizens’ groups, sent speakers to public meetings, and otherwise put the police point of view.

It is not surprising, then, that the G-Man hero was joined by the city policeman hero. In Bullets and Ballots (1936) it was Edward G. Robinson’s turn to join the side of law and order. He played Detective Johnny Blake who goes undercover in order to join and then destroy the “crime combine,” which ran a city’s numbers and public market rackets. Blake’s answer to the problem was to restore respect for law: “to kick the rats into line.” In the final scene the last gasp of the dying public official is, “I’d like to think that when those mugs pass a policeman they’ll keep on tipping their hats.” The message of this film and dozens of others that followed was that an aroused citizenry could smash the rackets by using their votes to install honest and effective public officials.

From the late 1930s Metro-Goldwyn-Mayer joined the war on crime by adding shorts from the Crime Does Not Pay series to many programmes. In these, “Your M-G-M Crime Reporter,” Reed Hadley, introduced films on most types of crime from drug trafficking to faulty repairs on second-hand cars with one thing in common: police methods and intuition always inexorably tracked down the perpetrators. A well-ordered community was essential and this usually got precedence over basic human rights. Popular radio shows, such as Gangbusters and Mr D. A., showed a similar sense of priorities. [23]

The 1930s also saw the beginnings of the glorification of the prosecutor. The career of New York special prosecutor Thomas E. Dewey provided a rich source for opinion-makers wanting to show the law triumphant. For a brief period Dewey had choreographed the downfall of a succession of gangsters and some of their political protectors, making headlines across the nation. His greatest coup was the conviction of Charles “Lucky” Luciano, who received a thirty- to fifty-year sentence for one of the few crimes he was probably innocent of, compelling women into prostitution. The New York Daily Mirror congratulated the jury: “The 100% verdict was the most smashing blow ever dealt the organized underworld in New York. It was, moreover, hailed throughout the country as the definite beginning of the end of gangsterism, terrorism, and commercialized criminality throughout the United States.”[24] (Dewey later became Governor of New York State and commuted Luciano’s sentence conditional upon his deportation to Italy in 1946. Luciano’s wartime collaboration with US Navy Intelligence was later given as the reason for this leniency.)[25]

Dewey’s tactics were pioneering: close and prolonged surveillance and wire-tapping of suspects, inducements for criminals to become prosecution witnesses and convict their associates, and the use of special laws to make conspiracy convictions easier. These were presented to the nation as the answer to the problem of organized crime. Books, newspapers and films such as Marked Woman (1937), Racket Busters (1938) and Smashing the Rackets (1938) sang the praises of thinly disguised personifications of Dewey and put over the message that the answer to organized crime lay exclusively in the prompt indictment and vigorous prosecution of law-breakers at whatever cost to individual liberties. Dewey had put dozens of illegal gambling operators, loan sharks and industrial racketeers behind bars, but the popular accounts of his courtroom triumphs left out some uncomfortable details. He manipulated public hysteria, he coerced reluctant witnesses, he illegally used wiretaps against political opponents and he failed to make more than a marginal impact on wholesale illegal profit-making in New York.[26]

From the 1930s onwards crime, vice and corruption were major issues at election time in numerous cities. Voters were told time and again to express their indignation at government corruption and rampant racketeering by voting the ruling party out of office. The victorious politicians would then, they claimed, get on with the job of cleansing the government and, in particular, improving the efficiency of law enforcement. Criminals would then be imprisoned and, theoretically, that would be the end of the problem. People were presented the issues in terms of good guys and bad guys. There was no suggestion that the problem was in the laws and the system. It need hardly be said that the fortunes of organized crime were not significantly affected by changing administrations.

The G-Men and Racket-busting films had established plot patterns for thousands of crime films and television cop shows in the following decades. The bad guys were very rarely at the centre of the action and they always finished up dead or in prison thanks to the bravery, expertise or superior intelligence of government agents or prosecutors. Certain subjects were not encouraged. On one occasion, in 1963, a TV writer called David Rintels was asked to write an episode of The FBI on a subject of his choosing. Rintels suggested police brutality. The network said certainly, as long as the charge was trumped up, the policeman vindicated, and the man who brought the specious charge prosecuted. [27]

Popular culture generally helped towards an uncritical public acceptance of whatever the experts in the law enforcement community said was the answer to crime. These experts wanted more federal involvement in the fight against gambling, drugs and industrial racketeering, and federal prosecutors to be armed with Dewey-type powers. This required loosening the limits on federal jurisdiction and significant alterations to the Bill of Rights. To accept this, people had to see organized crime not just as a threat to individual cities but to the nation as a whole. The process began in the late 1940s.

3. Enter the Mafia

After the lean years of the 1930s the Second World War boosted illegal as well as legal businesses: more people were employed and earning good wages; rationing and war production cut back on available consumer goods; an increasing amount of money became available to spend on prohibited goods and services; and the profitability of vice increased.

It soon became apparent that the good guys were only winning on the screen; the newspapers were making it clear that they were losing in reality. Illegal gambling, in particular, enjoyed a wartime and post-war boom. Casino operators, slot machine distributors and off-track bookmakers were nullifying the anti-gambling laws with the help of local policemen, sheriffs and prosecuting attorneys. These, as Life put it in 1950, “have built mansions, bought yachts or loaded their safety deposit boxes to bursting,” on the proceeds of graft.[28] Journalists had a field day exposing an endless string of gambling corruption scandals. And, as the country got richer, individual entrepreneurs, crime syndicates and corrupt public officials maintained the supply of other illegal goods and services such as drugs, prostitution and loan sharking.

In many ways gambling in post-war America resembled the liquor situation during Prohibition. Gambling was a popular and socially approved pastime, and the fact that it was illegal played into the hands of corrupt officials and criminal entrepreneurs. Gambling laws, like the dry laws, were plainly not being enforced and some people began to call for liberalization and regulation, so that tax revenue would replace illegal enrichment. But the proponents of legalised gambling lacked the immense financial support that had pushed through the repeal of the Prohibition amendment. Business interests were either uninterested, or accepted the anti-gambling arguments of the Citizens’ Crime Commission movement which had gained strength in the post-war years. The essence of these arguments was that the laws prohibiting gambling were right and necessary not only because gambling was immoral but also for sound business reasons. “Gambling,” it was said, “withdraws money from the regular channels of trade vital to the well-being of a nation or a community.”[29] Gambling, in other words, was bad for business; ways had to be found to enforce the gambling laws. The only solution was increased federal commitment, involving the enactment of more laws and the establishment of a federal law enforcement capacity that was capable of succeeding where local authorities had failed. By some means people had to be prevented from indulging in the activities that filled the coffers of the “underworld.”

A new phase of America’s moral crusade began. This was to persuade people of the correctness of the above approach and a Senate investigating committee chaired by Senator Estes Kefauver of Tennessee set out to do this in 1950. The committee was formed to investigate organized crime in interstate commerce and it concentrated on gambling with the aim of promoting federal laws to control interstate gambling. Its main work, therefore, revolved around national racing-news wire services and gambling operators, notably Frank Costello of New York, with widespread national investments.

The impact of the Kefauver Committee was increased by the fact that its hearings in several cities were televised. The hour-by-hour television coverage of the proceedings in New York, relayed to other large cities, reached an estimated audience of between twenty and thirty million. The newspapers were full of stories of neglected housework, deserted cinemas and department stores, and Consolidated Edison had to add an extra generator to supply power for all the television sets being used. The New York Times described the mass audience for the committee as “a major phenomenon of our time.”[30]

Television viewers were presented with an impressive array of top crime figures, especially in New York. “They gawked,” as one pundit put it, “like a country boy looking at a painted woman for the first time.”[31] Joe Adonis, Albert Anastasia, Meyer Lansky, Frank Erickson and Willie Moretti testified but gave little away: most pleaded the Fifth Amendment and refused to answer questions on the grounds that it would tend to incriminate them. Frank Costello, described by the committee’s report as “the most influential underworld leader in America,” chose to answer the questions. He did, however, object to having his face filmed. The committee told the television people to avoid Costello’s face and instead the viewers saw the gambler’s nervous, sometimes twitching hands, which, combined with the hoarse whispering voice of a man who had had a throat operation, must have suggested guilt and immense conspiratorial power.

Investigating committees do little real investigating. Rather they dramatize a particular perspective on a problem and place the prestige of a Senate body behind a chosen point of view.[32] In effect, the committee’s goal was to reduce the complexities of organized crime to a simple ‘Good versus Evil’ equation. The committee’s conclusions had been decided on before the hearings began. The committee had accepted the arguments against gambling and no serious consideration was given to the possibility of regulation and control of the gambling business. People had to be convinced that prohibition was the only option and prohibition had to be made effective. Enforcement had to be seen as the only answer. The committee chose to put the weight of its opinion behind a bizarre alien conspiracy interpretation of America’s organized crime problems. If people believed that organized crime was run by an alien conspiracy, they would accept the need for a greater federal response to gambling, the main source of income for the mysterious Mafia.

The committee’s conclusions traced the history of the Sicilian Mafia and its “implantation” into America and made a number of often repeated assertions: “There is a nationwide crime syndicate known as the Mafia…. Its leaders are found in control of the most lucrative rackets in the cities. There are indications of a centralized direction and control of these rackets…. The Mafia is the cement that helps bind the Costello-Adonis-Lansky syndicate of New York and the Accardo-Guzik-Fischetti syndicate of Chicago.”[33]

Contrary to these conclusions, the committee had found men of several ethnic groups at the head of criminal syndicates around the nation, and frequent contact and co-operation between different ethnic groups. Even in the committee’s own choice of the two most powerful syndicates in the country, the Costello-Adonis-Lansky syndicate of New York, and the Accardo-Guzik-Fischetti syndicate of Chicago, which were supposedly bound together by the Mafia “cement,” Meyer Lansky and Jacob Guzik were Jewish-Americans, and the parents of Frank Costello, Joe Adonis and Charles Fischetti had all originated from mainland Italy. Presumably Tony Accardo represented the Sicilian “cement.” The evidence the committee uncovered showed that gambling operators in different parts of the country had sometimes combined in joint ventures, in the same way as businessmen everywhere, and had made a lot of money for themselves and for the public officials they had to pay off. Despite a great deal of hopeful effort, no evidence was produced at the hearings to support the view of a centralized Sicilian or Italian organization dominating organized crime in the United States.

Although the committee was mainly concerned with gambling, the federal law enforcement agency which influenced it most was the Federal Bureau of Narcotics (FBN). The FBN’s chief, Harry J. Anslinger, dominated his agency in much the same way as J. Edgar Hoover dominated the FBI. Anslinger often stated that his approach to drug control was simple: “Get rid of drugs, pushers and users. Period.”[34] But his belief that effective enforcement and draconian penalties were the only answers to drug addiction was being seriously challenged. Doctors and academics were arguing for medical, rather than police-based drug control policies and they were getting some support from within the law enforcement community. One crime commission asked whether the country really should pursue a drug control policy that was ineffective and fabulously profitable to drug traffickers.[35]

Anslinger responded to this challenge by developing self-serving distractions, one of which was to blame aliens for America’s drug problems. Through statements and disclosures to the Press and by appearances before Senate committees, beginning with Kefauver’s, Anslinger and his agents propagated the idea that the Mafia supercriminal organization controlled both the world-wide drug traffic and the core of organized crime activity in the United States. As Dwight Smith has argued, the Bureau could therefore justify the importance of its task, and explain its lack of success without having to inquire more deeply into the problem of addiction itself. Just as anti-gambling campaigners were asserting that the legalising of gambling, its regulation and control, would be a capitulation to criminal interests, Anslinger used similar arguments to justify calls for yet more penalties against drug users and traffickers and for increased yearly budgetary appropriations. (It is also worth noting that in 1968 Anslinger’s Treasury agency was found to be so corrupt that it had to be abolished and replaced by a new agency in the Department of Justice.)[36]

The work of two Hearst journalists, Jack Lait and Lee Mortimer, columnists on the New York tabloid, the Daily Mirror, constituted another of the main influences on the Kefauver committee’s Mafia conclusions. The pair produced a series of best-selling books: New York Confidential in 1948, Chicago Confidential in 1950, Washington Confidential in 1951, and USA Confidential in 1952. Chicago Confidential was the first book to string together anecdotes about Italian-American gangsters and claim that this proved that the Mafia controlled organized crime in America. Kefauver had read Chicago Confidential and his committee’s interpretation of organized crime was little more than a more temperate version of the conspiracy theory of the two journalists. It was no longer sufficient for organized crime to be portrayed as evil, it had to be portrayed as foreign as well.

The Mafia, according to Lait and Mortimer, is “the super-government which now has tentacles reaching into the Cabinet and the White House itself, almost every state capital, huge Wall Street interests, and connections in Canada, Greece, China and Outer Mongolia, and even through the Iron Curtain into Soviet Russia.” The organization is “run from above, with reigning headquarters in Italy and American headquarters in New York.”[37] It “controls all sin” and “practically all crime in the United States,”[38] and is “an international conspiracy, as potent as that other international conspiracy, Communism, and as dirty and dangerous, with its great wealth and the same policy – to conquer everything and take over everything, with no scruples as to how.”[39]

Among the other claims Lait and Mortimer made were that the great growth of the “plague” of narcotics addiction had been “parallel to the spread of Communism in our country”[40] and that “organized gangsters, combined with Communists and pinks, were working to turn Americans into addicts.”[41] The authors showed racist and sexual fears and hatreds throughout the four books. Communist women always used their sexual favours to convert “darkies” and children to the party line, and black Americans were regarded as “soft converts” who had to be “imbued by practical demonstration with the complete equality of all comrades.”[42] Most women mentioned in the books were either “nymphomaniacs” or prostitutes, all homosexuals were “faggots,” “fairies,” or “perverts.” It is a symptom of the time that these books were taken seriously by members of the US Senate.

As an easy explanation for the country’s organized crime problems the Mafia could not be beaten. A Jewish conspiracy theory could have been concocted but not sold to the public, given American revulsion at German war-time atrocities against the Jews. Instead the Mafia interpretation continued to be fostered by tabloid journalists in the tradition of Lait and Mortimer. The first book-length development of their alien conspiracy thesis was by Ed Reid in 1952. In Mafia, the conspiracy was “history’s greatest threat to morality,” and “the principal fount of all crime in the world, controlling vice, gambling, the smuggling and sale of dope and other sources of evil.”[43] Frederick Sondern wrote another account of the “always writhing octopus” in 1959.[44]

Lait and Mortimer had produced a formula that journalists the world over have since turned to when writing about US organized crime. The trick was to describe briefly how a secret criminal brotherhood developed in feudal Sicily, was transported to urban America at the end of the nineteenth century, and then took over organized crime operations in the entire country. As “proof,” all that editors required were unrelated anecdotes about Italian-American gangsters, mainly from New York, with the narrative enlivened by words like “godfather,” “tentacles” and, most essentially, “omerta.” “Omerta” was, according to Lait and Mortimer, “the secret and unwritten code of silence of the Mafia. Every member lived in mortal fear of violating this code.”[45] Other writers followed this lead and inserted paragraphs about “omerta” in their work, thus justifying the wildest assertions without needing to provide evidence. Who could contradict them if the Mafia code of silence could not be violated?

Producers of fiction were not slow to enliven their narratives with references to the omnipotent Mafia. In Kiss Me Deadly (1952) Mickey Spillane described the Mafia as a “slimy, foreign secret army,” that “stretched out its tentacles all over the world with the tips reaching into the highest places possible.” Dwight Smith’s survey of popular crime literature came to the conclusion that by the mid-sixties, “a standard Mafia plot required one key figure, with girls; formal ‘board meetings’; mysterious ‘hit men’; cross-country travel; dissension within the organization; and uncertainty as to the ultimate winner until that dissension has been resolved in an eruption of violence.”[46]

The first film to feature a nation-wide criminal organization was The Enforcer (1951). This was based on a real-life criminal investigation that resulted in the exposure of a national network of contract killers known in the Press as “Murder Inc” (the title of the film in Britain). The film created the impression of an invisible criminal empire which specialised in wholesale killing.

After The Enforcer, gangster films repeatedly portrayed the underworld as consisting of groups of racketeers working together in business-like organizations behind respectable “fronts.” “The Syndicate” or “The Organization” of crime was still usually confined to city-wide operations such as those in Hoodlum Empire (1952) and The Big Heat (1953), but the idea of a national organization resurfaced in The Brothers Rico (1957). In this a retired mobster played by Richard Conte set out to expose the Syndicate after it had killed his brother, and found a criminal dragnet out to kill him that was more efficient and, geographically, more wide-ranging than anything the police could offer.

The first film to put a strong Italian ethnic identity to organized crime on a national scale was Inside the Mafia (1959), which was about “The World’s Number One Secret Society of Crime,” according to the posters. However, the film was poorly made and failed both commercially and critically. More important was Underworld USA (1960). In this a vast syndicate, masquerading under the corporate name of “National Projects”, controls organized crime throughout the country. The leaders of the syndicate head separate departments: Drug Traffic, Labour Racketeering, Gambling and Prostitution. The syndicate keeps power by murder, intimidation and bribery and “by maintaining a legitimate business facade from basement to penthouse.” But, according to the film’s publicity handouts, the syndicate’s strongest weapon is “public indifference”: “Underworld USA provides a scathing portrait of the American public who have allowed the ‘punks’ to take office. The totalitarian threat to the democratic way of life comes not from communism but from organized crime.”[47] The implied solution is public support for a stronger federal law enforcement response.

Crime films of the period continued to justify intrusive and coercive police tactics. The informant is glorified in Elia Kazan’s classic film about union racketeering, On The Waterfront (1954), perhaps because Kazan and Budd Schulberg, the scriptwriter, had recently named people they knew as former communists before the House Committee on Un-American Activities. Wiretaps and bugging devices provide the crucial evidence to destroy the Hoodlum Empire. Undercover and entrapment operations enable one Damn Citizen (1957) to shatter vice and crime in Louisiana. And of course violent police activity, such as kicking doors down, roughing-up and shooting suspects, would always be the correct response in any crime film.

Academics were generally muted during these years; the most significant dissenter from the Mafia conspiracy theory of organized crime was the sociologist Daniel Bell. His article, “Crime as an American Way of Life,” first appeared in The Antioch Review in 1953 and has since been frequently reprinted. Bell had little time for conspiracy theories. The high proportion of Italian-Americans known to be associated with organized crime could be explained without invoking the idea of the omnipotent Mafia. He pointed out that, like other immigrant groups, they were initially marginal to the socio-economic and political structure. The employment and business opportunities available to them were initially limited. In such a situation, involvement in crime was understandable. This explained why the ethnic succession among gangsters had tended to follow the immigrant waves: American, German, Irish, Jewish and then Italian. With the expansion of the American economy the majority of immigrants were integrated as wage earners and some were able to gain footholds in legitimate business: Irishmen obtained lucrative civil contracts through their control of some big city political machines, German Jews entered banking and merchandising and some of the later Jewish immigrants, the garment trade. These legitimate roads to wealth were spoken for by the time the Italians arrived, so the entrepreneurially-minded among them made good use of the new opportunities for organized crime created by Prohibition. After Prohibition, attempts were continuously made to move into legitimate businesses, but opportunities were still limited. It was therefore logical for them to use their wealth and knowledge in other criminal activities such as gambling.[48] Bell was thus able to explain Italian involvement in crime without invoking notions of a national or international conspiracy.

This “ethnic succession” thesis was a reminder that organized crime was a multi-ethnic phenomenon at a time when professional and public opinion was moving towards a perception that organized crime was ethnically-exclusive to Italian-Americans. However, Bell reflected the reluctance of American liberals to be very critical about the system; corruption is considered but not emphasized or treated as a problem. The impression is given that Italian Americans got involved in significant criminal activity mainly because they were denied legitimate opportunities and, like the Irish and Jews before them, they would eventually become respectable members of society. In Bell’s words organized crime was “one of the queer ladders of social mobility.”[49] Bell was locating the source of organized crime problems among the relatively powerless in society on their way up to middle class respectability. In effect the only insight he was adding to the Mafia conspiracy thesis was that Italians were preceded by and would be succeeded by different ethnic and racial groups. Bell’s thesis proved inadequate to replace the Mafia conspiracy thesis but during the 1970s government officials amalgamated the two to produce the current federal perspective. Now, according to the FBI, the problem is not just Italian but also “emerging crime groups”, mainly blacks, hispanics and Chinese.[50] Corruption within the system is not considered part of this problem. Otherwise people might question the wisdom of the solution that is always offered: give the police more men, more intrusive and coercive powers so that they can make more arrests and fill more prisons.

The existence of the American Mafia as a centralized organization dominating organized crime was never proved but incidents and revelations involving Italian-American gangsters continued and still continue to give some substance to the concept.

In 1957 a convention of about sixty suspected Italian-American racketeers was disrupted by state police at Apalachin, New York. Most of these had legitimate “fronts” in a variety of businesses ranging from taxicabs, trucks and coin-operated machines to olive oil and cheese. No useful information came from the police action, but it gave a much-needed boost to the alien conspiracy theory as interest had been dwindling. [51]

The publicity surrounding Apalachin prompted the formation of another Senate committee, this time chaired by Senator John McClellan of Arkansas. The McClellan Committee was appointed to investigate racketeering in the labour field and is best known for the clashes between the Teamsters’ union leader, Jimmy Hoffa, and the chief counsel to the committee, Robert F. Kennedy. The Teamsters was the country’s biggest and most powerful union, and was undoubtedly very corrupt. The evidence leaves little doubt that Hoffa misappropriated union funds and was connected with many well-known labour racketeers such as Johnny Dioguardi and Tony “Ducks” Corallo.

But 1963 was the conspiracy theorists’ most significant year. Another committee chaired by McClellan held televised hearings before which a small-time New York criminal, Joseph Valachi, revealed he was part of something he called “Cosa Nostra” or “Our Thing.” For years J. Edgar Hoover had consistently refused to credit Mafia conspiracy theories; this new name gave him a chance of a volte-face without too many people noticing. Before, Hoover had successfully kept the FBI out of the futile and corrupting task of gambling law enforcement. But from 1963 on he chose to go along with the consensus of law enforcement opinion: organized crime was an alien conspiracy and gambling was this conspiracy’s main source of income; therefore the FBI had to go into action against gambling. The Valachi show was put on to mobilize support for increased federal involvement in the war against organized crime.

Some of Valachi’s testimony does ring true in the light of later events and revelations, but it was full of inconsistencies and contradictions and it was certainly not enough to justify the assertions about the structure of US organized crime that were based on it. But by the end of the 1960s, as a result of these assertions, most Americans saw the Mafia as a monolithic, ethnically-exclusive, strictly disciplined secret society, based on weird rituals, commanding the absolute obedience of its members and controlling the core of the country’s organized crime. Mafia and organized crime had become virtually synonymous. Government officials were more than happy to supply journalists with the “facts” to support this explanation for the country’s organized crime problems. The Mafia provided bureaucrats and politicians with an easy-to-communicate threat to the nation.

A single work of fiction, published in 1969, put the law enforcement perspective into its most digestible form and gave organized crime its strongest ethnic identity yet. Mario Puzo’s The Godfather was on the New York Times best-seller list for sixty-seven weeks and sold just as impressively on the overseas market. The film of the book was even more successful, breaking numerous box-office records, winning many awards, but in the process fixing misleading images about American organized crime for many years to come. A kind of Godfather industry has since developed with innumerable cheaper versions of the same themes turned out in every form of media communication – even Superman and Batman waged war on the Mafia in the August 1970 issue of World’s Finest Comics – with by-products ranging from Godfather sweatshirts and car stickers to pizza franchises, and a constant supply of Mafia books and articles.

For the law enforcement community the conception of organized crime as an alien and united entity was vital. It was presented as many-faced, calculating and relentlessly probing for weak spots in the armour of American morality. Morality had to be protected from this alien threat. Aliens were corrupting the police; therefore the police had to be given more power. Compromise, such as a reconsideration of the laws governing gambling and drug taking, was out of the question; the only answer was increased law enforcement capacity and more laws to ensure the swift capture of gambling operators and drug traffickers behind whom the Mafia was always lurking. (The “Cosa Nostra” label did not catch on with most journalists and fiction writers, but federal officials still use it.)

The message got across to the people that mattered, the legislators. Members of Congress were convinced enough by the Mafia’s “threat to the nation” to enact a series of measures long sought after by the federal law enforcement and intelligence community. Organized crime control provisions in the 1968 and 1970 omnibus crime control acts included: special grand juries; wider witness immunity provisions for compelling reluctant testimony; extended sentences for persons convicted in organized crime cases; and the use of wire-tapping and eavesdropping evidence in federal cases. Such considerable alteration in constitutional guarantees was justified by the belief that the problem was a massive, well-integrated, international conspiracy. The measures gave the same head-hunting powers to federal police and prosecutors that Thomas E. Dewey had used in the 1930s on a local level. They inevitably tipped the balance away from such civil liberties as the right to privacy and protection from unreasonable search and seizure, and towards stronger policing powers. Handing the signed 1970 bill to Attorney General John Mitchell and FBI Director J. Edgar Hoover, President Richard Nixon said, “Gentlemen, I give you the tools. You do the job.”[52]

These laws and concurrent anti-drug legislation had a great potential for abuse, which was soon fulfilled. The abuse was not restricted to financial corruption and police brutality; much of it was politically motivated. Federal policemen were given more scope to do what Hoover’s FBI had been doing illegally for years: spy on and suppress political dissent. The administration of President Richard Nixon used its new powers more actively against anti-Vietnam war protestors than Italian-American gangsters. Between 1970 and 1974, in particular, grand juries, along with increased wiretapping and eavesdropping powers, became quite clearly part of the government’s armoury against dissent. A list of abuses during these years would include: harassing political activists, discrediting “non-mainstream” groups, assisting management during strikes, punishing witnesses for exercising their Fifth Amendment rights, covering up official crimes, enticing perjury and gathering domestic intelligence. By the time of Nixon’s resignation in 1974 it was clear that Congress had bestowed an armoury of repressive crime-control laws on people who were themselves criminally inclined. [53]

More recently, in 1984, Reagan administration officials used organized crime control powers to infiltrate church meetings and wiretap church phones in Arizona and Texas. The intention was to monitor the efforts of some churches to provide “sanctuary” to Central American refugees. These refugees were primarily from El Salvador and Guatemala, countries whose regimes had the active support of the US government despite much documented evidence of violent suppression of dissent. For a single set of indictments, 40,000 pages of secretly taped conversations involving priests and nuns were compiled. No prison sentences resulted but the harassment had successfully intimidated the “sanctuary” movement.[54] In the meantime the extent of organized crime in America was not significantly affected. The 1968 and 1970 organized crime control measures were, as constitutional scholar Leonard Levy put it, “a salvo of fragmentation grenades that missed their targets and exploded against the Bill of Rights.”[55]

Italian-American organized crime is not the coherent, hierarchical corporation portrayed in most crime books and articles. But more than twenty Italian-American crime syndicates do exist and participate in an environment that is peculiarly conducive to crime. They will continue either to operate separately or compete or cooperate on occasion. They will also continue to be collectively called the Mafia and be much overrated in films and newspaper articles. In 1986, for example, “Fat Tony” Salerno and Tony “Ducks” Corallo were the most notable gangsters convicted of racketeering in a series of dramatic court cases that made the name of US Attorney, Rudolph Giuliani. An editorial in the New York Times reflected the government’s view with the claim that, “Society, at last, is organized. With convictions like these, it’s the mob that is coming apart.”[56] Similar claims had been made fifty years earlier during the prosecutions of Thomas E. Dewey. The organized crime situation in the meantime has worsened.

The history of Italian-American organized crime has been more notable for savage struggles, both internal and against other groups, than for the mutual enrichment, discipline and codes of absolute obedience described in most accounts. Criminal syndicates are powerful in the United States and they are often based on such unifying factors as religion, kinship, ethnicity and prison experience, just as racially-exclusive, old-school or masonic networks exist in other businesses. They are necessarily secretive, and all have “omerta-like” codes; criminal activity is not something that it is intelligent to talk about. Organized crime in the United States mirrors the country itself in that it is composed of every major ethnic group. The endless speculation about the Mafia merely distracted attention from defects in the political, economic and legal systems, defects which were often exploited in an organized, systematic and profitable way.

Thanks to wiretaps, bugs and informants, American “Mafiosi” have been talking to the federal authorities for the past three decades. Out of a mass of contradictory evidence came an organized crime control strategy that has not controlled organized crime. The situation is now too much a tidal wave of crime and systematic violence to be explained by a neat and tidy hierarchy of capos, consiglieres and soldiers swearing blood oaths of allegiance and dividing up the spoils.

The FBI did not distinguish itself in its fight against gambling, “the principal bank roll” of the Mafia. Hoover found that his agency was as ill-equipped to stop people betting as local police forces. The US anti-gambling laws had only succeeded in creating an immense market with no legal suppliers. Opportunities to profit from protection of or extortion from gambling suppliers were eagerly accepted by innumerable local officials. Often the authorities controlled illegal gambling to a great extent, exploiting bookies and numbers operators, and keeping the bulk of the profits for themselves. But with ineffectual and corrupted enforcement there were also opportunities for the more ruthless and violent criminal individuals and organizations to reach power and influence. Either way the demand for gambling was met, just as the demand for liquor had been met during Prohibition. There are still many laws against gambling in America, but gambling law enforcement continues only as a niggling inconvenience in most states. Resources have been shifted to the effort to control drug taking. Organized crime control is now primarily focused on drug traffickers.

4. Static Response – Dynamic Industry

On 14 October 1982 President Reagan announced a plan intended to “end the drug menace and cripple organized crime.”[57] Part of the plan was the establishment of a task force, nominally headed by Vice-President George Bush, which deployed everything from destroyers and helicopter gunships to a balloon-shaped radar device nicknamed “Fat Albert” to intercept smugglers in South Florida waters. A year later an enlarged task force was expanded into a national narcotics interception system. The tens of millions of dollars spent achieved little. Smugglers were perhaps inconvenienced and required to use more ingenuity but the new elaborate and expensive surveillance techniques failed to stop them bringing drugs into the country. The rest of the plan, similarly, was mainly for public relations purposes and failed to make an impact on the extent and success of organized crime.[58]

Reagan’s advisers were probably well aware that organized crime would not be crippled by the plan and therefore a crime commission was also thought necessary to maintain public support for prevailing organized crime control policies. On 28 July 1983, Reagan formally established the President’s Commission on Organized Crime to be chaired by Judge Irving R. Kaufman and composed of eighteen other men and women, mainly from the law enforcement community.

The commission’s stated intention was to investigate the power and activities of “traditional organized crime” and “emerging organized crime groups.” At the first hearing in November 1983 the nation’s top law enforcement officers were invited to explain the federal perspective on the problem. Each identified “traditional organized crime” exclusively with Italian-Americans or “the La Cosa Nostra.” However, they also showed that the federal perspective had developed since the 1960s. They made it clear that organized crime was not synonymous with any one group and stressed the importance of “emerging groups,” mentioning motor-cycle gangs, prison gangs and “foreign-based” organizations. No doubts were expressed about the essential correctness of the law enforcement approach to organized crime control based on long-term investigation, under-cover operations, informants, wiretaps and asset forfeiture. Successes against “traditional organized crime” and the need “to stay in front” of the emerging “cartels” were emphasized throughout. Drug trafficking was identified as the most profitable organized crime activity.[59]

After three years’ selective investigation the commission’s conclusions were in line with those of the Reagan administration. The basic approach of the nation’s drug enforcement programmes was sound but needed a harder line on all fronts: more wiretaps, informants and under-cover agents in order to get more convictions, which would require more prisons. Witnesses who might have pointed out the deficiencies of this approach were not consulted. The only recommendation to attract much attention was a call for a widespread national programme to test most working Americans for drug use, in effect to force most working Americans to submit to regular, observed urine tests. The tests require supervision because people might be tempted to bring in someone else’s clean urine. At a news conference Judge Kaufman explained the recommendation. The investigation had convinced him that “law enforcement has been tested to its utmost …. But let’s face it, it hasn’t succeeded. So let’s try something else. Let’s try testing.”[60] The immense problem of drug-related gangsterism and corruption was to be tackled by examining the urine of innocent people.

A small number of liberals objected to this invasion of privacy but a poll taken after the commission’s report was issued showed that nearly eighty percent of Americans did not oppose drug testing. In fact, many already worked for corporations that regularly tested their personnel. The law enforcement community had announced that testing people’s urine would reduce the demand for drugs and therefore hit organized crime in the pocketbook, so millions of Americans were prepared to line up and give their samples. Although the results of these tests have often been shown to be wrong, the business of urine-testing laboratories and equipment manufacturers is booming. Like most wars, America’s war on drugs has its profiteers.[61]

The American approach to organized crime has basically relied on undermining civil liberties and increasing law enforcement and prison capacity in order to give the appearance of effective activity. The organized crime control strategy that has evolved is guaranteed to produce short-term and publicity-laden successes in the war against crime. It promises success in public relations terms for policing agencies and continues to enhance the career prospects of ambitious prosecutors. However, in the long-term it will fail. To help explain why, the final part of this essay will present a picture of organized crime that departs from the law enforcement perspective.

Organized crime has involved Americans of all ethnic and social complexions; people from the lowest to the highest levels of society and government have been and still are involved.

Police estimate that there are around 70,000 members of the Crips and Bloods youth gangs in Los Angeles County alone, operating in about 600 to 700 different sub-gangs. Law enforcement officers have identified such gangs in at least nineteen states and fifty cities nationally. To protect their drug distribution networks the gangs favour AK-47s and other heavy weapons. Organization is strict, methods are flexible, and profits so high that teenage gang members can make thousands of dollars a week selling or transporting drugs.[62] The work is often dangerous and shoot-outs are frequent, but many inner-city young men consider the risks worth taking. The alternatives are unemployment or dead-end jobs. The local police are overwhelmed; wiretapping and eavesdropping to gather intelligence are hardly options with the numbers involved.

There is a never-ending source of recruits for the lower levels of organized crime and this source is not restricted to the ghettoes and housing projects of the cities. Outlaw motorcycle gangs, for example, are mainly groups of white Protestants from rural and suburban areas. Police estimate that there are now hundreds of individual gangs heavily involved in organized crime activities. Their record ranges from drug distribution and extortion to the use of violent tactics to muscle in on legitimate businesses.[63] The police tactic of undercover infiltration of gangs, which has sometimes worked against Italians, is made more difficult by the membership requirements of some biker gangs. A police officer would have to commit a rape or a contract murder to be accepted.

Many members of youth gangs and outlaw motorcycle gangs eventually get caught, convicted and sentenced to time in prison. But imprisonment has proved to be part of the problem of organized crime rather than its solution. In many US prisons gangs fight over prostitution, protection and drug trafficking rackets in systems based on brutality, informants and staff corruption. Prison gangs tend to be organized along racial and ethnic lines and some like the Aryan Brotherhood, the Black Guerillas and La Nuestra Familia have statewide and even inter-state influence. They run rackets and assassinate competition on the outside as well as on the inside.[64] The Kaufman Commission chose not to highlight prison gangs presumably because they are hardly an advertisement for a drug control strategy that is based on mass imprisonment for drug possession, as well as trafficking, offenses. Overcrowding the prisons and locking up tens of thousands of young men has already created many more and much more ruthless drug trafficking networks than it has disrupted. The first anti-crime proposals of President George Bush also took no account of the prison gang phenomenon. In May 1989 he announced a $1.8 billion package intended to double prison capacity.[65]

Organized crime is evolving, spreading and expanding, fuelled principally by the demand for drugs. No ethnically-based monopoly in the drugs business has ever existed. There are many thousands of distribution and smuggling networks; decentralization characterizes the industry with a high turnover of personnel. Smuggling organizations tend to restrict their operations to importation, leaving distribution within the United States to indigenous groups.

Sources of raw material exist throughout the world, including the United States. Organizations, large and small, buy and process the raw materials, and distribute the product at retail through a host of outlets. Although some operations have lasted for decades, organization in the drug business is largely spontaneous, with anyone free to enter it at any level if he or she has the money, the supplier and the ability to escape arrest or robbery.

If increased drug law enforcement has done anything over the past two decades it has been to create competitive advantage for criminal groups with the skills, connections and capital to nullify enforcement with corruption and the firepower to resist theft and takeover bids. Violence in the drug trade today far exceeds anything experienced during the bootleg wars of the 1920s, but the motives are generally the same: protecting territory or goods from rivals, discouraging informants, or stealing money or drugs from other traffickers. Drug-trafficking mass-murders are not uncommon and they sometimes involve the innocent. On 14 April 1984 a team of professional hit-men walked into a flat in Brooklyn and shot dead two women and eight children, either holding them in chairs while shooting them in the head or propping them up afterwards. The flat was the home of Enrique Bermudez who operated on the fringes of the drug trade. In 1976 he pleaded guilty to selling half an ounce of cocaine to an undercover policeman. Under New York’s stringent drug laws, he could have received a life sentence. Instead he chose to cooperate with the authorities in return for a five-year term. He was paroled in 1981 and worked for the kind of cowboy taxi service sometimes used by drug traffickers. The deaths of his girlfriend and children, and the others, were probably either a warning or a rebuke for informing.[66]

So far only the lower level of organized crime activity has been discussed. The second level involves the whole system of law enforcement and criminal justice. The problem of corruption was largely avoided by the Kaufman Commission. The decades of making organized crime synonymous, first, with the Mafia alone, and more recently with the Mafia plus marginal “emerging” groups, enabled the commission to exclude corruption from its definition of the problem without attracting criticism. In 1967 President Johnson’s Crime Commission had de-emphasized corruption in its analysis of organized crime but at least made the unequivocal statement that “All available data indicate that organized crime flourishes only where it has corrupted local officials.”[67] Corruption has continued to characterize drug law enforcement well into the 1980s. The cities of Detroit, Chicago, Miami and Portland have all experienced major scandals in recent years. The offenses uncovered include: “skimming” cash and drugs from seizures, pocketing money earmarked for informants, lying to obtain search warrants, committing perjury in court to obtain convictions, selling drugs and guns, accepting bribes and protecting drug syndicates. In 1982, for example, ten Chicago policemen were convicted on various drug-related charges including aiding and abetting a continuing criminal enterprise and extortion. The violations related to the defendants’ three-year symbiotic relationship with two large drug distributorships in which the police officers were paid off in exchange for warning the distributors of impending police raids, delivering drugs seized from other dealers to the favoured syndicates, and threatening competitors.[68]

Higher-level officials have often been found to be corrupt. In 1982 the sheriff, chief of police, a judge and others from Henry County, Georgia, were convicted of aiding and abetting smugglers landing at an airstrip and “providing an escort service” into Atlanta. Southern sheriffs, in particular, have been revealed to be as involved in drug trafficking as their predecessors were in bootlegging. Most notable is sheriff Leroy Hobbs of Harrison County, Mississippi, who was sentenced to twenty years in prison in May 1984 for drug trafficking offences. Hobbs had been elected on a promise to crack down on drugs and corruption.[69]

Defence lawyers take their share of the proceeds of organized crime. Defence fees are so high that career criminals often have to step up their illegal activities while on bail to keep up with the payments. The lawyers make sure that their clients know that there is a firm connection between fee payment and the zealous exercise of professional expertise, secret knowledge, and organizational “connections” on their behalf.[70]

A third level of organized crime activity involves both sides of American industry – unions and management – as well as financial institutions and other legitimate businesses. Generalizations are difficult in the complex world of American labour-management relations and organized crime. There are cases when the employers’ hand was strengthened by union corruption. It worked out cheaper to pay off criminal networks than engage in honest collective bargaining; employers paid off gangsters gladly in return for low wage settlements. Against this, the notoriously corrupt Teamsters’ union has generally succeeded in providing its members with good wages and conditions. Gangster-dominated unions have often helped to keep the workforce docile for management; many workers had to settle for less money and poor conditions and there was not much future for those who complained about it or tried to organize resistance. On the other hand, employers, especially the owners of small businesses, have also suffered: many being forced out of business by extortion or harassment and replaced by organized crime “associates.” The garbage and toxic waste disposal industry in the New York/New Jersey area, for example, has been dominated by Italian-American gangsters for decades.[71]

Banks, most noticeably in Florida, have boomed in recent years by laundering vast amounts of drug money. In Miami so many drug-trade dollars have flowed through the city’s branch of the Federal Reserve System that it did not need to issue any new currency for some years and even exported used dollars to other Federal Reserve districts. Since 1970 the city has become an international banking centre, rivalling London and New York, and no one seriously disputes that dollars generated by the trade in marijuana and cocaine account for this rapid rise to prominence. The problem, as Senator William Proxmire has put it, is that “Many banks are addicted to drug money, just as millions of Americans are addicted to drugs.”[72]

In order to be useful, organized crime money has to be made legitimate and untraceable. Banks can do this, but money in banks is idle money and few entrepreneurs can resist opportunities to make money active. Investment in legitimate business gives a successful criminal a base in the mainstream of American economic life. The amounts involved undoubtedly make organized crime an important source of investment capital.

This capital, according to Kirkpatrick Sale, has played a particularly significant role in the post-war development of the booming Sunbelt economy of the Southern United States. Millions of dollars, illegally obtained, were invested chiefly in high-risk operations where venture capital is hard to come by: oil exploration in Louisiana or gambling casinos in the Nevada desert, for example. But organized crime money has also found its way into corporate farming, computer manufacturing and, above all, real estate.[73]

The fourth level of organized crime activity involves politicians. Politicians have often been the chief organizers of and profit-takers from crime, and the 1979 FBI Abscam investigation illustrated some of the ways political power can be used for illegal financial gain. In Abscam – short for Abdul Scam – FBI agents disguised themselves as the financial representatives of oil rich Arabs and offered politicians money for their help in criminal activity. The politicians were then videotaped stuffing wads of cash into their suit pockets; one of them was shown asking, “Does it show?” Mayor Angelo J. Errichetti of Camden, New Jersey, was one of the first to become enthusiastically involved in the deals. He offered or gave Abscam agents hot diamonds, guns and munitions, forged certificates of deposit, counterfeit money, stolen paintings, leasing contracts, municipal garbage contracts, unregistered boats for drug-running, the use of Port Camden as a depot for drugs, Atlantic City zoning changes, a list of thirteen bribable state and city officials and entrees to five United States congressmen and a senator.[74]

Finally, as a superpower, the United States has often collaborated with domestic and foreign organized crime operations. This involvement began during the Second World War when Naval Intelligence worked with the New York gangsters who controlled waterfront labour to prevent sabotage; it seemed necessary in the circumstances but it set a malign precedent. In postwar Italy, US army agents helped indigenous gangsters, mafiosi in Sicily and camorristi in Naples, back to positions of power in local government as bulwarks against communism. The same cause justified an early 1960s conspiracy between the Central Intelligence Agency (CIA) and such gangsters as Sam Giancana, John Rosselli and Santos Trafficante which planned to assassinate Fidel Castro. Although this plan was aborted, the agency continued to promote anti-Castro Cuban groups whose main business was divided between terrorism and drug trafficking.[75]

Collaboration can work both ways and it has recently been revealed that a Cuban general, Arnaldo Ochoa, allowed Colombian drug traffickers use of military airfields as transhipment points for cocaine en route to the United States.[76] However, it is doubtful that communist citizens and officials have been involved in international drug trafficking on the same scale as the friends and allies of the United States, simply because drug trafficking routes tend to follow established trading routes.

In recent decades there has been a strong correlation between US involvement in the world’s most volatile areas and the main sources of supply for American drug users. Evidence has implicated the war-lords of South East Asia, right-wing regimes in Central and South America, the Mujaheddin rebels in Afghanistan and the Contra rebels in Nicaragua in large scale drug trafficking. There is no disputing CIA knowledge of this, and some writers have charged that tacit consent sometimes became active assistance.[77] In the final analysis, despite the rhetoric, the war on crime has always occupied a lowly position on the list of the nation’s priorities.

Organized crime involves American politicians, police, lawyers, bankers, businessmen and the US intelligence community, not just career criminals. It involves collaboration between these groups, and it also involves collaboration with the citizens who demand illegal goods and services.

The term “underworld” could hardly be more misleading since the “upperworld” has gained more from organized crime activity. There are few significant areas of American life that have not affected or been affected by organized crime. Organized crime is an essential feature of the American social, economic and political systems, but experts and commentators have managed to disguise this fact by representing it as something alien and distinct from American life. Because of this the American government persists with an organized crime control policy that undermines civil liberties without making more than a marginal impact on the extent of organized crime activity. Ageing Italian-American gangsters are incarcerated from time to time in a blaze of publicity and exaggerated claims about their significance, but any vacuums created are soon filled. There are stronger forces around than the force of law; social, economic and political forces which often combine to make the law at best inadequate, at worst counterproductive.

The increasing use of heroin and cocaine supplied by corrupt and violent networks is undoubtedly a major problem. These drugs are dangerous but it is important to remember that legal substances such as tobacco and alcohol are just as addictive and responsible for far more health problems and deaths. Milton Friedman, an economist who helped shape the supply-side strategy of the first Reagan administration, has pointed this out and recommended the abolition of drug prohibition. From the perspective of the libertarian right the Republicans should be consistently deregulatory; drugs, including heroin and cocaine, should be legally obtainable. Otherwise, they argue, armies of bureaucratic enforcers will continue to drain money from the Treasury, while only gangsters, corrupt police and politicians will benefit.[78]

Proposals to legalise drugs, however, are not likely to be taken seriously. Governments wishing to avoid the devastating social consequences of American drug control policies should consider more realistic alternatives such as the Dutch drug control model. The Dutch government gives a lower priority to drug law enforcement than to maintaining social stability; policies are based on pragmatism rather than moralism. Users and small-time distribution networks are not often troubled by the police; marijuana, the least dangerous drug, is easy to obtain. The intention is to keep users within society rather than ostracize and alienate them.[79] There are problems with this approach, of course, often highlighted by those who favour a law enforcement approach based on the American model, but these problems do not compare in extent and severity to American drug problems.

Governments should also pay more attention to the public health arguments against drug prohibition. Prohibition inflates prices and this not only enriches drug traffickers but also results in many addicts stealing or prostituting themselves to support expensive habits. It leads to addicts risking overdose deaths from drugs of uncertain strength and purity, and risking infections from the use of dirty needles. In the United States even drug paraphernalia is prohibited; syringes are therefore scarce and frequently shared. This is the reason why AIDS is being spread among American intravenous drug users at an alarming rate. People affected in this way must constitute a significant proportion of the half a million or more AIDS carriers in New York City alone.[80] In the Netherlands, prices for heroin are less inflated, there are fewer lucrative opportunities for large drug trafficking operations, and drug users can buy or exchange needles and syringes. There is less opportunity for the development of significant organized crime and only a small incidence of AIDS among Dutch drug users.

The Dutch, however, carry less weight in the international community than the Americans. The Americans have successfully encouraged numerous other countries to base their response to drugs on their model. Entrepreneurs and criminal syndicates are already taking advantage in these countries. There are profits to be made and opportunities to be exploited by modern day equivalents of Arnold Rothstein, especially in the relatively rich countries of Western Europe. The damage to the health and social stability of countries which fail to learn from the American experience is likely to become progressively more apparent. Control strategies should concentrate on minimizing the damage that drugs can do to society. No strategy should take resources from the only approach that can reduce the demand for dangerous drugs: treatment and education, of course, but, more importantly, policies which attack the causes of social problems and not the symptoms.

Not much about organized crime fits into a neat and simple pattern. Not much conforms to the formulas most popular writers and journalists follow. American organized crime is an increasingly complex phenomenon with an ever more damaging social impact. One thing is certain: the time has come to tear up the labels that say, “Made in Sicily, Made in China, Made in Colombia.” Such labelling has only succeeded in restricting analysis and forestalling other approaches to serious social problems.

5. Guide to Further Reading

For full bibliographical details, see the appropriate reference in the Notes, as indicated.

Historians have tended to leave the study of organized crime to sociologists and journalists. The two most notable American exceptions are William Moore and Alan Block. Moore’s The Kefauver Committee and the Politics of Crime 1950-1952 (1974)[31] is an excellent introduction to the subject. Block has published in numerous journals and edited collections; his first book was East Side-West Side: Organizing Crime in New York 1930-1950 (Cardiff: University College Cardiff Press, 1980). Also worth consulting is Frank Browning and John Gerassi, The American Way of Crime (1980)[2], which gives a narrative history of organized crime as far back as the Elizabethan pirates and privateers. A comprehensive sociological introduction to American organized crime is Howard Abadinsky’s Organized Crime (1981)[3]. Readers include: Gus Tyler’s Organized Crime in America (1967)[14], Francis A. J. Ianni and Elizabeth Reuss-Ianni, The Crime Society: Organized Crime and Corruption in America (New York: New American Library, 1976) and Robert Kelly, Organized Crime: A Global Perspective (1986)[25]. The recently published Crime and Justice in American History, edited by Eric Monkkonen (Westport: Meckler, 1990), has many relevant articles.

An historical account of Italian-American organized crime is provided by Humbert Nelli’s The Business of Crime (New York: Oxford University Press, 1976) which does not support the all-powerful, centralised Mafia interpretation. The story of Jewish-American gangsters is told by Albert Fried in The Rise and Fall of the Jewish Gangster in America (New York: Holt, Rinehart, 1980). Black Mafia: Ethnic Succession in Organized Crime (London: New English Library, 1974) by Francis Ianni is an anthropological study of black American organized crime and is less sensationalist than its title suggests.

Most criminologists have tended to rely on the Justice Department for information about organized crime, usually accepting the government’s all-powerful Mafia interpretation. Typical of these efforts is Donald Cressey’s Theft of the Nation: The Structure and Operations of Organized Crime in America (New York: Harper & Row, 1969) which translates Valachi’s testimony into an organizational chart and the sociological jargon of the day to show that the “Italian organization … controls all but an insignificant proportion of the organized crime of the United States.” More recent is August Bequai’s Organized Crime: The Fifth Estate (Lexington: Lexington Books, 1979) which begins by making the claim that organized crime’s untaxed profits average as much as $600,000 per hour! The literature is full of mythical statistics.

The first sociologist to dissent from the Mafia interpretation was Daniel Bell in “Crime as an American Way of Life” reprinted in The End of Ideology (1962). Bell’s ethnic succession thesis is certainly an improvement, but, as Bell admits, its validity is limited to what happened in New York and some other cities at a time when certain characteristics of the American economy, American ethnic groups, and American politics applied. Organized crime has moved into a new era since Bell completed his article.

Bell was followed by Joseph Albini whose The American Mafia: Genesis of a Legend (New York: Appleton-Century-Crofts, 1971) is particularly scathing about Cressey’s organizational chart: “Even the Boy Scouts of America have a far more complex structure than this.” Dwight Smith’s The Mafia Mystique (1976) [6] is an excellent comprehensive analysis of Mafia imagery. Frank Pearce in Crimes of the Powerful (London: Pluto Press, 1976) argues that the myth of the Mafia was a distraction from corruption in the system and emphasizes the benefits of organized crime to the powerful in society. He makes clear the subordinate status of gangsters. William Chambliss in On the Take: From Petty Crooks to Presidents (Bloomington: Indiana University Press, 1978) studied organized crime in Seattle and found that it really consisted of coalitions of politicians, law enforcers, businessmen, union leaders and, at the lowest level, racketeers. For Chambliss crime “is not a by-product of an otherwise effectively working political economy: it is a main product of that political economy.” Chambliss has also combined with Alan Block to produce Organizing Crime (New York: Elsevier, 1981) which is a useful collection of their work, analysing the history and structure of organized crime.

There are several major areas of organized crime activity. The most thorough account of industrial racketeering is in John Hutchinson’s The Imperfect Union – A History of Corruption in American Trade Unions (New York: E. P. Dutton, 1970). Much earlier but still useful is Harold Seidman, Labor Czars: A History of Labor Racketeering (New York: Liveright, 1938). Senator John McClellan writes about his committee’s investigations into labour racketeering in Crime Without Punishment (New York: Popular Library, 1962) and the committee’s counsel, Robert Kennedy, contributed The Enemy Within (New York: Harper & Row, 1960). Numerous books about the corrupt Teamsters Union exist including: Steven Brill, The Teamsters (New York: Simon and Schuster, 1978) and Dan Moldea, The Hoffa Wars: Teamsters, Rebels, Politicians and the Mob (London: Paddington Press, 1978). Labour racketeering and its relationship to business interests are discussed in Mary McIntosh, “The Growth of Racketeering,” Economy and Society, Volume 2, Number 1 (1973), 35-69.

Organized crime and gambling concern Estes Kefauver in Crime in America (London: Victor Gollancz, 1952), an account of the senate investigations that led to the conclusion that, “There is a nationwide crime syndicate known as the Mafia.” The role of gangsters in the development of the legal casino industry is traced by Jerome H. Skolnick in House of Cards: Legalization and Control of Casino Gambling (Boston: Little, Brown, and Company, 1980). Journalistic versions of the same theme can be found in The Green Felt Jungle: The Truth About Las Vegas (London: Heinemann, 1965) by Ed Reid and Ovid Demaris and The Company that Bought the Boardwalk (New York: Random House, 1980) by Gigi Mahon, on Atlantic City. Illegal bookmaking, numbers and loan sharking are the businesses analysed by Peter Reuter in Disorganized Crime: The Economics of the Visible Hand (Cambridge: The MIT Press, 1983). After detailed primary research and an application of economic theory, Reuter comes to the conclusion that no organization or cartel is capable of exercising effective control over these illegal markets.

Books about drug trafficking are currently proliferating but only a few are worthy of recommendation. These include: Steven Wisotsky’s Breaking the Impasse on the War on Drugs (London: Greenwood Press, 1986), Hank Messick’s Of Grass and Snow (1979)[77] and Anthony Henman et al., Big Deal: The Politics of the Illicit Drugs Business (London: Pluto, 1985). The Journal of Drug Issues has many relevant articles, including: Robert B. McBride, “Business as Usual: Heroin Distribution in the United States,” (Winter, 1983), 147-66. Donald Goddard’s Easy Money (New York: Farrar, Straus and Giroux, 1978) is a biography of Frank Matthews, the leading black drug trafficker of the early 1970s. But most drug-related journalism is prone to distortion and at least one reporter, Adam Paul Weisman, has confessed to this: “I Was a Drug-Hype Junkie,” New Republic, 6 Oct. 1986, pp. 14-17.

To be useful, organized crime money has to be laundered. An indication of the processes and results of money laundering can be found in: Thurston Clarke and John J. Tigue’s Dirty Money: Swiss Banks, the Mafia, Money Laundering, and White Collar Crime (New York: Simon and Schuster, 1975) and R. T. Naylor’s Hot Money and the Politics of Debt (London: Unwin Hyman, 1987).

Corruption is essential to the success and extent of organized crime and has been so pervasive in the United States that a two-volume bibliography has been written by Anthony Simpson, The Literature of Police Corruption (New York: The John Jay Press, 1977). Thomas Reppetto’s The Blue Parade (New York: The Free Press, 1978) is a useful and entertaining history. Peter Maas in Serpico (New York: Bantam, 1974), Robert Daley in Prince of the City (London: Granada, 1980) and David Durk and Ira Silverman in The Pleasant Avenue Connection (New York: Harper & Row, 1976) describe situations where the police were actually organizing crime in New York. Michael Dorman’s Pay Off: The Role of Organized Crime in American Politics (New York: David McKay Co., 1972) finds corruption further up the system. Gary T. Marx in Undercover: Police Surveillance in America (London: University of California Press, 1988) assesses the benefits and dangers of covert crime control methods.

Numerous popular gangster books, both fiction and non-fiction, have been published since the 1920s. The Valachi Papers by Peter Maas initiated a new genre: that of the Mafia member turned government informer. This genre now includes: Vincent Teresa’s My Life in the Mafia (London: Panther, 1974), Ovid Demaris’ The Last Mafioso (London: Corgi, 1981), about Jimmy “The Weasel” Fratianno, and Nicholas Pileggi’s Wiseguy – Life in a Mafia Family (London: Corgi, 1987), about Henry Hill. All give portraits of unsuccessful career criminals in a treacherous and not always well-organized Italian American underworld. There are two biographies of a far more significant gangster: Lansky (London: Robert Hale, 1971) by Hank Messick, and Meyer Lansky: Mogul of the Mob (London: Paddington Press, 1979) by Dennis Eisenberg et al.

The best-selling novel on the subject is Mario Puzo’s The Godfather (London: Pan, 1969) which was turned into two commercially successful films directed by Francis Ford Coppola. The Mafia has become such an established part of American folklore that it is now often parodied in films including: Woody Allen’s Broadway Danny Rose (1984), John Huston’s Prizzi’s Honor (1985), David Mamet’s Things Change (1988) and Jonathan Demme’s Married to the Mob (1988). After all this Runyonesque characterization it will be interesting to see whether the projected Godfather III will be taken as seriously as the others. Sergio Leone’s Once Upon a Time in America (1984) is recommended for those who wish to see a film that shows immigrants adapting to a corrupt and violent New World. The immigrant gangsters in this film are co-opted into the American system rather than bringing secretive and violent ways with them from the old country. It shows organized crime as part of the process of American development rather than as an alien intrusion.

6. Notes

  1. For a contemporary biography of Rothstein, see Donald Henderson Clarke, In the Reign of Rothstein (New York: The Vanguard Press, 1929). More recent is Leo Katcher, The Big Bankroll (New York: Harper, 1958). For a survey of Jewish-American organized crime during and after Rothstein’s life see Jenna Weissman Joselit, Our Gang: Jewish Crime and the New York Jewish Community, 1900-1940 (Bloomington: Indiana University Press, 1983). Back
  2. Frank Browning and John Gerassi, The American Way of Crime (New York: G. P. Putnam’s Sons, 1980), pp. 258-59. Back
  3. Howard Abadinsky, Organized Crime (Boston: Allyn and Bacon, 1981), pp. 23-29; Matthew Josephson, The Robber Barons: The Great American Capitalists, 1861-1901 (New York: Harcourt, Brace and Company, 1934). Back
  4. Lincoln Steffens, The Shame of the Cities (1902; rept., New York: Hill and Wang, 1957) p. 8. Back
  5. For an example of crusading literature, see Clifford Roe, Horrors of the White Slave Trade: The Mighty Crusade to Protect the Purity of Our Homes, (London: privately published, 1911). Back
  6. Adams quoted in David Henry Bennett, The Party of Fear: From Nativist Movements to the New Right in American History (London: University of North Carolina Press, 1988) p. 168; McClure’s Magazine (34), 1909; The Outlook, 16 Aug. 1913, quoted in Salvatore J. LaGumina, Wop!: A Documentary History of Anti-Italian Discrimination in the United States (San Francisco: Straight Arrow Books, 1973) p. 98; see also Dwight Smith, The Mafia Mystique (New York: Basic Books, 1975) pp. 27-61. Back
  7. John Landesco, Organized Crime in Chicago (1929; rept., Chicago: University of Chicago Press, 1968) pp. 189-221. Back
  8. National Commission on Law Observance and Enforcement, Report on the Enforcement of the Prohibition Laws of the United States, 71st Congress, 3rd Session, H. D. 722, pp. 37-44. Back
  9. Quoted in Frederick Lewis Allen, Only Yesterday (1931; rept., New York: Bantam Books, 1959) p. 182. Back
  10. Quoted in Mary McIntosh, “The Growth of Racketeering,” Economy and Society, Vol. 2, November 1973, p. 61. Back
  11. Quoted in Paul Sann, The Lawless Decade (New York: Bonanza Books, 1957) p. 214. Back
  12. F. D. Pasley, Al Capone: The Biography of a Self-made Man, (London: Faber and Faber, 1966) p. 301. See also John Kobler, Capone, (London: Coronet, 1973). Back
  13. Andrew Sinclair, Prohibition (London: Faber and Faber, 1962) p. 349. Sinclair’s book is still the most comprehensive study of the social and political history of Prohibition. Back
  14. Walter Lippmann, “The Underworld as Servant,” Forum, January and February 1931, reprinted in Gus Tyler, ed., Organized Crime in America (Ann Arbor: University of Michigan Press, 1967) pp. 58-69. Back
  15. Quoted in Richard Gid Powers, G-Men: Hoover’s FBI in American Popular Culture, (Carbondale and Edwardsville: Southern Illinois University Press, 1983) p.9. Back
  16. Ibid., p. 31. Back
  17. Quoted in Tyler, p. 5. Back
  18. Quoted in Powers, pp. 39-40. Back
  19. Quoted in Kobler, p. 337. Back
  20. Quoted in Powers, p. 298. Back
  21. Andrew Bergman, We’re in the Money – Depression America and its Films (London: Harper & Row, 1972) p. 13. Back
  22. Hank Messick and Burt Goldblatt, Gangs and Gangsters: The Illustrated History of Gangs (New York: Ballantine Books, 1974) pp. 160-64. Back
  23. For gangster films see Colin McArthur, Underworld USA (London: Secker & Warburg, 1972); Carlos Clarens, Crime Movies (London: W. W. Norton, 1980); John Baxter, The Gangster Film (London: Zwemmer, 1970); Frank Pearce, “Art and Reality: Gangsters in Film and Society,” Sociological Review, Monograph 26. For 1930s radio see J. Fred McDonald, Don’t Touch That Dial! Radio Programming in American Life, 1920-1960 (Chicago: Nelson-Hall, 1979). Back
  24. New York Daily Mirror, 8 June 1936. Back
  25. For an analysis of Luciano’s role in the war-time links between organized crime and Navy Intelligence see Alan Block, “A Modern Marriage of Convenience: A Collaboration Between Organized Crime and U. S. Intelligence,” in Robert Kelly, ed., Organized Crime: A Global Perspective (Totowa, N.J.: Rowman and Littlefield, 1986) pp. 58-77. Back
  26. For a longer discussion of Dewey’s career see Michael Woodiwiss, Crime, Crusades and Corruption: Prohibitions in the United States, 1900-1987 (London: Pinter, 1988) pp. 47-72. Back
  27. Quoted in Todd Gitlin, “Television Screens: Hegemony in Transition,” in Donald Lazere, ed., American Media and Mass Culture: Left Perspectives (London: University of California Press, 1987) p. 250. Back
  28. Ernest Havemann, “Gambling in the United States,” Life, 19 Jun. 1950, pp. 14-16. Back
  29. Virgil Peterson, Gambling: Should It Be Legalized? (Springfield: Charles Thomas Publisher, 1945) quoted in Commission on the Review of the National Policy Towards Gambling, Gambling in America, Appendix 4, p. 55. Back
  30. New York Times, 21 March 1951. Back
  31. Quoted in William Moore, The Kefauver Committee and the Politics of Crime, (Columbia: University of Missouri Press, 1974) p. 184. Back
  32. Ibid., p. 75. Back
  33. US Congress, Senate Special Committee to Investigate Crime in Interstate Commerce, 82nd Congress, Third Interim Report, (Washington DC, 1951) p. 147. Back
  34. Quoted in Anslinger’s obituary notice, New York Times, 18 Nov. 1975, p.40. Back
  35. State of California, Special Crime Study Commission on Organized Crime, Third Progress Report, (Sacramento, 31 Jan. 1950) p. 100. Back
  36. Dwight Smith, The Mafia Mystique, (London: Hutchinson, 1975) pp.184-88; FBN corruption detailed in testimony from US Senate hearings before the Permanent Subcommittee on Investigations of the Committee on Government Operations, Federal Drug Enforcement, 94th Congress, 1st Session, 9, 10, 11 Jun. 1975, Part I, pp. 134-144. Back
  37. Lee Mortimer and Jack Lait, Chicago Confidential (New York: Crown, 1950) pp. 176-77. Back
  38. Lee Mortimer and Jack Lait, USA Confidential (New York: Crown, 1952) p. 15. Back
  39. Lee Mortimer and Jack Lait, Washington Confidential (New York: Crown, 1951) p. 178. Back
  40. Ibid., p. 107. Back
  41. Lee Mortimer and Jack Lait, USA Confidential p.29. Back
  42. Lee Mortimer and Jack Lait, Chicago Confidential, p. 45. Back
  43. Ed Reid, Mafia (New York: Random House, 1952) p. 1. Back
  44. Frederick Sondern, Brotherhood of Evil: The Mafia (London: Panther, 1959) p. 11. Back
  45. Lee Mortimer and Jack Lait, USA Confidential, p. 9. Back
  46. Smith, pp. 264-274. Spillane quoted, p. 264. Back
  47. Publicity information on films is available from the library of the British Film Institute. Back
  48. Daniel Bell, The End of Ideology (New York: The Free Press, 1962) pp. 138-50. Back
  49. Ibid., p. 129. Back
  50. For the current federal perspective see: President’s Commission on Organized Crime (Kaufman Commission), Organized Crime: Federal Law Enforcement Perspective, Record of Hearing 1, 29 Nov. 1983, (Washington DC: Government Printing Office, 1983). Back
  51. Smith. pp. 302-10. Back
  52. Quoted in New York Times, 16 Oct. 1970 p. 1. Back
  53. For documentation of these abuses see House of Representatives, Committee on the Judiciary, Subcommittee on Immigration, Citizenship and International Law, Hearings on H. J. Res, 46 H. R. 1277 and Related Bills: Federal G. J., (Washington DC, Government Printing Office, 1976) pp. 344-56, 498-513. Back
  54. Steven Shapiro, “Nailing Sanctuary Givers,” Los Angeles Daily Journal, 12 Mar. 1985, p. 4. Back
  55. Levy quoted in House of Representatives, p. 444. Back
  56. New York Times, 24 Nov. 1986. Back
  57. Quoted in New York Times, 15 Oct. 1982, p. 1. Back
  58. For a more detailed evaluation of Reagan’s war on drugs see Woodiwiss, Crime, Crusades and Corruption, pp. 197-226. Back
  59. Kaufman Commission, p. 140. Back
  60. Quoted in New York Times, 5 Mar. 1986, p. 17. Back
  61. For an account of the spread of urine testing see Abbie Hoffman, Steal This Urine Test – Fighting Drug Hysteria in America, (New York: Penguin, 1987). Back
  62. Narcotics Control Digest, 7 Dec. 1988, p.2. Back
  63. Ibid., 20 Feb. 1985, pp. 4-5. Back
  64. New York Times, 20 Jan. 1982, p. 22; Narcotics Control Digest, 16 Oct. 1985, p.5. Back
  65. Guardian, 16 May 1989, p. 11. Back
  66. New York Times 15 April 1984, p. 1. Back
  67. President’s Commission on Law Enforcement and the Administration of Justice, The Challenge of Crime in a Free Society, (Washington DC: Government Printing Office, 1967) p. 446. Back
  68. Los Angeles Times, 24 May 1982, p. 7. Back
  69. Narcotics Control Digest, May 1984, p. 12. Back
  70. For an analysis of “the practice of law as a confidence game” see Abraham S. Blumberg, Criminal Justice (Chicago: Quadrangle Books, 1967) pp. 110-15. Back
  71. For organized crime and toxic waste see Alan Block and Frank R. Scarpitti, Poisoning for Profit: The Mafia and Toxic Waste in America (New York: William Morrow, 1985). Back
  72. Quoted in Penny Lernoux “The Miami Connection,” Nation, 18 Feb. 1984, p. 198. Back
  73. Kirkpatrick Sale, Power Shift: The Rise of the Southern Rim and Its Challenge to the Eastern Establishment (New York: Vintage Books, 1976) pp. 80-88. Back
  74. Robert W. Greene, The Sting Man – Inside Abscam (New York: E. P. Dutton, 1981) p. 128. Back
  75. For post-war Italy see Norman Lewis, The Honoured Society (Harmondsworth: Penguin, 1967). For the anti-Castro conspiracy see Arthur M. Schlesinger, Jr, Robert Kennedy and his Times (London: Futura, 1978) pp. 519-21. Back
  76. Observer, 2 Jul. 1989, p. 21. Back
  77. Hank Messick, Of Grass and Snow (Englewood Cliffs, N.J.: Prentice Hall, 1979); Alfred W. McCoy, The Politics of Heroin in Southeast Asia (New York: Harper & Row, 1972); Jonathan Kwitny, “Money, Drugs and the Contras,” Nation, 29 Aug. 1987, pp. 162-66. See also Drugs, Law Enforcement and Foreign Policy – A Report prepared by the Subcommittee on Terrorism, Narcotics and International Operations of the Committee on Foreign Relations, US Senate (Washington DC: Government Printing Office, 1989) for documentation of some of the inconsistencies between US foreign policy and the war on drugs. Back
  78. See Drugs and Drug Abuse Education, Aug. 1985 for the libertarian right’s perspective on drugs. Back
  79. E. L. Engelsman, “Dutch Policy on the Management of Drug-Related Problems,” British Journal of Addiction, 84 (1989), pp. 211-218. The best source of information about international drug control and drug trafficking is the library of the Institute for the Study of Drug Dependence, London. Back
  80. Drugs and Drug Abuse Education, May/Jun. 1987, pp. 46-47; Guardian, 1 Dec. 1989, p. 36. Back


Richard Crockatt, The United States and the Cold War 1941-53

BAAS Pamphlet No. 18 (First Published 1989)

ISBN: 0 946488 08 8
  1. Perspectives on the Cold War
    i. Historians and the Cold War
    ii. The American and Soviet Foreign Policy Traditions
  2. Endings and Beginnings 1941-1946
    i. The Grand Alliance
    ii. The Breakdown of the Alliance
    iii. 1946: The Turning Point
  3. Containment and the Division of Europe
    i. The Truman Doctrine
    ii. The Marshall Plan, Germany, and the Division of Europe
    iii. NATO, NSC-68 and the Militarization of Containment
  4. Cold War in the Far East
    i. McCarthyism and the Far Eastern Turn
    ii. China, Japan and the Ferment in Asia
    iii. The Korean War
  5. Conclusion
  6. Notes
  7. Guide to Further Reading
British Association for American Studies All rights reserved. No part of this pamphlet may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without permission in writing from the publisher, except by a reviewer who may quote brief passages in a review. The publication of a pamphlet by the British Association for American Studies does not necessarily imply the Association’s official approbation of the opinions expressed therein.

1. Perspectives on the Cold War

Historians and the Cold War

After a generation and more of intensive research, historians are no nearer agreement on the causes of the Cold War than they are on other subjects of major importance. To the familiar problem of all historical inquiry—the susceptibility of evidence to multiple interpretations—must be added the decisive fact that the Cold War is a going concern. Total detachment in these circumstances is a sheer impossibility. The passage of time nevertheless can reshape conceptions of even the recent past. The advent of Secretary Gorbachev to the leadership of the Soviet Union, for example, can serve as a triangulation point for a remapping of the postwar years. It allows us to see Soviet foreign policies over the whole period less as an outgrowth of a single species of communism than as a complex interplay between internal and external pressures. That Soviet communism has the potential to change in important ways throws the Stalinist era, for example, into sharper relief. It is now revealed more clearly than ever before as a particular phase in the development of the Soviet state.[1] The same holds true for the United States, though arguably to a lesser degree. Ronald Reagan’s presidency was in many respects a throw-back to the early years of the Cold War. Certainly Americans as a whole have been less inclined to echo recent Soviet views that in the era of ‘glasnost’ the Cold War is over. Nevertheless, for most American historians, if not for officials in the administration, the beginnings of the Cold War are by now in a real sense ‘history’.

Interpretation of the origins of the Cold War hinges on three issues: assessment of the motives behind American foreign policy, the motives of Soviet policy, and some conception of how nations interact. Clearly the lack of primary sources for the study of the Soviet Union, when compared with the reams of American documents, creates a special problem. There are often few means of testing assumptions about Soviet policy against evidence, with the result that conclusions are frequently based on speculation. Memoirs and documents do, however, exist, selective and partial though these are. We also have the record of Soviet actions themselves. Given that this is the evidence with which American policy-makers were confronted, historians must likewise give it close attention. Besides, the massive and growing bulk of American sources has by no means settled the question of the motives of American policy-makers and there is no reason to think that a comparable mass of Soviet sources would make unanimity among historians any more likely. A serious information gap nevertheless exists on the Soviet side and is likely to continue for the foreseeable future. Despite signs of changing attitudes in the Soviet Union, fully informed debate on Soviet policies remains impossible.

American historiography of the Cold War is frequently categorized according to ‘orthodoxy’, ‘revisionism’, and ‘post-revisionism’, appearing in succession roughly in the 1950s, 1960s and 1970s in line with political developments at home and abroad. Consensus among American historians on the nature of the Soviet threat and on America’s firm response to it in the 1940s and 1950s (orthodoxy) gave way to criticism of US policy in the era of Vietnam (revisionism), to be replaced by the less politically charged writings of the detente years (post-revisionism). There is some justice in this scheme. Historians have undoubtedly reflected the prevailing climates of opinion in their work. Neither the climates of opinion nor the historians’ responses to them, however, have been as simple as this scheme would suggest. In the first place, vigorous debate took place within each historiographical phase. There was, for example, no single entity called orthodoxy. Secondly, earlier views were not abandoned as new ones appeared. The historiography of the Cold War has been a continuing conversation in which the number of participants has increased without silencing all the previous voices. What has changed over the years is the relative weight ascribed by historians to the three points mentioned above as the central foci of debate: Soviet intentions, American intentions, and the interaction between them.

The first generation of analysts of the Cold War was heavily influenced by the singular condition of the early postwar years: the emergence of a bipolar divide between two superpowers based on antagonistic social and political principles. Bipolarity was a novel condition in international relations and presented a sharp challenge to those whose assumptions about international affairs were formed in the prewar era when power was distributed among a number of nations. Moreover, the emergence of bipolarity went along with, indeed was a consequence of, the displacement of the old Great Powers of Europe as the focus of the world system. It posed the problem of explaining and justifying the assumption of a new world role by the United States. The so-called ‘orthodox’ position on the Cold War is in actuality a complex of views which range from simple endorsement of government policy to more or less stringent criticisms of it. What links them is broad acceptance of the view that an expanded US world role was an inevitable product of circumstances, chief among these being the collapse of the old centres of power and the actual or threatened entrance of the Soviet Union into the gap, not only in Europe but also in the Near, Middle and Far East where the old powers had previously exerted influence.

Within the framework of orthodoxy one can identify ‘ideologists’ and ‘realists’. The former, such as political scientist Zbigniew Brzezinski (later National Security Adviser under President Carter), saw the Soviet Union as driven by militant ideological expansionism which could be met only by vigorous American counter-measures. Writing in 1960, he saw little prospect of moderation in Soviet goals and hence little chance of peaceful coexistence with the Kremlin.[2] Traditional diplomacy was hardly possible in these circumstances since the Soviet Union did not subscribe to its values and traditions. Realists by contrast not only believed that Soviet actions arose as much from the desire for security as from ideology but that America’s own tradition of moralistic diplomacy was ill-suited to cope with the postwar world. Americans did not understand the factor of power in international relations. With their all-or-nothing approach, wrote John Spanier in a widely used survey of American foreign policy published in 1960, Americans tended ‘either to abstain from the dirty game of power politics or to crusade for its complete elimination’. This posture, furthermore, militated ‘against the use of diplomacy in its classical sense: to compromise interests, to conciliate differences, and to moderate and isolate conflicts’. George Kennan, a prominent American diplomat and historian, declared in 1950 that ‘a good deal of our trouble seems to have stemmed from the extent to which the executive has felt itself beholden to short-term trends of public opinion in the country and from what we might call the erratic and subjective nature of public reaction to foreign-policy questions’.[3]

The realists’ critique of American policy stopped short of total condemnation. Their quarrel was with the style rather than the substance of American diplomacy since they had few doubts that a Soviet threat existed which must be met. It is sometimes claimed that the roots of revisionism are to be found in realist writings of the 1940s and 1950s, particularly those of the prominent journalist Walter Lippmann. Lippmann himself, though on occasions a sharp critic of administration policies, repudiated the suggestion, and with good reason.[4] Revisionists by and large shared neither the realists’ assumptions about power nor their broad endorsement of President Truman’s policies towards the Soviet Union. Revisionists such as William Appleman Williams and Gabriel Kolko were in many respects closer to the orthodox ideologists than to the realists in that they reinstated ideology as the central category of analysis. They simply shifted attention to American rather than Soviet ideology.[5]

The revisionist critique was most powerfully mounted by William Appleman Williams’ Tragedy of American Diplomacy (1st edition 1959). His basic propositions were that the United States had instigated the Cold War and that Soviet policies had been fundamentally defensive and limited in scope. The ideology of American capitalism, expressed as the pursuit of an ‘open door’ or expanding market for American goods, was, he argued, the determining force behind American diplomacy. Fear of a depression haunted American leaders and drove them to seek outlets for surplus production. Not all revisionists followed Williams along this line of interpretation. Some placed emphasis on personalities (particularly of Truman) and on strident anti-communism in their explanations of the ‘get tough’ American policies of the early postwar years.[6] One important link, however, between revisionists of whatever type was the view that the Cold War had resulted essentially from unilateral American actions and that it had therefore been an avoidable tragedy.

The Vietnam War and its domestic repercussions undoubtedly served to bring revisionism close to the centre of historiographical debate about the origins of the Cold War. It explains in part why the focus of revisionist histories was on American rather than Soviet policy. The central issue was conceived to be: how had the United States arrived at the point where it had become the citadel of reaction and the opponent of freedom movements around the world? In this respect revisionism was no less affected by the agenda of national and international politics than was orthodoxy. However, although the agenda has changed since the appearance of the pioneering works of revisionism, revisionism remains a powerful force and has been developed and refined in important ways.[7]

Both the altered agenda and the continuing influence of revisionism are discernible in a third historiographical phase, which can be usefully dated from the publication in 1972 of John Lewis Gaddis’s The United States and the Origins of the Cold War, 1941-1947. This group of historians, generally labelled ‘post-revisionist’, has been markedly diverse in approach and conclusions. This is so not merely because the winding down of the Vietnam War and the emergence of detente in the early 1970s introduced greater diffuseness into the foreign policy debate but because a mass of new sources became available. These included documents made accessible by the passage of the Freedom of Information Act in 1974. The result has been to enlarge the focus of historiographical inquiry and in some respects to blur the outlines of US-Soviet relations supplied by previous interpreters.

Historians such as Gaddis acknowledge the value of the revisionists’ attention to economic factors in American policy-making without, however, accepting the view that these were decisive. Economic factors are seen rather in the context of an array of domestic constraints which limited policy options in crucial ways. The picture which emerges is of an America struggling to reconcile its heritage of isolationism with the pressure to assume a leading role in the growing conflict with the Soviet Union. Unlike the revisionists, who saw American leaders pursuing a consistent and assertive line towards the Soviet Union from the middle of 1945, Gaddis and other post-revisionists are struck by the hesitancy with which America adopted its unprecedentedly active role in peacetime world affairs. Gaddis, however, can be said to have restored an element of the orthodox interpretation in his view that the Soviet Union bore primary responsibility for the breakdown of the wartime alliance.[8] Daniel Yergin, another prominent post-revisionist, by contrast owes more to revisionism than to orthodoxy in his critical analysis of Truman’s abandonment of cooperation with the Soviet Union.[9]

A significant new departure in recent discussions of the Cold War has been the close attention given to British policy. A number of historians, many of them British, have suggested that in the decisive period from mid-1945 to early 1946 Britain’s contribution was to awaken the United States to the reality of the Soviet threat and that the United States had to be coaxed into its role of Western leadership. From this perspective the central theme of the early Cold War was Anglo-Soviet conflict, in which the United States sought initially to play a mediating role. The failure of British efforts in 1944-45 to create an Anglo-Soviet alliance, coupled with Britain’s serious financial problems in the postwar years, forced the United States to assume a role which Britain was no longer capable of fulfilling—that of guarantor of stability in Western Europe, Iran, and Turkey.[10] This interpretation evidently complements Gaddis’s account and it has also qualified the revisionist argument that the United States rode roughshod over Britain’s interests in pursuit of economic and political hegemony. In short, post-revisionist accounts, while addressing some of the arguments put forward by revisionists and in many instances acknowledging revisionism’s contribution, have tended to argue for the primacy of the political over the economic and of multi-causal over mono-causal explanations.

It will be apparent from this brief review that it is no easy matter to keep analysis of Soviet and American intentions and their interaction in balance. Ideological interpretations, whether orthodox or revisionist, tend to operate with a double standard. Orthodox historians attribute American policy to a concern with national security and Soviet policy to limitless ideological goals, while revisionists employ the same scheme in reverse. Both view the Cold War as essentially the consequence of one power acting upon another. Both adopt ‘total’ explanations which make it difficult to account for specific policies which do not fit into the favoured scheme. Orthodox historians interpret signs of pragmatism in Stalin’s policies—such as his unwillingness to support the communist rebels in the Greek Civil War between 1944 and 1947—as merely tactical moves in the service of a larger plan to extend the sphere of Soviet domination. Revisionists iron out inconsistencies in American policy by recourse to the open door thesis. The Marshall Plan of 1947, which sought the economic reconstruction of Western Europe, was opposed by many Republicans in Congress and by business interests as involving a wasteful expenditure of American resources. It was labelled contemptuously, in reference to Franklin Roosevelt’s despised ‘New Deal’ policies of the 1930s, as a ‘New Deal for Europe’. In the event the Plan was endorsed by Congress only after the communist coup in Czechoslovakia (1948) provoked a sense of crisis and a consensus had emerged on the means of dealing with it. Even then, the administration’s original requests were pared down by a cost-conscious Congress. In this instance and many others it is difficult to discern any simple fit between ideology and policy outcomes. Revisionist accounts also ignore the extent to which pressure for US involvement in the economic affairs of Western Europe came from the Europeans themselves.

Revisionism, however, was never merely an inversion of orthodoxy. Its emphasis on economic factors was an important corrective to the exclusive concern in orthodox writings with politics and diplomacy. This weakness in orthodox histories applies not only to the ideologists but also to the realists, for all their seductive hard-headedness. Revisionists have rightly indicated the degree to which the outward thrust of American policy was a consequence of the inherent dynamism of the American economy, even if they have pursued this argument to excessive lengths. Post-revisionists such as Gaddis and Yergin have gone some way towards meeting the problems raised in both orthodox and revisionist histories by directing close attention to the processes of American policy-making, which, it now seems clear, were complex and often contradictory. Their emphasis on wartime diplomacy, furthermore, has helped to place American Cold War policies more fully within the context of the global crisis of the Second World War. The problem of assessing Soviet intentions remains. An explanation of American diplomacy does not of itself account for the Cold War, as Gaddis acknowledges.[11] In the absence of definitive answers to the question of Soviet intentions and to the other questions raised in these pages, one can at least strive to accord due weight to the factors in US-Soviet conflict as they were perceived and acted upon by both sides.

The American and Soviet Foreign Policy Traditions

The American foreign policy tradition is the product of a revolutionary heritage and geographical location. The Revolution supplied America with a set of principles; physical isolation from Europe and the possession of vast territory available for expansion provided the means of preserving and subsequently extending the range of its revolutionary principles. America’s famed ‘isolationist’ tradition during the nineteenth century was in fact contingent upon physical distance from Europe and favourable economic conditions. Isolationism scarcely applied to Latin America. The Monroe Doctrine of 1823 embodied two claims: denial of the right of European Powers to attempt further colonization there, and an assertion of the incompatibility of the political values of the Old World and the New. America was different, yet representative of universal values. The Declaration of Independence, wrote President Lincoln, promised ‘liberty, not alone to the people of this country, but hope for the world for all future time’.[12]

As the United States entered the twentieth century and overtook the major European nations in economic power, the sense of difference was retained but its application in new circumstances elevated a hemispheric doctrine to a global scale. Breaking with the isolationist tradition in 1917 to enter the First World War, President Wilson announced that America was going to war ‘to make the world safe for democracy’. What has been called the ‘diplomacy of principle’[13] was not merely a matter of couching policy in idealistic terms. It was the product of a culture which had experienced revolution and national growth as a ‘natural’ process. America had achieved revolution in the late eighteenth century, as Louis Hartz has observed, without a major class upheaval.[14] It had expanded across the continent without damaging conflicts with major powers. It had achieved economic growth, so it was believed, through the natural operations of the market. What more logical than to assume that liberal democracy, laissez-faire economics, and a diplomacy based on the application of self-evident principles should suffice for all nations? The difficulty experienced by the United States in accepting the consequences of twentieth-century revolutions was thus rooted in its cultural tradition. To compromise with those revolutions was to betray America’s own.

The germ of the Cold War lies in the coincidence of Woodrow Wilson’s globalism and the Bolshevik Revolution of October 1917. Ironically both Wilson and Lenin consciously dissociated themselves from the diplomacy of the European powers. Secret diplomacy, annexations, trade discrimination, exclusive alliances (the United States entered the war as an ‘associated’, not an allied power) and balance of power politics were denounced as generators of war. Both leaders, as Geoffrey Barraclough has pointed out, adopted a new democratic diplomacy, appealing to the people of other nations over the heads of politicians. Both were competing ‘for the suffrage of mankind’.[15] They were competing, of course, on behalf of different ideologies, and relations quickly deteriorated as Lenin took Russia out of the war against Germany and the United States joined with other Western powers in a policy of intervention in Russia. That the aim, at least nominally, was to protect Western interests in Russia rather than unseat the Bolsheviks did not erase the Soviets’ conviction that the United States was out to strangle the Revolution in its cradle.

These events illustrate clearly an important difference between the revolutionary experiences of the United States and the Soviet Union. From the outset the Soviet Union was under siege, forced to establish itself without the luxury of time and space to develop. Soviet policy, like that of the United States, thus arose not only from the dictates of ideology, but from the specific conditions of national growth. A preoccupation with security was inseparable from preservation of the Revolution and the furtherance of its goals. It is this which makes it difficult to accept the facile distinction often made between security and ideological expansion as motives behind Soviet foreign policy. To be sure, the Soviet Union often found it necessary to compromise. The Leninist goal of world revolution gave way to Stalin’s ‘socialism in one country’ as the hoped-for communist revolutions in Europe failed to materialize in the wake of the Bolshevik success. In the greatest compromise of all the Soviet Union signed a non-aggression pact with Nazi Germany in 1939 in the hope that German aggression would be directed westwards rather than eastwards. Both of these instances, however, are compatible with the view that ideology and national interest were inextricably mixed in Soviet policy. The Soviet Union did not abandon the goal of promoting communism wherever possible during the period of ‘socialism in one country’. During the period of the Nazi-Soviet pact (1939-41) the Soviet Union took the opportunity to absorb and sovietize Eastern Poland and the Baltic States, and to attack Finland. These examples show that the Soviet Union was capable of flexibility in the short term in the service of long-term goals, arguably to a greater degree than the United States. But they also suggest that Russian nationalism and Soviet communism were two sides of the same coin.

Two revolutions, two states with explicit principles at stake, two large nations with expansive tendencies: these do not of themselves make for Cold War. While its beginnings can be discerned in the years 1917-1920, it took another world war and an alliance between the United States and the Soviet Union to ignite the Cold War proper. In the interwar years the decisive factors in international relations were the efforts of Germany and Japan to reorder the balance of power in their favour within their respective spheres. Until the late 1930s the United States played a peripheral role in these dramas, while the Soviet Union, more directly threatened by German resurgence, attempted a holding action, first by the promotion of collective security via the League of Nations and second, when that failed, by entering into a short-lived agreement with Germany in the form of the Nazi-Soviet Pact of August 1939. Throughout the 1930s relations between the United States and the Soviet Union themselves were cool, despite American diplomatic recognition of the Soviet Union on Franklin Roosevelt’s assumption of the presidency. Relations were also distant. There were neither major areas of agreement nor pressing conflicts of interest.

The war changed that picture out of all recognition. Agreement within the ‘Grand Alliance’ on the goal of defeating Germany (the Soviet Union did not declare war on Japan until August 1945) and the achievement of that goal in May 1945 left Europe at the disposal of the victorious powers but it also placed a high premium on the continuance of unity if a postwar settlement was to succeed. As we know, Great Power unity broke down. The Cold War was thus in important respects an outgrowth of the Second World War. The latent ideological antagonism stemming from 1917 was, so to speak, energized by the demands placed upon the United States and the Soviet Union for cooperation during the war and its aftermath.

2. Endings and Beginnings 1941-46

The Grand Alliance

How cooperative was the Grand Alliance? The Roosevelt administration certainly made efforts to present ‘our gallant ally’ in a favourable light to American public opinion. Roosevelt made long trips to meet Stalin at Teheran in 1943 and Yalta in 1945 at considerable risk to his health and security. Even before America entered the war, lend-lease aid was made available to the Soviet Union. Beyond that, Roosevelt indicated repeatedly his desire for cooperation to the fullest extent possible in both the political and military spheres. The decision to pursue a Europe-first strategy rather than concentrate the American effort on defeating her attacker Japan was in part prompted by Roosevelt’s awareness that the Soviet Union was bearing the major military and civilian burden of the war. Stalin, while making clear his paramount interest in the establishment of friendly governments on Russia’s western border following the war, gave at least nominal assent to the principles of the Atlantic Charter (1941) and the Declaration on Liberated Europe (1945), among whose provisions was self-determination for all peoples. The dismantling in 1943 of the Comintern (the international arm of the Soviet Communist Party) was presented by the Soviet Union and perceived in the West as a gesture of goodwill, implying a suspension of the ideological goal of spreading communism.[16] Finally, Stalin indicated that he would go along with Roosevelt’s cherished idea of a United Nations Organization to replace the old League of Nations.

Strained relations, however, were evident from an early stage in the war, and given the fact that the Alliance was largely a marriage of convenience this was hardly surprising. Two issues loomed large. The first was Stalin’s request for acknowledgement of Soviet interests in Eastern Europe (essentially a ratification of the gains Stalin had acquired in the Nazi-Soviet Pact of 1939, above all the Baltic states and eastern Poland). The Western Allies resisted making firm commitments on this point at the outset of the Alliance and Stalin reluctantly agreed to postpone agreement upon receipt of an assurance that the Western Allies would mount a ‘second front’ in Western Europe with all speed. This became a second point of friction since the major assault in North-West Europe which Stalin desired was repeatedly postponed, preeminently at Churchill’s insistence, until June 1944. (Indeed differences between Roosevelt and Churchill on strategy formed a powerful cross-current in the wartime Alliance, with Churchill less inclined than Roosevelt to yield to Soviet interests and doggedly insistent on protecting British interests in the Near East—hence his preference for a Mediterranean rather than a Northern European strategy.) The reason for delay offered by Churchill and Roosevelt was that their military preparations were not sufficiently advanced to ensure success. The Western Allies settled instead on a North African campaign in 1942 followed in 1943 by landings in Sicily and the Italian mainland. Stalin’s suspicion of bad faith on the part of the Western Allies proved to be a potent source of mistrust, full of praise though he was for the Allied landings in France when they finally came.

A pattern was set in which military decisions would come to have major political repercussions. Roosevelt’s policy was not, as has sometimes been claimed, to ignore political issues in favour of military priorities. While it is true that he preferred to leave detailed discussion of territorial questions until the German surrender was within sight, he had clear political motives for doing so. Chief among these was his conviction that the best foundation for postwar reconstruction lay in the continuance of big power unity and that depended, in his view, on the swift prosecution of the war to a successful military conclusion. To press forward on territorial issues too early, above all on Eastern Europe, would be to risk exposing serious differences between the Western Allies and Stalin. In any case Roosevelt had little quarrel with the general proposition that the Soviet Union was justified in seeking friendly governments on its western border. He would doubtless, though, have been less sanguine about the chances of avoiding a wholesale Soviet domination of Eastern Europe if he had been able to overhear Stalin’s remark to Milovan Djilas in early 1945 to the effect that ‘this war is not as in the past: whoever occupies a territory also imposes his own system as far as his army can reach’.[17]

By the time Roosevelt, Churchill, and Stalin gathered at Yalta in the Soviet Crimea in February 1945 to discuss the major questions arising from the imminent prospect of victory over Germany, the broad framework of the territorial arrangements had been decided by the disposition of forces. The American decision not to engage in a race with the Russians for Berlin and to allow Soviet troops to take Prague, decisions made partly on military and partly on political grounds, reinforced the emerging pattern of divided responsibility between zones of occupation. The Western Allies themselves had already set a precedent in Italy where the Allied Control Council, on which a Soviet representative sat, was accorded only advisory status, the real power lying with the Western Allied commander of the occupying force. The more comprehensive European Advisory Commission, charged with overseeing joint military and political control of all liberated areas, was similarly limited in its capacity to enforce a unified approach in the treatment of liberated areas.

The Breakdown of the Alliance

The subsequent breakdown in the Grand Alliance can be traced through the year following V-E day (May 8, 1945) in the attempts to deal with an array of major problems: the establishment of a new political order via the United Nations Organization, the reconstruction of the world economy, the settlement of borders and the establishment of new governments in Poland and Eastern Europe, the treatment of Germany, and the issue of atomic energy. By the middle of 1946 the mould was set in which the best that could be achieved on each of these issues was fragile agreement; the worst was rank failure to find common ground.

On the face of it the United Nations represents the least unsuccessful joint venture of the postwar years. Having repudiated Woodrow Wilson’s cherished League of Nations in 1919, the United States was now determined to play a leading role in the new international organization. Bipartisan support in Congress for participation and widespread public sentiment in favour of internationalism were decisive in making up Roosevelt’s mind. Not that this meant a reversion to the Wilsonian idealism of 1919. Convinced that the League of Nations had been based on the unrealistic assumption that all nations deserved equal status in decision-making, Roosevelt favoured Big Power predominance. This priority was reflected in the Dumbarton Oaks Plan of 1944 in which preeminent power was given to the Security Council with its five permanent members—the United States, the Soviet Union, Great Britain, France, and China—while the ‘universalist’ principle of the old League of Nations was retained in the form of the Assembly. Stalin, though less inclined to trust Soviet interests to an international body, was prepared to countenance a scheme which acknowledged the reality that some powers were more equal than others. The organization was thus conceived essentially as a continuation of the wartime alliance.

Early disputes arose over procedural issues and the Soviet Union’s claim to sixteen votes in the Assembly, based on its sixteen ‘autonomous republics’ and designed to match the separate votes accorded to the nations within the British Empire. Agreement on these points, however, scarcely removed crippling weaknesses from the organization. The veto power available to the five permanent members of the Security Council, desired as much by the United States as by the Soviet Union, meant that each could forestall decisions perceived to be against its interests. In so far as the effectiveness of the organization depended upon Big Power unity, disagreement on major issues would undermine its capacity to act. Such proved to be the case. The UN became a mirror of the growing disunity among the major powers. Its singular ‘success’ in the early postwar years, the decision to resist the North Korean invasion of South Korea in June 1950, was possible because at the time the Soviet Union was boycotting the organization in protest at the refusal of the UN to admit Mao Tse-Tung’s government as the legitimate government of China. (The nationalist government of Chiang Kai-shek had been driven out to the island of Formosa in 1949.)

Cooperation to promote a new economic order proved even more elusive. The Roosevelt administration was convinced that the political instability of the interwar years had been rooted in economic nationalism and had been a major cause of the Second World War. Roosevelt’s closest advisers believed therefore, in the words of Treasury official Harry Dexter White, that ‘the absence of a high degree of economic collaboration among the leading nations, will, during the coming decade, inevitably result in economic warfare that will be but the prelude and instigator of military warfare on an even vaster scale’.[18] The outcome was the formulation at Bretton Woods in 1944 of a plan to provide for international currency stabilization and the distribution of loans to needy countries with a view to promoting international trade. The key principle as far as the United States was concerned was openness, its global applicability to capitalist and communist states alike. Indeed the United States anticipated more resistance from Britain, with its jealously guarded system of imperial preference, than from the Soviet Union. Economic self-interest undoubtedly played a large role in the United States’s promotion of the system. As by far the largest subscriber of funds to the proposed International Monetary Fund and the World Bank, the United States was in a position to determine the shape and the operation of the new institutions. It stood to gain from an increased demand for American exports and from the shifting of financial and commercial power to the United States. It was also thought that the new system would obviate the need for large-scale US loans to other countries for postwar reconstruction.

In practice, while the United States did become the world’s economic and financial powerhouse, this was less because of the establishment of the Bretton Woods machinery than because of the simple fact that the United States emerged from the war as the dominant and expanding economic power. The Bretton Woods system of itself was not capable of creating economic equilibrium where no equilibrium existed, and extensive loans to Britain and later to Western Europe as a whole via the Marshall Plan proved necessary. The Soviet Union meanwhile, having endorsed the Bretton Woods agreements in 1944, failed to ratify the accords by the deadline of December 31, 1945. This decision seems to have arisen less from objections to the plan itself than from growing rifts with the United States on other issues during 1945. The American refusal to grant a postwar loan without stringent conditions, involving access for American goods and investment to the Soviet Union, exacerbated relations which were already marked by serious disputes over the formation of governments in Eastern Europe. In this sense, as John Gaddis has remarked, the Soviets’ withdrawal from participation in the Bretton Woods system ‘was an effect rather than a cause of the cold war’.[19] Nevertheless, the result was to encourage the development of separate economic blocs to match the political divisions which were fast appearing.

Confrontation over the establishment of new governments in Poland and other East European countries, coupled with the failure to agree on a German settlement, lay at the heart of the emerging Cold War. Agreements of a sort, however, were reached on Eastern Europe and to that extent tension arose as much from problems of interpretation and implementation of agreed policies as over the character of the policies themselves. On Poland provision was made at Yalta for the establishment of a new government based on the existing Soviet-backed ‘Lublin’ government but with the addition of ‘democratic leaders from Poland and from Poles abroad’. Free elections were to follow as soon as the military situation would permit. Even before FDR’s death on April 12, 1945 wrangles had developed over observance of the agreement, as the Soviet Union made it clear that it was unprepared either to accord non-Communists any real role or to conduct the kind of elections which would satisfy the West. Roosevelt’s successor, Harry Truman, introduced a more abrasive style in relations with the Soviet Union, but in substance he sought to continue Roosevelt’s preference for dealing independently with Stalin rather than tying America to British policy and arousing Stalin’s suspicions of a Western Allied bloc against him. Truman dispatched Harry Hopkins, a former Roosevelt adviser, to Moscow in May 1945 in an effort to persuade Stalin to make good the Yalta pledges on Poland. The meeting produced minor Soviet concessions, enough to lead the United States to recognize the new Polish government, but in essence little had changed. The truth was that Truman was left with the alternative of accepting an unsatisfactory agreement or provoking an open breach with Stalin.

A similar pattern followed in Rumania and Bulgaria. In a series of Foreign Ministers’ conferences held in late 1945, during which Stalin managed to exploit disagreements between the United States and Britain, a framework was established for settlements on Bulgaria and Rumania. These provided nominal self-government but little in the way of genuine democracy. Only in Czechoslovakia and Hungary did non-communists still retain real power but, as events were to prove, the United States possessed as little leverage in those countries as it did in Bulgaria and Rumania. (This is discussed in Chapter III.) An important result of the tangled and often acrimonious negotiations over Eastern Europe was to draw the United States and Britain closer together. British objections that American Secretary of State James Byrnes was willing to pay too high a price for agreement with Stalin, coupled with criticism within the United States of the agreements on Bulgaria and Rumania, provoked a reassessment of American policy towards the Soviet Union. By early 1946 a new and tougher American line was developing, characterized by a growing partnership between the United States and Britain. Whether more coordinated policies at an earlier date would have made a material difference to the situation in Eastern Europe, however, is doubtful. The extent to which the time for bargaining was past is well brought out in a frank remark by Maxim Litvinov to the American journalist Edgar Snow as early as June 1945: ‘Why did you Americans wait till now to begin opposing us in the Balkans and Eastern Europe? . . . You should have done this three years ago. Now it’s too late and your complaints only arouse suspicion now’.[20]

In Germany, American policy shifted from an initial desire to reduce that nation to virtual political and economic impotence to a recognition that an impoverished and resentful Germany could prove a potent source of instability in Europe. The Soviet Union’s paramount interest—shared by France—in preventing German resurgence and gaining recompense for the destruction Germany had wreaked ran counter to the United States’s growing preference for ‘rehabilitation’ over ‘repression’.[21] Once again the issue of joint control by the Big Powers was the focus of disagreement. By 1944 zonal boundaries of occupation had been agreed, though within a framework of joint supervision of Germany as a single unit. The story of the following two years is of the hardening of these temporary zones of occupation into rigid boundaries.

Of the many disputed questions reparations posed the largest difficulty. Given its enormous human and material sacrifices in the war against Germany—it is estimated that the Soviet Union suffered 20 million casualties during the war—the Soviet Union had most at stake and at Yalta proposed the substantial sum of $20 billion as a basis for the joint Allied claim on Germany, half to go to the Soviet Union. Fearful of German economic collapse (and of the drain on the American taxpayer once the bill for German recovery was presented), the United States proposed instead at the Potsdam meeting in July 1945 that the occupying powers should each extract reparations from its own zone. The consequence of this decision was a de facto division of responsibility for the transition period. As with the settlements in other areas, temporary and transitional expedients became the basis of permanent arrangements. The Allied Control Council for Germany, riven by disputes over denazification policies, access by the occupying powers to each other’s zones, and the interzonal transfer of reparations, was hardly in a position to enforce joint administration of Germany. Germany represented the European situation in microcosm. Though the United States and the Soviet Union both continued to advocate a united Germany—a policy which the Soviet Union persisted in longer than the United States—neither was willing to risk its potential cost: the absorption of Germany into the other’s camp.

The atomic bomb, exploded over Hiroshima and Nagasaki by the Americans in August 1945, undoubtedly introduced a new dimension into conceptions of warfare, but its initial impact on US-Soviet relations was surprisingly limited. Revisionist historians have argued with considerable justice that the Americans hoped and believed that the demonstration of the bomb’s power in bringing about the defeat of Japan would incline the Soviet Union to be more compliant in negotiations over political and territorial issues.[22] News of the first successful test of the A-bomb, which reached Truman while he was at Potsdam in late July 1945, reportedly had the effect of stiffening Truman’s resolve not to yield to the Soviets on the composition of new governments in Bulgaria, Rumania, and Hungary. But to whatever degree ‘atomic diplomacy’ was envisaged as a means of cowing the Soviets it has to be said that it failed; the Soviet position on Eastern Europe, as John Gaddis has pointed out, ‘became increasingly rigid after August 1945’.[23] The chief effect of the American test was probably to encourage the Soviet Union to step up its own nuclear program, on which it had been engaged in any case since before the war. The years of the American atomic monopoly (1945-49) were precisely those in which atomic diplomacy might have been expected to succeed. In the event the atomic bomb proved to be a very blunt diplomatic instrument and the will to contemplate its use as a weapon of war was probably decreased by its employment against Japan.

The attempt to establish international control of nuclear power illustrates more tangibly the capacity of the nuclear issue to focus disagreements. The debate in the UN Atomic Energy Commission during the summer of 1946 on the ‘Baruch Plan’ was a harbinger of the nuclear arms race no less than a manifestation of the incompatibility of US and Soviet goals. Baruch’s plan proposed the establishment of an international authority to oversee all phases of the development and use of atomic energy. Provision for inspection and control was integral to the plan, as was the American insistence that the United States could contemplate ending manufacture of its own weapons and disposing of its stockpile only when the machinery was in place. In response the Soviet Union called for the destruction of existing stockpiles prior to the establishment of the machinery of inspection and control. Deadlock ensued, ensuring that separate and competitive development of nuclear weapons would be the outcome.

1946: The Turning Point

In the political climate of 1946 deadlock was perhaps inevitable. Such willingness as there had been during 1945 on the part of the United States and the Soviet Union to seek common ground was fast disappearing. The crisis in Iran during March 1946 had already exposed a serious rift. Soviet delay in withdrawing troops from Iran (deployed there during the war along with British forces in order to prevent the Iranian oilfields falling into Axis hands) provoked an angry American response. Not content that the problem should be resolved by bilateral negotiations between Iran and the Soviet Union, which appeared close to success by the end of March, the United States insisted on placing the issue before the UN Security Council. In the event the Soviet Union backed down amid a flurry of resentment at the American decision to throw a public spotlight on the issue.

The decisive change during 1946 was the public acknowledgement by both sides that the Grand Alliance was moribund. Within the United States it took the form of mounting criticism of the Truman administration’s ‘soft line’ on Eastern Europe, focussing on Secretary of State James Byrnes who had negotiated the agreements on Bulgaria and Rumania. Byrnes himself, shifting with the times, signalled a new hard line in early February 1946 in which he acknowledged deep differences between the United States and the Soviet Union. This was followed in early March by Churchill’s ‘Iron Curtain’ speech at Fulton, Missouri, a statement which aroused intense anger within American public opinion for its belligerent tone but which Truman and his closest advisers did little to disown. Just as telling, though still secret, was the so-called ‘long telegram’ from George Kennan, senior American diplomat in Moscow. This document painted a dark picture of a Soviet Union ‘fanatically committed to the belief that with the US there can be no permanent modus vivendi, that it is desirable and necessary that the internal harmony of our society be disrupted, our traditional way of life destroyed, the international authority of our state be broken if Soviet power is to be secure’.[24] With these words Kennan uttered the as yet unspoken thoughts of the Truman administration, and the telegram brought him swiftly, though briefly as it turned out, to the centre of the policy-making process.

Kennan’s assessment had been prompted by a request from the State Department for an interpretation of Soviet policy in the light of a speech by Stalin at the beginning of February. Though given as an electoral address for a domestic audience, its strident denunciation of capitalism, its picture of the war as having been the product of capitalist imperial designs, and its claim that the war proved ‘that our Soviet social system has won’, were hardly calculated to promote cooperation and understanding with the West.[25] It was perceived in the United States as a declaration of Cold War.

The degree to which the range of permissible views within the Truman administration was narrowing during 1946 can be gauged by the reaction to an address by Secretary of Commerce Henry Wallace given in September. ‘The real peace treaty we now need is between the US and Russia’, he declared, going on to suggest that, while Americans may not have liked what was going on in Eastern Europe, ‘we should recognize that we have no more business in the political affairs of Eastern Europe than Russia has in the political affairs of Latin America, Western Europe, and the US’.[26] Such evenhandedness was unacceptable and Wallace was forced to resign. His error was to endorse a Soviet ‘sphere of influence’ in Eastern Europe. In practice the United States was left with little alternative, short of war, to accepting Soviet predominance in Eastern Europe. The situation was unpalatable, however, for a number of reasons: it left countries recently liberated from Nazi rule subject to a new form of domination and it risked alienating the large section of the American population whose ethnic roots lay in Eastern Europe. Nor were Truman and his advisers convinced that Soviet ambitions would rest with Eastern Europe. But Wallace committed the further error of equating the Soviet position in Eastern Europe with the United States’ in Latin America. While, as Eduard Mark has pointed out, the Truman administration had been willing to countenance an ‘open’ Soviet sphere in Eastern Europe, comparable to America’s benign interest in Latin America, events had dashed such hopes. The reality in Eastern Europe was Soviet domination, and Wallace’s remarks seemed to Truman willfully ignorant of that fact.[27]

Wallace’s enforced resignation in September 1946 registered a marked shift of mood in America. Within a month the mid-term Congressional elections had produced Republican majorities in both Houses, overthrowing a Democratic dominance of sixteen years. Into the legislature came a new cohort of Congressmen, many of them war veterans (among them Richard M. Nixon and John F. Kennedy), who had learned the lesson of ‘appeasement’ from the war and readily transferred their hatred of Hitler’s totalitarianism to Stalin’s version of it. The death in January 1946 of Harry Hopkins, the embodiment of Rooseveltian aspirations for maintenance of the Grand Alliance, symbolized the passing of the wartime ethos. The exposure of a Soviet atomic spy-ring in Canada during 1946 sensitized Americans to the threat of communist subversion, and within a few months of the elections Truman came under great public pressure to introduce a ‘loyalty programme’ for all Federal employees. These developments laid fertile ground for the subsequent activities of the House Un-American Activities Committee and of Senator Joe McCarthy, ensuring that the Cold War at home would preoccupy American public opinion as much as, if not more than, the conflict abroad.

Less often noticed by historians of American policy is the fact that a similar process was taking place in the Soviet Union. The downplaying of communist ideology during the war and the elevation of Russia’s national (especially military) traditions was speedily reversed after the defeat of Germany. Fearing the effects of the Soviet troops’ exposure to non-communist Europe as they drove westward, Stalin quickly reinstated the Party and communist ideology as the reigning force in Soviet life. National heroes such as Marshal Zhukov, the architect of Russia’s military victory, disappeared from public life within a few months of the end of the war. By 1948 Pravda was celebrating the third anniversary of the taking of Berlin without mentioning Zhukov.[28]

These developments raise questions about the relationship between domestic affairs and the formulation of foreign policy. Historians have had little trouble in showing that Truman strove hard to create a consensus for a hard line towards the Soviet Union, that he engaged in a vigorous campaign to discredit critics such as Wallace, and that his preference for unambiguous solutions to complex problems often served to raise the temperature of public debate.[29] As Lyndon Johnson found later in the Vietnam war, though with very different results, Truman had a war on two fronts—one with the Soviet Union and another with American public opinion which, as he saw it, needed to be persuaded of the reality of the Soviet threat. One can agree that Truman engaged in the manufacture of consent, thereby helping to promote a Cold War mentality, without, however, endorsing the view that the Soviet threat was entirely a figment of his imagination or that he was insincere in his interpretation of Soviet actions. The truth is that Truman was subject to multiple pressures: Soviet actions which he perceived to be aggressive, requests from Britain to make a commitment to the stability of Western Europe in the face of Soviet intransigence, Communist Party strength in France and Italy, and the need to justify to the American Congress and the American public an unprecedentedly active role for the United States in international affairs.

On the Soviet side too anxiety may have been triggered by American actions during the Iran crisis, by the employment of economic and atomic diplomacy, and by Churchill’s Iron Curtain speech, rekindling the fear of ‘encirclement’ aroused during the interwar period. The result was a spiral of mutual mistrust between the powers. Stalin also had a war on the home front, and he underestimated the degree to which his method of fighting it would be interpreted in the light of disagreements with the United States in the field of diplomacy. As the American Ambassador in Moscow, Walter Bedell Smith, wrote in 1949: ‘The time has passed when foreign affairs and domestic affairs could be regarded as separate and distinct’.[30] Smith meant by this that the United States must gear all its domestic resources to a long struggle with the Soviet Union, but the statement has wider applications than Smith had in mind. The erosion of the boundary between foreign and domestic affairs substantially raised the stakes of diplomacy, making the Cold War a conflict of cultures no less than of nation states. ‘[The Cold War] has really become a matter of the defence of western civilization’, wrote a British Foreign Office official in 1948.[31] In the United States the publication of an abridgement of Arnold Toynbee’s A Study of History (1946), a sweeping survey of the decline of great civilizations throughout world history, jangled raw American nerves. Toynbee’s message that only a spiritual regeneration in the West could prevent it going the way of the Roman Empire was eagerly absorbed by an American intelligentsia seeking a counter-weight to Marxism.[32] In the end the striking feature of US-Soviet relations in the immediate postwar years is the acute sense of vulnerability on both sides and the inability of each to comprehend the fears no less than the interests of the other.

3. Containment and the Division of Europe, 1947-1950

Roosevelt had returned from Yalta in February 1945 declaring in his message to Congress that the agreements ‘ought to spell the end of unilateral action, the exclusive alliances, the spheres of influence, the balances of power, and all the other expedients that have been tried for centuries—and have always failed’.[33] Roosevelt’s hopes for a clean slate, as we have seen, were quickly undermined in the events of 1945-46, but some fluidity remained in US-Soviet relations. They were still talking to each other, trying to give tangible form to their aspirations for cooperation. In the event the outcome of these efforts was precisely those despised talismans of the bad old politics—spheres of influence, exclusive alliances and so on. By 1950 they had become firmly institutionalized and management of cooperation had given way to management of conflict.

‘Containment’ supplied the intellectual rationale for the Truman administration’s new orientation and was classically expounded by George Kennan in his article ‘The Sources of Soviet Conduct’ (July 1947). He did not view the Soviet Union as bent upon immediate invasion of Western Europe; nor did he believe that a fanatical devotion to world-wide expansion of communism was the driving force behind Soviet policy. While Soviet ideology assumed the inevitable downfall of capitalism, no timetable was laid down by the Kremlin. The Soviets were prepared for the long haul. Given the doctrine of the infallibility of the Kremlin and the iron discipline of the Party, the Soviet leadership was ‘at liberty to put forward for tactical purposes any particular thesis which it finds useful to the cause at any particular moment and require the faithful and unquestioning acceptance of that thesis by the members of the movement as a whole’. Caution and persistence characterized Soviet policy and America must respond with ‘policies no less steady in their purpose, and no less variegated and resourceful in their application, than those of the Soviet Union itself’. In these circumstances the United States must seek to contain Soviet power by the ‘adroit and vigilant application of counterforce at a series of constantly shifting geographical and political points’.[34]

Kennan’s prescriptions for American policy appear to be unmistakably global in scope and to carry strong military implications. In fact Kennan, the supposed architect of containment, dissociated himself from many aspects of its implementation, emerging as a critic of both the Truman Doctrine and NATO. Since Kennan played such an important and ambiguous role in policy-making during these years, we can usefully employ his writings as a vantage point from which to view the institutionalization of the Cold War.

The Truman Doctrine

Containment had been enacted in the form of the Truman Doctrine before Kennan supplied the policy with a label. In March 1947 Britain announced that it could no longer afford to sustain its support for the Greek government in the civil war which raged intermittently after the liberation of Greece from the Germans in 1944. Greece had been conceded by Stalin as a Western sphere of influence in the so-called ‘percentages agreement’ with Churchill in October 1944. It is now clear that Stalin held to this agreement and not only withheld support from the Greek communists but was disturbed by the prospect of a communist revolution with indigenous roots. (Marshal Tito of Yugoslavia, the one communist leader in Eastern Europe not beholden to the Red Army for his position, bore out Stalin’s fears when in 1948 he declared his independence from the Soviet Union.) Such distinctions, however, meant little to American leaders who could only see in the flow of arms from Yugoslavia and Albania to the Greek rebels the hand of the Soviet Union. In Turkey too, although not at risk domestically to the same degree as Greece, Soviet pressure for control of the Black Sea straits was perceived as a bid not merely for influence in Turkey but as a stepping-stone to gains in the Middle East.

The striking feature of the American response to the announcement of British withdrawal is not so much the actual decision to send economic and military aid to Greece and Turkey as the ease with which Truman was able to produce a consensus for a fundamental reorientation in American policy. Presenting a stark contrast between two alternative ways of life, ‘one based upon the will of the majority’ and ‘one based upon the will of a minority forcibly imposed upon the majority’, he declared in his speech to Congress on March 12 1947 that ‘it must be the policy of the United States to support free peoples who are resisting attempted subjugation by armed minorities or by outside pressures’.[35] There seems little doubt, as historians have shown, that Truman’s motive for pitching the rhetoric of his message so high was the need to convince a cost-conscious Congress and an American public opinion not yet fully cognizant of the extended role the United States was quickly assuming that the stakes were as high as he perceived them to be in Greece and Turkey. But it is equally clear that Truman and his advisors sincerely believed that the crisis was much wider than the situation in Greece and Turkey. To that extent, as Daniel Yergin has pointed out, while the message to Congress was certainly conceived as a ‘sales job’, it was not a cynical manoeuvre.[36]

In the congressional hearings on the Greece and Turkey Aid Bill the administration was pressed on the question of the extent of the commitment the United States was undertaking. Under-Secretary of State Dean Acheson sought to calm fears that the Truman Doctrine was a blank cheque to be drawn on at will in other comparable situations, but he barely succeeded. Each case, he said, would be considered according to the specific circumstances, but he could not disguise the fact that the Truman Doctrine speech had established the framework within which such cases would be judged. This is clear in an exchange with Senator Vandenberg, a leading Republican Senator whose influence in this and other crucial policy initiatives laid the basis for bipartisan support for Truman:

Vandenberg: … In other words, I think what you are saying is that wherever we find free peoples having difficulty in the maintenance of free institutions, and difficulty in defending against aggressive movements that seek to impose upon them totalitarian regimes, we do not necessarily react in the same way each time, but we propose to react.

Acheson: That, I think, is correct.[37]

George Kennan objected strongly to the universalist character of the message but also to the specifics of the aid to Greece and Turkey. He opposed any aid to Turkey and felt that the emphasis in the proposals for Greece was excessively military.[38] These distinctions appear puzzling in the light of his July article already referred to. In his memoirs Kennan admits to serious deficiencies in his exposition of containment: the failure to make clear that he considered containment as primarily political and the ‘failure to distinguish between various geographic areas and to make it clear that the ‘containment’ of which I was speaking was not something I thought we could, necessarily, do everywhere successfully’.[39] The problem seems to have arisen because Kennan’s exposition in his July article was both complex and incomplete. His intention was evidently to underline the seriousness of the Soviet threat which he felt was insufficiently appreciated. In the process he overplayed his hand. Attention was thus deflected from arguably his most important suggestion that ‘the issue of Soviet-American relations is in essence a test of the overall worth of the United States as a nation among nations’.[40] By extension the West’s strongest card in its conflict with communism was the health and vigour of its own democratic traditions and values. This view became central to the Marshall Plan. Kennan himself has noted the irony that his name should be associated with the Truman Doctrine, about which he had serious reservations and in which he was scarcely involved, rather than the Marshall Plan, in which he was a prime, if largely unseen, mover.[41]

The Marshall Plan, Germany, and the Division of Europe, 1947-1949

The Marshall Plan was not to all appearances designed to divide Europe but such, in conjunction with other developments, was its effect. The growth of despised ‘spheres of influence’ was hastened by the Marshall Plan and consolidated in the division of Germany and the establishment of NATO. Once again Kennan’s role is instructive. John Gaddis has drawn attention to an exchange of letters in early 1945 between Kennan and Charles Bohlen, the State Department’s other leading Soviet expert. Kennan expressed the view that, given the Soviet Union’s determination to dominate Eastern Europe, ‘why could we not make a decent and definite compromise with it—divide Europe frankly into spheres of influence—keep ourselves out of the Russian sphere and keep the Russians out of ours?’ Bohlen shared Kennan’s assessment of Russian intransigence but replied that ‘foreign policy of that kind cannot be made in a democracy . . . Only totalitarian states can make and carry out such policies’.[42] In the event official policy took the direction outlined by Kennan in 1945 while Kennan himself modified his stance and held out hopes for the promotion of Europe as a ‘third force’ independent of the superpowers. The formulation of the Marshall Plan represents the meeting point of these two trends of thought: it combines pan-European ideas with acknowledgement of the divide between East and West.

European recovery plans were already under discussion while the Truman Doctrine was being formulated. Indeed the two policies were related in the minds of the Truman administration from the start—‘two halves of the same walnut’, as Truman put it. Three things provoked a reconsideration of policy towards Europe: deepening dismay at the consolidation of communist power within Eastern European governments, continuing failure to agree on a German settlement, and economic disarray in Western Europe, apparently threatening the political stability of, in particular, France and Italy.

The last presented the most urgent problem. Returning in May from a fact-finding mission to Europe, Under-Secretary of State for Economic Affairs William Clayton declared that ‘it is now obvious that we grossly underestimated the destruction to the European economy by the war… Europe is steadily deteriorating’. The plight of Europe, however, was not his only preoccupation. In addition to the awful implications a European collapse would have for the future peace and security of the world, he wrote, ‘the immediate effects on our domestic economy would be disastrous: markets for our surplus production gone, unemployment, depression, a heavily unbalanced budget on the background of a mountainous war debt. These things must not happen?’ (sic) Concluding that the United States must therefore initiate a substantial program of aid, he remarked (with emphasis): ‘The United States must run this show’.[43]

Clayton represented the hard-nosed side of the Marshall Plan, its economic bottom line. The State Department’s Policy Planning Staff (PPS), headed by George Kennan, was less bald in its perception of the consequences for the US economy of European decline, and emphasized rather the goal of restoring Europe’s faith in its future. In a document which was heavily drawn on by Marshall in his public announcement of the European Recovery Program (ERP) in June, Kennan was at pains to stress that ‘the American effort in aid to Europe should be directed not to the combatting of communism but to the restoration of the economic health and vigour of European society’. He insisted also that the initiative must come from Europe and must be jointly conceived by all participating countries.[44] The conception was thus of a plan which would promote European unity as well as economic revival, with the overall aim of creating a stable and independent European bloc.

But how much of Europe? The PPS addressed itself primarily to Western Europe but also felt it necessary to consider the possibility of Soviet and East European participation. The way in which Soviet bloc participation was discussed, however, shows that it was neither genuinely desired nor really anticipated. It was essential, Kennan believed, that the proposal for general European cooperation ‘be done in such a form that the Russian satellite countries would either exclude themselves by unwillingness to accept the proposed conditions or agree to abandon the exclusive orientation of their economies’.[45] Not surprisingly, the Soviet Union was unwilling to accede to conditions which would involve opening its economy to Western penetration. Having accepted the invitation to attend the opening ERP meeting in Paris, the Soviet representative withdrew, once the conditions had been made explicit, and placed pressure on the Poles and Czechs to follow suit.

The announcement of the Marshall Plan had implications for the whole range of European issues which divided the United States and the Soviet Union. The Soviet response was swift and unequivocal. Four days after his return from Paris, Foreign Minister Molotov announced the establishment of the Communist Information Bureau (Cominform), designed to strengthen Soviet control in Eastern Europe. In Hungary non-communists within the government were purged, and the Cominform leader, Andrei Zhdanov, embarked on a campaign of ideological vilification of the West which included a call to French and Italian Communists to foment disruption and seek the elimination of all non-communist leftists in their countries. Whether one interprets these actions as a defensive response to a perceived threat of Western encroachment on Eastern Europe or as an aggressive design for the destabilization of Western Europe makes little difference to the essential point: that the Marshall Plan forced Stalin to reassess his stance towards Europe East and West.[46] The communist coup in Czechoslovakia in February 1948 was the most dramatic outcome of this process, removing the only remaining non-communist leader in Eastern Europe. Since 1945 President Benes had trodden a careful path between remaining on good terms with the Soviets and resisting communist control. The fall of his government under the pressure of Soviet troops stationed on the Czech border, coupled with the suicide of his Foreign Minister Jan Masaryk under suspicious circumstances, profoundly shocked the West. Its immediate effect within the United States was to hasten the vote on appropriations for the Marshall Plan which had languished in Congress for eight months. Amidst a war scare Truman went before Congress to impress upon legislators that the survival of freedom was at stake.

The German issue too was inseparable from the Marshall Plan. In January 1947 the British and American zones had been formally merged, an acknowledgement that four-power control of Germany was not working and was unlikely to work. American policy was now aimed frankly at rebuilding the West German economy. An important goal of the Marshall Plan was to calm French fears about a revived Germany by integrating West Germany into a Europe-wide system. German recovery, the Americans argued, was vital to the economic health of Europe. Two important steps were then taken at a conference of the United States and five West European nations held in London in February 1948: the decision to introduce a new currency into West Germany to provide financial stability for economic revival and an as yet tentative move towards West German statehood. Again France’s anxieties were aroused about German revanchism and were soothed in this instance by an American commitment to retain some troops indefinitely in Europe. Though aimed at containing Germany rather than the Soviet Union this decision coincided with discussion on the establishment of NATO. The militarization of containment flowed inexorably from the logic supplied by the economic and political decisions of 1947-48.

The Soviet reaction to the introduction of the new currency in West Germany (including West Berlin) ensured that this logic would be played out. The day after the new Deutschmark was introduced the Soviets cut the land routes between West Germany and West Berlin. The Berlin blockade began on June 24 1948 and lasted for nearly a year until the airlift mounted by the United States and Britain convinced the Soviet Union that the only alternative to accepting the ‘illogic’ of a Western enclave deep within the Soviet zone was war. Indeed at no time since the end of hostilities in 1945 had war seemed so likely. While the blockade stretched out, as Louis Halle has observed, ‘sixty long-range bombers of the US airforce were quietly moved across the Atlantic to the British Isles’, to remain there and be reinforced subsequently.[47] The German (though not the Berlin) problem was ‘solved’ by the adoption in May 1949 of a constitution which established the Federal Republic of Germany (FRG). The Soviet Union responded immediately with the formation of the German Democratic Republic (GDR).

NATO, NSC-68 and the Militarization of Containment

Daniel Yergin has identified the growth of a ‘national security state’ in the postwar decade—the organization of the United States ‘for perpetual confrontation and for war’.[48] At a time in mid-1946 when the United States was demobilizing fast, Truman’s Special Counsel, Clark Clifford, wrote to the President that ‘in restraining the Soviet Union the United States must be prepared to wage atomic and biological warfare. A highly mechanized army, which can be moved either by sea or by air, capable of seizing and holding strategic areas, must be supported by powerful naval and air forces’.[49] A year later a new Defense Act created a more integrated defence organization, including a single Department of Defense to replace the separately run armed services, and a National Security Council to advise the President on the whole range of defence activities. War plans in the event of conflict with the Soviet Union were being formulated as Europe fractured down the line of the Iron Curtain.

As yet, however, there was no consensus within Congress, to say nothing of the country at large, on the desirability or need for an extensive US military commitment to the defence of Western Europe. Many of those Senators and Congressmen who were loudest in their demands for a tough line against the Soviet Union were precisely those who resisted granting appropriations for military purposes. The debate in Congress on NATO itself reflected the strength of these feelings. While there was a good measure of agreement on the need for American participation in some form of security pact with Western European nations, there was little appetite for a large US force permanently stationed in Europe. When asked whether the administration planned to send ‘substantial’ numbers of US troops to shore up the European defences, Acheson assured the Senate that ‘the answer to that question . . . is a clear and absolute no’.[50] The Senate was similarly assured that there was no plan for the rearmament of Germany. The NATO pact was conceived by the administration, or at least presented publicly, as a confidence booster to Europe to prevent it succumbing politically to appeasement or neutrality under Soviet pressure.

A marked change in the conception of NATO was produced by the detection in September 1949 of a Soviet atom bomb test. The Truman administration had convinced itself, against the views of American atomic scientists, that the Soviet Union would take twenty years to manufacture its own bomb. News of the Soviet test threw these calculations to the winds, setting off a frantic search for the traitors who had passed the secret of the bomb to the Soviets. In Klaus Fuchs (captured and found guilty by the British in March 1950) and the Rosenbergs (also convicted in early 1950) such traitors were found, though it is possible that the Soviet atomic program would have achieved its end with little delay in the absence of espionage.[51] The military and strategic implications of the ending of the American atomic monopoly, however, profoundly affected the calculations of American policy-makers. In conjunction with the ‘loss’ of China in 1949 and the beginnings of Joseph McCarthy’s attacks on the Truman administration’s weak response to communism, news of the Soviet test produced pressure for a fundamental reassessment of America’s strategic objectives and plans. It took the form of a lengthy document, justifiably considered by historians as equal in significance to the Truman Doctrine speech, known as National Security Council Resolution 68 (NSC-68).

NSC-68 illustrates the inseparability of military and ideological concerns in this critical year of the Cold War. The challenge presented by the Soviet Union was conceived to be moral as much as material. With the eclipse of freedom in Czechoslovakia two years before, ‘it was in the intangible scale of values that we registered a loss more damaging than the material loss we had already suffered’. However, in order to convince the Soviet Union of its determination to uphold the idea of freedom (‘the most contagious idea in history’), the United States must match its capabilities to its intentions. ‘Without superior aggregate military strength, in being and readily mobilizable, a policy of ‘containment’—which is in effect a policy of calculated and gradual coercion—is no more than a policy of bluff’.[52] The result was the decision to embark upon a rapid build-up of political, economic, and military strength in the Free World.

NSC-68 was commissioned by Truman in January 1950, forwarded to the President in April, and approved in September. Of central importance in converting the document from a blueprint for a massive arms build-up into practical policy was the outbreak of the Korean War in June of the same year. (This is discussed in section IV). There is no need to claim US complicity in bringing about the North Korean attack, as some historians have argued, to recognize that the beginning of the Korean war confirmed the logic of NSC-68 and eased its implementation. By the time the Korean War ended in 1953 the United States had substantially bolstered its forces in Western Europe and begun the delicate task of promoting the rearmament of West Germany.[53] Above all, NSC-68 was explicitly global in scope and military in application. Negotiation was not abandoned as a goal but remained carefully circumscribed by military priorities. As the authors of NSC-68 put it, ‘negotiation is not a separate course of action but rather a means of gaining support for a program of building strength, of recording, where necessary and desirable, progress in the cold war … ’.[54] In short, Kennan’s beliefs in the need for selectivity in American commitments, for the primacy of the political over the military, and for a policy based on Soviet intentions rather than capabilities were firmly sidelined with the adoption of NSC-68.

NSC-68 established the framework of American policy for the next twenty years. It might be described as ‘containment-plus’ in that it anticipates John Foster Dulles’s policies of ‘liberation’ and ‘rollback’, announced in the early 1950s. It was not, however, wholly lacking in flexibility. Recording ‘progress in the cold war’ was a vague formula which allowed for strategic retreats from exposed positions. Despite, for example, the verbal support given to the Hungarian reformers of 1956, the United States clearly had no intention of backing words with actions to the extent of risking war with the Soviet Union. As the tanks rolled into Budapest to crush the rebellion, American policy-makers wrung their hands but did little more. Where, however, the United States felt that its interests could be advanced at acceptable cost, then NSC-68 proved capacious enough to accommodate them. In the Far East the costs proved if anything greater than in Europe. To this we now turn.

4. Cold War in the Far East

McCarthyism and the Far Eastern Turn

Truman was politically vulnerable in the early months of 1950. The twin blows of the Soviet atom test and the Chinese Revolution had shaken public confidence in the administration’s policies, exposing it to charges of negligence and worse. The conviction of Alger Hiss for perjury in January 1950 seemed to confirm suspicions that the Truman and Roosevelt administrations had harboured traitors in key policy-making positions. Hiss had worked in the State Department during the late 1930s, had been a member of the American delegation at Yalta, and after the war had been appointed President of the Carnegie Endowment for International Peace. The specific charge against him was that he had perjured himself before the House Un-American Activities Committee in denying that he had passed information to the Russians while employed at the State Department in 1937-38. (Thanks to the Statute of Limitations, he could not be charged with espionage as such.) In actuality Hiss’s role in policy-making had been a minor one, but the case became a convenient peg on which critics of the Truman administration could hang a series of accusations, amounting to a comprehensive denunciation of the whole Roosevelt-Truman record in foreign policy.

For figures such as Senator Joseph McCarthy, who rose to prominence in the wake of the Hiss conviction, the case provided an explanation for the succession of American defeats in the Cold War, beginning with the ‘sell-out’ of Eastern Europe to the Soviets at Yalta and culminating in the ‘loss’ of China. No note is more consistently sounded in McCarthy’s speeches than his belief in America’s ‘impotency’ in the face of communism, ‘the feeling of America’s weakness in the very air we breathe in Washington’. The present situation could only be accounted for as the product of ‘a great conspiracy, a conspiracy on a scale so immense as to dwarf any previous such venture in the history of man’. Hiss’s cultured, urbane demeanour, his association with the east coast liberal elite, and his deep roots in Roosevelt’s New Deal offered easy targets for Republican vilification of the betrayers of true Americanism and heralded the end of bipartisanship on foreign policy within Congress. Dean Acheson, Marshall’s successor as Secretary of State, blackened the Democrats’ record still further when he announced, following the conviction of his old friend, that ‘I will not turn my back on Alger Hiss’. With this statement, said McCarthy, ‘this pompous diplomat in striped pants, with a phony British accent . . . awakened the dormant indignation of the American people’.[55] For four years McCarthy pressed home his message, throwing the administration on to the defensive and ensuring that the communist issue would dictate the agenda of domestic affairs.

McCarthy did not invent anti-communism. His genius was to dramatize the issue, to put his personal imprint upon it by a combination of adroit self-publicity and unscrupulous exploitation of the media’s appetite for sensational copy. His targets were many—the State Department, the Democratic Party, and subsequently the army and the Presidency. By 1954 he had become an embarrassment to his own party. Republicans who had been content to go along with McCarthy’s attacks on the Democrats, especially for their conduct of the Korean War, balked at his increasingly indiscriminate charges against such hallowed institutions as the army and the (now Republican) presidency. In 1954 he was censured by the Senate and effectively silenced. Within three years he was dead, a broken man mired in alcoholism.

There are many contexts in which McCarthy and McCarthyism can be viewed. A rich literature on the political and sociological roots of the American anxiety about communism began to appear within months of McCarthy’s censure by the Senate.[56] From the standpoint of foreign relations, however, the significance of McCarthy’s career lies in the coincidence of his brief period of notoriety with the shift of attention from Europe to the Far East, a shift which he helped to promote. Two figures appear repeatedly in his catalogue of traitors—George Marshall and Owen Lattimore. Both, it was claimed, had been instrumental in the disastrous policy of denying adequate support to the Chinese Nationalists under Chiang Kai-shek, hence paving the way for the Communist victory of 1949. The Korean War, and Chinese entry on the side of North Korea, would never have happened, it was argued, if the Truman administration had given due attention to the danger of communism in the Far East rather than devoting its resources to Europe in the crucial years after 1945.

Once again, McCarthy was not the initiator of the ‘Asia first’ view. Its roots lay in controversy over the United States’ wartime ‘Europe-first’ strategy, and the view gained powerful advocates within Congress and among prominent publishers and businessmen in the immediate postwar years. Henry Luce, publisher of Time and Life, was a persistent supporter of the Chinese Nationalists and critic of the Truman policy of seeking to resolve the civil war in China by bringing the Nationalists and Communists together. Madame Chiang, a Christian with close ties to American businessmen and legislators, lobbied energetically on behalf of the Nationalist cause both before and after the Revolution. Within Congress Senator Knowland’s role in this cause was such that, following the Nationalists’ flight to Formosa (Taiwan), he was dubbed ‘the Senator from Formosa’. The links which the Asia firsters and the China Lobby managed to forge between the communist threat in Asia and inside America had profound effects upon the future of America’s involvement in the Far East. They ensured that recognition of Communist China would remain off the agenda for a long time to come. They removed from office the cream of America’s China specialists in the State Department purge which followed the Chinese revolution.[57] They also encouraged a heightened sensitivity to the dangers of further losses to communism in Asia. Thereafter, compromise or accommodation with Asian communism was tantamount to abject surrender.

China, Japan, and the Ferment in Asia, 1945-1950

The Asia firsters’ contention that the Truman administration had lost China is justified only if one accepts their premise that China was America’s to lose. This view was based on the romantic notion that the United States had a record of benign concern for China, stemming from the ‘Open Door’ notes of 1899. In opposing the European nations’ plans to parcel up China in line with their own economic interests, the United States, it was felt, had demonstrated an enlightened concern for China’s territorial integrity. While extensive cultural and educational ties existed between China and the United States, in fact the United States did little to enforce the principle of the open door, which in any case could be seen as a self-interested claim for an economic stake by a latecomer on the Chinese scene. In other spheres too China had little reason to feel beholden to the United States. Discriminatory American immigration laws and maltreatment of the Chinese population within the United States were a constant source of friction from the 1880s onwards. Nor did successive American administrations do much to aid China in the face of Japan’s growing aspirations to dominance in the Far East. With the (admittedly reluctant) support of the United States, Japan gained concessions in the Shantung Peninsula at China’s expense at the Treaty of Versailles (1919); the Japanese invasion of Manchuria in 1931 produced only verbal protests from the United States; and when full-scale war broke out between China and Japan in 1937 Roosevelt shrank from imposing sanctions on Japan.

American policy towards China changed substantially with the deterioration of American-Japanese relations in 1940-41. Indeed the cause of this deterioration was increasing encroachment by Japan on China as well as South East Asia. In this sense the United States’ ‘special relationship’ with China was a late development; too late, it might be said, given that China was burdened with internal disruption in addition to the war with Japan. Having once made the decision to support China with lend-lease and to build up China as a major power by including her in the councils of the anti-Axis nations, the United States was confronted by the problem of supporting a leader—Chiang Kai-shek—whose hold on power was distinctly fragile. The central issue for American policy-makers in the wartime and immediate postwar years was their attitude towards the relations between the Chinese Nationalists and the Communists.

Contradictory advice was reaching Washington from China as the war drew to a close. Ambassador Hurley (appointed in January 1945) advocated unreserved support for Chiang in his drive against the Communists, while counsellors within his Embassy doubted Chiang’s ability to produce stable government and were critical of his dictatorial style. Ironically, McCarthy’s bete noire, Owen Lattimore, held a more favourable view of Chiang than many other old ‘China hands’, perhaps because he had worked as Chiang’s political adviser during the war. In a widely read book published in 1945 Lattimore wrote that Chiang was not at present ‘losing control’. Nevertheless he felt that there was a case for political compromise with the Communists. The Communists, he wrote, ‘have done well enough in the territory they control to stand comparison with the Kuomintang [Nationalists]’.[58] Hurley was incensed at the signs that Washington was leaning towards a policy of seeking to reconcile the Nationalists and Communists and on November 26 1945 he resigned in protest. ‘It is no secret’, he wrote in his letter of resignation to Truman, ‘that the professional foreign service men sided with the Chinese Communist armed party . . . Our professional diplomats continuously advised the Communists that my efforts in preventing a collapse of the National Government did not represent the policy of the United States’.[59]

In truth Hurley’s view did not fully represent the policy of the United States Government. Truman’s policy was signalled in late 1945 by his dispatch of General Marshall to China to bring about a cease-fire between the Nationalists and Communists and to encourage the formation of a coalition government. After a year of largely fruitless attempts to mediate between the rival factions, Marshall returned empty-handed. (Marshall subsequently paid dearly for his efforts at the hands of McCarthy, who in 1951 launched a 60,000 word diatribe against him in Congress, later published as a book.)[60] By 1947 the United States resolved on a course of recognition of the Nationalist government, coupled with moderate economic and military aid. As the civil war raged and the Communists advanced, aid to China was gradually scaled down and in January 1949 the American Military Advisory Group was withdrawn. As Foster Rhea Dulles has pointed out, the climax of the civil war coincided with the Berlin Blockade and the prospect of a Communist victory in China did not seem to weigh heavily enough to warrant a large allocation of military resources.[61]

By the early months of 1949, as a Communist victory approached, angry Republicans produced a ‘round robin’ letter in Congress accusing Acheson of ‘irresponsibility’ in his China policy, and followed it up by introducing a series of China aid bills in Congress with the aim of pressuring the administration into action. These efforts achieved partial success, since the administration needed Congressional support for continuance of aid to Europe under the Marshall Plan. A moderate package of assistance to Chiang was passed as an amendment to a European aid bill. It would appear, however, that by the middle of the year the administration was more or less reconciled to a Communist victory in China and was bracing itself for the inevitable reaction within the United States. In August the State Department published the China White Paper, a lengthy history and justification of American policy accompanied by extensive documentary evidence. In the appended Letter of Transmittal from Acheson to Truman it was argued, to the consternation of the administration’s critics, that ‘the ominous result of the civil war in China was beyond the control of the government of the United States’.[62] That this was no more nor less than the truth did not mollify Truman’s opponents, who believed that his policy of heavily qualified support for Chiang had been a self-fulfilling prophecy. A new set of battle lines was thus drawn around the related questions of recognition of Mao’s China and the United States’ attitude towards the Nationalist regime which at the end of the year fled to the island of Formosa.

Non-recognition of Mao and unequivocal support for Chiang was by no means a foregone conclusion. For one thing, this policy would incur the risk of war in support of Chiang and neither Truman nor the Joint Chiefs of Staff favoured such a course. A strong lobby within the State Department argued for a ‘realistic’ policy of recognizing whoever was fully in control. (Britain recognized Mao’s government in 1950.) The scales were tipped away from Truman’s preference for disengagement from the China conflict by the influence of McCarthy and the Asia firsters, as we have seen, but more decisively by the outbreak of the Korean War in June 1950 and China’s entry on the side of North Korea in October. The Korean War settled another important question which had been hanging fire for a number of years—the signing of a peace treaty with Japan. Before discussing the Korean War we must consider briefly the development of policy towards Japan.

In many respects Japan’s place in American Asian policy parallels that of Germany in Europe. Early designs for a punitive settlement were quickly shelved as it became clear that communism was on the march in Asia. Initially the reconstruction of Japan was premised on the need to remove the entrenched elites and institutions which had given rise to militarism in the 1930s. Democratization of the political system, dismemberment of the family-based industrial monopolies (or zaibatsu), and the elimination of Japan’s capacity to produce heavy industrial goods (including, of course, war material) were all aimed at uprooting authoritarianism and encouraging a wider distribution of wealth and political power. To a degree each of these policies was embarked upon but, as Michael Schaller has observed, they gradually ‘lost momentum or changed direction in 1948’.[63] The zaibatsu essentially survived efforts to break them up and the new constitution, while establishing formal democracy and liberal principles, allowed ruling conservatives to retain their position. The plan to deindustrialize Japan was never realized since it soon became clear, as was the case in Germany, that a weak and unstable Japan would invite communist inroads and would undermine America’s broader goal of setting up a counter-weight to communism in Asia.[64] The change of policy towards Japan coincided with the mounting of the Marshall Plan and the Truman Doctrine in Europe. Japan became the keystone of containment in Asia.

The development of the occupation policy in Japan bears directly on the administration’s reluctance to get involved too deeply in the internal affairs of China. The United States was able to exert control over the Japanese situation to an extent which it was not able to do in China. Indeed the United States insisted from the outset that the Soviet Union should have only a nominal role in the occupation of Japan, a point which the Soviets exploited to the full in their claim for a similar role in the Balkans and Eastern Europe.[65] Though it is going too far to say that the administration viewed China as expendable, it is the case that Japan was increasingly regarded as the strategic key to the American position in Asia. From the evidence of PPS and NSC documents in 1948-49, policy was formulated in anticipation of the fall of the Nationalist regime and the recurring theme is the danger of an extensive commitment to preventing its defeat.[66] When in January 1950 Dean Acheson announced the ‘defensive perimeter’ which the United States must be prepared to defend in Asia, it excluded not only Formosa but also Korea. America’s policy in Asia was an offshore policy, reflecting the prevailing conventional wisdom that the United States should resist being drawn into a land war in Asia. In the event the North Korean attack not only undermined this policy, it also removed any remaining obstacles to the formalization of America’s relations with Japan. Ironically, as Schaller points out, the North Korean attack ‘set the stage for the termination of the Occupation’.[67] Any qualms about reaching a separate peace with Japan were now brushed aside. Though the Treaty was signed amid the trappings of an international conference in San Francisco in September 1951 (which the Soviet Union also attended), its provisions—which included a security pact and the granting of base rights to American forces—reflected exclusively bilateral interests between the United States and Japan. The Soviet Union refused to sign.

The Korean War

The Truman administration showed little hesitation in revising its assumptions about involvement in a land war in Asia. Within a few days of the North Korean attack on June 25, 1950 it had committed ground troops to the defence of South Korea, pushed a resolution through the UN labelling North Korea as the aggressor, and interposed the 7th Fleet between Formosa and the mainland to prevent an attack by the People’s Republic of China. Militarily the course of the war fluctuated wildly in the first few months. The initial push by the North Koreans took them deep into the South by the middle of September and left only a corner of the peninsula beyond their reach. General MacArthur, seconded from his post as occupation commander in Japan, responded with an outflanking amphibious attack at Inchon, a port half way up the west coast, and within a month had retaken Seoul and driven the North Koreans back to the line dividing North and South Korea at the 38th parallel. MacArthur’s military success raised the question of America’s political aims in Korea—to re-establish the status quo, or to revise it by reunifying Korea?

The division of Korea had followed the pattern of Germany since 1945—provisional partition following the removal of the Japanese wartime occupation forces, failure of the United States and the Soviet Union to agree on a means of reunification, and the establishment of separate governments in North and South. The initial UN resolution on the Korean War envisaged only the restoration of the 38th parallel but the success of MacArthur’s northward drive held out the inviting prospect of reunification by force of arms. Containment, it appeared, was giving way to ‘roll-back’ as Truman endorsed military operations north of the 38th parallel and gained UN approval for it. Ignoring China’s warnings that they would intervene if the Americans continued north, MacArthur pushed deep into the North, reaching the North Korean border with China at the Yalu river at the end of October. As promised, the Chinese entered the war and by the end of the year had forced the UN forces into a headlong retreat down the peninsula to a point south of the 38th parallel.

Truman’s response was a combination of strident verbal aggression against China—including a hint that the United States reserved the option of using the atomic bomb—and a strategic retreat to the initial goal of restoring the 38th parallel. The signs he was giving at the end of 1950 and the beginning of 1951, however, were sufficiently confused to arouse dismay in a variety of quarters. The British Prime Minister, Attlee, rushed to Washington in early December to express his anxiety about the direction of American policy. In a series of conversations with Truman, Attlee urged him to open negotiations with the Chinese in order to avoid the possibility of an all-out war with China. Though no more was heard publicly about using atomic bombs in Korea, Attlee failed to convince Truman and Acheson that a less belligerent policy towards China might cool the situation in Korea and also encourage a split between Peking and Moscow. The United States was by now too committed to Chiang and to the view that accommodation to communism in one sphere meant capitulation everywhere to find Attlee’s arguments acceptable.

In practice, however, Truman was as concerned as Attlee to avoid war with China. Without conceding Attlee’s point about the possible advantages for policy in Asia in general of negotiating with China, he acknowledged the narrower point about the danger of an all-out war. A serious obstacle in the way of this policy was General MacArthur, whose bellicose pronouncements and evident desire to extend the war into China became a serious embarrassment to Truman. Though it could be said that MacArthur was simply following through the logic of the decision to press forward north of the 38th parallel, after the Chinese entry the costs of that policy looked to Truman to be excessive. By April 1951, with MacArthur now back at the 38th parallel, eager to cross it once again into the North, and demanding unconditional surrender from the Chinese, Truman ordered his recall. Containment was re-established as the reigning orthodoxy.

Having escaped one set of costs, however, by ruling out war with China, Truman then incurred another—the storm of protests within the United States which greeted his dismissal of MacArthur. MacArthur returned to the United States as a conquering hero denied his booty, to be feted by Congress and public opinion. Whether these protesters would really have welcomed war with China is open to question. It was enough that MacArthur offered a sounding board for public frustration at the conduct of the war. As the front stabilised around the 38th parallel, military stalemate ensued, though at enormous cost in casualties. It was two years before negotiations, continually stalled over the issue of the return of prisoners of war, brought a conclusion to the conflict.

Korea has been described as a ‘limited war’, an exemplary case of containment in action. But, as we have seen, it also illustrates the fine line in American policy between containment and rollback. The words used by Walter Lippmann to characterise Soviet intentions in the Cold War seem to apply equally well to the United States in the case of Korea: ‘They will expand the revolution, if the balance of power is such that they can; if it is such that they cannot, they will make the best settlement they can obtain for Russia and the regime in Russia’.[68] The best the Americans could obtain in Korea was a return to the lines which existed in 1950. It was not in all respects, however, a return to the status quo ante, since it was associated with a deepened commitment to holding the line in Asia generally. The geostrategic and ideological assumptions which had led to the original formulation of containment in Europe were now firmly adopted in Asia. These included the ‘domino theory’—that a loss of one country to communism would set up a chain reaction in its neighbours—and the belief that by whatever devious route all manifestations of communism were to be traced to the activities of the Kremlin.

The situations in Europe and Asia, however, were very different. The United States had little control over the course of events in Eastern Europe, largely because of the presence of the Red Army. In Asia, the challenge of communism was more complex. Connections between Soviet communism and its indigenous forms in Asia certainly existed but Asian communism was never simply the creature of the Soviet Union as was the case in Eastern Europe. If it had been—if the Soviet Union had viewed, say, Indochina as equally vital to its security as Poland—then the Vietnam War would quickly have become a world war. In fact the Soviet Union was as cautious in its support for Vietnam’s Ho Chi Minh in the early stages of his drive for Vietnam’s independence (1945-1950) as it was decisive in its control over Poland. By a curious irony the United States was powerless to liberate Eastern Europe from communism where it had been imposed by force of arms but harboured the belief that communism could be defeated in Asia where it had indigenous roots. In Europe the reality principle operated, frustrating as it was. In Asia no such check existed, or only intermittently, and it took defeat in Vietnam to bring the lesson home.

5. Conclusion

It is easy to assume that all the dilemmas faced by American policymakers in the postwar decade arose from the relationship with the Soviet Union. In fact the United States would have taken on an expanded role in world politics and in the international economy if the Soviet Union had not existed. The destruction of Europe saw to that. The United States was simply assuming the role which matched its economic and military power, as demonstrated in the war. Even then its adoption of an extended ‘peacetime’ role was marked by indecision and confusion in the initial stages. The desire to demobilize quickly, to renew civilian occupations, and to ‘return to normalcy’ accounts in part for this, and isolationist or non-interventionist sentiment remained strong. Interventionism was also the result of pressure from Britain and Western Europe to step into the Great Power breach which they could no longer fill. As has often been pointed out, the chief antagonist of the Soviet Union among the Western powers immediately after the war was Britain, not the United States. The United States initially saw its role, as Roosevelt had during the war, as that of mediator between Britain and the Soviet Union.

Once the decision was made in 1947 to engage in containment of Soviet power, however, the United States embraced the role with a firmness and fervour which owed much to ideological conflict with the Soviet Union and helped to further it. There is an instructive analogy with American intervention in the First World War. Entry into the war in 1917 followed upon a period of uneasy neutrality during which in practice if not in theory the United States favoured the Allies against Germany. Once the United States was in the war, a political and emotional climate emerged which was repeated after the Cold War began—a wartime psychology, pressure for ideological conformity, the characterization of the enemy as devil, and the justification of a new diplomacy in universalist terms. In this sense the American Cold War experience was firmly within the nation’s traditions. The key difference was that thirty years on from 1917 there was no safe haven of normalcy to return to.

The Soviet Union’s problems in the early postwar years, too, cannot be summed up in its relations with the United States. Economic reconstruction and security were its chief concerns, but these quickly involved the exertion of political authority in neighbouring states whose cultural ties lay with the West and who had a legacy of mistrust of Russia.[69] Then as now Soviet energies have been deployed to a considerable degree in maintaining control within its own sphere. Soviet actions in the early Cold War were thus a consequence of weakness rather than strength—or rather, economic weakness compensated for by military power and the imposition of political conformity in its ‘buffer zone’ with the West. In this sense the Soviet Union’s creation of a sphere of influence was as much a product of the war as was the United States’ economic supremacy.

If, as has been justifiably claimed, postwar US-Soviet relations constitute a Cold War ‘system’,[70] it is one which has been characterized by asymmetries rather than parallelisms. America’s superior economic power was not of itself sufficient to assure the stability of Western Europe and it was quickly supplemented by the NATO Alliance. Even so, the geopolitical and military mismatch between the Soviet Union and the United States—Soviet ground troop superiority in Europe against US reliance on air power and nuclear weapons—remained a potent source of friction between the powers and of frustration within the United States. The United States was and is confronted with a strategic dilemma—the possibility of a Soviet invasion of Western Europe with conventional forces in the event of which the United States feels it has to reserve the option of a nuclear response. The search for security, however, has bred further insecurity, as the history of the arms race shows. In every field of policy ambitions are matched by constraints. The origins of the Cold War are to be found as much in these constraints and the attempt by both sides to surmount them as in the dictates of ideology.

7. Guide to Further Reading

Study of American Cold War foreign policy is proceeding with such speed and across such a variety of fronts that the standard surveys inevitably lag some way behind the current state of research. On the question of origins, however, John Lewis Gaddis, The United States and the Origins of the Cold War, 1941-1947 (New York: Columbia University Press, 1972) and Daniel Yergin, Shattered Peace: The Origins of the Cold War and the National Security State (Harmondsworth: Penguin, 1980) have held up well. Yergin offers a more eye-catching thesis and he writes in a more sprightly fashion, but Gaddis’s account is judicious and penetrating. Of the many surveys which carry the story forward, in most cases to the 1970s and beyond, Walter LaFeber, America, Russia, and the Cold War, 1945-1984 (5th edition, New York: Knopf, 1985) is a heavily researched revisionist interpretation. Stephen Ambrose, Rise to Globalism (Harmondsworth: Penguin, 1988) is readily available in the UK. Among orthodox accounts Louis Halle, The Cold War as History (New York: Harper and Row, 1967) contains much which is not to be found elsewhere, particularly on geopolitical issues. Wilfried Loth, The Division of the World, 1941-1955 (London: Routledge, 1988) is the work of a German historian which offers a fresh perspective upon the scholarly controversies from a relatively detached point of view. Three further books by the ever busy John Lewis Gaddis repay close reading: Russia, the Soviet Union and the United States: An Interpretive History (New York: Wiley, 1978); Strategies of Containment: A Critical Appraisal of Postwar American National Security Policy (New York: Oxford University Press, 1982); and The Long Peace: Inquiries into the History of the Cold War (New York: Oxford University Press, 1987).

The historiographical debate on the Cold War is discussed in Section 1 of this pamphlet and some of the major works are cited in the text or notes. Among the most useful commentaries on revisionism are Charles Maier, ‘Revisionism and the Interpretation of Cold War Origins’, Perspectives in American History 6 (1970), pp.313-47 and Ole R. Holsti, ‘The Study of International Relations Makes Strange Bedfellows: Theories of the Radical Right and the Radical Left’, American Political Science Review, LXVIII (March 1974), pp.217-42. A useful critique of Williams is J. A. Thompson, ‘William Appleman Williams and the “American Empire”’, Journal of American Studies 7 (1973), pp.91-104. Thomas McCormick, though not restricting his view to the Cold War, has mounted an argument for a new direction in diplomatic history which builds on revisionism and is dismissive of post-revisionism as developed by Gaddis and others. See ‘Drift or Mastery? A Corporatist Synthesis for American Diplomatic History’, Reviews in American History, 10 (December 1982), pp.318-30.

Historians have increasingly sought clues to American policy-making in the cultural and intellectual backgrounds of the policy-makers. Daniel Yergin addresses these issues but Hugh de Santis broke new ground in The Diplomacy of Silence: The American Foreign Service, the Soviet Union and the Cold War, 1933-47 (Chicago: University of Chicago Press, 1981). Deborah Welch Larson explores the mind-set of policy-makers in Origins of Containment: A Psychological Explanation (Princeton: Princeton University Press, 1985).

Two excellent documentary histories offer a wide range of materials: Joseph M. Siracusa, ed., The American Diplomatic Revolution: A Documentary History of the Cold War, 1941-1947 (Milton Keynes: Open University Press, 1976); and Thomas Etzold and John Lewis Gaddis, eds., Containment: Documents on American Policy and Strategy, 1945-1950 (New York: Columbia University Press, 1978).

Of the many memoirs, those by Truman, Acheson, and Byrnes are largely exercises in self-justification and, while important, should be used carefully. George Kennan’s two volumes, Memoirs, 1925-1950 (Boston: Atlantic Monthly Press, 1967) and Memoirs, 1950-63 (London: Hutchinson, 1972), are more reflective and productive of insights as well as being eminently readable. Walter Bedell Smith, Moscow Mission 1946-1949 (London: Heinemann, 1950) and Charles Bohlen, Witness to History 1929-1969 (New York: Norton, 1969) contain much interesting detail on life in the Soviet Union as well as diplomatic developments.

It is likely that in the near future knowledge of Soviet policy will be substantially improved. Already several joint American-Soviet working groups are studying aspects of the Cold War. Meanwhile, a number of valuable studies are available: William Taubman, Stalin’s American Policy (New York: Norton, 1982); Vojtech Mastny, Russia’s Road to the Cold War: Diplomacy, Warfare and the Politics of Communism, 1941-1945 (New York: Columbia University Press, 1979); Adam Ulam, Expansion and Coexistence: The History of Soviet Foreign Policy, 1917-1967 (New York: Praeger, 1968); and Isaac Deutscher, Stalin: A Political Biography (New York: Vintage, 1960), chapters XII-XIV.

Wartime diplomacy is covered in Herbert Feis, Churchill, Roosevelt and Stalin: The War They Waged and the Peace They Sought (Princeton: Princeton University Press, 1957); Robert E. Sherwood, Roosevelt and Hopkins: An Intimate History (New York: Harper and Row, 1948); Keith Sainsbury, The Turning Point: Roosevelt, Stalin, Churchill and Chiang Kai-shek, 1943 (Oxford: Oxford University Press, 1986); Robert Dallek, Franklin Roosevelt and American Foreign Policy, 1932-1945 (New York: Oxford University Press, 1979). Athan Theoharis has charted the political uses made of the ‘Yalta myth’ by critics of the Roosevelt-Truman policies in The Yalta Myths: An Issue in American Politics, 1945-1955 (Columbia: University of Missouri Press, 1970). American policy towards Eastern Europe is covered in Lynn Etheridge Davis, The Cold War Begins (Princeton: Princeton University Press, 1974); Geir Lundestad, The American Non-Policy Towards Eastern Europe (Oslo: Universitetsforlaget, 1978); and Robert Garson, ‘The Atlantic Alliance, Eastern Europe and the Origins of the Cold War: From Pearl Harbor to Yalta’, in H. C. Allen and Roger Thompson, eds., Contrast and Connection: Bicentennial Essays in Anglo-American History (London: G. Bell and Sons, 1976), pp.296-320.

The increasing emphasis on the British role in the early Cold War is well represented by Roy Douglas, From War to Cold War (London: Macmillan, 1980); Victor Rothwell, Britain and the Cold War, 1941-1947 (London: Cape, 1982); Terry Anderson, The United States, Great Britain, and the Cold War, 1944-1947 (Columbia: University of Missouri Press, 1981); and Fraser Harbutt, The Iron Curtain: Churchill, America and the Origins of the Cold War (New York: Oxford University Press, 1986). Alan Bullock’s Ernest Bevin: Foreign Secretary (London: Heinemann, 1983) is a masterly study.

Economic issues in the US-Soviet relationship are examined from contrasting points of view by Thomas G. Paterson, Soviet-American Confrontation: Postwar Reconstruction and the Origins of the Cold War (Baltimore: Johns Hopkins University Press, 1973) and Robert A. Pollard, Economic Security and the Origins of the Cold War, 1945-1950 (New York: Columbia University Press, 1985). Paterson finds that the United States was willing to employ economic power for diplomatic leverage, while Pollard concludes that ‘postwar American foreign economic policy was not driven by a strong anti-Soviet animus’. Much useful data and analysis is contained in James L. Clayton, ed., The Economic Impact of the Cold War: Sources and Readings (New York: Harcourt Brace, 1970).

Since the publication in 1965 of Gar Alperovitz’s Atomic Diplomacy: Hiroshima and Potsdam, the Use of the Atomic Bomb and the Confrontation with Soviet Power (Revised edition, New York: Penguin, 1985) the political implications of atomic weapons in the war against Japan and in relations with the Soviet Union have been the subject of fierce debate. A range of views on these issues is presented in Barton J. Bernstein, ed., The Atomic Bomb: The Critical Issues (Boston: Little Brown, 1976). Martin Sherwin, A World Destroyed: The Atomic Bomb and the Grand Alliance (New York: Vintage Books, 1977) shows that atomic policy was subject to political calculations from the beginning of the Manhattan Project. Gregg Herken, The Winning Weapon: The Atomic Bomb in the Cold War, 1945-1950 (New York: Knopf, 1980) illustrates the illusions governing American policy in the period of its atomic monopoly.

The origins of the Truman Doctrine are covered in the surveys mentioned above. Of great value is Joseph M. Jones, The Fifteen Weeks (New York: Harcourt Brace, 1964), an account by a member of the Truman administration. Bruce Kuniholm’s The Origins of the Cold War in the Near East: Great Power Conflict and Diplomacy in Iran, Greece and Turkey (Princeton: Princeton University Press, 1980) is the leading study of this subject. Kuniholm makes a strong case for regarding conflict over the ‘Northern Tier’ of the Middle East as the major breeding ground of the Cold War. Richard Freeland, The Truman Doctrine and the Origins of McCarthyism (New York: Knopf, 1972) argues that the seeds of domestic anti-communism were planted by the manner in which Truman promoted the Truman Doctrine. On the Marshall Plan see Jones’s Fifteen Weeks and John Gimbel, The Origins of the Marshall Plan (Stanford: Stanford University Press, 1976). Michael J. Hogan’s recent book The Marshall Plan: America, Britain and the Reconstruction of Western Europe, 1947-1952 (Cambridge: Cambridge University Press, 1987) sees American policy as the logical outgrowth of economic assumptions going back to 1920s ‘corporatism’. Unlike most studies of the Marshall Plan, this one looks in detail at its implementation.

The division of Germany has attracted a good deal of attention from historians, though there is still scope for a study which relates the German issue more widely to the development of the Cold War. Detailed studies of American policy include Bruce Kuklick, American Policy and the Division of Germany (Ithaca: Cornell University Press, 1972) and John H. Backer, The Decision to Divide Germany (Durham: Duke University Press, 1978). Essays by C. Greiner and N. Wiggershaus in Olav Riste, ed., Western Security: The Formative Years, European and Atlantic Defence 1947-1953 (Oslo: Norwegian University Press, 1985) examine the issue of German rearmament. The same volume contains essays by L. S. Kaplan, S. F. Wells, Jr and T. H. Etzold on the development of NATO strategy.

Good studies of American Far Eastern policy in the Truman years are Ernest May, The Truman Administration and China, 1945-49 (New York: Lippincott, 1975) and William W. Stueck, The Road to Confrontation: American Policy Toward China and Korea, 1947-1950 (Chapel Hill: University of North Carolina Press, 1981). In The China Hands: America’s Foreign Service Officers and What Befell Them (New York: Viking, 1975) E. J. Kahn describes in vivid detail the personal costs borne by America’s China specialists in the right-wing reaction to the ‘loss’ of China. Michael Schaller throws much new light on the Cold War in Asia in The American Occupation of Japan: The Origins of the Cold War in Asia (New York: Oxford University Press, 1986). Recent research on the origins of the Korean War is synthesized in Peter Lowe, The Origins of the Korean War (London: Longman, 1986). Contrasting recent accounts of the war itself, both based on television series, are Max Hastings, The Korean War (London: Michael Joseph, 1987) and Jon Halliday and Bruce Cumings, Korea: The Unknown War (New York: Viking, 1988).

On the general phenomenon of anticommunism David Caute’s The Great Fear: The Anticommunist Purge under Truman and Eisenhower (New York, 1978) is encyclopedic. A useful starting point for McCarthy the man and the ‘ism’ is Allen J. Matusow, ed., Senator Joseph McCarthy (Englewood Cliffs: Prentice-Hall, 1970), which prints some of McCarthy’s major speeches, comments by contemporaries, and a range of interpretive studies by historians and sociologists. Richard Rovere’s Senator Joe McCarthy (New York: Harper and Row, 1973; first published 1959) is an old but still valuable biographical study. Among the best recent biographies is David Oshinsky, A Conspiracy So Immense: The World of Joe McCarthy (New York: Free Press, 1982). The debate on the meaning of McCarthyism was set in motion by Daniel Bell, ed., The Radical Right (New York: Anchor Books, 1964; first published 1955). Though the sociological assumptions of Bell, Hofstadter et al. seem rather dated now, there are hints in this volume which deserve to be taken up, particularly Herbert Hyman’s comparative analysis of American and British anti-communism. M. J. Heale carefully tests various theories against the evidence of state politics in ‘Red Scare Politics: California’s Campaign Against Un-American Activities, 1940-1970’, Journal of American Studies 20 (April 1986), pp.5-32. On the Hiss case Alistair Cooke’s A Generation on Trial is an exemplary account of the trial itself while also placing it in historical perspective. Allen Weinstein’s massive and controversial Perjury: The Hiss-Chambers Case (London: Hutchinson, 1978) should be supplemented by Rhodri Jeffreys-Jones’s judicious assessment in ‘Weinstein on Hiss’, Journal of American Studies 13 (April 1979), pp.115-26.

6. Notes

  1. See Stephen Cohen, Rethinking the Soviet Experience: Politics and History Since 1917 (New York: Oxford University Press, 1985). Back
  2. Zbigniew Brzezinski, ‘Communist Ideology: Key to Soviet Thinking’, in Norman A. Graebner, ed., The Cold War: A Conflict of Ideology And Power (Lexington, Mass: D. C. Heath, 1976), pp.79-96. Back
  3. John Spanier, American Foreign Policy Since World War II (New York: Praeger, 1960), p.16; George F. Kennan, American Diplomacy, 1900-1950 (Chicago: University of Chicago Press, 1951), p.93. Back
  4. Walter Lippmann to Arthur Schlesinger Jr, September 25, 1967, in John Morton Blum, ed., Public Philosopher: The Selected Letters of Walter Lippmann (New York: Ticknor and Fields, 1985), pp.615-16. Back
  5. William Appleman Williams, The Tragedy of American Diplomacy (Second revised and enlarged edition, New York: Delta, 1972); Gabriel Kolko The Politics of War: Allied Diplomacy and the World Crisis 1943-45 (London: Weidenfeld and Nicolson, 1969) and Joyce and Gabriel Kolko, The Limits of Power: The World and United States Foreign Policy, 1945-1954 (New York: Harper and Row, 1972). Back
  6. See David Horowitz, From Yalta to Vietnam (Harmondsworth: Penguin, 1967). Back
  7. See Thomas McCormick, ‘Drift or Mastery: A Corporatist Synthesis for American Diplomatic History’, Reviews in American History 10 (December 1982), pp.318-330. Back
  8. See John Lewis Gaddis, The United States and the Origins of the Cold War, 1941-1947 (New York: Columbia University Press), pp.359-61. Back
  9. Daniel Yergin, Shattered Peace: The Origins of the Cold War and the American National Security State (Harmondsworth: Penguin, 1980). Back
  10. See especially Roy Douglas, From War to Cold War: 1942-1948 (London: Macmillan, 1980); Victor Rothwell, Britain and the Cold War, 1941-1947 (London: Cape, 1982); Terry Anderson, The United States, Great Britain, and the Cold War, 1944-1947 (Columbia: University of Missouri Press, 1981); Fraser J. Harbutt, The Iron Curtain: Churchill, America and the Origins of the Cold War (New York: Oxford University Press, 1986); and Henry Butterfield Ryan, The Vision of Anglo-America: The US-UK Alliance and the Emerging Cold War, 1943-1946 (Cambridge: Cambridge University Press, 1987). Back
  11. Gaddis, The United States and the Origins of the Cold War, p.vii. Back
  12. Quoted in John Foster Dulles, ‘To Save Humanity from the Deep Abyss’, New York Times Magazine, July 30, 1950, p.35. Back
  13. Spanier, American Foreign Policy Since World War II, ch. 1; and Bruce Kuniholm, ‘The Origins of the First Cold War’, in Richard Crockatt and Steve Smith, eds., The Cold War Past and Present (London: Allen and Unwin, 1987), pp.51-54. Back
  14. Louis Hartz, The Liberal Tradition in America (New York: Harcourt Brace, 1955), Ch. I. Back
  15. Geoffrey Barraclough, An Introduction to Contemporary History (Harmondsworth: Penguin, 1967), pp.120, 118. Back
  16. Vojtech Mastny has shown, however, that the dissolution of the Comintern had other motives. Stalin had found the Comintern an unserviceable means of controlling non-Soviet Communist parties and believed that he could more easily achieve his purposes by dealing with them individually. See Russia’s Road to the Cold War: Diplomacy, Warfare and the Politics of Communism, 1941-1945 (New York: Columbia University Press, 1979), pp.95-7. Back
  17. Milovan Djilas, Conversations with Stalin (New York: Harcourt Brace, 1962), p.114. Back
  18. Quoted in Robert A. Pollard, Economic Security and the Origins of the Cold War, 1945-1950 (New York: Columbia University Press, 1985), p.8. Back
  19. Gaddis, The United States and the Origins of the Cold War, p.23. Back
  20. Quoted in Mastny, Russia’s Road to the Cold War, p.218. Back
  21. See Gaddis, The United States and the Origins of the Cold War, pp.238-41. Back
  22. See especially Gar Alperovitz, Atomic Diplomacy: Hiroshima, Potsdam. The Use of the Atomic Bomb and the Confrontation with the Soviet Union (Revised edition, New York: Penguin, 1985). Back
  23. Gaddis, The United States and the Origins of the Cold War, p.246. Back
  24. George F. Kennan’s ‘long telegram’ in Joseph M. Siracusa, ed., The American Diplomatic Revolution: A Documentary History of the Cold War (Milton Keynes: Open University Press, 1976), p.195. The speeches by Byrnes and Churchill are printed in the same volume, pp.201-6 and 206-9. Back
  25. In Siracusa, ed., The American Diplomatic Revolution, p.180. Back
  26. In Siracusa, ed., The American Diplomatic Revolution, p.210. Back
  27. On the question of the distinction between ‘open’ and ‘exclusive’ spheres of influence see Eduard Marks, ‘American Policy Toward Eastern Europe and the Origins of the Cold War: An Alternative Interpretation’, Journal of American History 68 (1981-82), pp.313-336. Back
  28. See Isaac Deutscher, Stalin: A Political Biography (New York: Vintage, 1960), pp.555-565. Back
  29. See, for example, Thomas A. Paterson, Meeting the Communist Threat: Truman to Reagan (New York: Oxford University Press, 1988), Chs. 1-3, 5 and 6. Back
  30. Walter Bedell Smith, Moscow Mission, 1946-1949 (London: Heinemann, 1950), p.327. Back
  31. John Lewis Gaddis, The Long Peace: Inquiries into the History of the Cold War (New York: Oxford University Press, 1987), p.46. Back
  32. Arnold J. Toynbee, A Study of History (New York: Oxford University Press, 1946). A Time Magazine cover story on Toynbee appeared on March 17, 1947 (pp.29-32), two days after the announcement of the ‘Truman Doctrine’ calling for aid to Greece and Turkey. The author of the Time article was Whittaker Chambers, later famous as the accuser of Alger Hiss. Back
  33. In Siracusa, ed., The American Diplomatic Revolution, p.54. Back
  34. George F. Kennan (but published under the pseudonym ‘X’), ‘The Sources of Soviet Conduct’, Foreign Affairs XXV (July 1947), pp.573, 575, 576. Back
  35. In Siracusa, ed., The American Diplomatic Revolution, p.227. Back
  36. Yergin, Shattered Peace, p.283. Back
  37. Quoted in Joseph M. Jones, The Fifteen Weeks (New York: Harcourt Brace, 1955), p.193. Back
  38. George F. Kennan, Memoirs, 1925-1950 (Boston: Atlantic Monthly Press, 1967), pp.314-17. Back
  39. Kennan, Memoirs, 1925-1950, p.359. Back
  40. Kennan, ‘The Sources of Soviet Conduct’, p.582. Back
  41. Kennan, Memoirs, 1925-1950, p.361. Back
  42. Gaddis, The Long Peace, pp.48, 49. Back
  43. In Siracusa, ed., The American Diplomatic Revolution, pp.253, 254, 255. Back
  44. PPS 1 in Thomas Etzold and John Lewis Gaddis, ed., Containment: Documents on American Policy and Strategy, 1945-1950 (New York: Columbia University Press, 1978), pp.102-3, 104. Back
  45. PPS 1 in Etzold and Gaddis, ed., Containment, p.106. Back
  46. See William Taubman, Stalin’s American Policy (New York: Norton, 1982), pp.172-73. Back
  47. Louis Halle, The Cold War As History (New York: Harper and Row, 1967), p. 164. Back
  48. Yergin, Shattered Peace, p.5. Back
  49. In Etzold and Gaddis, ed., Containment, p.66. Back
  50. Walter LaFeber, America, Russia, and the Cold War 1945-1984 (Fifth edition, New York: Knopf, 1985), p.83. Back
  51. Moscow has now admitted officially what it has denied for four decades—that Fuchs (who had worked on the Manhattan project during the war) did provide the Soviet Union with atomic secrets. See The Guardian, August 3, 1988, p.7. Back
  52. NSC-68 in Etzold and Gaddis, ed., Containment, pp.389, 402. Back
  53. John Lewis Gaddis, Strategies of Containment: A Critical Appraisal of Postwar American National Security Policy (New York: Oxford University Press, 1982), pp.114-15. Back
  54. NSC-68 in Etzold and Gaddis, ed., Containment, p.426. Back
  55. In A. J. Matusow, ed., Senator Joseph McCarthy (Englewood Cliffs: Prentice-Hall, 1970), pp.22, 59, 26. Back
  56. See especially, Daniel Bell, ed., The Radical Right (New York: Anchor Books, 1964; first published 1955). Back
  57. See E. J. Kahn, The China Hands: America’s Foreign Service Officers and What Befell Them (New York: Viking, 1975). Back
  58. Owen Lattimore, Solution in Asia (Boston: Atlantic Monthly Press, 1945), p.122. Back
  59. Quoted in Kahn, The China Hands, pp.174-5. Back
  60. Joseph McCarthy, America’s Retreat From Victory: The Story of George Catlett Marshall (1952). Back
  61. Foster Rhea Dulles, American Policy Toward Communist China (New York: Thomas Crowell, 1972), pp.31-2. Back
  62. The China White Paper (Reissued with a new introduction by Lyman van Slyke, Stanford: Stanford University Press, 1967), Volume I, p.xvi. Back
  63. Michael Schaller, The American Occupation of Japan: The Origins of the Cold War in Asia (New York: Oxford University Press, 1986), p.51. Back
  64. Schaller, The American Occupation of Japan, p.51. Back
  65. Schaller, The American Occupation of Japan, pp.58-61. Back
  66. See documents 25-34 in Etzold and Gaddis, ed., Containment. Back
  67. Schaller, The American Occupation of Japan, p.290. Back
  68. Walter Lippmann to Quincy A. Wright, January 23, 1948 in Blum ed., Public Philosopher: Selected Letters of Walter Lippmann, p.505. Back
  69. Jacques Rupnik, ‘Eastern Europe and the New Cold War’, in Crockatt and Smith, eds., The Cold War Past and Present, pp.204-5. Back
  70. Michael Cox, ‘The Cold War and Stalinism in the Age of Capitalist Decline’, Critique 17 (1986), pp.17-82. Back


David Timms, Nathaniel Hawthorne

BAAS Pamphlet No. 17 (First Published 1989)

ISBN: 0 946488 07 X
  1. Hawthorne in His Time
  2. The Tales
  3. The Scarlet Letter
  4. The Marble Faun
  5. Hawthorne In Our Time
  6. Guide to Further Reading
  7. Notes
British Association for American Studies. All rights reserved. No part of this pamphlet may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without permission in writing from the publisher, except by a reviewer who may quote brief passages in a review. The publication of a pamphlet by the British Association for American Studies does not necessarily imply the Association’s official approbation of the opinions expressed therein.

1. Hawthorne in His Time

I

From several points of view, Hawthorne occupies a unique place in American literary history. In a now famous review of Mosses from an Old Manse, ‘Hawthorne and his Mosses’, written while he was trying to finish Moby-Dick, Melville made a bold bid to claim major status for a man who was soon to become his friend, by comparing him with Shakespeare.[1] While I would not be drawn into a debate about literary league tables, what certainly can be said is that Hawthorne occupies a position in American literary culture analogous to Shakespeare’s in Britain. Shakespeare was born on St. George’s day; Hawthorne, like Yankee Doodle Dandy, on the Fourth of July. Professor Schoenbaum has shown how Shakespeare’s image has varied in relation to the life and times of the biographer.[2] The same is true of Hawthorne. He has been the perfect Victorian gentleman created by his son in Nathaniel Hawthorne and his Wife (1885) and his wife in her editions of the ‘passages’ from his notebooks (Sophia Hawthorne altered the word ‘bottom’ to ‘seat’ even when the bottom in question was that of a chair). To the cosmopolitan Henry James in Hawthorne (1879) he was a bewildered provincial. He was a Rebellious Puritan in 1927 but A Modest Man in 1940. In Randall Stewart’s Nathaniel Hawthorne (1948), published in the anxious early days of the Cold War, he was a family man, concerned about his domestic and civic responsibilities, distinctly not un-American. I am distorting a little, of course, in the interests of a good story; but the broad outline of the pattern is there.[3]

II

Hawthorne was the first native author in America to be thought of as ‘classic’. The earliest of the works that stake his claim to major status dates from the early 1830s, when he was, in the words of one of his own prefaces, ‘the obscurest man of letters in America’,[4] but by 1843 he was able to write to his friend Bridge ‘nobody’s scribblings seem to be more acceptable to the public than mine’. (XVI, 688) His long fictions, produced in the 1850s, were widely reviewed in America and in Britain; one review, possibly written by George Eliot, called The Blithedale Romance ‘the finest production of genius in either hemisphere, for this quarter at least … ‘.[5] The first twenty years after his death in 1864 saw the publication of three ‘Collected’ editions of his works, and there was a market for anything that came from his pen, for his wife and son continued for some years to bring out journals and notebook material further and further removed from any state that could have been intended for publication.

According to Richard Brodhead, for his literary descendants Hawthorne achieved and preserved an almost talismanic force. Brodhead comments that the experience of first reading Hawthorne is a repeated trope in the literary autobiographies of nineteenth-century figures.[6] The best-known case is that of Henry James, who records reading all Hawthorne’s works straight through, ‘in one sweet draught’, in the summer of the older writer’s death.[7] Less well known and far less likely is the case of the naturalist Hamlin Garland, who recalls reading Hawthorne for the first time and finding in him a literary touchstone against which he could judge all other literature, and even people, for he records that a girlfriend’s not liking his new enthusiasm finished the romance.[8]

But this sketch of Hawthorne’s standing does not sufficiently recognise the fact that for all his ‘classic’ status Hawthorne’s was very much a succès d’estime: he was never a popular author in the sense of having best-sellers, except perhaps in the case of The Scarlet Letter, which owing to its subject-matter was a succès de scandale. His career as a whole is instructive about the conditions of professional authorship in nineteenth-century America. Though, as I note above, Hawthorne found himself in considerable demand by 1843, he goes on to say in the same letter ‘I find it a tough match to gain a respectable support by my pen’. (XVI, 688) J. Donald Crowley in his excellent Hawthorne: The Critical Heritage says that the total income from Mosses from an Old Manse for 1851 and 1852 was $150. Sophia Hawthorne saved a similar sum from the income from her decorative work.[9] It gave the Hawthornes some financial leeway after the writer’s notorious expulsion from his job in the Salem Custom House: in effect, it gave him the time to write The Scarlet Letter. These figures might be compared with the $30,000 Hawthorne saved from the income from his Consular appointment in Liverpool between 1853 and 1857. ‘Dollars damn me’,[10] Melville said to Hawthorne himself in a different context, and Hawthorne must have known exactly what his friend meant: he took a succession of public appointments that did not so much eke out the income he earned from his writing as vice versa.

Some solutions have been offered to the paradox that Hawthorne was successful but not popular. Jane Tompkins in her controversial Sensational Designs suggests that his contemporaneous reputation was largely achieved because the American literary establishment ‘in the late forties and early fifties, a small group of men, well-known to each other personally, focused on Boston, acted as a kind of literary freemasonry to puff each other’s reputations’. She points, for instance, to an influential review of Twice-Told Tales by Longfellow, whom Hawthorne had known at Bowdoin College.[11] More persuasive than the conspiracy theory is Richard Brodhead’s account of the marketing expertise of Hawthorne’s publisher, James Fields, who ‘established “literature” as a market category with Hawthorne one of the brand leaders, marketed specifically as a classic’.[12] It is surely significant too that Hawthorne’s ‘classic’ status should have been established and consolidated in the 1850s at a time when chronic sectional conflict was becoming acute. New England’s endeavour to elevate an author who so insistently uses the setting and history of his own region is perhaps linked to the rivalry generated by the ‘King Cotton’ prosperity of the South. It is analogous to nineteenth-century Britain’s wish to claim cultural superiority as its economic and political child looked more and more parental.

III

But, in fact, in many respects Hawthorne was out of step with his time. If Harriet Beecher Stowe may be taken to be representative of that ‘damned mob of scribbling women’[13] about whom Hawthorne complained to his publisher (as she was certainly the most successful in terms of public impact) it is clear that what Hawthorne was writing was in general against the grain of public taste. The American public wanted tales of adventure like Cooper’s or Dana’s; local colour that appealed to the regional interest like the works of John Pendleton Kennedy or Augustus Baldwin Longstreet; thinly- or even undisguised tracts like those of Susan Warner or Timothy Shay Arthur’s Ten Nights in a Bar-Room; or books that managed to combine all three, like, triumphantly, Uncle Tom’s Cabin. Hawthorne himself noticed this ruefully in a late letter to Fields: “My own opinion is, that I am not really a popular writer, and that what popularity I have gained is chiefly accidental, and owing to other causes than my own kind or degree of merit”.[14] Nina Baym in The Shape of Hawthorne’s Career claims that Hawthorne was trying to find his audience, and changed his style in successive books to suit what he believed (wrongly, it seems) to have been public taste.[15]

But this surely does not explain the attitude Hawthorne adopts in his introductions, where, in seeking to explain the precise nature of his works, and in particular in his repeated attempts to establish the special status of the romance as opposed to the novel, he clearly recognises that, far from being in the main line of public taste, his fictions are precisely what the public is not used to.

Hawthorne was also out of step with the majority of his literary peers. The intellectual fashion of his day in New England was Transcendentalism, but despite his involvement in the Brook Farm experiment Hawthorne was no Emersonian optimist. A mind for which sin and evil were such potent presences could hardly have been consonant with one for which evil was merely the privation of good, as cold is the privation of heat; and in fact Hawthorne satirized Transcendentalism directly in his pastiche of The Pilgrim’s Progress for the nineteenth century, ‘The Celestial Railroad’. ‘Giant Transcendentalist’ has usurped the cavern of Pope and Pagan in Hawthorne’s Valley of the Shadow of Death, and there he lurks, waiting to seize unwary travellers and fatten them up for his table with large meals of ‘smoke, mist, moonshine, raw potatoes, and saw-dust’. (326) It took a Melville to notice that Hawthorne was one of those who say ‘No – in thunder’,[16] though it seems obvious to us today.

Just as important, perhaps, Hawthorne differed with his Transcendentalist contemporaries on the proper content of literature. Emerson in ‘The American Scholar’ called for an indigenous literature that would reject the remote, and instead deal with ‘the meal in the firkin, the milk in the pan’.[17] Thoreau and Whitman in their different ways would be able to take his advice. But while Hawthorne used the materials of the American past in his fiction, he spelt out very clearly that as far as he was concerned ‘no author, without a trial, can conceive of the difficulty of writing a romance about a country where there is no shadow, no antiquity, no mystery, no picturesque and gloomy wrong, nor anything but a commonplace prosperity, in broad and simple daylight, as is happily the case with my dear native land’. (IV, 3)

Hawthorne also disagreed with the Transcendentalists over the basic materials that constitute literature: words themselves. It is clear from the comments in his notebooks that for him language and the world it refers to are closed systems; that reality is intractable and finally untranslatable, and language can only be suggestive of it. The Transcendentalists were dissatisfied with the current state of language: Emerson looked for a ‘language of fact’ and felt that ‘wise men pierce … rotten diction and fasten words again to visible things’.[18] Thoreau wanted language with the immediacy of an animal cry.[19] The poet for Emerson is one who names, and his disciple Whitman formulated the notion that ‘Names are magic’.[20] Behind their views lies the belief that the problem of the artist is to bring words into a proper relation with things; Hawthorne’s statements, and his practice, imply a different proposition, that words and things are inevitably separate from each other.

IV

But Hawthorne was more completely out of step with his European peers. The full-scale ‘romances’ on which Hawthorne’s reputation chiefly rests all belong to the 1850s: The Scarlet Letter was published at the beginning of the decade and The Marble Faun, written at the end. The same decade, however, saw the first use of the term ‘realism’ in England, and the appearance of the first works of Trollope and George Eliot. ‘To assert specifically when [realism] began is to ensure disagreement’, says George Becker in the introduction to a collection of contemporaneous documents on the topic, ‘yet it seems clear enough that the decade of the 1850s constitutes a kind of watershed’.[21] It is true that Hawthorne regretted to Longfellow that his works were not more substantial. (XV, 251) He said in ‘The Custom House’ that the ‘ordinary characters’ of his everyday experience might furnish the material for ‘a better book than I shall ever write’, (67) and later expressed to Fields his admiration of Trollope’s work, ‘written on the strength of beef and through the inspiration of ale’.[22] But the fact is that he never did write such a book, and he spent a good deal of effort on explaining that this lack of solidity was an important feature of his fiction and not simply an omission. Hawthorne’s best-known and most succinct definition of what he understood by the term ‘romance’ is contained in the preface to The House of the Seven Gables:

When a writer calls his work a Romance, it need hardly be said that he wishes to claim a certain latitude, both as to its fashion and material, which he would not have felt entitled to assume, had he professed to be writing a novel. The latter form of composition is presumed to aim at a very minute fidelity, not merely to the possible, but to the probable and ordinary course of man’s experience. The former – while, as a work of art, it must rigidly subject itself to laws, and while it sins unpardonably, so far as it may swerve aside from the truth of the human heart – has fairly a right to present that truth under circumstances, to a great extent, of the writer’s own choosing or creation. (II, 1)

Hawthorne seems to be making two claims here. The first relates to the raw material of any fiction, the elements of character, setting and action, which in a novel conform to ‘the probable and ordinary course of man’s experience’. In romance, therefore, we should not expect verisimilitude in these things. As I hope to show below, Hawthorne was as good as his word on these matters: characters are not ’rounded’ or ‘realistic’, such as you might see; settings are often impossible to visualise; and plots are not driven by causality.

From their very different standpoints, Hawthorne’s American and European contemporaries were alike in their stress on the visible and the sense of sight. The word that echoes throughout the realists’ own critical writings on their work is ‘observation’, and, from Stendhal to George Eliot, the metaphor they used to characterise their art is that of the faithful reflecting mirror. Ruskin and Carlyle as well as Arnold voiced some version of the dictum that the purpose of art is to ‘see the object as in itself it really is’.[23] Emerson and Thoreau in the USA had inherited the Romantic belief that seeing had to be instinct with ‘feeling’, expressed for example in Coleridge’s ‘Dejection’ ode: ‘I see them all so excellently fair,/I see, not feel, how beautiful they are!’[24]

The Transcendentalists felt that, like language, seeing needed to be reinvented. Emerson famously wanted to be a transparent eyeball.[25] F. O. Matthiessen points out:

Concern with the external world… came to mark every phase of the century’s increasing closeness of observation, whether in such scientific achievements as telescope or microscope, or in the painter’s new experiments with light, or in the determination of the photographers and realistic novelists to record every surface detail.[26]

It was a concern that Hawthorne expressly ignored.

This brings us to the second aspect of Hawthorne’s definition. According to Marshall McLuhan it is the isolation of the sense of sight produced by print culture that makes the ‘fixed point of view’ possible, and with it the very idea of the author.[27] It is surely this ‘fixed point of view’ that is the foundation stone of the realist novel, the conviction that there is a common world of phenomena referred to by the author, the broad outlines of which we cannot but accept as independent of our individual perception of it. However, the classic plot of the realist novel is one of ‘Lost Illusions’. The consequence is that the characteristic attitude of the author’s representative in the realist novel, the omniscient narrator, is paternal, guiding a readership away from the mistakes made by erring characters. This attitude is summed up splendidly by Trollope in his ‘Conclusion’ to Barchester Towers: ‘the end of a novel, like the end of a children’s dinner party, must be made up of sweetmeats and sugarplums’. Trollope dispenses, reassures, comforts: ‘Let the gentle reader be under no apprehension whatever. It is not destined that Eleanor shall marry Mr. Slope or Bertie Stanhope’.[28] In the late nineteenth century the narratorial voice moved from this paternal authoritarianism to the even more confident ‘invisibility’ of, say, Zola, whose narrator has no need to identify himself because he is reporting what anyone must observe in the context of his world, if they have eyes to see.

Jonathan Culler glosses Sartre’s view of the nineteenth century novel in general: it is ‘told from the viewpoint of wisdom and experience and listened to from the viewpoint of order’.[29] The language of realism, according to John Ellis and Rosalind Coward, is ‘the language of mastery’.[30] Michel Foucault looks at matters from another direction but comes up with a complementary view of the ‘author’ as governor:

The author allows a limitation of the cancerous and dangerous proliferation of significations within a world where one is thrifty not only with one’s resources and riches, but also with discourses and their significations. The author is the principle of thrift in the proliferation of meaning.[31]

Hawthorne suggests that the writing of romances puts the writer into a situation in which it is possible to ‘sin unpardonably’. The idea of the unpardonable sin is itself the explicit theme of one of his stories, ‘Ethan Brand’, and so its mention here is doubly self-referential; but what this notion introduces is the idea that the writing of fiction is a moral business as well as an aesthetic one, that the relationship of the storyteller, his stories and the world is problematic. In fact, he asserts the reverse of that confidence that I have suggested to have been characteristic of the realist writer. The teller behind Hawthorne’s tales and romances is not readily identifiable as a coherent authorial personality who dispenses an unquestionable truth; he will not adopt the paternal role, and is suspicious of the authoritarianism that the role conventionally implies.

V

Hawthorne wrote to a friend on the publication of The Marble Faun that some of the British reviews ‘grumble awfully’ about his ‘wild … fiction’, because ‘it is not every man that knows how to read a Romance’.[32] Instead of seeing Hawthorne’s prefaces as evidence of his wish to follow an audience, we might see them as his attempt to create one: not so much explanations of how the works were written as of how they might be read. It is perhaps in the relationship of reader and writer inscribed in Hawthorne’s texts that we might find his most distinctive qualities.

2. The Tales

I

Disgusted, possibly, by the lack of success of the anonymously published Fanshawe (1828), or just disgusted with its awkward blend of Gothic, melodrama and autobiography, Hawthorne recalled the book, destroyed as many copies as he could, and embarked on writing tales and sketches, for which he found a ready market in the gift-books and annuals that were one of the publishing staples of his time. He eked out a thin income from the stories with literary hack-work, but was reliant on the financial support of his mother and her family. He used this freedom well, however, for in addition to writing, he was reading copiously in the histories of his region, and also travelling around it. The literary fruits of these years were the two series of Twice-Told Tales (1837; 1842), and two further volumes of stories, Mosses From an Old Manse and The Snow Image and Other Twice Told Tales, which followed in 1846 and 1851.

As collections, these volumes offend our urge to look for qualitative ‘development’, for some of the stories currently considered among Hawthorne’s best, like ‘My Kinsman, Major Molyneux’ and ‘Roger Malvin’s Burial’, are very early, dating from 1832. It should be noted, however, that this assessment of his ‘best’ would probably not have been understood by his contemporaries, who consistently singled out for highest praise pieces now largely ignored, like ‘Little Annie’s Ramble’.

The collections are miscellanies, and their inclusions do not simply correlate with dates of original composition. Hawthorne’s selections, and what they suggest about his own relative valuation of his tales, might seem to us surprising, for he omitted both 1832 stories from the first collection, preferring to include ‘The Gentle Boy’, for instance. However, it is clear from Arlin Turner’s reconstruction of Hawthorne’s earliest efforts in the short story that his original intention was to put together a formally connected set of narratives.[33] The first attempt was to be called ‘Seven Tales of my Native Land’, possibly containing stories that are autobiographically referred to as having been burnt, in the later ‘Alice Doane’s Appeal’ and ‘The Devil in Manuscript’. The second projected collection, ‘The Story-Teller’, was much more ambitious. Here too Hawthorne planned a series of ‘American’ tales, linked by the itinerant figure named in the title. Parts of both collections survive, though it is difficult to be sure exactly which stories might originally have been intended for which project. But if in the extant collections the ‘local colour’ aspect of the tales is diminished, a self-referential acknowledgement of artifice, suggested by ‘The Story-Teller’ project, remains, and is characteristic of all his fictions.

II

Many of the tales belong explicitly to the realm of the fantastic. ‘Earth’s Holocaust’, for example, takes as its theme the building of a huge bonfire on which all the outdated usages of the world are to be heaped up and burned. I have already referred to ‘The Celestial Railroad’, where in Hawthorne’s version of Pilgrim’s Progress Pilgrim makes his journey by rail. Only slightly less fantastic is the notion of a search for an abstract absolute like ‘the unpardonable sin’ in ‘Ethan Brand’, or perfection in a human being, in ‘The Birth-mark’.

If he does not seek real settings, neither does Hawthorne try to create ‘real’ characters. The tales show that he conventionally uses types, and that there is a fairly short cast-list of these types. There is the artist in ‘Drowne’s Wooden Image’ or ‘The Artist of the Beautiful’ and the overlapping figure of the scientist in ‘Rappaccini’s Daughter’ or ‘The Birth-mark’. Another recurring figure with qualities in common with the artist/scientist is one whom Melville might have called an ‘isolato’, the man for some reason cut off from the rest of humanity, like Ethan Brand. There are also the antithetical figures of the neophyte, like the heroes of ‘Young Goodman Brown’ and ‘My Kinsman, Major Molyneux’, and the authoritarian, like the eponymous ‘Gray Champion’, or Endicott, the iron Puritan who recurs in the colonial tales. This is not to say that the characters lack verisimilitude of any kind, but what they have is a kind that focuses and concentrates on an abstract quality, rather than one that expatiates and includes the miscellaneousness of a ‘real’ individual. ‘Roger Malvin’s Burial’ takes as its subject the operations of a guilty conscience, not the character of one on whom a guilty conscience is operating.

‘Roger Malvin’s Burial’ offers an excellent illustration of Hawthorne’s techniques in his tales, and is also interesting in being the subject of one of the most persuasive analyses in Michael J. Colacurcio’s influential book on Hawthorne’s tales, The Province of Piety. Like many of the tales, it takes as its starting point a specific episode of New England history, in this case the experience of a father and his son-in-law who fought the Pigwacket Indians in ‘Lovewell’s Fight’. Reuben Bourne leaves his father-in-law-to-be, the Roger Malvin of the title, to die at the foot of a great rock after the fight. Roger persuades him to go, and the narrator carefully details the operation of Reuben’s conscience as he leaves the old man.

He is in one of those impossible moral situations in which any course of action is wrong. Simply to stay and await the old man’s inevitable death, risking his own life, would be absurd: Roger’s daughter might lose both father and lover, and besides, it may be that Reuben will find help and return to the rescue. On the other hand, to leave strains Reuben’s filial wish to care for the old man and to give him a decent burial. Finally he leaves, promising to return, when he is himself well, to bury his friend. Reuben is finally found and restored to health. He marries Roger’s daughter, but he is never able to bring himself to tell her that he left her father still breathing; and because he cannot tell her, he can never return to fulfil his promise. At the end he atones for his guilt by unwittingly shooting his own son at the exact scene of Roger Malvin’s death.

Colacurcio’s reading of the story’s moral import is impeccable: Reuben’s fault is not what he does or does not do, it is that he cannot admit to himself or to his wife that his action has been other than heroic. And the application of the story to Hawthorne’s historical understanding of his own times is surely accurate too. Colacurcio points out that in 1825 New England resounded with celebrations of the centennial of Lovewell’s Fight, and that the centennial pictured the engagement as a heroic defence of white civilisation against the savages. What ‘Roger Malvin’s Burial’ refers to is that tendency of a people to prefer a mythic version of their history from which they emerge with honour to a version that recognises the sins that may have been committed to ensure their success.[34]

Colacurcio’s agenda is not in the least hidden: it is to rescue Hawthorne from formalist critics who seek to remove him from a historical context. His basic warning is salutary and certainly right, that the formalist ignores Hawthorne’s genuine engagement with his times and his interest in the function of the past. But the claim that Hawthorne’s tales often turn on some precise point of abstruse Puritan doctrine or exact historical reference is surely overplayed. Hawthorne’s flexibility over ‘real’ events and his distortions of ‘real’ settings suggest that his interest is in the moral reflections that history can stimulate, and that he himself will be cavalier with ‘fact’ if a moral issue can be foregrounded thereby. He does exactly this in ‘Roger Malvin’s Burial’ when he takes the delineaments of the ‘Charter Oak’, the hiding place of the Connecticut ‘liberties’ sought by the evil Governor Andros, the villain in several of his own tales, for the tree in Massachusetts beneath which Malvin dies. Nina Baym’s view is more accurate, that Hawthorne had a ‘lack of interest in theological niceties’ and that ‘Hawthorne was not interested in making history the subject of his fiction or in creating fictions for the purpose of commenting on the American past. (It is arguable whether these were ever his purposes in fiction.)’[35]

III

In one sense Colacurcio’s information about the contemporary celebration of Lovewell’s Fight is unnecessary. Hawthorne’s narrator could hardly be more explicitly ironic: the fight was an incident ‘naturally susceptible of the moonlight of romance’, and there might be found ‘much to admire in the heroism of the little band’, if, that is, ‘imagination’ might cast ‘certain circumstances judiciously in the shade’.(51) This is obviously no uncritical boost to Hawthorne’s martial forebears, and we might surely spot it, even if we had not read other ironic treatments of similar topics, like ‘Endicott and the Red Cross’. But it is important to note that the mode of the story is irony, for its whole point is that, as I suggest above, there are situations in which no course of action is wholly the right one. More than that, such situations seem to me precisely the ones Hawthorne is most interested in, and many of the tales and romances turn on exactly the point that there are individuals who cannot accept the fact, who assume in doctrinaire fashion that what they say is inevitably right, or that what their laws proscribe can never be attended with any good outcome. For that is the further consequence of the ironic vision: while it is dishonest to claim that, say, the history of the relations of the white colonists with the Indians redounds throughout to the credit of the colonists, it is sentimental to pretend that in colonising a country it was possible to do it without bloodshed or detriment to the original occupants.

IV

One such figure who does appear to claim always to be right, ‘to stand next to God’, as John Fowles says, ‘within a convention universally accepted’ in mid-nineteenth-century fiction, is the omniscient author/narrator.[36] It is exactly the confidence of the single point of view that Hawthorne found uncongenial in the authorial role, and authorship in general seems to have been uncomfortable for him. In life he was not fond of the company of other writers, preferring publishers like Fields and Ticknor to fellow authors, and, in England, mixing with businessmen like Henry Bright or Francis Bennoch rather than the literary peers with whom he could have claimed company. In the tales, characters who identify themselves as tellers of stories are almost always morally shaky. The itinerant pedlar and self-appointed bearer of news in ‘Mr. Higginbotham’s Catastrophe’ spreads inflammatory scandal on scanty evidence in order that he might himself seem more important. The author-narrators of ‘Earth’s Holocaust’ or ‘The Seven Vagabonds’ or ‘The Celestial Railroad’ are naif or puzzled or uninformed about the events they describe.

Many of the tales have what appears to be an editorial apparatus that offers ‘commentary’ on the texts but actually serves to undermine the status of the writer as sole authority. This sometimes takes joke forms, like the ‘editorial’ footnote to ‘Time’s Portraiture’ which deplores the modern bearer of the surname ‘Hathorne’ for adding a supernumerary ‘w’; and is sometimes more elaborate, as in the prefatory matter to ‘Rappaccini’s Daughter’, which states that the work is a translation from the French of one ‘Aubepine’. The ‘editor’ describes the works of Aubepine in terms much like those in which Hawthorne describes his own in the preface to Mosses from an Old Manse, except that the works of the French master are literally voluminous, whereas Hawthorne’s own are mere tales. Otherwise this editor is as weary of his subject’s work as Hawthorne claimed to be of his own: reading through Aubepine’s work has been ‘wearisome’, inspiring ‘affection’ but not ‘admiration’.(387)

Hawthorne does not allow these tales to fall into easily categorised genres, and consistently blurs distinctions: it is not simply, as Poe says, that many of them are more essays than tales. What, for instance, is ‘Endicott and the Red Cross’? On the one hand it is clearly not ‘pure’ fiction. It recounts an episode from Puritan history in which the warlike John Endicott ripped from the English flag the symbol of episcopacy to demonstrate colonial defiance of Charles I’s wish for conformity in religious practice. Much of it is given over to a ‘sketch’ of the Salem of the 1630s: but the items included, lists of townspeople mutilated for religious transgressions, and engines of Puritan authority, are clearly included not for local colour purposes but in order to make a thematic point. The events include the central one I describe, but also a supposed dialogue between Endicott and the acceptable face of Puritanism, Roger Williams, that cannot have taken place, at least in the circumstances the piece elaborates, but which is also there for thematic purposes. ‘Endicott and the Red Cross’ cannot therefore be considered a historical sketch either.

The same goes for tone as well as genre status. The prefatory matter to ‘Rappaccini’s Daughter’ I mention above introduces one of Hawthorne’s darkest tales. ‘Little Annie’s Ramble’, consistently singled out by Hawthorne’s contemporaneous reviewers as distilling his lightest and most charming qualities, when read more suspiciously becomes a much more dubious and complex text. It too begins by looking like a sketch, the unnamed narrator using the device of taking a little girl for a walk to introduce a description of a contemporary New England provincial town. The narrator is nowhere signalled as being anyone other than a representative of the author, but careful reading reveals something more sinister. We learn at the end that this ‘ramble’ has caused much distress to Annie’s mother, since the narrator failed to let her know that he was taking her little girl away. It is clear that the purpose of the walk is less to entertain Annie than to gratify some urge of his own: ‘the pure breath of children revives the life of aged men’ whose ‘moral nature [is] revived by their free and simple thoughts, their airy mirth, their grief, soon roused and soon allayed’. (IX, 129) As it turns out, he is more interested in her grief than in her mirth, for he insistently draws her attention to things that might be supposed to upset a little girl. He wonders if in the busy streets the ‘rattling gigs will be smashed to pieces before our eyes’ (IX, 122) and points out to her ‘a shrill voice of affliction, the scream of a little child, rising louder with every repetition of that smart, sharp, slapping sound, produced by an open hand on tender flesh’. (IX, 128) The relish in the alliteration here condenses one of the implications of the story as a whole: this narrator is aestheticising human suffering. Far from illustrating Hawthorne’s wholesomeness, ‘Little Annie’s Ramble’ illustrates his grasp of human perversion.

The story also illustrates his understanding of the questionable aspect of storytelling, for what is the author of this tale doing but something like what is being done by his creation? The difference of course is that the act is held up for our inspection by Hawthorne, being put in a critical context that confesses its own dubious status in a way that the narrator of the story never explicitly acknowledges: he always stoutly maintains that he is merely taking a little girl for a walk.

V

Colin McCabe, in an essay on Middlemarch, objects to the way in which George Eliot pretends that the language of her narrator is a metalanguage that has more validity than the language of her characters, and would thereby remove from her readers the opportunity to make judgements of their own about the actions in the story.[37] The care with which Hawthorne establishes relations between narrator, characters and reader is illustrative of his understanding of the authoritarianism McCabe protests against. Sometimes the touch is light in intent, at least, if not in execution. The narrator of The House of the Seven Gables hovers on the brink of an unmarried lady’s bedroom, questioning the propriety of using his omniscience to go inside.

A more subtle instance can be observed in ‘Young Goodman Brown’. The story is again one of Hawthorne’s earliest, from 1835, and takes as its subject the eponymous hero’s compact with the devil. Brown, recently married, sets off into the forest to be initiated in a meeting of a witches’ coven. He thinks of himself as the only resident of his village to have any truck with black magic, and he is dismayed to find that his neighbours have made this journey before him. They are all there for the festivities, including those he most respected; worse yet, including Faith, his bride. Maddened by despair, Brown rushes into the heart of the forest and collapses. In the morning he returns to the village unsure of whether his experiences were real or a dream. He cannot decide, but from that day forwards lives an embittered and cynical man, ever distrustful of those he had thought worthy.

It is characteristic of Hawthorne’s methods to leave the existential status of the events in his stories in doubt, a feature related to that blurring of genre lines I mention above. Is this a kind of ghost story, or a story about psychological affliction? If we knew whether what Brown saw was supposed to be ‘real’ we could answer the question, but we are as ignorant as Brown himself, drawn into the question, but not offered the answer. But the narrator’s refusal to pacify and to passivise us is itself part of the meaning of the story. Brown has always believed others to be morally superior to himself simply by virtue of their status as authority figures: Goody Cloyse, ‘a very pious and exemplary dame… had taught him his catechism, in youth, and was still his moral and spiritual adviser’, (137) and the minister and Deacon Gookin are both ‘holy men’. (140) Brown is the other side of the coin that has on its head the profile of figures like the authoritarian Hollingsworth, in The Blithedale Romance, or Endicott, whom I have already mentioned: they think they are always right, but Brown at the outset of the story thinks everybody else is. If the narrator had simply answered our questions – was there really a meeting of a witches’ coven? – he would have been placing himself in the position Brown believed his mentors to occupy; and it is precisely Brown’s unwillingness to accept any level of human fallibility in those he has considered authorities that leads him into despair.

This subtlety about narratorial placing can be observed in the detail of the story too. As Brown enters the forest on his way to his assignation with the devil, he encounters an old gentleman taking a similar route:

As nearly as could be discerned, the second traveller was about fifty years old, apparently in the same rank of life as Goodman Brown, and bearing a considerable resemblance to him, though perhaps more in expression than in features. Still, they might have been taken for father and son. And yet, though the elder person was as simply clad as the younger, and as simple in manner too, he had an indescribable air of one who knew the world … (135)

If I might refer back to Trollope’s address to his reader, comparison makes it clear that Hawthorne operates within very different conventions. Trollope’s narrator is in the know, and he is willing to dispense, carefully, and as a treat, his information to his reader, while his character is kept in the dark. This hierarchical compact of superiority between the author’s representative and the reader is exactly the kind of conspiracy that McCabe objects to in Middlemarch. I believe George Eliot to be a more subtle author than that, but Trollope’s avuncular attitude is certainly the kind of thing McCabe had in mind.

But Hawthorne does not take the superior view and dispense the gospel to his reader: the perceptual and conceptual point of view he adopts is Brown’s own: his presentation of the stranger is as Brown sees him. The description is full of qualifications and uncertainty: ‘apparently … perhaps … might have been … indescribable’. The last is particularly telling, for what is the function of narrator if not to describe? The effect is to make a statement of equality between the discourse of Brown and that of the narrator, which includes the reader and privileges him or her over neither of the other two parties. The compact is maintained until the moment that Brown loses his head, ‘maddened by despair’: ‘The road grew wilder and drearier, and more faintly traced, and vanished at length, leaving him in the heart of the dark wilderness, still rushing onward, with the instinct that guides mortal man to evil.’ (142)

It is only at this point that the narrator implies that he and the reader are in a different world from Brown’s: the narrator now speaks to the reader about Brown’s unwillingness to understand. ‘My Faith is gone’, (141) cries the young man, unaware of the ironies of his statement: he is thinking of his wife, whom he discovers to be also of the devil’s company. But his problem was that he never had faith. For him, individuals were valuable only if they were perfect; that is, unlike himself. Like Groucho Marx, he will not join a club that might have him as a member. The narrator on the other hand clearly accepts that frailty is the human lot, and understands that the dram of evil does not dout all noble substance. He claims Brown for a fellow member of humanity: but Brown deserts him. It is a pattern that reverses the conventional disposition of the narrator, reader, and character: Brown is the one making large and superior generalisations about human nature, while the narrator and the reader are in a compact that accepts the limitation of our knowledge.

VI

In ‘Young Goodman Brown’ the narrator offers guidance, but it is a guidance that is informed by the themes of the story itself. In general, Hawthorne is reluctant in his stories to use what Barthes calls the ‘reference code’: that manner of address that makes statements about the world outside the story, and claims the authority of conventional wisdom or accepted fact.[38] It is a voice that asserts a collective wisdom that the reader cannot question, and is often employed to assert the moral commonplaces that cultures take for granted. Hawthorne is much more likely to draw his reader into a situation in which these moral commonplaces are all questioned, in which all that is clear is that a multiplicity of contradictory readings is possible.

3. The Scarlet Letter

I

The Scarlet Letter stands at a turning point in Hawthorne’s career. Mark Van Doren is right in saying that ‘The Scarlet Letter is in a sense the last of Hawthorne’s tales’.[39] Versions of the major figures in the romance appear in the tales and the work achieves a concentration and economy that seems more characteristic of the tale than of the more discursive longer form. On the other hand, Hawthorne knew before its publication that he had written a book that was different from his productions to date, a book of unusual power. He recalls in his English Notebooks that when he read the manuscript to Sophia he was moved in an unprecedented way: ‘my voice swelled and heaved, as if I were tossed up and down on an ocean, as it subsided after a storm’.[40]

The Scarlet Letter can be seen as the last and the best of the tales, then, but also as the first and best of the romances, the work that distils his most characteristic qualities.

II

The major elements of the story had been present in Hawthorne’s mind for some years. As early as ‘Endicott and the Red Cross’ in 1838 one of a group awaiting public humiliation before the people of Salem was ‘a young woman, with no mean share of beauty, whose doom it was to wear the letter A on the breast of her gown’. (219) Later he speculated in his American Notebooks that there might be a subject for a romance in the ‘life of a woman, who, by the old colony law, was condemned always to wear the letter A, sewed on her garment, in token of her having committed adultery’. (VIII, 254) But the actual circumstances under which Hawthorne came to write the story are instructive, and he gives a full description of them in ‘The Custom House’, which acts as preface to The Scarlet Letter.

It is important to be critical in reading the preface. Some of ‘The Custom House’ is as fictitious as the events of the story proper, in particular the account of the finding of the letter itself among the papers of Mr. Surveyor Pue, one of Hawthorne’s predecessors in the Customs offices, to whom the speaker claims to act as ‘editor, or very little more’. (36) As he recognises, such a claim is ‘of a kind always recognised in literature’ (36) – but recognised as the reverse of a guarantee of literal truth. The claim signifies fictionality as unmistakably as Jim Hawkins’s finding a map in Billy Bones’s sea chest. The ordinariness and specificity of the account of the Custom House background emphasises further the extraordinariness of the ‘discovery’ of the scarlet A. The narrator’s claim to have found a red letter that burned his finger when he touched it is on the literal level preposterous, but much of the rest of the sketch, despite its tone of ironic satire, is a serious account of the responsibility the author/narrator feels towards his fiction, and itself a guarantee that he takes to heart his own injunction at the end of the narrative: ‘Be true! Be true! Be true!’ (271) It thus allies itself explicitly with the attitude that is implicit in the tales to date.

In ‘The Custom House’ the author/narrator asserts his similarity to the breed of Winthrop, Wilson and Bellingham, the Puritan authority figures responsible for Hester’s punishment in the text. He identifies his forefathers as associates of theirs, and the actions he chooses to remember them by are the ‘hard severity’ towards a Quaker woman by his ‘first ancestor’, and his worthy son’s ‘conspicuous’ part in what the narrator calls the ‘martyrdom’ of the Salem ‘witches’. (40-2) He confesses that ‘strong traits of their nature have intertwined themselves with mine’. (42) He too in the course of his narrative is judging a woman. But unlike his forebears, the author/narrator brings the faculty of imagination to bear on the woman, and is able to look at her from more points of view than that of dogmatic weigher and measurer. Most important, he can see the world from her point of view; he is able to ally himself with the woman as well as with her persecutors. After all, would they not judge him as harshly as they judged her?

‘What is he?’ murmurs one grey shadow of my forefathers to the other. ‘A writer of story-books! What kind of business in life, – what mode of glorifying God, or being serviceable to mankind in his day and generation, – may that be? Why, the degenerate fellow might well have been a fiddler!’ (41-2)

In fact, he has shown his own badge of ignominy to wear in being the living representative of a line of harsh and unbending disciplinarians, and is willing, since they went unpunished in their own lifetimes, ‘to take shame upon myself for their sakes’. (41) Like Hester, he is an outcast, pitched from his place by officials, and suffering anyway ‘the chilliest of social atmospheres’ in his native town. (43) Nonetheless, he is drawn as irresistibly to the place as Hester is to Boston: ‘I felt it almost as a destiny to make Salem my home’. (43) He recognises that his gift could not properly function when burdened by the mundane, and just as Hester is thrown back on the ‘now forgotten art’ of her needle, so Hawthorne, out of office, rejoins the products of his imagination:

During the whole of my Custom House experience… An entire class of susceptibilities, and a gift connected with them, – of no great richness or value, but the best I had, – was gone from me. (67)

It is regained when ‘sitting all alone’ in moonlight. As D. H. Lawrence commented, A stands for Adulteress, Alpha, Abel, Adam, and America;[41] but it also stands for Artist. What is Hawthorne doing, after all, if not producing scarlet letters?

III

The Scarlet Letter clearly follows the tales in its understanding of the relationship of reader, writer and characters, then, but it reveals something more of Hawthorne’s romance technique. Again, despite the historical setting, there is little attempt to give antiquarian detail. We need only to ask ourselves precise questions about the physical appearance of the characters to discover that what Hawthorne offers in this way is very sketchy. It is an interesting exercise to compare the efforts of illustrators of the book and notice the widely different ways in which they portray the central characters.

The same is true of the settings of the story: Hawthorne seems to have had no interest at all in recreating milieux that might be represented realistically. Certainly the solid objects introduced into the narrative are not there in the spirit of deliberate redundancy that Roland Barthes considers characteristic of the realistic text, realistic precisely because they are redundant, suggestive of the miscellaneousness and contingency of the ‘real’ world.[42] The locations and figures from history in the book are ultimately there to serve thematic or symbolic purposes, not to provide realistic ballast. Things are expressions of the psychology of individuals or of groups. The prison from which Hester emerges at the beginning of the book, for instance, does not primarily inform us about early Boston town-planning; rather it signifies the repressiveness of Puritan thought. Hawthorne gives very few clues as to what the place actually looked like, then, even though he might have gained a good idea of it from the historical documents he read. He could of course simply have made it up.

In fact this part of the text concentrates on two features only: the rusty iron-work of the oaken door, and the wild rose-bush growing immediately beside it. The very sketchiness of the visualisable detail is sufficient in itself to foreground the two features that do find mention; but since these two features are iron and roses, they are removed from actuality almost altogether. They belong to a list of objects or qualities that we see as symbolic by default. Members of our culture must be clearly instructed not to see colours like red and black as having a significance over and above mere designation of hue; the same can be said of objects like trees, cups, gold – and iron and roses. The symbols are of course complex, but iron signifies a nexus of meanings that revolve around hardness and inhumanity; the rose a nexus centring on beauty and softness and sweetness. (Though there is sometimes a reference to the barbs. Mary, Mother of Jesus, was a rose without thorns.)

Hyatt Waggoner has shown how certain key objects recur throughout the narrative in the form of ‘actual’ appearances but also as metaphors.[43] The rose that blooms at the prison door is also a ‘sweet moral blossom’. (76) The forest that lies beyond the settlement, where Chillingworth gathers herbs for medicines and Hester meets Dimmesdale, is a type of the ‘wilderness’ in which Hester metaphorically and literally roams. (217) The very fact that physical objects from the narrative are used metaphorically in this way serves both to materialise the abstract and make the real insubstantial. Richard Brodhead refers to this ‘fluidity of boundaries’ and considers that it gives The Scarlet Letter a ‘supersaturation – with significant pattern’ which has ‘the quality of overdetermination that Freud ascribes to dreams’.[44] The apparent trappings of realistic representation of the people and the place, then, do the reverse of supplying verisimilitude or realisability. Hawthorne’s lack of interest in real visualisable details has important corollaries.

To illustrate them I should like for convenience’s sake to adopt Seymour Chatman’s distinction of ‘story’ and ‘plot’.[45] ‘Plot’ is the events with which a story deals, arranged in the order in which a particular discourse presents them. ‘Story’ is the same events arranged in chronological order, much as they might appear in a paraphrase or summary. Different plots have been made from the same story. Story is based on the idea of contiguity. So too on one level is the whole idea of realistic fiction: you write about what you can see and only what you can see; things and people appear together in your novel because they do so in ‘real’ life: we experience them in contiguity. ‘Plots’ in realistic novels tend to approximate fairly closely to the ‘stories’ that might be considered to lie behind them: they are conventionally linear, moving from one point to another in time, each related causally and usually temporally to its predecessor and its successor.

As I have tried to show, Hawthorne has no interest in the principle of contiguity in his settings: things are not included because they were there, but for the purpose of interpretation. Therefore they are often incongruous, and do not conform with what we might consider normal in the probable or ordinary world. Given his lack of concern for contiguity in setting, we might expect him to have a related lack of interest in linear plots, for ‘stories’ to be slight. Such is certainly the case in many of the tales, and in the longer fictions too. Plots are either static (I have already referred to the opening of The House of the Seven Gables: getting the old lady out of her bedroom takes a full three chapters), or interrupted (again in The House of the Seven Gables the plot is suspended while one character reads another the whole of a lengthy story which he intends to publish).

The one text that might seem to contradict this is The Scarlet Letter, which is arranged in linear order, and does not have any of the interpolations distinctive of the other romances. However, the romance as it is represented in the kind of plot summary that you find in, say, a ‘companion’ to American literature mutilates the memory you take away from a reading of the text itself. What remains in the mind is not a sequence of events that are meaningful as far as action goes, but rather a series of tableaux. Most conspicuous are the three scaffold scenes: at the beginning of the narrative Hester emerges on to the scaffold with the infant Pearl and the fantastically embroidered letter; in the middle Dimmesdale stands there at midnight with the mother and child; at the end Dimmesdale makes his confession and dies on the scaffold. Though these tableaux transcend their significance as events in the forwarding of the story, they are plot-significant in varying degrees; but many of the items that make up the experience of reading The Scarlet Letter are not, and so are omitted from a story-summary. The rose at the prison door has no part in the action, understood as a series of events; nor does Hester’s regarding herself in the highly polished breastplate of a suit of armour at Governor Bellingham’s house, seeing the letter on her breast so grossly magnified that it obliterates the rest of her. But episodes like these are the ones most insistently suggestive of the book’s meaning, as well as the most memorable.

IV

I say ‘suggestive’, for no more than in the tales does Hawthorne offer hard and fast, once-for-all interpretations for the reader’s passive consumption. Take, for instance, a central scene in the book, the forest scene in which Hester and the minister meet again, recognise the fact that their love for each other is unabated, and plan an escape. Thomas Connolly’s edition of The Scarlet Letter, until recently the text most commonly used in Britain, presented the scene as one of a series that takes its meaning from an opposition of town on one side and forest on the other. It is a straightforward opposition of the plus/minus type: ‘Hawthorne sets up a conflict between the law of nature and man-made law . . . the laws of nature (which Hester did not violate) and the laws of man (which she did violate)’.[46]

The problem with Connolly’s analysis is that it really makes little sense to talk of Hawthorne’s nature having laws at all, for what is stressed about the forest and its denizens is lawlessness. Pearl, insistently linked with and instinctively in sympathy with nature, is amoral, not innocently good. Chillingworth goes to the forest in search of the herbs he uses for Dimmesdale’s ‘medicines’; he emerges from the forest at the beginning of the book. It is in the forest that the witches’ coven meets, and the forest is the dwelling-place of the Indians, whose ‘wildness’ is stressed whenever they are mentioned. The wilderness, then, cannot be regarded as a place of natural beneficence, symbolic of the values suppressed by Puritan morality. The symbolic value of the town cannot be neatly formulated either. Its salient features are the places of punishment, prison and scaffold, but the complexity of its significance is stressed right at the beginning when for Hester the scaffold is at first a place where she must suffer humiliation, but suddenly and paradoxically a refuge when her husband appears. Dimmesdale’s voluntary conformity to what Puritan law forces on Hester is what redeems him. And, after all, Hester accepts the dictates of that law, even if with subtle defiance. The narrator comments, too, that for all its severity Puritan law invests human transgression with moral seriousness, and it is better thus than in a more decadent age, when ‘society shall have grown corrupt enough to smile, instead of shuddering at it’. (83)

Taking their lead from Lawrence, perhaps, what Connolly and other critics want to do is offer a reading of The Scarlet Letter according to which Hester’s adultery, being in accord with the ‘natural law’ of love, is ‘right’ and Puritan repression of that instinctual behaviour ‘wrong’. Such a view takes its primary evidence from Hester’s claim that ‘what we did had a consecration of its own’. (212) But to take this view is to ignore the rest of the book. Dimmesdale is a hypocrite, but he never lays that flattering unction to his soul, that what they did was right. Hester herself for most of the book is acutely ashamed of what she has done. In the section of the book leading up to her defence of her adultery the narrator stresses her speculativeness and adventurous thinking, but he does not claim that it has led her to universal truths: rather it has left her in a ‘moral wilderness’. (217)

In such a tract it is presumably possible to hold totally conflicting beliefs, and this is perhaps why her conviction that their deed was ‘consecrated’ can coexist with her general behaviour, which suggests that she accepts that she was wrong. Most of all, to adopt a ‘romantic’ reading of the book flies in the face of the detail of the scene. Hester flings away her scarlet letter to signal her refusal to accept her love affair as a sin, but Pearl will not recognise her without it. Dimmesdale makes his way back to the town, having added to his adultery and subsequent hypocritical concealment of it an acceptance of further adultery and the final hypocrisy of intending to speak to the Boston townspeople in the guise of the purest of the pure when he knows that on the following day he intends to run away with his lover. As he enters the settlement he feels he has undergone a ‘revolution in the sphere of thought and feeling’ (233), but this does not give him some sort of Nietzschean grandeur; it only encourages him to soil the minds of young children and virgins, and destroy the hopes for salvation of an old woman. It is significant too that the planned escape route is east rather than west, the sea rather than the land. The sea throughout represents lawlessness like the forest. The sailors we meet are not like Hawthorne’s own seafaring ancestors, but a piratical crew festooned with cutlasses and knives and swords, with an ‘animal ferocity’ in their eyes. (247)

Perhaps most important, to adopt this ‘romantic’ reading of the story is to ignore the very basis of Hawthorne’s symbolic technique; it is to miss the essential polyvalency of his symbols. This complexity is there even in the germ of the story as it occurs in ‘Endicott and the Red Cross’: ‘Sporting with her infamy, the lost and desperate creature had embroidered the fatal token in scarlet cloth, with golden thread and the nicest of needlework; so that the capital A might have been thought to mean Admirable, or anything rather than Adulteress.’ (219) The A can stand for many things, some of them contradictory. The scarlet letter is a token of guilt but it is richly embroidered. It is suggestive of Hester’s shame but it also represents creative impulses and the passionate richness of her nature. It is inseparable from her beautiful daughter, but it turns Dimmesdale into a guilt-racked invalid and Chillingworth into a demon.

What Hawthorne condemns in the Puritans is not their harshness in itself, but their failure to understand this polyvalency, which in its turn derives from a failure of sympathy and imagination. They have a limited and single view, like Reuben Bourne and Goodman Brown. For Reuben, he is either a hero or he is a deserter; for Brown, either the elders of the town are perfect or they are wholly and hypocritically evil; for Hester’s neighbours, she is either good or bad, one of the elect or one of the damned. The A declares the impossibility of taking such rigid views of human actions and emotions: it suggests our faults as a type of all sin; but it also suggests that we are all connected, on a literal biological level and on a metaphorical one, since it signifies our common spots of humanity.

 

4. The Marble Faun

I

When Hawthorne set sail for England on 6 July, 1853, he must have experienced some sense of relief. The three years previous had probably been the most taxing of his life. They had been his most prolific, seeing the publication of The Scarlet Letter, The House of the Seven Gables and The Blithedale Romance as well as a new collection of stories, The Snow Image, and two books for children, A Wonder Book and Tanglewood Tales. He was now the breadwinner for a family of five. He had had two personal losses that affected him deeply: the death of his mother in 1849, attended by a moving deathbed reconciliation after some years of relative estrangement, and the death of his sister Louisa in a steamboat accident in 1852.

He had also been more engaged in public life than ever before. It began with the public row over his removal from the Custom House for political reasons, a fire which had been fuelled by the introduction to The Scarlet Letter, and it culminated with his involvement in the presidential candidacy of Franklin Pierce, for whom he had written a campaign biography. Pierce rewarded him with an appointment to the Consulate at Liverpool. Hawthorne looked forward to making large gains from his years in Europe. He could save a good deal of money, which would put him in a position in which he could write at his leisure. Just as important, perhaps, the old world held out to him the prospect of a large stock of material for new romances; and he began to keep a journal which was in part intended as a quarry from which to mine new work. In regard to the money, Hawthorne was to some extent disappointed, but he did return home in 1860 with savings that represented a much larger sum than he had ever made directly from his writings. On the other matter his expectations were much further from the truth.

For some time he had felt that there was a need for something more earthy and substantial in his works. He wrote to Longfellow that he hoped his experience would introduce ‘something ruddier, warmer, and more genial, in my later fruitage. . . Ale is an excellent moral nutriment; so is English mutton; and perhaps the effect of both will be visible in my next romance’.[47] At his death in 1864 four unfinished works were found among his papers, all of them attempts to rework material from his English experience, which proved quite resistant to his imagination. The one work of fiction he completed was The Marble Faun, and comparing the book with the notebooks he kept while in Italy suggests that his contemplations on his experiences persuaded him that his initial implicit resistance to the ‘earthy’ or more realistic in fiction was no mere accident but a constituent feature of his imagination.

His reactions to England and the English, and to Italy and the paintings and statuary he saw there, reveal that weighty physical substance, whether of an individual, a people or a culture, oppressed him. Throughout he preferred what can be seen by the mind’s eye to what the physical eye sees: the ruin to the intact building, the sketch to the finished painting, the partial to the complete. Of ‘The Dying Gladiator’ Hawthorne commented:

Like all other works of the highest excellence. . . it makes great demands on the spectator; he must make a generous gift of his sympathies to the sculptor, and help out his skill with all his heart, or else he will see little more than a skilfully wrought surface. It suggests far more than it shows. (XIV, 306)

‘The Custom House’ may be seen as an explanation of the author/narrator’s responsibility towards his text. The Marble Faun, seen in the light of Hawthorne’s comment, might be seen as a statement about the reader’s responsibility towards the text.

II

Notoriously, The Marble Faun has its version of the incompleteness Hawthorne admired in the ruins he visited in England and Italy. On the most trivial level it is the shape of Donatello’s ears, never finally revealed to us; or the contents of the mystery package taken by Hilda to the Cenci Palace. More important are the lacunae in the narrative line. We are never told what links Miriam and the model, or what happens to Donatello and Miriam after their arrest. We do not learn what befalls Hilda during her imprisonment, or what purpose it served. The action takes place in a vacuum.

The book has been criticised for this incompleteness ever since its first publication. ‘A fatal vagueness’, James called it, and modern critics have agreed.[48] However, in contrast with his usual denigrations of his own work, Hawthorne felt proud of what was to be his last romance. ‘If I have written anything well’ he wrote to Ticknor, ‘it should be this Romance; for I have never thought or felt more deeply, or taken more pains’.[49] He was not surprised that his contemporaries misunderstood him, particularly in England, and it was largely owing to pressure from that quarter that he added the ‘Conclusion’ to the second edition of the book, which actually explains but little of the mysteries. He did not feel it to have been an improvement to the book, and to an English friend, Henry Bright, he said ‘The story isn’t meant to be explained; it’s cloudland’.[50] What Hawthorne was aiming at, I believe, was that involving of the spectator, or in this case the reader, in the work that he claimed to be the distinguishing feature of great painting and statuary. None of this makes comfortable reading. Frank Kermode comments on the ‘readerly’ in conventional fiction:

There is no doubt that sequence, ethos and dianoia minister to comfort and confirm our notions of what life is like (notions that may have been derived from narrative in the first place) and perhaps even constitute a sort of secular viaticum, bearing ultimately upon one’s private eschatology, the sense of one’s own life and its closure.[51]

Hawthorne withdraws these conventional supports. His narrator is anything but comforting. He insists on the shakiness of his own grasp of the facts when in a chapter called ‘Fragmentary Sentences’ he describes the nature of his enterprise as like piecing together the torn scraps of a letter scattered to the breeze. Some of the scraps will be missing, and the gaps he says he will fill with his own ‘conjectural amendments’. (IV, 93) He refuses to take responsibility for the veracity of the interpretations of events he offers, sometimes making joke explanations with a poker face, as if we are to take them seriously. He makes several guesses about the meaning of the appearance of a shadowy figure in the catacombs, and says that the legend of Memmius, clearly the most unlikely of the lot, ‘offers the most reasonable version of the incident’. (IV, 32) At the end of the book he claims to be baffled by Kenyon’s account of events he pretends to have been previously uninformed about. Of Kenyon’s statements about the contents of the mystery package and the reasons for Hilda’s detention he reasonably comments that it is ‘as clear as a London fog’, and, with heavy irony, ‘how excessively stupid of me not to have seen it sooner’. (IV, 465)

These parsimonious pseudo-revelations, added grudgingly in the ‘Conclusion’ to the second edition of the work, contrast radically with Trollope’s fatherly disposition of the futures of his characters. It is not even that this is an ‘unreliable’ narrator in a book whose ‘implied author’ we can reconstruct sufficiently to be able to make allowances for distortions. Rather, it is a narrator who insists that he does not know or understand chunks of the story. James objected to the mixture of ‘categories’ in the book, the conflation of the literal features of Rome with the ‘lunar’ realms of fancy.[52] James is right to point to this conflation, but I suspect that the unease it causes is not due to Hawthorne’s breaking some neo-Aristotelian principle of unity, but to the fact that the text raises expectations it does nothing to satisfy. The plot contains elements distinctive of the plots Seymour Chatman calls ‘resolved’, which are teleological and move through a linked series of changes of states-of-affairs, inviting questions of the ‘what happens next?’ kind.[53] Such plots, since they raise these questions, implicitly promise answers. But from this plot no answers come forth.

The same may be said of the mixture of real and fantastic that James castigated. Rome exists all right, but in the way it is presented to us, and in the circumstances under which it is received, it is also fantastic. It is not that the book shuttles back and forth between the real and the other worldly, but that the real is itself other worldly. For a man of James’s cosmopolitan upbringing, and for a contemporary mind used to rapid travel and the dissemination of images of once remote places like Rome, the ‘Eternal City’ is part of an experienced, ‘real’ world in a way it was not for the vast majority of Americans of Hawthorne’s time. Hawthorne himself was severely limited in his access to painting and statuary before his move to Europe in 1853, and he was writing for an audience whose experience he knew to be similarly narrow. His description of St Peter’s in the Italian Notebooks, for instance, makes it clear that before he saw it the cathedral was in a fantasy world no less remote than the New England of two centuries previously. If we reverse his famous description of what America lacks as source material for the romancer, we arrive at an implicit characterisation of Italy that emphasises anything but the ‘real’: ‘shadow… antiquity… mystery. .. picturesque and gloomy wrong’. (IV, 3)

Moreover, Rome is offered very selectively. If we compare the Rome of The Marble Faun with that of, say, James’s Roderick Hudson we find the latter gives us a great deal of information about the social mores of the American colony in Rome, Italian expectations in marriage, the relationship of artists to patrons, and so on. The ‘realities’ Hawthorne offers are all of a special order: paintings, statuary, buildings. The objects do not function realistically. In the realist novel objects are either indexes of persons, or indicators of the ‘reality’ of their world, by virtue of their very contingency. But there are no causal justifying links between characters and objects in The Marble Faun: the paintings the characters see and the places they visit do not in themselves influence the course of events. There is nothing like the causal link between Daisy Miller’s general indiscretion and the visit to the Coliseum by moonlight that results in her death, though Hawthorne had the worst reason to find such a situation plausible: his daughter Una nearly died as a result of catching ‘Roman Fever’ after just such a visit. The referents of Hawthorne’s descriptions serve interpretation rather than representation.

III

The effect of the removal of conventional reader comforts is to implicate the reader in the construction of the book’s meaning. Frank Kermode in the essay to which I have already referred makes the point that what he calls the ‘secrets’ of the text are actually at odds with what he calls ‘sequence’, or ‘connexity and closure’. Secrets

… have no direct relation to the main business of the plot…. they form associations of their own, nonsequiturial, secret invitations to an interpretation rather than appeals to a consensus. They inhabit a misty world in which relationships are not arranged according to some agreed system but remain occult or of questionable shape.[54]

Kermode’s comments are provocative in this discussion, for in the remark to Bright quoted above it was the ‘cloudland’ elements of The Marble Faun that Hawthorne picked out as its most distinctive feature. In fact The Marble Faun works through ‘secrets’ too.

Many ‘secrets’ can be glimpsed in The Marble Faun whose meanings lie outside the narrative line of the book: connections between Donatello and the model, for instance, and between Donatello and Miriam; the complex significance of light and shade, and the (Freudian and non-Freudian) meanings of caves and towers. A detailed examination of one of them will show how the technique bears on the interpretation of a particular scene and a particular character.

A basic distinction between Hilda and the others is suggested in a scene near the end of the book when Kenyon, on his way to keep the mysterious appointment outside the Cenci Palace, meets Miriam and Donatello for the last time. The fact that they wear festive carnival dress on their way to meet their fate is suggestive in the way in which the gorgeous scarlet letter on Hester’s otherwise drab dress is suggestive: their sin is at once a guilt and a glory. Kenyon remarks that they are hand in hand, and before they part all three stand with hands joined:

‘Forgive me!’ said [Kenyon].

Donatello here extended his hand, (not that which was clasping Miriam’s) and she, too, put her free one into the sculptor’s left; so that they were a linked circle of three, with many reminiscences and forebodings flashing through their hearts. Kenyon knew intuitively that these once familiar friends were parting with him, now. (IV, 448)

This and the scene that follows, when Hilda appears on the balcony of the palace and attracts Kenyon’s attention by throwing a rosebud at him, take their meaning in part from a pattern of glimpses of hands that recurs in the book, which also excludes Hilda.

Miriam understands the grasp of hands as a symbol of Donatello’s sympathy. She takes his hand and he withdraws it when they see the dead Capuchin (IV, 187), and in the Medici Gardens she waits to see if he will take her hand and thus affirm his bond with her. (IV, 197) When they are reunited in Perugia they hold hands once more as an expression of their new fellow-feeling. (IV, 321) Kenyon uses the grasp of hands as a metaphor when he explains to Miriam that he cannot be Donatello’s counsellor: ‘Between man and man, there is always an insuperable gulf. They can never quite grasp each other’s hands.’ (IV, 285) And Hawthorne uses this metaphor when Miriam looks to Hilda for support and sympathy. Miriam wonders whether Hilda will ‘kiss her cheek, grasp her hand’, and fears that ‘my lips, my hand, shall never meet Hilda’s more’. When she sees Hilda’s repugnance, she asks, ‘Will you not touch my hand?’ (IV, 204)

Hilda is distinguished from the others in that she uses her hands not to make contact but to shield. Hilda is linked with the antique statue Kenyon exhumes from the Campagna, for he finds it when he is looking for her: she is lost, like the statue. He reassembles the broken parts and finds the hands disposed in the traditional gesture of modesty, rather than stretched out to others. (IV, 423) When Miriam offers her hand to Hilda, the American girl puts forth her own in ‘an involuntary repellent gesture, so expressive, that Miriam at once felt a great chasm opening itself between the two’. (IV, 207) Hilda’s gesture is not only the rejection of the grasp of hands, but the negative of the gesture of benediction. That gesture is extended even by the stone figure of Pope Julius in Perugia; (IV, 323) the priest in St Peter’s makes the blessing to Protestant Hilda; (IV, 373) and finally Miriam makes a gesture of benediction to Hilda and Kenyon, and it is by that gesture she is recognised. (IV, 461)

Hilda has not that ‘fellow-feeling’ that both metaphorically and literally unites the others: the closest she can get to the earth is the first piano, the nearest she can get to physical contact is to toss a flower. The only hand of Hilda’s anyone may grasp freely is the marble one that Kenyon sculpts.

IV

The literary technique is paralleled by the moral position the book implies: once more, the link that binds the writer to the reader is merely a special version of the link any man or woman must recognise with all other individuals. This may once more be illustrated with reference to Hilda, whose withdrawal from sympathetic involvement with Miriam has its direct counterpart in her attitudes to art.

Hilda’s failure to provide comfort to Miriam is linked with a parallel failure towards a picture. Hilda sits in her tower immediately before the arrival of Miriam. On an easel is her copy of the portrait of Beatrice Cenci, and opposite it is a mirror. Hilda sits between them, and catches sight of the reflection of the painting in the mirror behind her own face. Nervously she moves her chair, ‘so that the images in the glass should be no longer visible’. (IV, 205)

Mirrors are hot properties in Hawthorne’s fiction. The title of one piece from Mosses from an Old Manse is ‘Monsieur du Miroir’. But Hawthorne’s mirror does not simply reflect reality, it conveys a special kind of truth. Indeed this is how mirrors generally function as symbols in our culture as in many others. They are magic: breaking one leads to seven years’ bad luck. In an article on the function of mirror-images in various cultures, James Fernandez ties up superstitions about mirrors with a mirror trick in one of the rooms of the Prado, designed deliberately to produce just such an effect as the one Hilda accidentally experiences. The ‘Las Meninas’ of Velázquez hangs in a room with no other canvases, but opposite it hangs a mirror: ‘The trick – the duplicity – in the duplication of mirrors is to persuade the observer that he is part of the scene figured, a gathering of the Spanish Royal Family.’[55]

When Hilda catches sight of her face alongside Beatrice Cenci’s, a new work of art is created which includes her. Seeing herself in the work of art presses her towards a relationship with Beatrice that recognises at least contiguity, and perhaps kinship; and kinship with Beatrice exposes a very different facet of her humanity from what is suggested by kinship with the Virgin. The recognition is painful, and so she withdraws, just as she is about to withdraw from Miriam.

V

Tony Tanner says of James’s In the Cage: ‘We may speak, then, with some certainty of the consolations and risks of the imagination, and note the fact that it is intimately related to that important virtue, sympathy.’[56] In my view, Tanner is right but understates the case, and could well apply the same comment to Hawthorne. The virtue of sympathy in the works of both writers is not an important one but the important one, important for a proper appreciation of art and for an appreciation of the fact that according to both authors we are all related. Ultimately, Hawthorne’s strategies in his final complete – and incomplete – work invite that kind of involvement Hilda must achieve in order to mature; the reader must extend the kind of sympathetic interpretive faculty to the suggestive narrative that Hilda abandons in turning from her reflection when it appears alongside the painting of Beatrice Cenci.

5. Hawthorne in our Times

In the twentieth century the universities have ensured Hawthorne’s continuing reputation. Even Robert Lowell’s unmistakably personal poem about his New England literary forebear was in fact a commission from The Ohio State University.[57] Since the criticism of literature became institutionalised at about the turn of the century in the USA, Hawthorne scholarship has been unremitting. The Centenary Edition of the works of Hawthorne, emanating from Ohio once more, is the most impressive to have emerged from the Center for Editions of American Authors, not least because it is the only such project to be completed. In 1980 alone two full-scale biographies appeared, and a whole book devoted to Hawthorne’s years in England.[58]

I commented at the outset that there is more than one way to account for Hawthorne’s nineteenth century reputation, and that also applies to his modern standing. One is simply his gender, and feminist critics have rightly pointed out that the ‘tradition’ is one constructed by male academic critics around other male figures, which ignores the group of writers who spoke to the largest audience in the nineteenth century: popular, usually female, authors, whom Hawthorne himself referred to in one of his least felicitous but best-known phrases as a ‘damned mob of scribbling women’.[59]

What I call above the ‘critical consensus’ hardened into received opinion as a result of the publication of a series of books in the forties and fifties that sought specifically to isolate a particular and unique American way of writing, and identified symbolism and romance as the characteristic American mode.[60] This endeavour itself must be seen in the context of political events of the time, and as one element of a general public move towards the fostering of nationalism, which merely took its most obvious form in McCarthy’s clamorous condemnation of what was ‘Un-American’. In the fifties a literature that directly identified social abuses might not be considered patriotic: symbolism, romance and allegory are at least less obvious targets for political questions.

But Hawthorne’s standing has not had universal consent in our own time, for he seems a more narrowly American enthusiasm than most of the other writers that constitute this ‘American Tradition’. For all his centrality to Americans, he has not been held in such high regard in Britain. In the same year, 1980, which saw the publication of three large biographical works on Hawthorne, only one of the novels, The Scarlet Letter, was available in a British paperback imprint: it is only since Penguin joined forces with the American company, Viking, that he has been widely available in this country.

Scepticism about the construction of individual reputations and of literary ‘traditions’ is entirely proper; but Hawthorne’s unique situation is positively as well as negatively instructive. My own belief is that he was unusually ‘modern’. This is not a new view, for it was formulated at least as long ago as 1931 in Edmund Wilson’s Axel’s Castle.[61] But I suggest that Hawthorne’s relative neglect in England derives from the same attitudes that have made the English contribution to modernism slight in comparison with the American. Donald Davie claims that the map of British poetry in the twentieth century must be drawn with Hardy as its chief landmark and not Eliot or Pound, the great innovators – and Americans.[62] If we compare the contemporary English novel with the American, we find that the seam of fiction represented by the work of Barth, Pynchon, Barthelme, Hawkes and others has been mined much less here, and much later.

Two of the most distinctive elements of modernism were the modernists’ opposition to realism, and their invitation to their readers to make a new kind of response. Virginia Woolf ruled that Galsworthy and Wells did not write genuinely ‘Modern Fiction’, for instance, and Eliot and Pound are notoriously demanding. Eliot insisted that ‘Tradition… cannot be inherited, and if you want it you must obtain it by great labour’, and even as great an admirer of Pound as Basil Bunting remonstrated to his friend that ‘you allude too much.’ [63]

What I have tried to suggest is that Hawthorne’s distinctiveness may be seen in exactly these two qualities. It should be said however that Hawthorne is more encouraging even if he asks no less. ‘Be true! Be true! Be true!’ is the explicit ‘meaning’ the narrator draws from The Scarlet Letter: it is one among ‘the many morals which press upon us from the poor minister’s miserable experience’. (271) If many morals can be drawn from Dimmesdale’s experience, how many more may be drawn from the story as seen by the other characters too? But the narrator explains his moral a little further: ‘Show freely to the world, if not your worst, yet some traits whereby the worst may be inferred’. (271)

The notion of ‘truth’ recurs throughout the narrative. Hester fears that in concealing Chillingworth’s identity she may not have been ‘true’ to Dimmesdale, though later she claims that ‘in all things else, I have striven to be true’. (211) Hester exhorts her lover to change his ‘false life… for a true one’, (215) and only before Hester can he be ‘for one moment, true!’ (213) But it is an injunction imposed upon himself by the author/narrator too, when he stresses in ‘The Custom House’ that truth is necessary for all human community: ‘Thoughts are frozen and utterance benumbed, unless the speaker stand in some true relation with his audience’. (35)

To be true is to recognise our own imperfections, and thus to recognise our kinship with such as Hester and Dimmesdale; and that act of human sympathy is an act of the imagination in reading exactly parallel to the author’s in writing.

6. Guide to Further Reading

I: Bibliographies.
The most complete bibliography of primary materials is C. E. Fraser Clark, Jr., Nathaniel Hawthorne: A Descriptive Bibliography (Pittsburgh: University of Pittsburgh Press, 1978). Theodore L. Gross and Stanley Wertheim, Hawthorne, Melville, Stephen Crane: A Critical Bibliography (New York: The Free Press, 1971), pp. 1-100, performs the kind of service that this guide tries to do, though it is much more comprehensive. Much has appeared since 1971, however. There are other reference materials relating to secondary sources, but probably the most recent and complete is Beatrice Ricks, Joseph D. Adams and Jack O. Hazlerigg, Nathaniel Hawthorne: A Reference Bibliography (Boston: G. K. Hall & Co., 1972).

II Editions.
The standard edition of the works of Hawthorne is the Centenary Edition, published at The Ohio State University Press (see n.4 below). At the time of writing, the final volume of letters (1853-1864) and a new edition of the English Notebooks have yet to appear. There is no earlier ‘complete’ edition of the late letters, but Randall Stewart’s edition of the English Notebooks, 2nd ed. (1941; New York: Russell and Russell, 1962), will be useful even after the publication of the Centenary edition, for its notes and for his essay on Mrs. Hawthorne’s bowdlerisation of her husband’s texts.

III Biographies.
No American man of letters has been more biographised than Hawthorne: a monograph might be written about the biographies and their relationship with their times. The ‘standard’ scholarly biography is Arlin Turner, Nathaniel Hawthorne: A Biography (New York and Oxford: Oxford University Press, 1980). This is authoritative, but longer, more gossipy and more readable is James R. Mellow’s Nathaniel Hawthorne in his Times (Boston: Houghton Mifflin, 1980). Turner’s emphasis is literary and he gives much attention to the bibliographical detail of Hawthorne’s work; he is sounder in terms of critical opinions. Mellow’s emphasis is social. Hawthorne’s years in England are exhaustively detailed by Raymona Hull in Nathaniel Hawthorne: the English Experience, 1853-64 (Pittsburgh: University of Pittsburgh Press, 1980). Edward H. Davidson, Hawthorne’s Last Phase (New Haven: Yale University Press, 1949), is the most complete survey of the period between Hawthorne’s return from Europe in 1860 and his death in 1864.

IV: Critical Studies.
Critical studies of Hawthorne are legion. His status in his own time is documented in Hawthorne Among his Contemporaries, K. W. Cameron, ed., (Hartford: Transcendental Books, 1968), which reproduces a great many primary sources. J. Donald Crowley, ed., Hawthorne: The Critical Heritage (London: Routledge and Kegan Paul, 1970), is one of the best of a most useful series: its bibliographic information is impeccable, and it contains a good deal of very useful information about the publishing history of Hawthorne’s books. Richard Brodhead’s The School of Hawthorne (New York and Oxford: Oxford University Press, 1986) charts Hawthorne’s impact on his peers and heirs, and makes salutary reminders about the importance of the business aspect of publishing in the forging of the literary canon.

The earliest important literary critical studies of Hawthorne are Poe’s reviews of his contemporary’s work, which are reprinted by Crowley. Crowley also reprints Melville’s ‘Hawthorne and his Mosses’, fascinating not only for its very modern conception of its subject, but for the information it gives us about Melville’s thinking at the time of the composition of Moby-Dick. Henry James’s Hawthorne (London: Macmillan, 1879) is still thought by some to be the best critical study of its subject, but it seems to me to be more interesting for what it says about James’s own writing at the outset of his career. It should be supplemented by the equally personal but more critically acute letter James wrote to substitute for his personal appearance at the Hawthorne Centenary celebrations in 1904, to be found in F. O. Matthiessen, ed., The James Family (New York: Alfred A. Knopf, 1947), pp.483-487. In the twentieth century, T. S. Eliot’s comments on ‘The Hawthorne Aspect’ of Henry James in ‘On Henry James’ (1918; rpt. in F. W. Dupee, ed., The Question of Henry James (London: Alan Wingate, 1947), pp.127-133) are rich with suggestions, and D. H. Lawrence on Hawthorne in Studies in Classic American Literature (1922; Harmondsworth: Penguin Books, 1971), pp.89-118, is characteristically cranky, provocative, and occasionally brilliant.

Modern discussion of Hawthorne begins with F. O. Matthiessen’s chapters in American Renaissance: Art and Expression in the Age of Emerson and Whitman (New York: Oxford University Press, 1941), pp.179-368, which stress Hawthorne’s Puritan symbolist heritage. Matthiessen’s lead was followed by several studies of American symbolism in the 1950s (see n.60 below). Richard Harter Fogle, Hawthorne’s Fiction: The Light and the Dark, rev. ed. (1952; Norman: University of Oklahoma Press, 1963), and Hyatt Waggoner, Hawthorne: A Critical Study, rev. ed. (1955; Cambridge, Mass.: Harvard University Press, 1963), take a similar line in monographs devoted specifically to Hawthorne. Roy R. Male, Hawthorne’s Tragic Vision (Austin: University of Texas Press, 1957), explicates Hawthorne’s links with Romanticism. Given Hawthorne’s interest in the recesses of the personality, it is natural enough that the same sort of attention should have been turned on him and his works, and Frederick Crews, The Sins of the Fathers: Hawthorne’s Psychological Themes (New York: Oxford University Press, 1966), makes the standard Freudian reading of the man and his work. The 1970s produced a crop of books on Hawthorne, some of which have not been superseded. Nina Baym’s The Shape of Hawthorne’s Career (Ithaca and London: Cornell University Press, 1976) reads Hawthorne in terms of his attempt to reach an audience: as will be clear, I believe he is better understood in terms of his attempts to create an audience, but Professor Baym’s work is always illuminating and original. Richard Brodhead’s Hawthorne, Melville and the Novel (Chicago and London: University of Chicago Press, 1976) relates Hawthorne’s narrative techniques to his language. In the 1980s more attention has been given to Hawthorne’s engagement with his times, and what a reading of his works in the context of his times may reveal. The most outstanding of such studies is Michael J. Colacurcio’s monumental The Province of Piety: Moral History in Hawthorne’s Early Tales (Cambridge, Mass., and London: Harvard University Press, 1984), an exhaustive commentary on the tales that seeks to show how far they depend for their meanings on precise historical reference. John P. McWilliams, Jr., Hawthorne, Melville and the American Character (London and Cambridge: Cambridge University Press, 1984) concentrates on Hawthorne as a more general explicator of American history for his times. Robert Clark, History, Ideology and Myth in American Fiction, 1823-52 (London: Macmillan, 1984), pp.110-131, and Michael T. Gilmore, American Romanticism and the Marketplace (Chicago: University of Chicago Press, 1985), pp.52-112, illuminatingly re-read the novels as political documents in the light of the economic changes that came over American society during Hawthorne’s lifetime. A very useful selection of critical essays on Hawthorne may be found in Harold Bloom, ed., Modern Critical Views: Nathaniel Hawthorne (New York: Chelsea House, 1986).

7. Notes

  1. Herman Melville, ‘Hawthorne and His Mosses’, Literary World, 17 and 24 August, 1850, vii, 125-7, 145-7; rpt. in J. Donald Crowley, ed., Hawthorne: the Critical Heritage (London: Routledge and Kegan Paul, 1970), pp.111-126. Back
  2. S. Schoenbaum, William Shakespeare: A Documentary Life (Oxford, London and New York: Oxford University Press, 1975). Back
  3. Julian Hawthorne, Nathaniel Hawthorne and His Wife: A Biography, 2 vols. (Boston: James R. Osgood and Co., 1885); Lloyd Morris, The Rebellious Puritan: Portrait of Mr. Hawthorne (New York: Harcourt, Brace and Co., 1927); Edward Mather, Nathaniel Hawthorne: A Modest Man (1940; rpt. Westport, Conn.: Greenwood Press, 1970); Henry James, Hawthorne (London: Macmillan, 1879); Randall Stewart, Nathaniel Hawthorne: A Biography (New Haven: Yale University Press, 1948). Back
  4. ‘Preface’, The Centenary Edition of the Works of Nathaniel Hawthorne, Vol. IX: Twice-Told Tales, William Charvat, Roy Harvey Pearce, Claude Simpson, Thomas Woodson and others, eds. (Columbus, Ohio: Ohio State University Press, 1962-date), p.3. This edition, soon to be eighteen volumes, is the standard edition of Hawthorne’s works. The individual volumes so far published are as follows: I: The Scarlet Letter; II: The House of the Seven Gables; III: The Blithedale Romance and Fanshawe; IV: The Marble Faun; V: Our Old Home; VI: True Stories from History and Biography; VII: A Wonder Book and Tanglewood Tales; VIII: The American Notebooks; IX: Twice-Told Tales; X: Mosses from an Old Manse; XI: The Snow Image and Other Uncollected Tales; XII: The American Claimant Manuscripts; XIII: The Elixir of Life Manuscripts; XIV: The French and Italian Notebooks; XV: The Letters, 1813-1843; XVI: The Letters, 1843-1853. The Centenary Edition is a model of bibliographical scholarship, and, in its prefaces, a rich source of material on Hawthorne’s life and works. The Centenary Edition provides the texts for the most recent Penguin reprints of Hawthorne, which I shall use since they are more readily available and indeed portable than the large grey hardbacks. The Penguin volumes I shall refer to are: Selected Tales and Sketches, Michael J. Colacurcio, ed. (Harmondsworth, 1987); The Scarlet Letter (Harmondsworth, 1983), Introduction by Nina Baym, notes by Thomas E. Connolly; The Blithedale Romance, Annette Kolodny, ed. (Harmondsworth, 1983). For economy’s sake, I shall in future refer to Hawthorne’s works in brackets in my text. Where possible, the reference will be to the Penguin volume in question; when a Roman numeral prefixes a page number, the reference is to a volume in the Centenary edition. Back
  5. Rpt. in Crowley, Hawthorne: the Critical Heritage, pp.259-264. Back
  6. Richard Brodhead, The School of Hawthorne (New York and Oxford: Oxford University Press, 1986), p.51. Back
  7. Henry James, Notes of A Son and Brother (London: Macmillan, 1914), p.380. Back
  8. Quoted in Brodhead, The School of Hawthorne, p.64. Back
  9. Crowley, Hawthorne: The Critical Heritage, p. 11. Arlin Turner, Nathaniel Hawthorne: A Biography (New York and Oxford: Oxford University Press, 1980), pp.188-9. Back
  10. Herman Melville, The Letters of Herman Melville, Merrell R. Davis and William H. Gilman, eds. (New Haven: Yale University Press, 1960), p.128. Back
  11. Jane Tompkins, Sensational Designs: The Cultural Work of American Fiction, 1790-1860 (New York and London: Oxford University Press, 1985), p. 10. Back
  12. Brodhead, The School of Hawthorne, p.55. Back
  13. Quoted in Caroline Ticknor, Hawthorne and his Publisher (1913; rpt. Port Washington, N.Y.: Kennicat Press, 1969), p. 141. Back
  14. Quoted in J. T. Fields, Yesterdays with Authors (London: Sampson Low, 1872), p.87. Back
  15. Nina Baym, The Shape of Hawthorne’s Career (Ithaca, N.Y. and London: Cornell University Press, 1976). Back
  16. The Letters of Herman Melville, p.124. Back
  17. R. W. Emerson, ‘The American Scholar’, English Traits, Representative Men, & Other Essays (London: J. M. Dent, 1908), p.309. Back
  18. Quoted in Tony Tanner, The Reign of Wonder: Naivety and Reality in American Literature (Cambridge: Cambridge University Press, 1965), p.40. Back
  19. Quoted in Charles Feidelson, Jr., Symbolism and American Literature (Chicago and London: University of Chicago Press, 1953), p.139. Back
  20. Walt Whitman, An American Primer, Horace Traubel, ed. (Boston: Small, Maynard & Co., 1904), p.18. Back
  21. George Becker, Documents of Modern Literary Realism (Princeton: Princeton University Press, 1963), p.7. Back
  22. Quoted in Fields, Yesterdays with Authors, p.63. Back
  23. John Ruskin, Modern Painters, III (1856), quoted in M. H. Abrams, Natural Supernaturalism: Tradition and Revolution in Romantic Literature (New York: W. W. Norton & Co., 1971), p.375; Thomas Carlyle, ‘The Hero as Poet’, Heroes, Hero-Worship and the Heroic in History (1841; London: Chapman & Hall, 1888), p.97; Matthew Arnold, ‘On Translating Homer’, rpt. in Selected Prose, P. J. Keating, ed. (Harmondsworth: Penguin Books, 1970), p.84. Back
  24. S. T. Coleridge, ‘Dejection: An Ode’, rpt. in H. S. Milford, ed., The Oxford Book of English Verse of the Romantic Period: 1798-1837 (Oxford: Oxford University Press, 1935), p. 255. Back
  25. R. W. Emerson, ‘Nature’, The Conduct of Life, Nature and Other Essays (1836; London: J. M. Dent, 1908), p.4. Back
  26. F. O. Matthiessen, Henry James: The Major Phase (London: Oxford University Press, 1944), p.264. Back
  27. Marshall McLuhan, The Gutenberg Galaxy: The Making of Typographic Man (London: Routledge and Kegan Paul, 1962), p.136. Back
  28. Anthony Trollope, Barchester Towers (1857; Harmondsworth: Penguin Books, 1982), pp.495, 126. Back
  29. Jonathan Culler, Structuralist Poetics: Structuralism, Linguistics and the Study of Literature (London: Routledge and Kegan Paul, 1975), p. 195. Back
  30. Rosalind Coward and John Ellis, Language and Materialism: Developments in Semiology and the Theory of the Subject (London: Routledge and Kegan Paul, 1977), p.49. Back
  31. Michel Foucault, ‘What Is An Author?’, in Josue’ V. Harari, ed., Textual Strategies: Perspectives in Post-Structuralist Criticism (London: Methuen, 1980), p.159. Back
  32. Quoted in Ticknor, Hawthorne and his Publisher, p.283. Back
  33. See Arlin Turner, Nathaniel Hawthorne, pp.69-79. Back
  34. Michael J. Colacurcio, The Province of Piety: Moral History in Hawthorne’s Early Tales (Cambridge, Mass, and London: Harvard University Press, 1984), pp.107-130. Back
  35. Baym, The Shape of Hawthorne’s Career, pp. 108, 31-2. Back
  36. John Fowles, The French Lieutenant’s Woman (1969; London: Triad/Panther Books, 1977), p.85. Back
  37. Colin McCabe, ‘Realism and the Cinema’, Theoretical Essays: Film, Linguistics, Literature (Manchester: Manchester University Press, 1985), p.37. Back
  38. See Roland Barthes, S/Z, trans. Richard Miller (New York: Hill and Wang, 1974), p.18. Back
  39. Mark Van Doren, Nathaniel Hawthorne: A Critical Biography (New York: The Viking Press, 1949), p. 145. Back
  40. Nathaniel Hawthorne, The English Notebooks by Nathaniel Hawthorne, Randall Stewart, ed., 2nd edn. (New York: Russell and Russell, 1962), p.225. Back
  41. D. H. Lawrence, Studies in Classic American Literature (1922; Harmondsworth; Penguin Books, 1971), p.95. Back
  42. Quoted in Seymour Chatman, Story and Discourse: Narrative Structure in Fiction and Film (Ithaca and London: Cornell University Press, 1978), p.144. Back
  43. Hyatt Waggoner, Hawthorne: A Critical Study, 2nd edn. rev. (Cambridge, Mass.; Harvard University Press, 1963), pp. 127-129. Back
  44. Richard Brodhead, Hawthorne, Melville and the Novel (Chicago and London: University of Chicago Press, 1976), p.53. Back
  45. Chatman, Story and Discourse, pp.43-48. Back
  46. Thomas E. Connolly, ‘Introduction’, The Scarlet Letter and Selected Tales (Harmondsworth: Penguin Books, 1970), pp. 13-14. Back
  47. 11 May, 1855. Quoted in Samuel Longfellow, ed., The Life of Henry Wadsworth Longfellow, with Extracts from his Journals and Correspondence, 2 vols (London: Kegan Paul, Trench & Co., 1886), 2, 239. Back
  48. Henry James, Hawthorne, p.170. Back
  49. Quoted in Ticknor, Hawthorne and his Publisher, p.238. Back
  50. Quoted in Julian Hawthorne, Nathaniel Hawthorne and His Wife, 2, 236. Back
  51. Frank Kermode, ‘Secrets and Narrative Sequence’, Critical Inquiry 7 (1980), 87. Back
  52. James, Hawthorne, p.169. Back
  53. Story and Discourse, p.48. Back
  54. Kermode, ‘Secrets and Narrative Sequence’, p.93. Back
  55. James W. Fernandez, ‘Reflections On Looking Into Mirrors’, Semiotica 30 (1980), 37. Back
  56. Tanner, The Reign of Wonder, p.318. Back
  57. Robert Lowell, ‘Hawthorne’ (Columbus, Ohio: The Ohio State University Press, 1964); rpt. in For the Union Dead (London: Faber and Faber, 1965), pp.38-9. Back
  58. Arlin Turner, Nathaniel Hawthorne; James R. Mellow, Nathaniel Hawthorne in His Times (Boston: Houghton Mifflin, 1980); Raymona Hull, Nathaniel Hawthorne: The English Experience, 1853-1864 (Pittsburgh: University of Pittsburgh Press, 1980). Back
  59. Ticknor, Hawthorne and his Publisher, p. 141. Back
  60. See in particular: F. O. Matthiessen, American Renaissance: Art and Expression in the Age of Emerson and Whitman (New York: Oxford University Press, 1941); Charles Feidelson, Jr., Symbolism and American Literature (Chicago and London: University of Chicago Press, 1953); R. W. B. Lewis, The American Adam: Innocence, Tragedy and Tradition in the Nineteenth Century (Chicago and London: University of Chicago Press, 1955); Richard Chase, The American Novel and Its Tradition (Garden City, N.Y.: Anchor Books, 1957). Back
  61. Edmund Wilson, Axel’s Castle (1931; London: Fontana, 1962), p.17. Back
  62. Donald Davie, Thomas Hardy and British Poetry (London: Routledge and Kegan Paul, 1973). Back
  63. Virginia Woolf, ‘Modern Fiction’, The Common Reader: First Series (London: The Hogarth Press, 1925), pp.184-195; T. S. Eliot, ‘Tradition and the Individual Talent’, Selected Essays, 3rd, enlarged ed. (London: Faber and Faber, 1951), p. 14; Bunting quoted by Hugh Kenner, The Pound Era (London: Faber and Faber, 1972), p. 430. Back


Brian Lee, Hollywood

BAAS Pamphlet No. 16 (First Published 1986)

ISBN: 0 946488 06
  1. Introduction
  2. The Beginning: Dreams
  3. The Middle: Factories
  4. The End: Independence
  5. Guide to Further Reading
  6. Notes
  7. Filmography
British Association for American Studies All rights reserved. No part of this pamphlet may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without permission in writing from the publisher, except by a reviewer who may quote brief passages in a review. The publication of a pamphlet by the British Association for American Studies does not necessarily imply the Association’s official approbation of the opinions expressed therein.

1. Introduction

Hollywood film makers have often been accused of focusing too much of their attention on the workings of Hollywood itself. Friendly critics argue that such exposure reveals what is better left concealed if the magic is to be preserved; others that such introspection is the product of a lack of involvement with the “real” world and a defensive narcissism characteristic of all the popular arts. But no matter what conclusions are drawn, the facts themselves are not in dispute. From the very beginning the Hollywood studios have continued to turn out in almost equally large numbers “biopics” about their employees and musical films about the making of musicals. Occasionally these genre films have been interspersed with more realistic, serious works satirising or criticising the film industry or the mores of Hollywood society, though the structure of the industry and the distribution of power within it has generally militated against such criticism. As the anthropologist Hortense Powdermaker discovered in her classic study, Hollywood, the Dream Factory, the industry itself rapidly developed into an over-elaborated caricature of the American business world, but the films produced by it consistently ignored, or even attacked, those same totalitarian and ruthlessly mercenary features of contemporary life.

Not surprisingly then, the most incisive and intelligent film about Hollywood and its effects upon the lives of its workers, as well as the most poignant, was not a product of Hollywood at all. Nor was it even shot in America, though its director, Billy Wilder, had grown up with the studio system and had, as early as 1950 (the very year in which Powdermaker’s book was published), made one of those rare films, Sunset Boulevard, which brilliantly exposed the underlying corruption of Hollywood, even at a time when the system under attack was still all powerful.

Where Sunset Boulevard is prophetic, his later film, Fedora, is elegiac. The quarter century between 1950 and 1978 saw the rapid dismantling and virtual death of the Hollywood production system. Though a few films still appear under the famous old logos, Fedora is a fairly typical product of the 1970s, rejected by Hollywood, financed with German tax shelter money, and produced by a company called Geria-Bavaria-Atelier. Its hero, Detweiler/Wilder, is also trying to finance a film – a remake of Anna Karenina called The Snows of Yesteryear – that the studios are not interested in, and his attempt to engage the famous old star Fedora for this purpose is the springboard for the film’s plot. Detweiler’s unravelling of the mystery surrounding Fedora provides the thread for Wilder to string together his mature reflections on the world of Hollywood; a series of images and ideas expressing his love and revulsion without ever succumbing to any easy resolution of his ambivalence.

The opening sequence of the film is indicative of its complexity. The first shot on the screen is taken from the platform of a small French railway station towards a huge, whistling, steam locomotive that is approaching through the darkness. Along the platform runs a woman wearing a cowled cape. Her face is gaunt, her eyes full of despair. She turns and pauses briefly as someone calls her name, “Fedora”, before hurling herself in front of the rushing engine. At this point the image freezes before a cut to a television studio in which a glossy female presenter is delivering a slick, illustrated feature on the life of the dead movie star. We learn of her famous roles as Emma Bovary, Lola Montes, and Joan of Arc; also of her self-imposed exile in her native Europe and her comeback in the 1960s. We see the long file of fans and mourners waiting to view her body as it lies in state surrounded by masses of flowers. Finally, the camera zooms in on one of these mourners, Detweiler, who will now take up the film’s narration.

Even in this brief pre-credit sequence the film has constructed a wealth of cultural reference. Even though she never played any of the parts listed, the stills of Fedora remind us inevitably of Greta Garbo. Moreover, Garbo twice played the part of Anna Karenina, in 1927 and 1935, with its famous suicide scene so like Fedora‘s. The sequence in the television studio is a subtle homage to Orson Welles’s Citizen Kane and a pointer to the film’s and Hollywood’s preoccupation with images, illusions, and the manipulation of reality by the media. When this sequence is seen within the context of the entire film its allusions are broader still. By this time we know that the dead woman is not Fedora but Antonia, her daughter, who had taken on the identity of the disfigured star and had become trapped within a personality she could neither live with nor escape. The younger woman’s tragic death then comes to remind us, not so much of a fictional heroine, as of a real film actress like Marilyn Monroe, whilst her mother in her elderly retirement seems even more like the real Greta Garbo than did the actress replaying one of her most famous roles. Monroe had worked with Wilder in The Seven Year Itch (1955) and three years before her death in Some Like It Hot (1959). It was Wilder who later referred to her ironically as Hollywood’s Joan of Arc, though in the same interview he also conceded that he had never seen anyone as fabulous on the screen, even Garbo.

The relative ease with which such a short film sequence can fully saturate its images, narrative and technique, is testimony to the power, not just of film as an art form, but also of Hollywood itself. In half a century its history and gossip, legends and myths, personalities and stars, its images and style, have thoroughly permeated the culture of the entire civilised world. To a considerable degree our view, not just of life in the USA, but of life in general, has been shaped by the activities and products of that small district of Los Angeles.

Wilder’s film takes cognizance of such potency by using some of Hollywood’s most cherished myths to flesh out his meditation on time, ageing and death. Most Hollywood films lack Fedora‘s self-consciousness, of course, but this does not diminish their suggestive power. On the contrary, it is just those covert meanings, often not intended or comprehended by the film makers themselves, that are the most seductive – though at the same time the most difficult to read. Such meanings, carried by a film’s style and structure, are themselves the product of complex psychological, economic, social, and technological forces. The task of tracing these complicated interactions is both delicate and arduous and the following essay attempts only to sketch a few of the more significant ones.

2. The Beginning: Dreams

The origins of the American motion picture industry have, by now, been thoroughly obscured, both by the nature of the phenomenon itself and by the attempts of different historians to locate the event in a variety of times, places, inventions, or even in the actions of particular people. For the historian of technology the struggle to perfect machines and materials in Room Five of the Edison Laboratories at West Orange, New Jersey, is crucial. The economic historian, on the other hand, might pinpoint the patent wars and the development of a cross-licensing system as the most important facts in the early years of the industry. What really matters though, and makes these events significant, is that by 1905 the ability to photograph continuous action and then project the resulting images on a screen to large audiences had created a massive, insatiable demand for films of all kinds, but particularly for fiction films. It is this, the marriage of art, technology and marketing that made Hollywood such a potent force in American society.

Many other countries have film industries of their own, of course, and the apparatus of film culture – writers, producers, directors, stars and technicians – is universal. European and Asian film makers have created distinctive personal or national styles, and within them great individual masterpieces. What then, it might be asked, makes Hollywood unique? The answer to that question is complex and involves the examination of a number of seemingly disparate factors.

In the first place America had, by 1905, undergone a staggering population explosion, brought about mainly by the influx of millions of immigrants during the last quarter of the nineteenth century. These newcomers brought with them a correspondingly daunting problem of communication, and the broad educational possibilities of the movies were quickly seen, both by film makers and audiences, to be far greater than those of the printed word. An early advertisement for Universal Pictures, purportedly written by its president Carl Laemmle, makes the point:

Universal Pictures speak the Universal language. Universal stories told in pictures need no translation, no interpreter. Regardless of creed, color, race or nationality, everyone in the universe understands the stories that are told by Universal Pictures.[1]

The implications of Laemmle’s claim are obvious. Given that the pictures he was proclaiming were silent with a minimum of printed titles, the stories in question had to be relatively simple and had to be told in a relatively straightforward way if they were to be understood by “everyone in the universe”. The economics of film production and distribution made it first desirable and later essential that if not everyone then at least a large proportion of the world’s population should have access to the products of Hollywood. In 1937 Gilbert Seldes estimated that in eight weeks, in seventy thousand cinemas throughout the world, the audiences for films equalled the total population of the globe.[2] And in America alone, it has been reckoned that the average weekly attendance at cinemas during the years of the Second World War was eighty-five millions.

The need for intelligibility and simplicity was, therefore, established early, partly by the mode of production as we shall see, but more importantly by the means of marketing. Moreover, this need was enhanced by the early methods of exhibition. With the exception of a few prestigious and expensive metropolitan theatres, cinemas chose to exhibit their programmes on a continuous basis, leaving audiences free to enter or leave at any time. For this policy to be successful a film’s story and discourse had to be designed in such a way that the average viewer could make sense of it without having followed the plot from the film’s beginning. The repetition of well-tried structural formulae catered to this need, which was at least partly responsible for the dominance of genre films in the Hollywood canon.

There were other reasons for the development of the so-called Hollywood style, of course, and these, too, have affected the development of film in America as a mass medium. By its nature the photographic image has a specificity and an apparent lack of selectivity. It produces an impression of reality that makes it ideally suited to the reflection and propagation of ideas and pictures of life that audiences will accept as true. The need not only to communicate with, but also to assimilate and indoctrinate a vast heterogeneous audience of industrial workers, who were also consumers, led Hollywood to capitalise on the inherent realism of film as a medium. The urge to make ideals seem real and the real, ideal, is one of the major covert motives behind the Hollywood film, and the history of the art can be profitably seen as a drive to perfect the means to this end. It accounts as much for the casting of Henry Fonda as Tom Joad in The Grapes of Wrath (1940) as for the special effects in Close Encounters of the Third Kind (1977); it lies behind Gregg Toland’s deep focus photography for Citizen Kane (1941) as well as the use of an eight track system for overlapping sound in Nashville (1975). The advent of colour and sound; panchromatic film and Mazda lighting; method acting and the star system; zoom lenses and Cinemascope, have all been easily incorporated into an art form surpassing all others in its irresistible surface realism.

In the early days, however, audiences were entranced merely by the exhibition of lifelike, moving images, and it was not until Edwin S. Porter developed the principle of editing film into a method for constructing a story that the possibilities of the new medium began to be glimpsed. Even so, his early classic, The Great Train Robbery (1903), for all its revolutionary elements, might not have had the impact it did had he not made the decision to base his story on actual, topical events. What Porter seems to have intuited is that the realism of a filmed story is not impaired by temporal or spatial gaps in the narrative. His decision to follow his shot of the fleeing train robbers with a return shot to the telegraph office and the assaulted operator is arguably one of the most significant in the entire history of film making. In one cut he freed film narrative from the shackles that had tied it to the stage play, and gave it the flexibility necessary to its future development.

The effect of Porter’s innovations was sensational but it required a further development before the magnitude of their implications could be realised. As early as 1902 Thomas L. Tally had opened a ten cent Electric Theater in Los Angeles, and its success had spurred other businessmen to imitate his venture across the country. It was in 1905 though, that John P. Harris made the important breakthrough when he set about remodelling a store-room in McKeesport, Pennsylvania. In it he provided continuous performances of twenty minute film shows from eight o’clock in the morning until midnight at a cost of five cents. It was the first Nickelodeon. Within four years it has been estimated that there were, throughout the country, some eight to ten thousand of them, clamouring for more and more material.[3]

For years The Great Train Robbery was the most widely exhibited film in the nickelodeons, but there was an obvious limit to its continued popularity, even for very unsophisticated audiences. Demand ran far in excess of production; a situation in which the industry could easily have foundered had it not been for the energy and inventiveness of its pioneers. A related difficulty was that of distribution, and though the attempts to solve this problem constitute a fascinating episode in American business history, crammed with as much melodrama and farce as the films being fought over, it is both long and tortuous, and has, in any case, been thoroughly chronicled in a number of places.[4]

So, too, have the early histories of the film production companies, but there are still several important points to be made about the individual talents employed by them, and their contribution to early film culture.[5] The name that dominates film history in America is that of D. W. Griffith. Historians, following Griffith’s own lead, have attributed to him almost every significant innovation in the art of film making. Griffith, like so many of the early film makers, was an enthusiastic, amateur inventor, who, in later life when he had been toppled from his position of eminence, partly as the result of his own ambitious mistakes and partly by the machinations of the threatened philistines who controlled the industry, would ruefully regret that he had not patented such devices as the “fade-out” which he wrongly believed he had invented. In fact, as Kevin Brownlow has pointed out, every device of cinematic storytelling – the close-up, the tracking shot, the high angle, the flashback, the insert, effect lighting, masking, fades, dissolves – had been established by 1912.[6]

Griffith’s real genius emerged in his unique talent for using such devices to create compelling narratives that could incorporate individual stories and broad social issues, fuse disparate elements into thematic unity, and generally establish the fluidity and flexibility which are two of film’s defining characteristics. Yet even when Griffith’s masterpieces from The Birth of a Nation (1915) to Orphans of the Storm (1922) are seen under ideal conditions, projected at the correct speed, it requires an act of the historical imagination to appreciate the magnitude of his achievement. Modern audiences tend to be embarrassed or bored by other features of his work: his didacticism, racial prejudices and sentimentality. More precisely, they react against particular, dated manifestations of these characteristics. The same audiences do not appear to suffer any similar discomfort when watching The Front (1976), The Deer Hunter (1978), or On Golden Pond (1981).[7]

Griffith once defended his technique by referring his critics to Dickens. Eisenstein in a famous essay elaborated the comparison,[8] and it has now become almost obligatory in any discussion of Griffith’s work. The similarities go beyond those of technique, however. As prolific geniuses working in a popular art form they not only extended the possibilities of their respective media, but also moulded mass consciousness, and in a very real sense gave art back to the people.

The most moving tribute paid to Griffith at the time of his death in 1948 was written by James Agee. Agee believed that The Birth of a Nation was equal to the best art that had been produced in America and compared it to Whitman’s Civil War poems. It is when he came to catalogue the handicaps that Griffith had to overcome in order to be a great artist, though, that the Dickensian qualities – bad and good – begin to emerge:

He had no remarkable power of intellect, or delicateness of soul; no subtlety; little restraint; little if any “taste”, whether to help his work or harm it; Lord knows (and be thanked) no cleverness; no fundamental capacity, once he had achieved his first astonishing development, for change or growth. He wasn’t particularly observant of people; nor do his movies suggest that he understood them at all deeply. He had noble powers of imagination, but little of the intricacy of imagination that most poets also have. His sense of comedy was pathetically crude and numb. He had an exorbitant appetite for violence, for cruelty, and for the Siamese twin of cruelty, a kind of obsessive tenderness, which at its worst was all but nauseating.[9]

This is almost a perfect blueprint for the creation of the various film genres that were to dominate American film production in the future. It gives weight to the claim that it was Griffith who really invented Hollywood.

Ironically, one of Griffith’s mistakes was his failure to recognise the importance of the Los Angeles location for the future development of the industry. His decision to return to work in the East was an important factor in his progressive isolation from the world he had done so much to create. It was during these years that the foundations of the great film empires were being laid in the complex financial deals and mergers involving directors, stars and businessmen. Whilst Sennett, Ince, Chaplin, Fairbanks, Pickford, Loew, Zukor, Fox, Selznick and Goldwyn carved up the territory, Griffith remained aloof in Mamaroneck, New York, slipping further and further into debt with every film he made.

Not so the other great creative genius of the early period, Charles Chaplin. Like Douglas Fairbanks, Mary Pickford and Lillian Gish, Chaplin fully appreciated his own commercial value and he was able to amass a fortune in the first few years of his career in films by signing only short term contracts and demanding higher fees after every few successful pictures. Unlike Griffith, who had joined with Chaplin, Fairbanks and Pickford to create United Artists in order to obtain a greater measure of artistic freedom, the film stars remained in a position to sell their talents to the highest bidder.

Some of those bids were phenomenal. Mary Pickford, for example, was already earning $10,000 a week in 1915 and was shortly to leave Zukor for First National in the reasonable hope of bettering that. Chaplin, too, was offered $1,000,000 a year to stay at Mutual, but he also signed for First National for slightly less money and more directorial freedom. He contracted to make eight films in eighteen months, and given that he had made over sixty for Keystone, Essanay and Mutual in the three preceding years, this was not a particularly demanding schedule. Nevertheless, it took him four years to complete his assignment. And in the next forty-five years he made just ten films. His later ones were, of course, longer, progressively more ambitious in scope, and infinitely costlier to make. The situation which allowed an artist like Griffith to learn his craft on nearly five hundred films between 1908 and 1913 quickly gave way to one in which the rising cost of the product inhibited experimentation and encouraged the repetition of proven skills and formulae.

The line between success and failure in conditions like these can be a narrow one. Griffith always believed that he had been forced out of the industry when on the very brink of a break-through, and in other circumstances Chaplin’s first film for United Artists, A Woman of Paris (1923), could have led him to a similar fate. He spent almost a year and nearly a million dollars on it. It opened in New York without Chaplin’s name on the Bill, and though it was a critical success, the subtle portrayal of a complex human relationship assured its failure with audiences accustomed to a cruder style of melodrama and slapstick comedy.

Chaplin made five more films for United Artists in the next twenty years, and with them a reputation as the world’s most loved and admired clown. In them he reverted to playing variations of The Tramp, a character he had created in 1914.

The Tramp’s child-like egotism makes him an outcast by choice who operates only upon society’s margins, unsuccessfully opposing his need for love and dignity to the materialistic world’s vulgar indifference, before disappearing alone in the famous final “fade out”. His efforts in The Gold Rush (1925), The Circus (1928), City Lights (1931), Modern Times (1936), and The Great Dictator (1941) to create, as it were, a private Utopia, give them the quality of hallucination, as Parker Tyler has pointed out.[10] They also established Chaplin’s art within the mainstream of American culture. The Tramp’s encounters with the intractable material of his environment are as funny as anything in the comedies of Buster Keaton or Harold Lloyd. But they also have an extra dimension of pathos that links them to the world of Scott Fitzgerald’s Gatsby, for example.

During the Depression and after the Second World War the Tramp’s relations with society became progressively more strained as his individualistic eccentricities were less and less tolerated. The tone of Chaplin’s films grew sombre and the margin upon which the Tramp could clown grew smaller until it disappeared altogether in Monsieur Verdoux (1947). In this film the opposition to an uncaring, Capitalist society takes the form of murder, and Chaplin, like many other artists at this time, suffered from that society’s retaliation. The campaign against him was particularly vicious, however, and led to his eventual exile.

3. The Middle: Factories

Throughout his career at United Artists, Chaplin’s stature and his own determination guaranteed him a measure of financial independence and artistic freedom. This was not the case with the great majority of people making films during the thirties, forties and fifties. During this period when the industry was dominated by eight major studios, individuals were more often tyrannised by the studio executives in control of them. The history of Hollywood is full of stories in which famous writers were taken off assignments, eminent directors supplanted by others, and senior actors and actresses suspended from work for minor infringements of the rules. In order to achieve maximum efficiency these large business organisations had not only to have the luck or foresight to make the right deals at the right time, but also to maintain a steady output of saleable goods with factory-like precision. In their hey-day MGM, Paramount, Universal, Columbia, Warners, Twentieth Century Fox, RKO, and United Artists were producing up to 500 feature films a year; roughly one a week at each of the companies. Such schedules demanded considerable discipline and left little room for the idiosyncrasies of individual talents.

Even United Artists, which had started out with the object of making itself into the Tiffany’s of the industry by producing just twelve films a year from its four founders, soon had to change its policy. The services of a new partner were acquired, and it was he, Joseph Schenck, who masterminded the company’s vertical integration by buying a first run theatre circuit, thus bringing it into line with the other studios. He also persuaded the other partners to take on Sam Goldwyn as a producer to provide a further three films a year. None of them welcomed this move, least of all Chaplin, but it proved to be both a financial and an artistic success. Among the films Goldwyn contributed to United Artists were Arrowsmith (1932), The Wedding Night (1935), Stella Dallas (1937), Dead End (1937), Wuthering Heights (1939), and The Westerner (1940). Schenck also tried to persuade his new partners to enter into an arrangement with MGM in order to develop a more efficient distribution system but this move was vetoed by Chaplin, thus losing United Artists $5,000,000 a year, according to Schenck.

In such ways the patterns that were to characterise the film industry for the next thirty years were taking shape in the twenties, but there was still a good deal of flexibility in the system and room for some manoeuvring, as is shown by the activities of Goldwyn himself. Born Samuel Goldfisch, he had taken his new name from that of his own corporation, formed in 1916 with partners called Selwyn. Goldwyn’s inability to work harmoniously with others was legendary and it was not long before his fellow board members began their attempts to get rid of him. He only survived in his own company for as long as he did – until 1922 – because he was the best producer there. When he was finally fired, it was his own protégée, the film star Mabel Normand, who provided his enemies with ammunition, by her association, in the public’s mind at least, with a notorious Hollywood murder.

This scandal, coming hard on the heels of another involving the famous comedian Roscoe (Fatty) Arbuckle and a film starlet, Virginia Rappe, had far reaching consequences in Hollywood, but the immediate effect on Goldwyn was to precipitate his career as an independent producer.

The company he left did not survive for long without him, and in 1924 was forced to merge with Marcus Loew’s Metro Pictures Corporation and Louis B. Mayer Productions to form one of the most powerful companies in the history of Hollywood, MGM. Ironically, Goldwyn himself was never actually part of the organisation that bears his name, and the dominant creative influence at MGM was that of Irving Thalberg, who, when he joined the studio as a twenty-four year old producer, kept his name off all publicity material, saying “Credit you give yourself isn’t worth having”.[11]

At the same time that the movie moguls were fighting tooth and claw to establish their respective empires, they were also suffering some disquiet as a result of the reputation being earned for the industry by their more unruly employees. Given that popular art succeeds by encouraging spectator identification rather than distanciation, the lurid and sometimes sordid events involving members of the film colony in the early 1920s had an inevitable effect on the images they projected from the screen. It therefore seemed logical to the author of an anonymous pamphlet published just after the Arbuckle and Normand scandals, called The Sins of Hollywood, to begin his first chapter by saying that “The Sins of Hollywood are facts – NOT FICTION” but to end it, having detailed some of those facts, by calling for action – not against the people concerned – but against the films they appeared in.[12]

The same logic prevailed with the studio executives who, following the lead given by professional baseball after the scandal of the 1919 World Series, appointed an outsider to set the industry’s house in order. In order to forestall the introduction of State or other external censorship, they appointed Will H. Hays to be the official spokesman of the newly formed Motion Picture Producers and Distributors of America. Over the next twelve years, the Hays Office brought out a series of Codes which finally came into effect in 1934, after a campaign by the Catholic Legion of Decency had forced the appointment of Joseph Breen to administer the Motion Picture Production Code of 1930.

The Code was based upon a problematic distinction between entertainment which improves the race by recreating and rebuilding “human beings exhausted with the realities of life”, and entertainment which “tends to degrade human beings or to lower their standards of life and living”. Its major working principle was that evil should never be made to appear attractive, or good, unattractive. Whilst a case can be made for the view that art influences moral and social behaviour, and that it should therefore be subject to certain restraints, the real weakness of the Code lay in the inadequate notions of good and evil implied by its detailed proscriptions. Hollywood’s self-imposed censorship militated against the development of film as a mature art form by insisting that all forms of “evil” – illicit passion and irreligion as well as criminal activity – must be shown in an obvious and immediate fashion “not to pay”.

It also manifested, in its actual working, different covert assumptions about the nature of evil. In its determination to rid the screen of nudity, “excessive and lustful kissing”, “scenes of actual childbirth”, and “sex hygiene”, it clearly demonstrated a belief in the dangers and evils of sexuality itself and anything associated with it. As we shall see, it took a rather different attitude towards violence.

Molly Haskell, in her study of the treatment of women in film,[13] suggests that the period between 1930 and 1933 was one of the few in the history of cinema when women’s sexuality was honestly portrayed. In films like Morocco (1930), Blonde Venus (1932), Shanghai Express (1932), Dinner at Eight (1933), and She Done Him Wrong (1933), women were allowed to experience and express sexual desire, even initiate sexual encounters, without necessarily being branded as monsters, prostitutes, or criminals. The films of this liberated interlude, moreover, do not portray life in terms of an amoral, orgiastic happiness. If anything, they are more pessimistic, even tragic, than the falsely Utopian productions of the late thirties. Of course, the Hays Office cannot be held solely responsible for the change in represented values on the screen. The Code itself was the product of broader social, economic and political forces that were working to shift the nation’s values and ideals, so that the substitution of the career woman in a tailored suit for the lover in a satin negligee reflected the re-emergence of a native American philosophy compounded in equal parts of Puritanism and the Success Ethic. The Depression, Roosevelt, the New Deal, political developments in Europe and Isolationism at home all played a part in changing people’s perception of women and their sexuality.

The same could be said about the treatment of violence in the period, though here the attitudes of all concerned were more ambivalent. Unlike sex, violence was not seen as inherently evil, and was not dealt with as such by the Production Code. The relevant sections concentrate entirely on violence perpetrated in the pursuit of criminal activities, and whilst the implementation of the Code brought an end to the classic gangster movie, inaugurated in the early thirties by Little Caesar (1931), Public Enemy (1931), and Scarface (1932), Hollywood attempted – vainly, as it happened – to maintain the basic genre by switching character roles and having James Cagney and Edward G. Robinson practise their brutality in defence of society as G-men and Special Agents, rather than in defiance of it. What made it difficult for audiences to identify with these later figures was not their violence but the fact that they had surrendered their former status as romantic outsiders doomed to tragic deaths at the hands of a vengeful society. Even when films of the late thirties such as Dead End (1937) or Angels With Dirty Faces (1938) did place gangsters at their centre, audiences were not likely to identify with them because they were portrayed, not as antagonists of society, but as victims of it. More care was taken in the later films to create a convincing relationship between the criminal and his background; the Environmentalist point often being reinforced by having two generations of characters within a single film, demonstrating different stages of moral decay brought about by poverty in the inner city slums.

The entry of the United States into the Second World War in 1941 temporarily diverted film makers from the problems posed by the gangster and gave them a new field in which to explore and represent violence without fear of censorship; the War film. It also gave more reflective directors a different subject, more compelling than that of the criminal in society. The examination of ideological differences between Democracies and Totalitarian regimes was begun in such films as Watch on the Rhine (1943), Mission to Moscow (1943), and Tomorrow the World (1944). It has continued to occupy the attention of Hollywood ever since, both in actual films and, notoriously, in the series of Hollywood investigations carried out in 1947 and 1951 by the House Un-American Activities Committee.[14]

Meanwhile, such gangster films as were made in the post-war years were heavily influenced by the prevailing atmosphere of fear, suspicion, pessimism and paranoia – characteristics that informed a great many other films of the period too, and created the style of film noir. This was a time when directors who had begun their careers in Austria or Germany in the twenties and thirties were just beginning to re-emerge as a force in Hollywood, bringing with them many of the stylistic qualities of the German Expressionist cinema. In the work of Lang, Siodmak, Preminger, and Wilder, public and private dread found their perfect embodiment:

… interrogation rooms filled with nervous police, the witness framed at their center under a spotlight, heels clicking along subway or elevated platforms at midnight; cars spanking along canyon roads, with agonised faces beyond the rain-splashed windscreen. . . here is a world where it is always night, always foggy or wet, filled with gunshots and sobs, where men wear turned-down brims on their hats and women loom in fur coats, guns thrust deep into pockets.[15]

Though the iconography of films like Double Indemnity (1944), The Big Sleep (1946), and The Killers (1946) may bear a family resemblance to the gangster films of the thirties, the differences represented by the style of Cagney and Jean Harlow on the one hand, and Humphrey Bogart and Barbara Stanwyck on the other, are every bit as great as those between the Hippie outlaws of the sixties and the patriarchal businessmen who people Coppola’s Mafia epics of the seventies.

Similar evolutions and mutations can be traced in the development of other Hollywood genres, even when, as in the case of the Musical or the Western, contemporary social pressures bearing upon the film maker are not translated to the screen with the same immediacy.

In some respects the Musical of the 1930s is a mirror image of the Gangster film. Where the latter presents a tragic hero whose overreaching ambition and egotistical individualism leads to his death, the former gives us an Horatio Alger figure whose energy and skill are harnessed by a strong leader and combined with the talents of a team to ensure success. Both forms are ritualistic in that they are only incidentally concerned with exploring the actual conditions of contemporary life, while being centrally occupied with the myth of the American Dream.[16] Nevertheless, films like Gold Diggers of 1933, Footlight Parade (1933) and 42nd Street (1933) do reflect very vividly the ethos of the early New Deal, just as, in a different way, the Astaire-Rogers musicals produced by RKO in the late thirties, and the famous MGM musicals of the Arthur Freed unit in the late forties and early fifties carry an implicit commentary on different ideologies, or lack of them, in their respective periods.

As one might expect, Musicals of the later 1930s incorporated many of the elements noted earlier in the changing presentation of women. Two related concepts – spontaneity and integration – came to dominate the structure of the genre, and gave it an entirely new set of values. Where the early Warner Musicals had typically involved their characters in parallel plots of personal romance and the effort to finance and stage a Broadway Musical show, later forms, even when they did use the ‘show within a show’ formula, move towards a greater degree of integration of the two stories. Far from having to sacrifice their emotional lives to their professional ones, the successful love of Astaire and Rogers is closely bound up with their success as performers. From this derives the notion of music and dance as spontaneous products of a happy, fulfilled life, not just a separated, professional spectacle.[17]

When, as usually happens, these Musicals break through the fictional framework of their meagre plots to set up a discourse with the audience, they exhort the spectator to musicalise his own life too, as an antidote to the Depression:

Shall we give in to despair,
or shall we dance with never a care?
Life is short, we’re growing older,
Don’t you be an also ran,
You better dance little lady, dance little man.
Dance whenever you can.

The film from which this number comes, Shall We Dance (1937), marks the true culmination of the Astaire-Rogers partnership. It manifests a powerful reaction against the regimentation and manipulated impersonality of Busby Berkeley’s choreography, in which human beings – especially women – are reduced to the level of decorative objects. It also exhibits the positive influence of Frank Capra’s popular success, Mr. Deeds Goes to Town (1936).

Capra’s hero, like Astaire in Shall We Dance, is subjected to a great deal of pressure to succumb to the demands of High Art and High Society. Longfellow Deeds is a tuba-playing “farm boy” from Vermont who is pitchforked, via a $20,000,000 inheritance, into the corrupt world of New York lawyers, cultural entrepreneurs, and society belles. His native shrewdness and innocence enable him to resist successfully all the various attempts upon his money and his person, and to redeem the world through his “screwball” philanthropy. Astaire too has to fight to assert his true self – Pete Peters from Philadelphia, PA – against those who would confine him within his assumed identity as the European ballet dancer, the Great Petrov. His redeemer is, of course, Ginger Rogers in the role of a nightclub dancer. Her American spontaneity finally brings out the “Philadelphia” in Astaire, frees him from the repressive forces of an artificial society, and releases both his artistic and his emotional energies.

It should be stressed that the celebration of individualism in these films of the late thirties does not so much reflect a contemporary reality, as a reaction against the current preoccupation with corporate ideals, and a yearning for an imagined pre-industrial innocence. At the same time, it also needs to be said that this in no way diminishes their cultural significance. It is never very helpful to categorise works of popular art as “escapist” and leave it at that. The questions that need to be asked about them concern the quality of life that prompts the need for escape, the ideals represented by the life that is escaped to, and the imaginative energy involved in the creation of both. For instance, it has been pointed out that the Utopian sensibility behind Hollywood Musicals is a strictly limited one. Though the films treat such problems as scarcity, exhaustion, monotony, manipulation and fragmentation, and attempt to substitute a world of abundance, energy, intensity, and individual as well as communal values, there are other, more specific problems, such as race, class and sexual caste, that are denied validity by this, and indeed, by most Hollywood genres.[18]

By common critical consent the great age of the Musical is reckoned to be more or less co-terminous with the life of the Arthur Freed production unit at MGM. Freed had worked on the MGM lot as a songwriter since 1929, but he is best remembered for the films he produced in the late forties and fifties including Easter Parade (1948), On the Town (1949), An American in Paris (1951), Singin’ in the Rain (1952), The Band Wagon (1953), Silk Stockings (1957), and Gigi (1958). The team he gathered around himself in those years developed an expertise in every aspect of the Musical that enabled the studio to produce with apparent effortlessness a world of colour, movement and music, in which Gene Kelly, Fred Astaire, Judy Garland and Cyd Charisse allowed their audiences to glimpse and even share a life very different from that developing in the bleak confrontations of the Cold War abroad, or the suspicion and fear induced by the McCarthy witch hunts at home.

The structure of these films, depending upon the management of contradictions between conflicting lifestyles, had been set up with the form itself, much earlier, and it is arguable that, of the Freed Musicals, the very first example, based upon the actualities of Depression America, was the best. The Wizard of Oz (1939) was released at the end of an era, at a time which in retrospect appears as the high-water mark of the studio system. Within just a few months a handful of films were made – Stagecoach (1939), Gone With The Wind (1939), The Wizard of Oz (1939), and The Grapes of Wrath (1940) – which in their various ways set the standards and defined the limits of the Hollywood narrative film. Such is the accelerated history of Hollywood, though, that within another two years Orson Welles had brought out his masterpiece, Citizen Kane (1941), a film which broke every existing convention, substituted self-conscious Modernist techniques for the transparent Hollywood style, and pointed the way forward to a period in which it would no longer be possible for either Hollywood or Europe to maintain a separate artistic identity.

4. The End: Independence

Orson Welles had been brought to Hollywood by the President of RKO, George Schaefer, and on the strength of his reputation in radio and the theatre, given complete freedom to make any film he liked without being subjected to the normal restraints or interference. He did so, and the result convinced Schaefer that he had a film which could save RKO from the bankruptcy it was facing.

Before the film could be shown, however, Schaefer was approached by Nicholas Schenck, an associate of Louis B. Mayer of MGM, who offered him over $800,000 in return for the destruction of the negative and all prints. It will probably never be known for certain who put up the money, but it is more than likely that it came from a consortium of the top movie moguls, all of whom were worried by the possibility of reprisals against the industry by the film’s thinly disguised subject, the newspaper magnate, William Randolph Hearst. Schaefer refused the offer even though he was warned that the large theatre circuits owned by the other big four companies, Fox, Paramount, Warners and Loew’s, would make sure that the film could not be widely exhibited. In addition, his refusal brought down upon RKO the considerable weight of Hearst’s own wrath, expressed in a variety of ways, from straightforward attacks in his newspapers, to the blackmail of anyone prepared to help Schaefer, and political harassment of Welles and his colleagues. It was like a full dress rehearsal for the infamous Hollywood witch hunts. Eventually, when threatened with legal action, the other companies half-heartedly backed down and the film had a very limited release.

The lack of advertising and promotion ensured the film’s box office failure, in spite of excellent reviews. Though it won the New York Film Critics award, it failed to gain an Academy award in eight out of the nine categories for which it had been nominated. More significantly, the audience at the ceremony – the Hollywood community – greeted every mention of its name, or that of Welles, with loud boos and hisses. Not surprisingly, in the face of such powerful opposition, it took many years for Citizen Kane to be properly assimilated into film culture and history.[19]

Shortly before Citizen Kane was made, a complaint was filed against the five major companies that was eventually to have far more important effects upon the entire industry than any single film could. They were charged with combining and conspiring to restrain trade unreasonably, and to monopolise the production, distribution, and exhibition of films. In addition, the three minor companies, Columbia, Universal and United Artists, were charged with conspiring with the major companies for the same purposes.

The aim of the action was to divorce exhibition from production and thus put an end to such restrictive practices as the block booking of films, the fixing of film licence terms, and the control of admission prices. The legal battle dragged on through the 1940s, but the Supreme Court decision against Paramount Pictures in 1948 signalled the end of all the old empires.[20] As if this were not enough, the studios had to face another major threat posed by the rapid spread of television throughout the country during the 1950s. Of course, the industry fought back in predictable ways, most of which were based on the belief that the smallness of the television screen could best be countered by increasing the size of films. Budgets began to expand in order to finance blockbusters, and the size of cinema screens and film stock followed suit in order to accommodate them. The increased cost of making and showing these films, and the recognition that the old, regular, undiscriminating audience had gone forever, had a drastic effect upon the number of films produced. Warner Brothers, for example, released sixty-seven films in 1937, but only fifteen in 1977.

Another factor that helped to determine the direction taken by the film industry at this time, was the growing influence of European film. A small but significant following for the work of Godard, Bergman and Fellini not only helped to further segment a shrinking audience, but also persuaded film makers to experiment more with subject and style. If one examines the films that now make most money, it is obvious that, though Hollywood still relies very heavily on certain formulae, the shape and the texture of modern films differ radically from those that provided a staple diet in the thirties. The big hits of 1977, for example, were a space spectacular (Star Wars), a supernatural horror film based on an earlier success (Exorcist II), a Rock Musical (Saturday Night Fever), a “Roadeo” comedy (Smokey and the Bandit), a wide screen, stereophonic, underwater adventure (The Deep), and a children’s “Fantasy-Marvel” (Close Encounters of the Third Kind). The same year marked the release of four more American films which, though less successful financially, did well in the “art houses”, and won a great deal of praise from European critics. These were Fred Zinnemann’s Julia, Woody Allen’s Annie Hall, Robert Altman’s Three Women, and Richard Brooks’ Looking for Mr. Goodbar.

The evolution of native American genre films in this period may be profitably explored in relation to general developments in American culture and society, though the nature of such causal connections must remain to some extent problematical, as we shall see. Perhaps the best example is the Western. By what seems like a nice historical coincidence, but is probably not, Frederick Jackson Turner’s famous essay, “The Significance of the Frontier in American History”, in which he argued that the existence of free lands in the West had been responsible for the creation of important American character traits, was delivered to a scholarly audience just a few months before the first cowboys made their appearance on the screen. Buffalo Bill’s Wild West Show was photographed for the Edison Kinetoscope in 1894, and Cody himself, like many other survivors from Western history, was drawn into the business of re-enacting, or re-creating, the immediate past on film.

Cody’s own stated aim was to make an historically accurate documentary film about the Indian wars, and though the film itself has now been lost, recent scholarship suggests that it, and indeed many other films of this early period, wittingly or unwittingly captured some sense of the West’s actuality.[21] The Western as Hollywood knows it, however, was being created, not so much in these films, or even in early masterpieces like James Cruze’s The Covered Wagon (1923), or John Ford’s The Iron Horse (1924), both of which combined fact with fiction, as in the dozens of horse operas churned out as “program” pictures by obscure little companies, many of which were still filming in New Jersey or New York. It was the simplification of the formula in these low budget one-reelers that provided the pattern for thousands of films during the next forty years. The great bulk of subsequent Westerns were made in the “B-Hive”, the name given to a group of small independent studios producing B films, or fillers, for double feature programmes throughout Hollywood’s lucrative years. The iconography, plots and characters of these movies were just as uncomplicated as those in their silent predecessors. Given that a studio like Republic was committed to two Westerns a month, each of them shot in seven days on a $50,000 budget, they had to be. At the same time, the very existence of these simple formula films and the audience’s familiarity with them, is what enabled directors such as Ford, Hawks, Mann and Boetticher, to play such subtle and interesting variations on the form for at least three decades.

Any attempt at social or psychological interpretation of the form depends to some extent upon the kind of definition one makes. To merit the name at all, a Western must surely, as John Cawelti argues,[22] first of all take place in the West, near the Frontier, at a time in history when social order and anarchy were in tension, and its action must involve some form of pursuit. This is a very basic prescription and many other commentators and critics have elaborated on what they take to be necessary elements of setting, plot or character. In his structural analysis of narrative, for example, Will Wright[23] lists the basic plot functions of what he calls the Classical Western as follows: (1) The hero enters a social group. (2) The hero is unknown to the society. (3) The hero is revealed to have an exceptional ability. (4) The society recognises a difference between themselves and the hero; the hero is given a special status. (5) The society does not completely accept the hero. (6) There is a conflict of interest between the villains and the society. (7) The villains are stronger than the society; the society is weak. (8) There is a strong friendship or respect between the hero and a villain. (9) The villains threaten society. (10) The hero avoids involvement in the conflict. (11) The villains endanger a friend of the hero’s. (12) The hero fights the villains. (13) The hero defeats the villains. (14) The society is safe. (15) The society accepts the hero. (16) The hero loses or gives up his special status.

Wright goes on to contrast this morphology with that of a later form, the Professional Western, and to interpret the change from one to the other in the context of the American shift, after the Second World War, from an individualist market economy to a corporate or managed one. It is a very ingenious interpretation of the Western in terms of the understanding and communication of social tensions, but it is based on data and definitions that not everyone would accept. For instance, he deliberately restricts his analysis to the sixty-four top-grossing Westerns made between 1931 and 1972, on the assumption that such films correspond most exactly to the expectations of the audience, and to the meanings viewers demand from the myth. The difficulties presented by a study based on this procedure can be seen when Cat Ballou (1965) and Shane (1953) are both described in his list as Classical Westerns. Cawelti’s model, constructed more loosely but on a firmer empirical base, may not produce such a neat reading of American social tensions, but it does grant significance to other important elements in films, besides narrative functions, and it also recognises the fact that movies relate to a multitude of other relevant contexts.

In the case of the two films mentioned, differences that are not revealed by an examination of their narrative patterns become only too obvious if one attends to the crucial elements of setting or mise-en-scène. The gun fighter in Cat Ballou, Kid Shelleen, is played by Lee Marvin as an ageing drunk who has to be physically and mentally supported by the cowboy’s traditional props: whisky, women, his horse and his costume. Throughout the film, however, these iconic elements are subjected to gross comic exaggeration. When, for example, he is helped into his clothes before the final “showdown”, we see him being laced first into a tight corset, then enormous boots, and finally a heavy, jewel-encrusted waistcoat; appurtenances which transform him completely but also make it virtually impossible for him to walk! His antagonist, Tim Strawn, is also played by Lee Marvin (in itself a parody of a traditional Western motif: the idea of hero and villain, representing the “light” and “dark” sides of personality). Like many of his villainous predecessors on the screen, Strawn’s moral ugliness is given physical embodiment, not, in his case, by a scar, a twisted mouth, or a drooping eyelid, but by an enormous, artificial, silver nose, ludicrous in its clown-like, dehumanising effect. In these, and a hundred other ways, the film systematically parodies the codes and values of the Western by relentlessly subjecting its revered images and symbols to various forms of comic pastiche.

Shane moves in a diametrically opposite direction in order to transcend rather than subvert the genre, and to create a mood of elegiac heroism. If it were not for the fact that the world created on screen is filtered through the viewpoint of a young boy, the visual enhancement of every traditional image might make for a disabling sentimentality. As it is, we are offered – through the child’s eyes – a landscape of overwhelming beauty and grandeur, in which an idyllic rural community nestles beneath snow-capped mountains, cherishing a dream of democracy and social order, but threatened by violence and lawlessness. “The Spirit of the West” – Alan Ladd in fringed buckskins – descends into this arena to meet and ritualistically defeat the sinister, black-clad “Spirit of Evil”, Jack Palance, before fading away as mysteriously as he arrived.[24] Even if, as some critics have claimed, the crystallisation of the myth in Shane brought one tradition to an end and left room only for such versions of the anti-Western as Cat Ballou, an interpretation that refuses to take account of their basic differences of tone cannot do justice to the genre.

In the Western, no less than in Hemingway’s novels, what finally matters is not so much what is done, as how it is done; heroism becomes a function of style. It is possible to discern the same narrative pattern in a great many other films of the post-war period besides the Classical Western. It is arguable that one of the defining characteristics of popular narrative film is its rigid insistence upon the resolution of conflict by narrative closure. It follows then that attempts to treat the social or psychological problems inherent in the myth are much more likely to be made at the level of setting or character than by opening up the narrative structure.

One of the more persistent themes in post-war Hollywood cinema has been the futility of individual action, heroic or otherwise, in the context of a vast military/industrial society. It has been reflected in the Western in a multitude of images that have helped to undermine, or at least revise, its traditional codes. In Lonely Are the Brave (1962), Kirk Douglas and his horse are finally taken out, not by the sixguns of a sheriff, but on a wet, dark highway, by a huge truck carrying lavatory bowls!; the hero of Kid Blue (1973) finds himself in a factory making ash trays; Hud (1963) has traded his Colt for a Cadillac; and the shots heard by the two cowboys riding through the Western landscape at the beginning of Comes a Horseman (1978), are not what we (and they) imagine, but the rifles of a military Honour Guard at the burial of a World War II hero.

Another way of demythologising the form is by creating characters whose physical and mental attitudes are in direct contrast to those of the archetypal hero. In films like Will Penny (1967) and Monte Walsh (1970) we are given, in place of the cool, gun-twirling dandy, ageing workmen whose primary aim is not that of making the West a fit place to live in by ridding it of evil, so much as surviving the rigours of bad weather and hard physical work, and saving enough money to give themselves security in old age. Alternatively, some directors, like Arthur Penn in The Left Handed Gun (1958), and Philip Kaufman in The Great Northfield Minnesota Raid (1972), have taken well known Western prototypes like Billy the Kid and Jesse James, only to reinterpret their exploits in terms of neurotic or psychotic anti-social behaviour.

During a period in which there has been a radical revision of public and private attitudes to American minorities, some of the minor stereotyped figures essential to Hollywood genre films have also disappeared, sometimes only to return in new guises.[25] Little Big Man (1970) and A Man Called Horse (1970) are interesting examples of films which explore a much more ambivalent and problematic relationship between Indians and Caucasians than in the days when “the only good Injun was a dead ’un”. The terms of the modern debate are more likely to be psychological and cultural than moral and social, with the Indian representing the survival of non-rational and non-aggressive modes of perception and behaviour, pitted against the technologically supported power of White imperialism.

The image of Woman in the Western has also undergone a transformation. The traditional dichotomy of Eastern schoolteacher and Western whore is denied once and for all by such characters as that played by Julie Christie in Robert Altman’s McCabe and Mrs. Miller (1971). Like her blonde predecessors she is associated with social progress, but the means by which she attains it in the embryonic community of Presbyterian Church is by taking over Warren Beatty’s inefficient brothel and setting it up as a profitable business venture. Like her dark haired sisters in earlier films she is also a sensualist who is not above exploiting men’s sexual appetites, but her own are best served, not in bed but in the opium den. McCabe and Mrs. Miller is an excellent example of a film which, like many of Altman’s, radically subverts the convention of a genre whilst maintaining the basic narrative structure.[26]

It would be wrong to infer from all this, however, that the growing sophistication of a Hollywood genre like the Western is indicative of a similar growth in the sensibilities of the mass audience for popular art. Most of that audience was, by 1960, firmly entrenched in front of television screens, and the major networks took advantage by filling their schedules with material that had hitherto only been accessible in the cinemas: Westerns, Gangsters, and Soap Operas. TV took over and surpassed – in quantity at least – the output of the studios producing B features and Serials, leaving Hollywood to search desperately for new formulae with which to win back its mass audience.

One of the ways in which Hollywood prepared itself to fight the threat of television was by relaxing the stringency of its own censorship laws. In 1968 a new set of Code Objectives was published by the Motion Picture Association of America, designed in its own words “to keep in close harmony with the mores, culture, the moral sense and change in our society”. The new Code still railed against detailed or protracted acts of violence and illicit or intimate scenes of sexuality, and also issued grave warnings about the possible spread of licence. But these rhetorical flourishes failed to obscure the real point of the document, which was to introduce a rating system for films whereby these sensitive areas could be treated in “X” rated films for safe exhibition to adult audiences.[27]

Whilst this was an effective counter to the blandness of television at the time, and helped to ensure audiences for such excellent films as Last Tango in Paris (1972) as well as for a multitude of “sexploitation” films that flooded the market in the 1970s, there was also some truth in the Association’s claim that in changing its Code it was responding to actual shifts in society’s values. Throughout the 1950s and 1960s American psychologists and sociologists had been publishing the results of research that seemed to suggest a growing dichotomy between public and private attitudes to moral issues. These changes had already begun to be reflected in mainstream films. As early as 1959 Robert Brustein was commenting on a new sexual “realism” that was beginning to pervade Hollywood,[28] and this was given further impetus in the 1960s by the growth and spread of independent film making completely outside the control of the conservative industry. Jack Smith’s Flaming Creatures (1963), Andy Warhol’s The Chelsea Girls (1966), Paul Morrissey’s Flesh (1968), and Kenneth Anger’s Scorpio Rising (1964), which was at the centre of a famous Los Angeles court case, were all fairly widely seen during the decade and even more widely talked about. Directly reflecting a different though important aspect of the ethos of the 1960s, they had an inevitable influence on the content of Hollywood films.

It is also likely that the small-scale success of these film makers had some effect on the way other young film makers thought about the structure of their careers in relation to Hollywood. So, too, did Jonas Mekas’s Film Makers’ Cooperative, which encouraged independent artists throughout the decade, his Film Distribution Center and the Film Makers’ Cinematheque in New York City, and his tireless written support of “Underground” cinema in The Village Voice and in Film Culture, the magazine he founded.

More important still, though, was Hollywood’s own changing attitude to film production. After the crisis of 1969-71, when the attempts of the major studios to combat television by investing in expensive “superproductions” had led them to the brink of disaster, there followed a period of readjustment and retrenchment. In 1972, for example, out of 296 films produced, no fewer than 170 were classified by Variety, the trade magazine, as “Independents”,[29] and though many of these were actually financed by the major companies, the salient fact is that the studios had begun to relinquish the kind of artistic control that had created and sustained the “Hollywood style” and, within it, various studio styles, for half a century.

Of course, there were still directors, Hitchcock for example, who continued to work quite happily within the studio system, though in his particular case the developed style was so powerful that he could move quite comfortably between MGM, Universal, Paramount, Warners, and even Pinewood, without any major disruption to his work. But younger directors like Scorsese, Milius, Spielberg, Lucas, and Coppola, who began their careers in this more fluid situation, necessarily developed in different ways, often experiencing from film to film both the risks and rewards – artistic and financial – of their semi-independent status. Of these, the one whose career to date is perhaps the most instructive, is Francis Coppola.

Unlike most of the older, established directors who ended up working in Hollywood, Coppola actually planned his career as a film maker, and to facilitate it enrolled in the Film School of the University of California at Los Angeles. By 1963 he had already made a few short “exploitation” films and a longer one for the legendary Roger Corman Productions, Dementia 13 (1963). His aim at this stage was to get as much experience as he possibly could in every aspect of the business, so that when he was offered a job as a writer for Seven Arts, he took it and began a long association that enabled him to work towards the artistic and financial independence he so wanted. Seven Arts itself was a company quite unlike the older, more orthodox Hollywood studios. From small beginnings selling television movie rights, it had moved into the business of “packaging” films for other companies to produce and finance. Under their aegis Coppola soon graduated from writing to directing, and in addition to working on such strictly commercial projects as the musical Finian’s Rainbow (1968), was also able to invest his energy and money in more personal projects such as You’re a Big Boy Now (1966) and The Rain People (1969).

These latter films, though different in style and content from normal studio products, were certainly not intended to be “uncommercial”, and are not “personal” in the same sense that Warhol’s are. Coppola has indicated his own goal by declaring that “The way to come to power, is not always merely to challenge the Establishment, but first make a place in it, and then challenge and double-cross the Establishment”.[30] His aim has always been to make fictional, narrative films, but to do so in his own way, free from the interference of financiers and bureaucrats. It was for this reason that he decided in 1969, with the help of George Lucas and other friends who had travelled and worked with him during the shooting of The Rain People, to set up an independent studio hundreds of miles away from Hollywood, to be called American Zoetrope.

Here in San Francisco, with financial backing from Warners, Coppola was able to set up a number of schemes dear to his heart. George Lucas began work on his science fiction film, THX 1138 (1971), John Milius started writing Apocalypse Now (1979), and the first discussions about American Graffiti (1973) took place. If American Zoetrope was modelled on Roger Corman’s company, it also sounds – in George Lucas’s description – very similar to the blueprint for United Artists half a century earlier: “The real concept was that it would be an independent, free production company that would make seven or eight films a year in varying degrees of safeness. We might do a couple of films that seemed fairly safe and reasonable, and then do some really off-the-wall productions. The theory was that it would all balance itself out and the operation would make money”.[31] Coppola was also involved in another short lived venture, the Directors’ Company, which only produced two films, The Conversation (1974), and Paper Moon (1973).

Coppola’s film The Conversation, like Bogdanovich’s Paper Moon, did not make much money, but it was a great critical success, winning the Grand Prix for the best film at the Cannes Festival. Its story revolves around a professional eavesdropper who is trying, with the help of very sophisticated technology, to piece together details of a possible murder plot, but who finds his own privacy violated by similar means. The film reflects not only Coppola’s own fascination with technology, but a sensitive response to the social and psychological implications of such contemporary incidents as Watergate. Like Antonioni’s Blow Up (1966), which bears a similar relation to President Kennedy’s assassination, it also hints at a concern with the moral issues involved in the restructuring of reality in the act of film making itself. For all the praise that has deservedly been heaped upon The Godfather, Coppola proved, in The Conversation, his ability to create, without the massive resources of Hollywood, a film that is every bit as good.

More significantly, the scale of The Conversation suggested to some observers that Coppola had managed to avoid the “syndrome of escalating giganticism” so pervasive in the modern industry, but his subsequent work has done little to confirm this. The money he earned from The Godfather was used to buy up the old Hollywood General Studios in 1979 and to bring back to life his dream of controlling his own artistic output, and of helping like-minded film makers. Sadly, his first film for the newly christened Zoetrope Studios, the ambitious, Conradian exploration of “the horror, the madness, the sensuousness, and the moral dilemma of the Vietnam War”, Apocalypse Now (1979), only plunged him back into the same difficulties he has always had in maintaining his independence. As Michael Dempsey pointed out, “in spite of his genuine artistic goals, he got caught up in the same wheeler-dealer’s recklessness – pyramiding a top-heavy, complex, multi-million dollar set of interlocking deals and schedules on to the quicksand of a fuzzy, unshaped screenplay – which the crassest hacks in the international film industry, cold-assed businessmen who feel nothing but contempt for artists, continually get involved in.”[32]

Though Coppola survived the experience, an even worse fate was to befall his more recent “personal” film, One From the Heart (1982); a disaster of such proportions that it has forced him to sell his entire studio. At about the same time, Columbia Pictures, the company that had finally stepped in to distribute it only to withdraw it again after a disastrous seven weeks, was taken over by The Coca Cola Company. Unlike his earlier personal films, this one has proved to be as unpopular with the critics as it was with the public, one of them even going so far as to discover in it a revelation of “the schizoid dream of Coppola’s Zoetrope Studios”, “an old-style Hollywood studio stacked with all the latest video technology” and a film that “lacks the style to unite its human and technological elements”.

Coppola himself seems undaunted by the experience, according to Lillian Ross who wrote a long essay about the making of the film.[33] He believes that One From the Heart, in which he was trying to create an entirely new film vocabulary, will eventually secure an important place in film history. It is a large claim to make, reminiscent of earlier ones by his iconoclastic predecessors, Welles and Griffith.

Nevertheless, his judgement is more likely to be proved right than that of his critics or his public. One From The Heart is one of the most startlingly original films to be made since Citizen Kane or The Wizard of Oz, which it closely resembles. It gives the lie to Detweiler’s complaint in Fedora about “the kids with beards” who have taken over Hollywood. “They don’t need scripts,” he remarks bitterly, “just give them a hand-held camera with a zoom lens”. In fact it is doubtful whether any film in history has been more rigorously planned than this one. In the pre-visualisation stage Coppola made more than a thousand video tapes and as many stills. He created hundreds of storyboards and sketches, and even made a filmed walkthrough of the story in the real Las Vegas before shooting no less than two hundred thousand feet of film on the elaborate set created in the Zoetrope studios. None of this is any guarantee of quality, of course, but the finished film does triumphantly justify the painstaking care that was lavished upon it. Coppola has taken hold of the most inflexible of film genres, the Musical, and vastly extended the possibilities of both its structure and texture to create a metaphoric discourse on the American Dream that makes every film produced before it suddenly look very old fashioned.

Moreover, he has made good his claim that financial disappointments will not be allowed to quench his creative spirit by making two more films in quick succession, The Outsiders (1983) and Rumble Fish (1983). He has shown that it is possible to go on making excellent films with or without the help of the established studios; and his emphatically reiterated belief that “you can’t be an artist and be safe” demonstrates a determination and a maturity that augur well for his own future and, if his example is followed, for that of Hollywood.

 

5. Guide to Further Reading

Despite the proliferation of literature on every aspect of film, there are very few books that adequately cover the subject of this pamphlet. The nearest approaches are in The Classical Hollywood Cinema, by David Bordwell, Janet Staiger and Kristin Thompson (London: Routledge, 1984); A Certain Tendency of the Hollywood Cinema, 1930-80, by Robert B. Ray (Princeton: Princeton University Press, 1985) and in the seven volumes devoted to Hollywood in the International Film Guide series.

Individual titles are Early American Cinema, by Anthony Slide; Hollywood in the Twenties, by David Robinson; Hollywood in the Thirties, by John Baxter; Hollywood in the Forties, by Charles Higham and Joel Greenberg; Hollywood in the Fifties, by Gordon Gow; Hollywood in the Sixties, by John Baxter; and Hollywood in the Seventies, by Les Keyser (London: Tantivy Press/Zwemmer; New York: Barnes, 1968-1981). There are, in addition, two single-volume histories which allocate large proportions of their texts to American film. These are Eric Rhode’s A History of the Cinema from its Origins to 1970 (Harmondsworth: Penguin, 1978), and A Short History of the Movies, by Gerald Mast (Indianapolis: Bobbs-Merrill, 1976). These can be supplemented by David Thomson’s opinionated but fascinating Biographical Dictionary of the Cinema (London: Secker and Warburg, 1975. Rev. ed., 1980). Richard Roud’s two-volume compilation of essays, Cinema: A Critical Dictionary (London: Nationwide Book Services, 1980), is invaluable, as are Richard Koszarski’s collections of essays by film makers, Hollywood Directors 1914-1940 and Hollywood Directors 1941-1976 (New York: OUP, 1976). Charles Higham deals with most of these directors in his detailed survey The Art of the American Film: 1900-1971 (New York: Doubleday, 1974).

The standard history of American film from its beginnings until 1938 is Lewis Jacobs’s The Rise of the American Film: A Critical History (New York: Harcourt, Brace, 1939; reissued in the series “Studies in Culture and Communication” by the Teachers College Press, 1968). Jacobs focuses primarily upon films and film makers, though he has much to say about the development of the industry too. The opposite is the case in Benjamin Hampton’s History of the American Film Industry, which was originally published in 1931 and deals only with the silent era. The same is true of Terry Ramsaye’s A Million and One Nights, which deals with an even shorter period but which is full of detailed information about the early years of Hollywood. For a more concise treatment of a longer period see The Celluloid Empire, by Robert Stanley (New York: Hastings House, 1978). The two subjects – the industry and its products – are brought together, and the relationship between them analysed, in Harmless Entertainment: Hollywood and the Ideology of Consensus, by Richard Maltby. Hortense Powdermaker occasionally attempts to do the same in Hollywood the Dream Factory (Boston: Little, Brown, 1950), though her approach is that of an anthropologist. Tino Balio has collected several excellent essays in The American Film Industry (University of Wisconsin Press, 1976), and this is complemented by the documents collected by Gerald Mast in The Movies in Our Midst. Two other excellent books on the films of the silent period are Kevin Brownlow’s The Parade’s Gone By and William K. Everson’s American Silent Film (New York: OUP, 1978).

A series of useful studio histories is in process of being published, each of which details every film made by a particular studio in chronological order, with credits and synopses. Already available are The MGM Story, by John Douglas Eames, The Warner Bros. Story, by Clive Hirschhorn, and The RKO Story, by Richard B. Jewell with Vernon Harbin (London: Octopus, 1977, 1979, 1982). A general study of the major studios is Roy Pickard’s The Hollywood Studios (London: Muller, 1978).

For those whose primary interest is in America rather than films, there are three very different but equally rewarding books that use film to illustrate a variety of aspects of American society. Larry May’s Screening Out the Past (New York: OUP, 1980) shows how the movies both reflected and helped to effect the transformation of American values from Victorian to modern; Michael Wood in America in the Movies (New York: Delta, 1975), explores a system of assumptions and beliefs that found expression in American films of the 1940s and 1950s, and David Thomson in America in the Dark (London: Hutchinson, 1978) takes some of these myths and shows how Hollywood, in its dependence upon them, necessarily fails to treat the real world.

As one might expect, a great deal more has been written about the so-called “Golden Age” of Hollywood, from the introduction of sound until the decline of the studio system in the late 1950s. Film production in this period was so intensive that most scholars have tended to specialise, but there is one book on the 1930s which purports to deal with all 5,000 films. Roger Dooley’s From Scarface to Scarlett (New York: Harcourt Brace Jovanovich, 1981) is not notable for its critical rigour but it does serve as a valuable reference book. For social and political reasons more attention has been paid to Warner Bros. than to any other studio in the 30s. Nick Roddick’s A New Deal in Entertainment: Warner Brothers in the 1930s (London: BFI, 1983) is the obvious example, but to a lesser extent the same is true of Andrew Bergman’s We’re in the Money (New York: Harper and Row, 1973) and The Hollywood Social Problem Film, by Peter Roffman and Jim Purdy (Indiana UP, 1981). Robert Sklar’s cultural history of American film, Movie-Made America (New York: Chappell, 1975), is broader in its scope, but particularly good on Frank Capra and Walt Disney.

Studies of classic American genres have been particularly popular in the last few years, and the following represents only a very small selection. On the Western the major works of scholarship are Kevin Brownlow’s The War, the West and the Wilderness and John Tuska’s The Filming of the West (New York: Doubleday, 1976; London: Robert Hale, 1978). These should be read with John Cawelti’s The Six-Gun Mystique, Jim Kitses’s Horizons West (London: BFI/Thames and Hudson, 1969), and The Western: From Silents to the Seventies, by George N. Fenin and William K. Everson (New York: Orion Press, 1962. Harmondsworth: Penguin, 1977).

Eugene Rosow’s Born to Lose (New York: OUP, 1978) is the definitive study of the Gangster movie, but Colin McArthur’s Underworld U.S.A. (London: BFI/Secker and Warburg, 1972) and Jack Shadoian’s Dreams and Dead Ends (MIT Press, 1979) are both excellent studies of the genre. The Musical has spawned more picture books than serious analyses, but Jane Feuer’s The Hollywood Musical and the essays in Genre: The Musical: A Reader are outstanding exceptions. In addition to these, Hugh Fordin’s history of the Freed unit at MGM, The World of Entertainment: Hollywood’s Greatest Musicals (New York: Avon Books, 1975), is essential reading.

Studies of themes in genre films have also produced some excellent criticism in books such as Stanley Cavell’s Pursuits of Happiness: The Hollywood Comedy of Remarriage (Harvard UP, 1981), and Charles Affron’s Cinema and Sentiment (University of Chicago Press, 1982). The classic study of women in the movies is Molly Haskell’s From Reverence to Rape, but this is an area of escalating scholarship and more specialised works continue to be produced, such as Brandon French’s On the Verge of Revolt: Women in American Films of the Fifties (New York: Ungar, 1978), Women in Film Noir, edited by E. Ann Kaplan (London: BFI, 1978), or Women and their Sexuality in the New Film, by Joan Mellen (London: Davis-Poynter, 1974). The same author has also studied the subject of masculinity in American films in Big Bad Wolves (London: Elm Tree Books, 1978), as has Donald Spoto in Camerado: Hollywood and the American Man (New York: New American Library, 1978). Blacks in American film have also been extensively studied, and Thomas Cripps includes an excellent bibliography in his own book, Black Film and Genre (Indiana UP, 1979).

James Monaco has produced an excellent guide to American Film Now (New York: New American Library, 1979). It treats, in the words of its sub-title, the people, the power, the money, and the movies. So too does Michael Pye in his long introductory essay on the modern industry in The Movie Brats (London: Faber and Faber, 1979). Pye and co-author, Lynda Myles, then go on to examine the work of Coppola, Lucas, De Palma, Scorsese, and Spielberg. Robert Kolker also deals with Coppola and Scorsese in his excellent book, A Cinema of Loneliness (OUP, 1980), and adds to these studies of Penn, Kubrick, and Altman.

Restrictions of space make it impossible to mention here the many fine studies of individual directors, actors or films, though it is worth noting that there are several series specialising in this work, such as the International Film Guide series, Studio Vista’s Movie Paperbacks, the BFI’s Cinema One series, and Spectrum’s Film Focus series. Mention should also be made here of the excellent series of screenplays published in the Wisconsin/Warner Bros. screenplay series.

Finally, there are several journals, in England and America, which either specialise in American film, or devote generous amounts of space to it. These include Sight and Sound, Movie, Framework, The Velvet Light Trap, Film Comment, Film Quarterly, Monthly Film Bulletin, and Variety.

6. Notes

  1. The advertisement from which this is taken is reproduced by William K. Everson in his book American Silent Film (N.Y.: Oxford University Press, 1978), p.25. Back
  2. Gilbert Seldes, Movies for the Millions (London: Batsford, 1937), p. 12. Back
  3. The best history of the film industry considered as a business enterprise is Benjamin P. Hampton’s A History of the Movies (N.Y.: Covici, Friede, 1931). It was later republished under a new title, History of the American Film Industry from its Beginnings to 1931 (N.Y.: Dover, 1970). My account is based on Chapter Two of this book. Back
  4. In addition to Hampton, one should consult Lewis Jacobs’s The Rise of the American Film (N.Y.: Harcourt, Brace, 1939), republished with additional material including his essay, “Experimental Cinema in America 1921-1947” (N.Y.: Teachers College Press, 1968). For a contemporary account, see Terry Ramsaye, A Million and One Nights (N.Y.: Simon and Schuster, 1926). Back
  5. Roy Pickard deals briefly with each of the major companies in The Hollywood Studios (London: Muller, 1978) but there are also good individual histories as well; for example, Tino Balio’s United Artists: the Company Built by the Stars (University of Wisconsin Press, 1976). Back
  6. Kevin Brownlow, The Parade’s Gone By (N.Y.: Ballantine Books, 1968), p.30. Back
  7. Most of the modern controversy surrounding Griffith has concentrated on the portrayal of Blacks in The Birth of a Nation. Griffith was a Southern conservative with aristocratic pretensions, and not surprisingly his social and moral philosophy reflects that background. His racism was very mild indeed though, compared to that of Thomas Dixon, whose novel The Clansman formed the basis for the film, or to that of contemporary films portraying Blacks such as Lubin’s Coon Town Suffragettes or Turner’s In Slavery Days. Back
  8. Sergei Eisenstein, Film Form (trans. Jay Leyda) (N.Y.: Harcourt, Brace, Jovanovich, 1949). Back
  9. James Agee, Agee on Film (London: Peter Owen, 1963), pp.316-17. Back
  10. Parker Tyler’s analysis of the character in Chaplin, Last of the Clowns (N.Y.: Horizon Press, 1972) is primarily psychological in orientation, but the study also contains many insights into the cultural and social implications of Chaplin’s art. Back
  11. Quoted in Samuel Marx’s Mayer and Thalberg: the Make Believe Saints (N.Y.: Random House, 1975), p.49. Details of Goldwyn’s career can also be found in Arthur Marx’s Goldwyn: a Biography of the Man Behind the Myth (N.Y.: Norton & Co., 1976). The scandals that accompanied these deals are graphically described by Kenneth Anger in Hollywood Babylon (N.Y.: Delta, 1975). Back
  12. Part of this pamphlet, along with many more important documents from every period, is reprinted in The Movies in Our Midst, edited by Gerald Mast (University of Chicago Press, 1982). Back
  13. Molly Haskell, From Reverence to Rape: The Treatment of Women in the Movies (N.Y.: Penguin Books, 1974), p.91. Back
  14. Several accounts of HUAC and Hollywood have been published, including The Inquisition in Hollywood: Politics in the Film Community 1930-1960, by Larry Ceplair and Steven Englund (N.Y.: Anchor/Doubleday, 1980), and Hollywood on Trial: The Story of the Ten Who Were Indicted, by Gordon Kahn (N.Y.: Boni and Gaer, 1948). However, the best treatment of this, and other political and social issues, in terms of the effect on films produced in Hollywood, is Richard Maltby’s Harmless Entertainment: Hollywood and the Ideology of Consensus (Metuchen, N.J. and London: The Scarecrow Press, 1983). Back
  15. Hollywood in the Forties, by Charles Higham and Joel Greenberg (London: Tantivy Press, 1968), p.20. Back
  16. These ideas are elaborated in respect of the two genres in essays by Robert Warshow, “The Gangster as Tragic Hero”, in his book, The Immediate Experience (N.Y.: Atheneum, 1974), pp.127-133, and Mark Roth, “Some Warners Musicals and the Spirit of the New Deal” in Genre: The Musical: A Reader, edited by Rick Altman (London: Routledge and Kegan Paul, 1981), pp.41-56. Back
  17. Jane Feuer’s essay, “The Self-reflective Musical and the Myth of Entertainment” (in Genre: The Musical) deals specifically with the concepts of spontaneity and integration, whilst her book The Hollywood Musical (London: BFI, 1982) extends the analysis considerably. Back
  18. See Richard Dyer, “Entertainment and Utopia” (in Genre: The Musical), pp. 175-189. Back
  19. The full history of Citizen Kane‘s remarkable career has been told by Pauline Kael in her long essay “Raising Kane”, first published in The New Yorker in 1971, and re-printed in The Citizen Kane Book (St. Albans: Paladin, 1974), pp.1-71. Back
  20. For details of the Supreme Court opinion and extracts from Michael Conant’s book, Anti-trust in the Motion Picture Industry, see The Movies in Our Midst, pp. 594-604. Back
  21. See especially Kevin Brownlow’s The War, The West and The Wilderness (London: Secker and Warburg, 1979). Back
  22. John G. Cawelti, The Six-Gun Mystique (Bowling Green University Popular Press, 1975), p.31. Back
  23. Will Wright, Six-Guns and Society: A Structural Study of the Western (University of California Press, 1975), pp.48-9. Back
  24. The best description of the Western hero is Robert Warshow’s “Movie Chronicle: The Westerner”. It is printed in The Immediate Experience and can be profitably read as a companion piece to “The Gangster as Tragic Hero”. Back
  25. For further discussion of this topic see Leslie A. Fiedler, The Return of the Vanishing American (N.Y.: Stein and Day, 1968). Back
  26. The most thorough examination of this aspect of Altman’s work has been undertaken by John H. Quinn in an unpublished M.A. dissertation, The Films of Robert Altman (University of Exeter, 1975). Back
  27. Details of the official Code Objectives (1968) can be found in The Movies in Our Midst, pp.704-707. In another article in the same volume “The Movie Rating Game”, pp.707-715, Stephen Farber writes interestingly about the administration of the new Code. Back
  28. Robert Brustein, “The New Hollywood: Myth and Anti-Myth” in Film Quarterly, 1959. Back
  29. These figures are taken from David Gordon’s essay, “Why the Movie Majors are Major” Sight and Sound, 42 (Autumn, 1973), pp. 194-96. Back
  30. Quoted in The Movie Brats, by Michael Pye and Lynda Myles (London: Faber and Faber, 1979), p.83. Back
  31. Ibid., p.86. Back
  32. Michael Dempsey, “Apocalypse Now”, Sight and Sound, 49, Winter 1979/80, pp.7-8. Back
  33. Lillian Ross, “Onward and Upward With the Arts: Some Figures on a Fantasy”, The New Yorker, November 8th, 1982. Back

7. Filmography

-A-

American Graffiti: (1973 Lucas film/Coppola Co./Universal). Directed by George Lucas, produced by Francis Ford Coppola and Gary Kurtz, photographed by Ron Eveslage and Jan D’Alquen, with Richard Dreyfuss, Ronny Howard, Paul Le Mat. (110 mins.)

American in Paris, An: (1951, M.G.M.). Directed by Vincente Minnelli, produced by Arthur Freed, written by Alan Jay Lerner, music by George and Ira Gershwin, choreography by Gene Kelly, with Kelly, Leslie Caron, Oscar Levant. (113 mins.)

Angels with Dirty Faces: (1938, Warner Bros.). Directed by Michael Curtiz, produced by Sam Bischoff, written by John Waxley and Warren Duff, photographed by Sol Polito, with James Cagney, Pat O’Brien, Humphrey Bogart, Ann Sheridan, George Bancroft. (94 mins.)

Annie Hall: (1977, United Artists). Directed and written by Woody Allen, produced by Charles H. Joffe, photographed by Gordon Willis, with Allen, Diane Keaton, Tony Roberts, Carol Kane. (93 mins.)

Apocalypse Now: (1979, Omni Zoetrope/Columbia E.M.I.-Warner). Directed and produced by Francis Ford Coppola, written by Coppola and John Milius, photographed by Vittorio Storaro, with Marlon Brando, Robert Duvall, Martin Sheen. (153 mins.)

Arrowsmith: (1931, United Artists). Directed by John Ford, produced by Samuel Goldwyn, written by Sidney Howard from the novel by Sinclair Lewis, photographed by Ray June, with Ronald Colman, Helen Hayes, Richard Bennett, Beulah Bondi, Myrna Loy. (108 mins.)

-B-

Band Wagon, The: (1953, M.G.M.). Directed by Vincente Minnelli, produced by Arthur Freed, choreographed by Michael Kidd, with Fred Astaire, Cyd Charisse. (112 mins.)

Big Sleep, The: (1946, Warner Bros.). Directed and produced by Howard Hawks, written by William Faulkner and Jules Furthman from the novel by Raymond Chandler, photographed by Sidney Hickox, with Humphrey Bogart, Lauren Bacall, Dorothy Malone. (114 mins.)

Birth of a Nation, The: (1915, Epoch). Directed by D. W. Griffith, written by Griffith and Frank Woods from The Clansman by Thomas Dixon, photographed by Billy Bitzer, with Lillian Gish, Mae Marsh, Henry Walthall, Miriam Cooper, Robert Harron. (12 reels)

Blonde Venus: (1932, Paramount). Directed by Josef von Sternberg, written by Jules Furthman and S. K. Lauren, photographed by Bert Glennon, with Marlene Dietrich, Herbert Marshall, Cary Grant. (80 mins.)

Blow Up: (1966, M.G.M.). Directed by Michelangelo Antonioni, photographed by Carlo Di Palma, with David Hemmings, Vanessa Redgrave. (111 mins.)

-C-

Cat Ballou: (1965, Columbia). Directed by Elliot Silverstein, produced by Harold Hecht, with Jane Fonda, Lee Marvin. (96 mins.)

Chelsea Girls, The: (1966). Directed by Andy Warhol. (195 mins.)

Circus, The: (1928, United Artists). Written, directed, and produced by Charles Chaplin, photographed by Rollie H. Totheroh, with Chaplin, Allen Garcia, Merna Kennedy, Betty Morrissey. (7 reels)

Citizen Kane: (1941, Mercury/R.K.O.). Directed and produced by Orson Welles, written by Herman J. Mankiewicz and Welles, photographed by Gregg Toland, with Welles, Joseph Cotten, Everett Sloane, Dorothy Comingore, Ruth Warrick, Ray Collins, Agnes Moorehead. (119 mins.)

City Lights: (1931, United Artists). Written, directed, and produced by Charles Chaplin, photographed by Rollie H. Totheroh, with Chaplin, Virginia Cherrill, Florence Lee, Harry Myers. (87 mins.)

Close Encounters of the Third Kind: (1977, Columbia/E.M.I.). Directed by Steven Spielberg, produced by Julia Phillips and Michael Phillips, written by Spielberg, photographed by Vilmos Zsigmond, with Richard Dreyfuss, Francois Truffaut, Teri Garr. (130 mins.)

Comes a Horseman: (1978, United Artists). Directed by Alan J. Pakula, produced by Gene Kirkwood and Dan Paulson, written by Dennis Lynton Clark, photographed by Gordon Willis, with James Caan, Jane Fonda, Jason Robards. (118 mins.)

Conversation, The: (1974, Paramount). Directed and written by Francis Ford Coppola, produced by Coppola and Fred Roos, photographed by Bill Butler, with Gene Hackman, John Cazale, Allen Garfield, Frederic Forrest. (113 mins.)

Covered Wagon: (1923, Famous Players-Lasky). Directed and produced by James Cruze, from a story by Emerson Hough, photographed by Karl Brown, edited by Dorothy Arzner, with J. Warren Kerrigan, Lois Wilson. (6 reels, originally 10 reels)

-D-

Dead End: (1937, United Artists). Directed by William Wyler, produced by Samuel Goldwyn, written by Lillian Hellman, from the play by Sidney Kingsley, photographed by Gregg Toland, with Humphrey Bogart, Joel McCrea, Sylvia Sidney, Claire Trevor. (93 mins.)

Deep, The: (1977, Columbia-E.M.I.). Directed by Peter Yates, produced by Peter Guber, written by Peter Benchley and Tracy Keenan Wynn from the novel by Benchley, photographed by Christopher Challis, with Jacqueline Bisset, Nick Nolte, Robert Shaw. (124 mins.)

Dementia 13: (1962, Filmgroup Inc./American International). Directed by Francis Ford Coppola, produced by Roger Corman, photographed by Charles Hannawalt, with William Campbell, Luana Anders. (81 mins.)

Dinner at Eight: (1933, M.G.M.). Directed by George Cukor, produced by David O. Selznick, written by Herman Mankiewicz, from the play by Edna Ferber, photographed by William H. Daniels, with John Barrymore, Lionel Barrymore, Marie Dressler, Jean Harlow, Wallace Beery. (108 mins.)

Double Indemnity: (1944, Paramount). Directed by Billy Wilder, produced by Joseph Sistrom, written by Wilder and Raymond Chandler from the story by James M. Cain, photographed by John F. Seitz, with Fred MacMurray, Barbara Stanwyck, Edward G. Robinson. (107 mins.)

-E-

Easter Parade: (1948, M.G.M.). Directed by Charles Walters, produced by Arthur Freed, music by Irving Berlin, photographed by Harry Stradling, with Judy Garland, Fred Astaire, Peter Lawford, Ann Miller. (113 mins.)

Exorcist II: (1977, Warner Bros.). Directed by John Boorman, produced by Boorman and Richard Lederer, written by William Goodhart, photographed by William A. Fraker, with Linda Blair, Richard Burton, Louise Fletcher, Max Von Sydow, Paul Henreid. (102 mins.)

-F-

Fedora: (1978, Geria-Bavaria-Atelier). Directed and produced by Billy Wilder, written by I. A. L. Diamond and Wilder from a story in the book Crowned Heads by Thomas Tryon, photographed by Gerry Fisher, with William Holden, Marthe Keller, Hildegard Knef, Jose Ferrer. (113 mins.)

Finian’s Rainbow: (1968, Warner Bros./Seven Arts). Directed by Francis Ford Coppola, produced by Joseph Landon, choreography by Hermes Pan, photography by Philip Lathrop, with Fred Astaire, Petula Clark, Tommy Steele. (144 mins.)

Flaming Creatures: (1963). Directed by Jack Smith.

Flesh: (1968). Directed by Paul Morrissey, produced by Andy Warhol. (105 mins.)

42nd Street: (1933, Warner Bros.). Directed by Lloyd Bacon, photography by Sol Polito, choreography by Busby Berkeley, with Warner Baxter, Dick Powell, Ginger Rogers. (85 mins.)

-G-

Gigi: (1958, M.G.M.). Directed by Vincente Minnelli, produced by Arthur Freed, music by Alan Jay Lerner and Frederick Loewe from the novel by Colette, with Leslie Caron, Maurice Chevalier, Louis Jourdan. (116 mins.)

Godfather, The: (1972, Paramount). Directed by Francis Ford Coppola, produced by Albert S. Ruddy, written by Coppola and Mario Puzo from the novel by Puzo, photography by Gordon Willis, with Marlon Brando, Al Pacino, James Caan, Robert Duvall, Diane Keaton. (175 mins.)

Godfather, Part II, The: (1974, Paramount). Directed by Francis Ford Coppola, produced by Jonathan T. Taplin, written by Coppola and Puzo from the novel by Puzo, photography by Gordon Willis, with Al Pacino, Robert Duvall, Diane Keaton, Robert De Niro. (200 mins.)

Gold Diggers of 1933: (1933, Warner Bros.). Directed by Mervyn LeRoy, produced by Hal Wallis, photography by Sol Polito, choreography by Busby Berkeley, with Joan Blondell, Dick Powell, Ginger Rogers. (96 mins.)

Gold Rush, The: (1925, United Artists). Directed and produced by Charles Chaplin, with Chaplin, Mack Swain, Tom Murray, Georgia Hale. (9 reels)

Gone with the Wind: (1939, M.G.M.). Directed by Victor Fleming, produced by David O. Selznick, written by Sidney Howard from the novel by Margaret Mitchell, photography by Raymond Rennahan and Ernest Haller, with Clark Gable, Vivien Leigh, Leslie Howard, Olivia de Havilland. (220 mins.)

Grapes of Wrath, The: (1940, Twentieth Century-Fox). Directed by John Ford, produced by Darryl F. Zanuck, written by Nunnally Johnson from the novel by John Steinbeck, photography by Gregg Toland, with Henry Fonda, Jane Darwell, Russell Simpson, John Carradine. (128 mins.)

Great Dictator, The: (1940, United Artists). Directed and produced by Charles Chaplin, photography by Rollie H. Totheroh, with Chaplin, Paulette Goddard, Jack Oakie, Reginald Gardiner, Billy Gilbert. (126 mins.)

Great Northfield Minnesota Raid, The: (1971, Universal). Directed and written by Philip Kaufman, photography by Bruce Surtees, with Cliff Robertson, Robert Duvall. (90 mins.)

-H/I-

Hud: (1963, Paramount). Directed by Martin Ritt, photography by James Wong Howe, with Paul Newman, Melvyn Douglas, Patricia Neal. (112 mins.)

Iron Horse, The: (1924, Fox). Directed by John Ford, photography by George Schneiderman, with George O’Brien. (119 mins.)

-J/K-

Julia: (1977, Twentieth Century-Fox). Directed by Fred Zinnemann, produced by Richard Roth, written by Alvin Sargent from a story by Lillian Hellman, photography by Douglas Slocombe, with Jane Fonda, Vanessa Redgrave, Jason Robards, Maximilian Schell. (117 mins.)

Kid Blue: (1973, Twentieth Century-Fox). Directed by James Frawley, with Dennis Hopper, Warren Oates. (100 mins.)

Killers, The: (1946, Universal). Directed by Robert Siodmak, produced by Mark Hellinger, written by John Huston (uncredited) from the story by Ernest Hemingway, with Burt Lancaster, Ava Gardner, Edmond O’Brien. (102 mins.)

-L-

Last Tango in Paris: (1972, United Artists). Directed by Bernardo Bertolucci, produced by Alberto Grimaldi, photographed by Vittorio Storaro, with Marlon Brando, Maria Schneider. (129 mins.)

Left Handed Gun, The: (1958, Warner Bros.). Directed by Arthur Penn, produced by Fred Coe, written by Leslie Stevens from a play by Gore Vidal, photographed by J. Peverell Marley, with Paul Newman, Lita Milan, John Dehner. (102 mins.)

Little Caesar: (1930, First National). Directed by Mervyn LeRoy, produced by Hal Wallis, written by Francis Faragoh from the novel by W. R. Burnett, photographed by Tony Gaudio, with Edward G. Robinson, Douglas Fairbanks Jr., Glenda Farrell. (80 mins.)

Little Big Man: (1970, Cinema Center Films/National General Pictures). Directed by Arthur Penn, produced by Stuart Millar, written by Calder Willingham from the novel by Thomas Berger, photographed by Harry Stradling Jr., with Dustin Hoffman, Faye Dunaway, Martin Balsam, Chief Dan George. (150 mins.)

Lonely are the Brave: (1962, Universal). Directed by David Miller, written by Dalton Trumbo, photographed by Philip H. Lathrop, with Kirk Douglas, Walter Matthau. (107 mins.)

Looking for Mr. Goodbar: (1977, Paramount). Directed by Richard Brooks, produced by Freddie Fields, written by Brooks from the novel by Judith Rossner, photographed by William A. Fraker, with Diane Keaton, Tuesday Weld, William Atherton, Richard Gere. (136 mins.)

-M-

McCabe and Mrs. Miller: (1971, Warner Bros.). Directed by Robert Altman, produced by David Foster, written by Altman and Brian McKay from the novel McCabe by Edmund Naughton, photographed by Vilmos Zsigmond, with Warren Beatty, Julie Christie. (109 mins.)

Mission to Moscow: (1943, Warner Bros.). Directed by Michael Curtiz, written by Howard Koch, photographed by Bert Glennon, with Walter Huston, Cyd Charisse. (123 mins.)

Modern Times: (1936, United Artists). Directed, produced and written by Charles Chaplin, photographed by Rollie H. Totheroh, with Chaplin, Paulette Goddard, Henry Bergman, Chester Conklin. (85 mins.)

Monsieur Verdoux: (1947, United Artists). Directed, produced, and written by Charles Chaplin, photographed by Rollie H. Totheroh, with Chaplin, Mady Correll, Allison Roddan, Robert Lewis, Audrey Betz, Martha Raye. (122 mins.)

Monte Walsh: (1970). Directed by William A. Fraker, produced by Hal Landers and Bobby Roberts, written by Lukas Heller and David Z. Goodman from the novel by Jack Schaefer, photographed by David M. Walsh, with Lee Marvin, Jeanne Moreau, Jack Palance. (99 mins.)

Morocco: (1930, Paramount). Directed by Josef von Sternberg, produced by Hector Turnbull, written by Jules Furthman, photographed by Lee Garmes, with Gary Cooper, Marlene Dietrich, Adolphe Menjou. (90 mins.)

Mr. Deeds Goes to Town: (1936, Columbia). Directed and produced by Frank Capra, written by Robert Riskin, photographed by Joseph Walker, with Gary Cooper, Jean Arthur, George Bancroft. (115 mins.)

-N/O-

Nashville: (1975, A.B.C. Entertainment/Paramount). Directed and produced by Robert Altman, written by Joan Tewkesbury, photographed by Paul Lohmann, with David Arkin, Barbara Baxley, Ned Beatty, Karen Black, Keith Carradine, Geraldine Chaplin, Shelley Duvall, Henry Gibson, Keenan Wynn. (161 mins.)

On Golden Pond: (1981, I.T.C./I.P.C.). Directed by Mark Rydell, produced by Bruce Gilbert, written by Ernest Thompson from his play, photographed by Billy Williams, with Katharine Hepburn, Henry Fonda, Jane Fonda. (109 mins.)

On the Town: (1949, M.G.M.). Directed by Gene Kelly and Stanley Donen, produced by Arthur Freed, music by Leonard Bernstein, photographed by Harold Rosson, with Kelly, Frank Sinatra, Ann Miller, Vera-Ellen. (98 mins.)

One From the Heart: (1982, Zoetrope Studios). Directed by Francis Coppola, produced by Gray Frederickson and Fred Roos, photographed by Vittorio Storaro, music by Tom Waits, with Frederic Forrest, Teri Garr, Raul Julia. (107 mins.)

Orphans of the Storm: (1921, United Artists). Directed and produced by D. W. Griffith, with Lillian Gish, Dorothy Gish, Joseph Schildkraut, Frank Losee. (12 reels)

Outsiders, The: (1983, Pony Boy Inc./Zoetrope). Directed by Francis Coppola, produced by Gray Frederickson and Fred Roos, written by Kathleen Knutsen Rowell from the novel by S. E. Hinton, photographed by Stephen H. Burum, with C. Thomas Howell, Matt Dillon. (91 mins.)

-P-

Public Enemy: (1931, Warner Bros.). Directed by William A. Wellman, written by Kubec Glasmon and John Bright from Bright’s story “Beer and Blood”, photographed by Dev Jennings, with James Cagney, Jean Harlow, Joan Blondell. (74 mins.)

-R-

Rain People, The: (1969, Warner Bros./Seven Arts). Directed by Francis Ford Coppola, produced by Bart Patton and Ronald Colby, written by Coppola, photographed by Wilmer Butler, with James Caan, Shirley Knight, Robert Duvall. (101 mins.)

Rumble Fish: (1983, Hot Weather Films/Zoetrope). Directed by Francis Ford Coppola, produced by Fred Roos and Doug Claybourne, written by S. E. Hinton and Coppola, from the novel by Hinton, photographed by Stephen H. Burum, with Matt Dillon, Dennis Hopper, Mickey Rourke. (94 mins.)

-S-

Saturday Night Fever: (1977, Paramount). Directed by John Badham, produced by Robert Stigwood, written by Norman Wexler, choreography by Lester Wilson, with John Travolta, Karen Lynn Gorney, Barry Miller. (119 mins.)

Scarface, Shame of a Nation: (1932, United Artists). Directed by Howard Hawks, produced by Howard Hughes, written by Ben Hecht, W. R. Burnett, et al., from the novel by Armitage Trail, photographed by Lee Garmes, with Paul Muni, Ann Dvorak, George Raft. (90 mins.)

Scorpio Rising: (1964). Directed by Kenneth Anger.

Seven Year Itch, The: (1955, Twentieth Century-Fox). Directed by Billy Wilder, produced by Wilder and Charles K. Feldman, written by Wilder and George Axelrod, from Axelrod’s play, photographed by Milton Krasner, with Marilyn Monroe, Tom Ewell, Evelyn Keyes, Sonny Tufts. (105 mins.)

Shall We Dance: (1937, R.K.O.). Directed by Mark Sandrich, produced by Pandro S. Berman, music by George and Ira Gershwin, choreography by Hermes Pan, with Fred Astaire, Ginger Rogers. (106 mins.)

Shane: (1953, Paramount). Directed and produced by George Stevens, written by A. B. Guthrie Jr., from the novel by Jack Schaefer, photographed by Loyal Griggs, with Alan Ladd, Jean Arthur, Jack Palance, Van Heflin, Brandon de Wilde. (118 mins.)

Shanghai Express: (1932, Paramount). Directed by Josef von Sternberg, written by Jules Furthman, photographed by Lee Garmes, with Marlene Dietrich, Clive Brook, Anna May Wong. (82 mins.)

She Done Him Wrong: (1933, Paramount). Directed by Lowell Sherman, written by Mae West from the play Diamond Lil, photographed by Charles B. Lang Jr., with West, Cary Grant. (66 mins.)

Silk Stockings: (1957, M.G.M.). Directed by Rouben Mamoulian, produced by Arthur Freed, music by Cole Porter and Andre Previn, with Fred Astaire, Cyd Charisse, Peter Lorre. (118 mins.)

Singin’ in the Rain: (1952, M.G.M.). Directed by Gene Kelly and Stanley Donen, produced by Arthur Freed, music by Freed and Nacio Herb Brown, with Kelly, Donald O’Connor, Debbie Reynolds, Cyd Charisse. (103 mins.)

Smokey and the Bandit: (1977, Universal). Directed by Hal Needham, produced by Mort Engelberg, photographed by Bobby Byrne, with Burt Reynolds, Sally Field, Jerry Reed, Jackie Gleason. (97 mins.)

Some Like It Hot: (1959, Mirisch). Directed and produced by Billy Wilder, written by Wilder and I. A. L. Diamond, photographed by Charles Lang Jr., with Marilyn Monroe, Tony Curtis, Jack Lemmon, George Raft, Joe E. Brown, Joan Shawlee. (121 mins.)

Stagecoach: (1939, United Artists). Directed and produced by John Ford, written by Dudley Nichols, photographed by Bert Glennon, with John Wayne, Claire Trevor, John Carradine. (97 mins.)

Star Wars: (1977, Lucasfilm/Twentieth Century-Fox). Directed and written by George Lucas, produced by Gary Kurtz, photographed by Gilbert Taylor, with Mark Hamill, Harrison Ford, Carrie Fisher, Peter Cushing, Alec Guinness. (121 mins.)

Stella Dallas: (1937, United Artists). Directed by King Vidor, produced by Samuel Goldwyn, photographed by Rudolph Mate, with Barbara Stanwyck, John Boles, Anne Shirley, Barbara O’Neil. (104 mins.)

Sunset Boulevard: (1950, Paramount). Directed by Billy Wilder, produced by Charles Brackett, written by Brackett, Wilder and D. M. Marshman, Jr., photographed by John F. Seitz, with Gloria Swanson, William Holden, Nancy Olson, Erich von Stroheim. (111 mins.)

-T-

Three Women: (1977, Lion’s Gate/Twentieth Century-Fox). Directed, produced and written by Robert Altman, photographed by Charles Rosher, with Shelley Duvall, Sissy Spacek, Janice Rule. (213 mins.)

THX 1138: (1971, Columbia-Warner). Directed by George Lucas, produced by Francis Ford Coppola, with Robert Duvall, Donald Pleasance, Maggie McOmie.

Tomorrow the World: (1944, United Artists). Directed by Leslie Fenton, produced by Lester Cowan, written by Ring Lardner Jr. and Leopold Atlas from the play Tomorrow the World by James Gow and Arnaud D’Usseau, with Fredric March, Agnes Moorehead, Skip Homeier.

-W-

Watch on the Rhine: (1943, Warner Bros.). Directed by Herman Shumlin, produced by Hal Wallis, written by Dashiell Hammett from the play by Lillian Hellman, photographed by Hal Mohr, with Bette Davis. (114 mins)

Wedding Night, The: (1935, United Artists). Directed by King Vidor, produced by Samuel Goldwyn, photographed by Gregg Toland, with Gary Cooper, Anna Sten, Walter Brennan. (82 mins.)

Westerner, The: (1940, United Artists). Directed by William Wyler, produced by Samuel Goldwyn, written by Jo Swerling and Niven Busch, photographed by Gregg Toland, with Gary Cooper, Walter Brennan. (100 mins.)

Will Penny: (1967). Directed and written by Tom Gries, produced by Fred Engel and Walter Seltzer, photographed by Lucien Ballard, with Charlton Heston, Lee Majors, Ben Johnson, Bruce Dern, Slim Pickens. (108 mins.)

Wizard of Oz, The: (1939, M.G.M.). Directed by Victor Fleming, produced by Mervyn LeRoy, music by Harold Arlen, photographed by Harold Rosson, with Judy Garland, Frank Morgan, Ray Bolger, Bert Lahr, Jack Haley. (101 mins.)

Woman of Paris, A: (1923, United Artists). Directed, produced and written by Charles Chaplin, photographed by Rollie H. Totheroh, with Chaplin, Adolphe Menjou, Edna Purviance. (8 reels)

Wuthering Heights: (1939, United Artists). Directed by William Wyler, produced by Samuel Goldwyn, written by Ben Hecht and Charles MacArthur, photographed by Gregg Toland, with Merle Oberon, Laurence Olivier, David Niven, Donald Crisp. (103 mins.)

-Y/Z-

You’re a Big Boy Now: (1967, Warner-Pathe). Directed by Francis Ford Coppola, produced by Phil Feldman, written by Coppola from the novel by David Benedictus, photographed by Andy Laszlo, with Peter Kastner, Elizabeth Hartman, Geraldine Page, Julie Harris. (97 mins.)


W.A. Speck, British America 1607-1776

BAAS Pamphlet No. 15 (First Published 1985)

ISBN: 0 946488 05 3
  1. British America
  2. The Mother Country
  3. The Colonies in the Seventeenth Century
  4. The Colonies in the Eighteenth Century
  5. Conclusion
  6. Guide to Further Reading
  7. Notes
British Association for American Studies. All rights reserved. No part of this pamphlet may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without permission in writing from the publisher, except by a reviewer who may quote brief passages in a review. The publication of a pamphlet by the British Association for American Studies does not necessarily imply the Association’s official approbation of the opinions expressed therein.

1. British America

During the past twenty years or so a generation of historians on both sides of the Atlantic has transformed our knowledge and understanding of early modern England and her American colonies. For the most part, however, these scholars have worked independently, studying English or colonial societies in isolation, and only rarely both together. But their many contributions now make it possible for this pamphlet to attempt a synthesis, seeking to demonstrate what light the current state of scholarship about early modern England throws on colonial history, and vice versa. Such a comparison can be highly illuminating, as American history in the century and a half before the crucial confrontations which culminated in the War of Independence was in many significant respects an extension of British history. Indeed, it could even be argued that on the accession of George III the colonies had more in common with Britain than they had ever had before.

Although few modern American historians made detailed studies of England during the colonial period, their conclusions about the colonies often contained explicit or implicit models of the mother country. Broadly speaking, the extent to which they detected similarities or differences between the two communities reflected their convictions as to whether a conflict or a consensus model was more appropriate to American society. Those who saw a clash of social and economic interests behind colonial politics by implication at least drew comparisons between the colonies and England, both displaying similar tensions. Those who were impressed by the apparent absence of such strains in colonial society tended to contrast them with the mother country.1

Twenty five years ago the consensus model was the more fashionable among American historians. Perhaps its most committed advocate was Daniel Boorstin, whose prize-winning book The Americans: the Colonial Experience asserted the differences between the colonists and their English cousins. The colonial experience was the boyhood of Crevecoeur’s “new man.” In Boorstin’s pages effete Englishmen were constantly contrasted with audacious Americans. The English were idealists and thinkers, while the colonists were pragmatists and doers. Philosophies formulated to interpret European realities were irrelevant to the facts of the wilderness which had to be interpreted afresh in the light of new experiences.2 Common sense thus became a sound American virtue long before Thomas Paine appealed to it in his famous Revolutionary pamphlet of 1776.

The panorama portrayed in bold colours on a broad canvas in Boorstin’s survey seemed to be substantiated by a miniature study of one corner of the landscape, Sumner Chilton Powell’s Puritan Village: The Formation of a New England Town, which also won a prize. Powell concentrated on the establishment of Sudbury, Massachusetts, and concluded that its first townsmen created a community so different from its English counterparts as virtually to represent a fresh start. He found no trace of several institutions and offices which administered local affairs in England.

Gone were the courts-baron, courts-leet, vestries, out-hundred courts, courts of election, courts of record, courts of the borough, courts of orders and degrees, courts of investigation, courts of ordination and views of frank-pledge… Gone were the seneschal, bailiff, jurymen . . . rector, curate, sexton . . . church-wardens, sidemen, questmen, overseers of the poor… Abolished too were the quarter sessions [and] justices of the peace.

Instead there were just selectmen and townsmen to supervise secular business, pastors and deacons to govern ecclesiastical activities.3

These two influential studies, one of the complete colonial experience, the other of that seminal American institution, the New England town, reinforced contrasting stereotypes of the Mother Country and the colonies. England was commonly represented as feudal and static in comparison with egalitarian, flexible and fluid societies across the Atlantic ocean. English society was held to be rigidly hierarchical: people not only knew their place but stayed in it. Generation after generation lived and died in the same villages, surrounded by familiar objects and faces. They enjoyed the comfort of extended families, in which three generations – grandparents, parents and grandchildren, with uncles, aunts and cousins – lived under one roof or in close proximity. The decision to leave these sheltered communities and to cross an ocean to a New World was consequently traumatic. Those who took it must have been imbued with unusual qualities as individuals to sever their ties with traditional ways of life and values. In North America their rugged individualism triumphed over the wilderness. The ever-shifting frontier ensured that they were never psychologically confined to one spot of ground. When they did settle down it was in scattered farmsteads, not in compact villages. Their families were nuclear rather than extended, comprising just parents and children. In every way, therefore, the colonial experience seemed to substantiate the words of Jared Eliot which Boorstin chose for the title page of his book: “It may be said, That in a Sort, they began the World a New.”

Studies produced since the 1950s enable these assumptions to be thoroughly tested. Was the social structure of early modern England feudal, rigid, and static? Did colonial America create an egalitarian, flexible and fluid society? Were the differences between the two societies more fundamental than the similarities? Were they growing more and more apart in the seventeenth and eighteenth centuries, until political independence set the seal on the separation of what had already become two distinct nations in all but name? These are the questions which this pamphlet seeks to answer.

2. The Mother Country

The notion that the social structure of early modern England was feudal, rigid and static is really no longer tenable. Few if any British historians currently working on English society in the seventeenth and eighteenth centuries would consider the word ‘feudal’ to be applicable to it. If the word has any precise meaning, it must include holding land as a feudal tenure or fief, with obligations other than the payment of money rents to the landlord, such as the dues of wardship and marriage owed to the Crown by tenants in chief. Yet such tenure was finally abolished in England in 1660, after having lapsed effectively since 1642.

The term ‘feudal’ is sometimes employed of the landlord/tenant relationship. Certainly tenancy was far more extensive in Britain than in North America and, in theory at least, tenants were bound to their landlords by more than the cash nexus. There was a normative relationship in which the ideal landowner acted as a patriarch in the community in return for the deference of those dependent upon him. Judging by persistent complaints that it was breaking down, however, the ideal appears to have been more honoured in the breach than in the observance.4

One test of whether England was ‘feudal,’ in this restricted sense of tenants deferring to their landlords in ways which the latter could exploit to assert and maintain their hegemony, is provided by parliamentary elections. If a constituency were contested, then a landlord could translate deference directly into votes by ordering those of his tenants who were enfranchised to poll for candidates he supported. In this respect the fourth Duke of Bedford has been cited as an archetypal feudal landlord.5 Yet so far from ordering his tenants which way to poll, his Grace had to bribe, cajole and coax them to cast their votes for his preferred candidates.6 The language employed by most landlords when soliciting electoral support from their tenants was generally obsequious rather than imperious. In two-member constituencies they usually restricted their requests to a single vote for one candidate, leaving their dependants free to cast the other vote as they chose. Moreover even those modest solicitations did not automatically meet with success. It is true that poll books recording votes cast at elections in counties, which were large enough geographically for the territorial interests of landed magnates to be plotted, reveal voters from particular parishes polling en bloc for the candidates preferred by the leading landowner in the vicinity. Yet sufficient numbers of them also voted contrary to the inclinations of the local magnates to make the notion of patriarchal landlords conveying a deferential tenantry to the polls an exaggeration of the degree to which deference maintained the gentry in power.7

The extent to which the electorate in early modern England was generally subordinate to the political elite has been exaggerated too. Although there was far from being an adult male franchise, nevertheless the right to vote was extensive enough to make the total number of voters at least 300,000, roughly a quarter of all adult males in the seventeenth century, and a fifth by the mid eighteenth. The growth of oligarchy, and with it of political stability, which culminated in the ascendancy of Walpole, was achieved not by subordinating so much as by evading the electorate. In 1716 the Whigs extended the maximum interval between elections from three to seven years. Furthermore, where previously under the Triennial Act of 1694 the gap between general elections had on average been two years, by and large between 1716 and 1761 parliaments were allowed to go their full allotted span. And when general elections were called, the number of contests fell as the oligarchs preferred to carve up the seats between them rather than risk a reference to the electors. In those few constituencies where the voters were allowed to express their preferences, they tended to show their opposition to the Whig oligarchy at election after election. So far from the electorate demonstrating deference to the point where the English polity could be described as feudal, therefore, its evasion by the elite indicated their concern that the electoral system as it had operated under the later Stuarts made the House of Commons dangerously dependent upon volatile voters.8

Articulate opposition to the oligarchy’s growing grip on the electoral system largely took the form of demands for the repeal of the Septennial Act, and a return to triennial if not annual elections, coupled with the elimination of corruption from the constituencies. The fact that these arguments were largely deployed by opponents of the oligarchs from within the political elite, while there was no reaction from the electorate to their deprivation, or radical demands for an extension of the franchise, until the late eighteenth century, seems to substantiate the view of those historians who maintain that the political stability achieved under Walpole rested upon a consensus. As Professor Plumb defines such stability, it is “the acceptance by society of its political institutions, and of those classes of men or officials who control them.”9

Yet other historians, led by E.P. Thompson, have denied that there was any such consensus. On the contrary, they assert that the oligarchy was extremely repressive, and sought by such legislation as the Riot Act of 1715 and the Waltham Black Act of 1723 to sustain their property rights against the propertyless by draconian measures. Moreover the lower orders did not acquiesce in their subordination, but expressed their resentment in less articulate ways, such as popular Jacobitism, rioting and community support for the ‘social crimes’ which the repressive legislation sought to control.10

Even this Marxist model of eighteenth century English society, however, does not represent it as feudal. Mr. Thompson depicts it as being polarised between a patriciate and a plebs. Although he does not discern the existence of an urban bourgeoisie between them, except in London, until after 1760, he nevertheless does not regard the predominantly landed ruling class as a feudal aristocracy. Because their methods of estate management are to him more capitalist than feudal he prefers to call them an agrarian bourgeoisie.11

If England cannot be described as feudal, so English social structure was less rigid than is sometimes alleged. That it was a hierarchy cannot be denied. The handful of peers at the very top, those 180 or so noblemen who attended the House of Lords, became in many ways more of a closed caste as time passed. Abolished in the Interregnum (1649-1660), they came back with a vengeance at the Restoration. Almost all the leading politicians before Walpole either were nobles or sought to be ennobled. Where Queen Anne made twelve commoners peers at a stroke, both George I and George II preserved the number and with it the dignity of the peerage.

It would, however, be wrong to characterise English society as ‘aristocratic’ because of the entrenchment of the nobility at the top. One hundred and eighty men scarcely constituted a dominant class. Though legally distinct from the 16,000 or so gentry, they were economically indistinguishable. Peers and country gentlemen alike owned landed estates, and enjoyed similar life styles, centred on the country house, with visits to London or Bath for the season, while some had their own town-houses either in the capital or the local county town. These substantial landowners formed a distinct ruling class.

Below this elite distinctions tended to become blurred. How far the distinction between the gentry, and farmers, merchants and professional men can be regarded as a difference of class is debatable. Some historians argue that the gentry formed the only real class in pre-industrial England, since they developed a consciousness of themselves as a group at the top of English society. Others, it is argued, had no awareness of occupying a place in a national entity, but identified with the local community. They therefore thought socially in vertical rather than horizontal terms, gauging their status by those above or below them in the immediate neighbourhood.12 The fact that their desire for upward mobility expressed itself in the appropriation of the title ‘gentleman’, so that hybrids such as gentleman farmer, gentleman merchant, gentleman lawyer and even gentleman tradesman crept into the language during the seventeenth and eighteenth centuries, seems to confirm that they had no separate class consciousness of their own. Other historians, however, do see the business and professional men at least, if not the freeholders and tenant farmers, developing as a middle class with interests distinct from those of the elite.13 Such expressions as ‘the middling sort’ and ‘the middle station of life’, which also manifested themselves in this period, are cited as testimony to a growing awareness of their being a separate category. They indicate too a stratification into three classes, upper, middle and lower.

A distinction should be drawn between the social structures of the countryside and the town. England was still predominantly rural, and village communities could be seen as societies where the principal social division remained that between the genteel and the vulgar. Even so there was a clear differentiation to be made between country gentlemen, who lived off rents; tenant farmers, who paid rent, and freeholders, sometimes termed yeomen; and agricultural labourers. The tenants and freeholders, along with the clergy and the increasing number of rural attorneys and doctors, formed a middle group, if not a middle class, in the countryside. One figure conspicuous by his rarity if not complete absence was the peasant. The word itself was not used in early modern England, the nearest equivalent to the owner occupier who farmed his own family farm directly being the term “husbandman.” By contrast with Europe, England was not a peasant society, a fact sometimes overlooked by historians who make sweeping comparisons between the American colonies and ‘traditional’ societies.

In towns social structure was more complex. Quite what constituted a town has itself aroused debate amongst historians. A minimum population of 2500, however, seems to be the best guide.14 Even below this level the difference between agricultural and trading villages was striking. Where those with under 2000 inhabitants would have few tradesmen other than those supplying basic services, those with bigger populations could contain industries such as brewing and toolmaking. Towns with over 5000 inhabitants had quite diffuse economies, while London’s trades were manifold, over 350 being listed in trade directories by the mid eighteenth century. There was a hierarchy of trades, with a huge gap between luxury tradesmen such as master gold and silversmiths and coachbuilders at one end, and journeymen tallow chandlers and weavers at the other. Despite these differentials, however, the emergence of a threefold stratification can be detected. By the eighteenth century there were distinct districts in the metropolis for ‘the quality,’ ‘the middling sort’ and ‘the poor.’ The town houses of the aristocracy and gentry in the new squares near the court at St. James represented the top of society. The City proper was the habitat of the business community, while there were professional districts too, for example the Inns of Court. Eastwards along the river were already to be found poorer districts such as Shadwell and Wapping.

The poor themselves formed the mass of the English population, rural or urban. Cottagers and landless agricultural labourers, unskilled town workers, the casually employed and the vagrant accounted for about half the inhabitants of England and Wales. Provision for the destitute among them became a major public concern with the passing of the Elizabethan poor laws. The burden on parishes, which were responsible for raising parochial rates for poor relief, was probably greatest in the first half of the seventeenth century, when the population was still rising, from 2,984,500 in 1560 to 4,892,500 by 1630, putting pressure on resources and fuelling inflation. Food prices rose fastest, while wages failed to keep pace, so that conditions for those at the bottom of English society were probably as bad in the fifty years 1600 to 1650 as at any time in early modern history.

Thereafter, for at least a century, the economic situation improved. The population of England and Wales stopped growing in the 1650s and even shrank from 5,281,000 to 4,865,000 by the mid 1680s. Thereafter a slow increase occurred, though even as late as 1731 the total was still only 5,263,000 – less than it had been in the 1650s. Although growth then continued unchecked, even by 1751 the overall population was only 5,772,000. So the period was marked by a rise, a levelling off, a decline, then a slow recovery which gained momentum in the last two or three decades.15

This trend eased the pressure on resources. Food prices in particular fell, as the agricultural revolution simultaneously increased the productivity of English farms. Although this created problems for landowners and farmers, by and large the economic effects were beneficial. Extra spending power was generated in most sections of society. More substantial citizens spent on luxury goods and services, generating an expansion of the manufacturing and service sections of the economy. Towns particularly benefited from these demands, and grew accordingly. London experienced most growth, increasing from 200,000 to 675,000 inhabitants between 1600 and 1760. Other urban centres also expanded. Where in 1600 only Bristol and Norwich could number their populations in five figures, each having about 20,000 inhabitants, by 1760 there were at least fourteen towns with over 10,000 inhabitants.

Another consequence was an expansion of the business and professional classes. The middle sections of society bulged. In the fifty years 1680 to 1730 alone the numbers in the professions increased by nearly seventy per cent.16 The business section of society probably grew commensurately. In part this expansion was caused by downward mobility, as younger sons of the landed gentry entered the professions and even trade. Since there were few younger sons to provide for, however, it was mostly fuelled by upward mobility, sucking apprentices out of the lower classes.

Even the lot of those left in the lower echelons of English society improved in these years, especially in towns whose growth stimulated all kinds of economic activity. The physical expansion of London and other urban centres created employment for all trades connected with building. Demands for luxury goods and services employed craftsmen and a growing army of domestic servants.

These considerations attracted increasing numbers who moved from the countryside into towns and above all to London. This geographic mobility probably did more than anything to break down local communities and merge them into a national society.

In the early seventeenth century the horizons of many Englishmen had been constricted to their county boundaries. They felt themselves to belong to Cornwall, Kent or Yorkshire first, and to England second. Below the great aristocrats, who might own land in two or more counties, the majority of landowners held estates inside one. They tended to restrict their social activities within their county, or country as they significantly called it, making marriage alliances, for example, with other families from the local community. This loyalty apparently extended to men below the elite, for during the Civil War it was difficult to overcome it and to persuade people that they were involved in a nationwide struggle.17

In the century after the Civil War, however, one hears less and less about the county community. National consciousness was perhaps developed by the astonishing fact that, statistically at least, between 1650 and 1750 one in six of all Englishmen spent part of their lives in London.18

Internal migration on such a scale shatters the myth that people lived and died in the same communities in which they were born. Research on parish records has revealed that it was rather the exception than the rule. Of the families living in a particular parish in 1600 the descendants of only 16 per cent would still be there in 1700. While most might have moved only a few miles away, many migrated far afield. The decision to emigrate, therefore, was not necessarily the traumatic choice that has been imagined. For many it was to decide to make just another move, albeit a big one.19

3. The Colonies in the Seventeenth Century

It is crucial at the outset of a comparison of English with colonial societies to establish how far the social structure of the mother country was recreated in North America. Among the assumptions of ‘consensus’ historians is the notion that only the middling sort went in any significant numbers across the Atlantic. American society thus from the start lacked an aristocracy at its head or a long tail of the labouring poor. Those more persuaded by the conflict model of society, on the other hand, while they accept that no aristocrats permanently settled in the colonies, and precious few gentlemen went there, are convinced that enough representatives of the middle and lower orders emigrated to carry across the ocean the social divisions of early modern England.

The main evidence for emigration in the seventeenth century concerns some 3000 indentured servants who sailed from Bristol between 1654 and 1661, and a further 750 who went from London in 1683 and 1684, all bound for the Chesapeake Bay colonies of Maryland and Virginia. Its documentation of the status of the colonists, however, is ambiguous.

Mildred Campbell claimed that, apart from the tiny handful of gentlemen and professional men, amounting to about one per cent of the total, the bulk of this sample can be identified as yeomen and husbandmen, with another substantial proportion comprising tradesmen and artisans. “The majority” she asserted “were England’s middling people.” She even endowed them with qualitative as well as quantitative values: they were “drawn from the middling classes: farmers and skilled workers, the productive groups in England’s working population.”20 So the myth of the transplantation of essentially middle class values – egalitarianism, individualism, self-improvement, thrift – received a powerful statistical boost.

Then David Galenson challenged the statistics. He pointed out that the Bristol records only systematically record occupations for the years 1654 to 1657, and argued that the registrations thereafter distort the data, exaggerating the proportion of farmers and minimising the number of labourers. On the basis of the earlier records he concluded that, apart from a tiny handful of gentlemen, the Bristol migrants can roughly be divided into four quarters: yeomen and husbandmen; tradesmen and craftsmen; apprentices or servants in husbandry; and unskilled labourers. In the case of the London records he claimed that Campbell had effectively ignored indentured servants with no occupation ascribed to them in the records, and that these should be regarded as labourers. His conclusion from these statistical adjustments was that the indentured servants destined for the Chesapeake Bay in the seventeenth century represented a much wider spectrum of English society than “the middling people.” “They came from all levels of England’s ‘common sort,’ and together made up a cross section of English society that cut from the gentry to the paupers.”21

James Horn’s scrutiny of the quantitative evidence led him also to conclude that “about half of them were either minors or unskilled workers of various types, while the rest came from agricultural occupations and a miscellany of crafts and trades… They were mainly non-householders and had acquired little personal wealth. They came from the middle and lower echelons of that section of society that contemporaries labelled ‘the Commons’: the ordinary people who made up the vast majority of England’s population and who were obliged to work with their hands to earn a living.”22

The indentured servants who went to the Chesapeake therefore, and who accounted for the bulk of settlers in seventeenth century Maryland and Virginia, cannot be ascribed to the English middle class exclusively. On the contrary, they covered a wide cross-section of society, with only the extremes of aristocracy and landed gentry at the top, and penniless vagrants at the bottom, being conspicuously absent.

Quantitative evidence for the settlers of the northern colonies is much more sparse than for the southern colonies. Timothy Breen and Stephen Foster concluded from the most fruitful source that most of the adult males who emigrated to New England had been urban tradesmen in England. Although these took their domestics with them, so that the master-servant relationship was reproduced, few labourers accompanied them.23 Another indication that the first settlers were disproportionately from "the middling sort" is that "the rate of male literacy among the arrivals in New England was nearly double the base rate prevailing in England."24 The original New England colonies, therefore – Plymouth, Massachusetts, Rhode Island, New Haven and Connecticut – were different in this respect not only from the mother country but also from the early settlements in Virginia and Maryland.

There were other significant differences between New England and the Chesapeake Bay, as well as between them and England, in the first decades of colonisation. Social stability was achieved much earlier in the northern than in the southern colonies, for a variety of reasons.

One element making for a stable society in the north and an unstable pioneer community in the south was the fact that it was quite usual for whole families to emigrate to New England while individuals went to Virginia. The families which crossed the Atlantic were not distinct from those of urban tradesmen left behind. The nuclear family was not emerging in the colonies while the extended family survived in England. On the contrary, demographic research has established that the prevalent model was for the kin in an English household to consist solely of parents and children. Indeed, given the low life expectancy that prevailed, the statistical chances of any individual surviving to share a home with his or even her grandchildren were very small, especially since marriages were deferred until the late twenties. Life expectancy at birth was very low, perhaps no more than twenty. This was, however, due to the formidable incidence of infant mortality. Those who lived to be ten had a reasonable chance of seeing their fortieth birthday. Even so, the mean age of marriage in early modern England was about 26 for women. Only the tiny minority who endured into their sixties, therefore, were likely to live long enough to see their grandchildren. Paradoxically there was more opportunity for the creation of three-generation families in New England in the seventeenth century, since people lived longer as a result of the healthier environment, while they also tended to marry younger.25

The case was very different, however, in the settlements around the Chesapeake Bay. The individuals who went to Virginia tended to be young men, with few women or children in their company. The result was an imbalance between the sexes of three men to one woman, which was clearly an obstacle not only to family formation but also to social stability. Indeed, the horrendous death rate in the first generation of settlement at Jamestown, from dysentery, typhoid and even salt water poisoning, was a deterrent to the establishment of a stable society of any sort. Men died like flies; so much so that in the first forty years it took at least 15,000 migrants to produce a population of about 7,500.

Another distinction which has been drawn between the first settlements in Virginia and those in New England is in the motives which impelled the colonists to venture across the Atlantic ocean to North America. By and large gain has been cited as the main motivation of the early Virginians and godliness that of the original settlers in New England.

The first colonists in the Chesapeake appear to have been in search of easy pickings, lured by tales of fabulous wealth. An astonishing motley of ne'er-do-wells and adventurers sailed to Jamestown under the auspices of the Virginia Company, for whom the hard work of creating a permanent colony was the last consideration. As John Smith, looking back to those bizarre early days, complained26

All this time we had but one carpenter in the country, and three others that could do little, but desired to be learners: two blacksmiths; two saylers and those we write labourers were for the most part footmen, and such as they could persuade to go with them, that never did know what a dayes worke was, except the Dutchmen and Poles, and some dozen other. For all the rest were poore Gentlemen, tradesmen, serving men, libertines and such like, ten times more fit to spoyle a commonwealth, than either begin one or but help to maintain one.

Jaundiced though Smith’s account undoubtedly was, it has been substantially confirmed by Edmund Morgan’s reconstruction of the early years of settlement. Apparently colonists did loaf around, totally dependent for even the most basic supplies upon Indians, whom they nevertheless were not averse to fighting.27 There may be a medical explanation for such otherwise unaccountable behaviour since Jamestown’s water supply, especially in summer, was polluted with salt from the sea. This could have caused saline poisoning which in turn induced indolence. Only when the colonists spread further up the James river did they escape the deadly disease environment which had threatened to wipe them out.28

By then the Virginia Company had been dissolved, the colony was under the Crown, and those colonists who had survived had discovered their economic salvation too in the cultivation of tobacco. The profits to be made from this cash crop attracted settlers who were prepared to invest the capital and labour needed to exploit it.

By contrast the great migration to Massachusetts in the 1630s, involving thousands of Englishmen, has been ascribed to religious motivation. Their most articulate spokesmen certainly claimed that they fled from religious persecution in England under Charles I, to found a puritan commonwealth in the wilderness. That religious considerations were a major factor in the motives of some emigrants is undeniable. What is questionable is the proposition that they motivated the majority.

Certainly the degree of religious persecution has been exaggerated. The notion that the Church of England under Archbishop Laud relentlessly suppressed its puritan critics, arraigning them before the arbitrary jurisdiction of the courts of Star Chamber and High Commission, and sentencing them to savage and even barbaric punishments, has been exposed as a myth, albeit one which puritan sympathisers have always sedulously fostered.29 Certainly some early settlers were refugees from Laudian suppression. Thomas Hooker and John Cotton, for instance, fled abroad to avoid answering to charges brought against them by High Commission. But the numbers who were subjected to ecclesiastical censure and discipline were too small to account for the massive movement across the Atlantic. It was the positive, assertive aspects of Arminianism rather than its negative, repressive features which most perturbed puritans. Ezekiel Rogers’ decision to leave Rowley in protest against the Book of Sports, which sanctioned Sunday recreation, was more typical than Cotton’s flight from Laudian justice.

Those who were dissatisfied with the Elizabethan church, and were consequently labelled puritans, objected to many different elements in it. Few disagreed with its theology, which was basically Calvinist. Objections ranged from disliking such ‘relics of popery’ as the exchange of rings in marriage, crossing an infant’s head in baptism and bowing at the name of Jesus, to the whole system of episcopacy. The term puritanism is consequently unsatisfactorily vague, since it has to cover such a gamut of attitudes. It is even more unsatisfactory to use the term Anglican to distinguish the opponents of puritanism, as almost all those who desired further reformation did so from within the Anglican community. Only a few, such as the Brownists and the Pilgrim Fathers, seceded from the Church of England.

Those who worked for reform inside the Church did so in the belief that they were 'tarrying for the magistrate.' Elizabeth had, after all, restored Protestantism after the Catholic Mary, and had partly reformed the Church. Puritans could be grateful for that, even if they wanted what they called a thorough godly reformation. What they meant by this varied. Some wanted to replace the episcopal system of government with one based on the Scottish Presbyterian model. Others desired each church to be a 'gathered community' of saints, in which communion would be confined to 'visible saints' who could provide convincing evidence that they had received 'saving faith' and were therefore with the elect in the covenant of grace.

They could all entertain the hope that the Queen or her successor would eventually effect a more thorough reformation. Those who wished to confine church membership to visible saints, however, seem to have despaired first. Some gave up hope shortly after James I’s accession, when he seemed to turn his face against the reforms suggested by the Hampton Court Conference. Among these were William Bradford and his community, who left England at this time, and eventually settled Plymouth colony. But the new reign was not a major turning point in Anglican history, as has traditionally been claimed. On the contrary, many objections to the discipline of the church which were frequently raised under Elizabeth were rarely heard under her successor.

The rise of Arminianism after Charles I's accession, however, posed a direct threat to the concept of the gathered church. Arminian criticism of the Calvinist doctrine of the election of the saints to salvation, and damnation of the unregenerate, was a fundamental challenge to their theology. If there were no elect, then church membership could not be confined to them. There was precious little hope of a thorough reformation in line with covenant theology from Arminian authorities. The best way to preserve the ideal of the covenanted congregation from these 'innovations in religion' was to move out of their jurisdiction. By the late 1620s it was no longer feasible to join with the reformed churches on the continent, since the outbreak of the Thirty Years' War in 1618 had been followed by the collapse of Protestant resistance to the Habsburg forces, and the apparent triumph of the Counter Reformation. The alternative was to follow Bradford's example and move to New England. Yet Winthrop and his associates in the Massachusetts Bay company did not intend to join Plymouth colony in schism. They still clung stubbornly to the notion that they were tarrying for the magistrate, to the point of insisting that they remained part of the Anglican community, and were merely preserving the ideal of the gathered church until the day when it would be used as the model for a thorough godly reformation. That day, of course, never came in England. There was a false dawn during the Interregnum, especially with the rise of the Independents under Cromwell. But 'puritanism' in England developed a dynamic under the impact of Civil War which it never acquired in New England. The rise of the sects – Presbyterian, Independent, Baptist, Fifth monarchist, Quaker, etc. – and their toleration by Cromwell, created a confusion of creeds which the upholders of orthodoxy in Massachusetts would never have tolerated.

Nevertheless the rise of a puritan commonwealth under Cromwell has been held to have stopped migration to Massachusetts, and even to have reversed it. Yet it was not renewed when the Restoration of Charles II in 1660 revived religious persecution. What had changed meanwhile was the standard of living of the middling and lower orders in England. As we have seen, the early seventeenth century was a bleak period for Englishmen who depended for their livelihood on their labour, skilled or unskilled, while the late seventeenth century saw a significant improvement in their lot. It would seem that these improving economic circumstances made emigration less attractive than it had been earlier, in which case the differences between the motives of those who went to New England and those who went to the Chesapeake have probably been exaggerated. The mass of colonists who settled both areas of North America were probably prompted to do so by the hope of improving their material conditions across the Atlantic.

The drying up of the pool of English labour which had previously supplied the colonies, however, had a very different impact on the Chesapeake than on New England, adding to the differences between the two areas. In the northern colonies the main industries were farming and fishing. Although these were labour intensive, the units involved were small enough to be managed by individual households. The natural increase of the population, due to a decline in the death rate and a rise in the birth rate, at least kept pace with the geographical expansion of the area, and arguably outstripped it, producing a labour surplus rather than a shortage for the New England economy. Certainly there was no great demand for fresh emigration to the area from England or elsewhere.

It was quite the reverse in Maryland and Virginia. There the tobacco economy was insatiable in its demands for labour. In the first half of the seventeenth century this was mainly supplied by indentured servants from England, who earned their passage by selling their labour for a period of years. There were a few black slaves very soon after the extensive cultivation of tobacco began, but these were a small minority of the total labour force. Certainly slavery began in the Chesapeake before it became an economic necessity. With the cessation of the supply of indentured servants, however, tobacco growers turned to slaves to replace them. The institution of chattel slavery was not unknown in the north, but the numbers involved were quite disproportionate. Blacks never became more than three per cent of the total population of New England in the colonial period, while they eventually formed forty per cent of the inhabitants of Virginia.

They became an even bigger proportion of the population of South Carolina, which was established after the Restoration when indentured servitude began to dwindle. From the start South Carolina cultivated rice, another cash crop which the colonists exploited by employing slave labour. Eventually a majority, some sixty per cent, of the colonists of South Carolina were black.

In the seventeenth century, of course, an even larger proportion of colonial North Americans consisted of red men. Quite how many native Americans existed north of the Rio Grande before white colonisation began is a matter of some dispute. What seems certain is that the traditional estimate of 1,000,000 must be revised upwards severalfold, perhaps as much as tenfold. Contact with European diseases, from which they had no natural immunity, produced a demographic catastrophe, reducing their numbers drastically.

At first the colonists depended on the Indians for their very survival. Without supplies of foodstuffs from the natives the settlements at Jamestown and Plymouth would have failed. Yet initial symbiosis turned to attempted genocide by both sides, in Powhatan’s rebellion in Virginia in 1622, and in the Pequot war in New England in 1636. These hostilities created a permanent state of cold war between the races in North America. In the mid 1670s this turned to open conflict in New England, with the so-called King Philip’s war, which witnessed considerable losses of life amongst both red and white men, and in Virginia.

By then, however, the Europeans had effectively settled the coastal strip, and pushed all but friendly natives to the frontier. The English consolidated their North American empire after the Restoration of Charles II with the acquisition of the middle colonies: New York, New Jersey, Pennsylvania and Delaware. Since these were settled after the emigration from England dwindled from a torrent to a trickle, they were colonised largely from elsewhere. New York and New Jersey, of course, had been Dutch colonies, and already had a European population before the English Crown acquired them, while there were other settlers from Europe, principally Scandinavians, on the Delaware. Their numbers, however, while sufficient to retain some cultural features such as the Dutch Reformed Church, were not high enough to remain the dominant element in these areas. Many New Englanders moved into the middle colonies, Newark, New Jersey, being founded by migrants from New Haven. Many more Scots and Ulstermen (Scots-Irish as American historians call them) also moved there, while from the outset William Penn encouraged Europeans to colonise Pennsylvania, and attracted immigrants from many parts of Germany.

By 1700, therefore, there was no single colonial society but three distinct societies. New England retained the most English characteristics, having been settled largely by Englishmen. These retained not only practices from the mother country, but even regional variations of them. Sudbury was founded by men from Hampshire and Wiltshire, who practised open field agriculture, while Watertown was founded by settlers from East Anglia who recreated the enclosed fields they were familiar with before emigrating. In his reconstructions of the English communities which provided the settlers for Hingham, Rowley and other Massachusetts towns, David Grayson Allen has demonstrated how far the colonial settlements developed "in English ways."30 Thus the agricultural structure, land system, leadership patterns and local government of Holme-on-Spalding-Moor, which provided a nucleus of those who accompanied Ezekiel Rogers to America, were remarkably similar to those of Rowley. Even fuel supplies were carefully conserved by byelaws in both communities, despite the fact that fuel, scarce on the East Riding wolds, was abundant in the forests of the frontier. Such placenames as Boston, Cambridge, Ipswich, Plymouth and Sunderland bore witness to the persistent localism of the Englishmen transported to New England.

Placenames such as Brooklyn in New York, Hoboken in New Jersey and Germantown in Pennsylvania testified to the greater ethnic diversity of the middle colonies. There was also a higher proportion of blacks in this region, especially in New York, where twelve per cent of the inhabitants were black in the seventeenth century. In the next century the black population of the middle colonies combined was between six and eight per cent.31 Although this region, like others, retained its overwhelmingly rural character it also contained, in New York and Philadelphia, what were to become the principal cities of colonial America. Pennsylvania was a great wheat-growing region, the breadbasket of the colonies, and Philadelphia developed rapidly as a centre for processing the grain into flour and distributing it widely.

Apart from Charleston, no town in the entire south grew into a city during the colonial period. The Chesapeake colonies developed along the river systems, with plantations, small and large, abutting the waterfronts. In North Carolina there were two separate settlements, one along the Albemarle sound, the other on Cape Fear, mainly exploiting the abundant timber resources of an otherwise naturally impoverished region. Only South Carolina developed a major entrepot for its principal product, rice.

During the seventeenth century these three colonial societies differed in many respects from England. Even New England, which was the most English, had not recreated the conditions of the mother country. Its social structure was more truncated, lacking the elite of very rich landowners and the mass of very poor labourers. Its political institutions were based on criteria markedly different from those presided over by the Stuarts. During the 1630s, and again from 1660 to 1684, when parliamentary elections were few and far between in England, elections for Governors and General Courts of the various colonies were held annually. In Massachusetts and New Haven the franchise was vested in church members. Although this did not make these bible commonwealths theocracies, nevertheless dominion was founded in grace. Religion, indeed, remained paramount in the puritan colonies long after secularism had made inroads into its authority in the mother country. King Charles II challenged their religious qualification for the vote. His court set the tone for the sceptical, scoffing attitude to revealed religion which characterised the English ruling class in the late seventeenth century, at a time when the leaders of New England could seriously believe that the hysteria at Salem in 1692 was due to witches, and preside over tribunals which sentenced those found guilty of witchcraft to death.

Yet there were trends in seventeenth-century New England moving it more into line with the mother country. During the second generation there were complaints of declension following the crusading idealism of the original puritan settlers. The so-called Halfway Covenant, offering baptism but not full communion to the grandchildren of church members, was seen by many as a lamentable fall from grace. Jeremiahs chronicled a whole host of afflicting providences as evidence of alleged apostasy from the strict ways of the faithful. Luxury was especially singled out by preachers as one of the more deadly sins which had earned a just rebuke from Providence. All these were indications that materialism was eroding faith even in New England. This was especially true of the ports, which were the first to be contaminated with the commercial spirit. The outbreaks of witchcraft in Salem village have been seen as manifestations of resistance to the growth of capitalism in Salem itself, and its resulting breakdown of the traditional values upheld by the first settlers. Above all Boston, as it became the major port of New England, ceased to be 'the City upon a hill' which John Winthrop had urged the puritans to establish as a beacon to the world, and developed into a thriving commercial centre.32

Boston was to lead the way in the reactions of the colonies to imperial initiatives which culminated in what has been termed the Glorious Revolution in America. The fact that several colonial centres resisted James II and those they associated with him can be seen as a sign that developments in North America were pulling it closer to England by the late seventeenth century.

Colonial responses to imperial policies initiated by the mother country only became significant after 1676. Before then, to be sure, the English government had by no means neglected the colonies. On the contrary, there had been determined attempts to ensure that their economic development benefitted England, notably with the passing of the Navigation Acts of 1651 and 1660 and the Plantations Duties Act of 1673. These measures were aimed at restricting the carrying trade between England and the colonies as far as possible to English shipping, and at confining the export of certain enumerated colonial products to England. They remained, however, pronouncements of intent rather than actual policies until 1676, after which Charles II, and even more so his brother, James II, initiated moves to bring their American dependencies more effectively under control.

Reactions to these English initiatives culminated in what has been seen as the first American Revolution. Perhaps significantly, although all the original colonies except Georgia had been established by 1688, the Revolution occurred only in the longer settled provinces of Massachusetts, Maryland and New York. It is true that New York was only acquired by the English in 1664, but it had been colonised by the Dutch for over half a century.

The bold decision of the members of the Massachusetts Bay Company to move its headquarters to the colony, and to use the charter granted in 1629 as the colonial constitution, had always run the risk of challenge. Indeed it was challenged several times before 1676, by Archbishop Laud, by the Presbyterians when they temporarily enjoyed power in England after the first civil war, and by Charles II after his Restoration in 1660. The restored monarch had objected to the persecution of Quakers in the Bay colony, and had insisted on breaking the monopoly of power in provincial politics held by congregational church members, forcing them to accept a property franchise too. But it was not until Edward Randolph arrived in 1676 to investigate evasions of the Navigation Laws by New England that their semi-autonomy was threatened in earnest. Randolph concluded that the only way to force Massachusetts to obey Whitehall was to make it a Crown colony, an argument which the English government eventually accepted. In 1679 the jurisdiction of Massachusetts over New Hampshire was removed when the Crown took over that colony. Five years later the charter of the Bay company was revoked after the colony had been accused of "usurping to be a body politic."33

If Charles II lashed the Bay colonists with whips, his brother James, who succeeded to the crown in 1685, chastised the whole of New England with scorpions. All the northern colonies, Connecticut, New Hampshire, Plymouth and Rhode Island, as well as Massachusetts, were incorporated in 1686 into the Dominion of New England, to which New York and New Jersey were later added.

The Dominion was primarily concerned with defence. A military man, Sir Edmund Andros, was put in charge of this virtual viceroyalty. As its historian concluded, it was “a solution of the colonial problem of defense. It had the desired effect upon the French and hostile Indians, for it checked their encroachments upon the English settlements in North America. It strengthened the confidence of the Five Nations in the English and made the alliance more secure. It… brought credit to Andros, whose military policy was the strongest force of his administration.”34

This represented a new departure, for colonial defence had not previously been a prime concern of the English government. Charles II was far more interested in the revenues which the colonies could bring in to his insatiable exchequer. It has been claimed that military considerations became paramount even under Charles, and that the despatch of troops to suppress Nathaniel Bacon's rebellion in 1676 represents the introduction of 'garrison government' into the American colonies.35 But Charles actually disbanded the companies despatched to the Chesapeake once their task was completed. It was his brother who had genuine military priorities.

The Dominion was autocratic as well as defensive. Provincial representative assemblies were suppressed and the rights of town governments curtailed. Andros and his council legislated and taxed by decree. With the revocation of the charter all land titles were also revoked. Some colonists were deprived of their property completely, while most had it restored only on condition that they paid quitrents. Quitrents, while they were raised elsewhere in the colonies, were previously unknown in New England. Andros also saw to it that the Navigation Laws were strictly enforced, the number of ports of entry being reduced to five. Perhaps even more traumatic for the dominant Congregationalists, religious toleration was enforced, an Anglican Church being established in Boston, while Anglicans served on juries and in the militia. These policies alienated many vested interests in Massachusetts. As J. M. Sosin concludes, "By 1688 there were relatively few prominent men with reason to support the governor in time of crisis."36

The crisis arose when news reached Massachusetts in April 1689 that James II had been overthrown in England by William of Orange the previous December. An apparently spontaneous uprising occurred in Boston against Andros, who was thrown into jail along with other officials of the Dominion. The ringleaders chose the octogenarian Simon Bradstreet, a former governor of the colony, as president of a provisional council. Elections were held for a convention, which endorsed these revolutionary measures. Similar rejections of the Dominion and restoration of the old order occurred in Connecticut, Plymouth and Rhode Island.

News of what had happened in Boston reached New York City before the end of April. New York had been annexed to the Dominion of New England in 1688; consequently the fall of Andros and his imprisonment along with other agents of the Dominion in Boston was an ill omen for Francis Nicholson, the Lieutenant Governor of New York. The fact that France was now at war with England, and that the French in Canada threatened to advance on Albany and then on Manhattan, was even more ominous. Nicholson took steps to fortify New York City, placing some units of the Manhattan militia in Fort James.

His motives for these actions were misrepresented. It was said that he was hand in glove with the Papists to betray the city to the French. Among those who believed the rumours were officers in the militia, led by Jacob Leisler, who mutinied against Nicholson and took over the fort in the name of the inhabitants and soldiers. Nicholson chose to flee, leaving the colony in June 1689 for England. Leisler and his confederates proceeded to set up a Committee of Safety, proclaiming it to be the provisional government. In August it declared Leisler to be Commander in Chief.

If alleged sympathy with Popery helped to topple Nicholson, genuine Catholicism in high places precipitated revolution in Maryland. The proprietor, Lord Baltimore, was a Roman Catholic, and on the council which ruled from 1666 to 1689 there was always a slight majority of his co-religionists. This caused friction between the council and the assembly, which was aggravated in January 1689 when the councillors called in all public arms, ostensibly for repairs. There ensued a wild rumour of a Popish plot to kill Protestants with the help of Indians. Protestant assemblymen, led by John Coode, led an armed uprising against the Council. The Councillors surrendered without a struggle and proclaimed William and Mary King and Queen. The Protestant Association, as Coode and his colleagues called themselves, summoned a convention which set up a grand committee to govern the colony.

These disturbances in America can be seen as the reaction of the colonists to arbitrary power. Consciously copying the leaders of the Revolution in England they asserted their rights and liberties as Englishmen against the absolute monarch in league with France and Rome. The role of representative institutions in the events of 1685-1689 symbolises this fundamental conflict of interests. James II suppressed them in New England, where they had enjoyed a continuous history since the original settlements, and in New York, where, though the Dutch had never convened one, he himself had been forced to summon one in 1683, when he was proprietor. The revolutionaries in Massachusetts legitimised their actions with the summoning of a convention. Leisler also called one in New York. In Maryland it was the Lower House which led the Protestant Association against the proprietor and the council. Their criticisms of arbitrary government, taxation by decree, and confiscation of property have been seen as a prototype of the resistance to a similarly perceived threat under George III.37

At the same time the upheavals in Maryland and New York, and even to some extent Massachusetts, can be seen as adjustments within the social structures of those colonies rather more than between them and the mother country. Even historians who interpret the events of 1689 largely in imperial terms concede that they also represented struggles for power within colonial elites. The model for this interpretation is Bailyn’s analysis of Bacon’s rebellion of 1676 in Virginia. His thesis is that in the seventeenth century Englishmen accepted the argument of James Harrington that political power should rest with those who possessed economic clout in the community. In England these were the landowners, who acquired social status commensurate with their political and economic influence. Incidentally the framers of the constitution of the Carolinas, probably including John Locke, used Harrington’s scheme to try to preserve the link between property and power which he had advocated in Oceana. In America, however, this link did not prove easy to establish. A pioneer society took time to settle down, and meanwhile its social structure was very flexible. Moreover land, being both abundant and cheap, did not necessarily confer upon its owners the authority which it could in England. This was especially true in the early decades of settlement in Virginia, where a most unstable society threw up elites on a very different basis from that which the English ruling classes enjoyed. When members of the ruling class, sons of gentry like Nathaniel Bacon, migrated there, they discovered that they did not acquire the influence in Virginia society which they had come to expect from their status. Kept from power by the current ruling group in the entourage of Governor Berkeley, they became increasingly resentful and eventually took out their frustrations in rebellion.

These were, however, the teething troubles of an infant colony. When Virginian society matured, a gentry class emerged which united in itself the twin attributes of economic and political power. Thereafter the gentry exercised their authority undisturbed from below for generations. By the eighteenth century a handful of families, such as the Byrds, the Carters, the Washingtons, dominated life in the Old Dominion.38

The same model has been applied to the Glorious Revolution in Maryland. Coode and his colleagues Nehemiah Blakiston, Kenelm Cheseldyne and Henry Bowles have been seen as English gentlemen who migrated to the colony, and like Bacon found themselves ostracised by the ruling clique. They had prospered after emigrating to Maryland. Coode married the daughter of a wealthy Roman Catholic, yet he rose no higher politically than the rank of militia officer. This built-up frustration found expression in an abortive uprising in 1681 and in the Protestant Association’s successful resistance to the proprietorial faction.39

Leisler’s rebellion has also been seen in this light. Although Leisler was a German, while his chief accomplice, Jacob Milborne, was English, they were supported by the Dutch in New York City. Leisler married a wealthy Dutch widow, a marriage which should have introduced him to New York’s elite. Instead his political ambitions were thwarted until he took matters into his own hands and pursued them by force. Significantly the predominantly English inhabitants of Suffolk county on Long Island, who also rose up against the Dominion, dissociated themselves from Leisler, preferring to join with Connecticut. On the other hand, the Dutch outpost at Albany also kept aloof from Leisler, and only reluctantly accepted his leadership in 1690, when they turned to him for protection after an Indian raid on Schenectady.40

In Massachusetts too there was a dislocation between economic and political power. The growing business community of Boston and other towns was kept out of provincial government by the Congregationalists, who insisted that the franchise should be confined to church members. Charles II tried to exploit this tension by obliging the puritan oligarchy to concede a property qualification for voting as well, though this was set too high to make a significant impact on the electorate. Initially the Dominion of New England found some favour with those outside the ruling church, but eventually Andros drove the colonists to combine against his rule. Even so they remained divided in their objectives for replacing the Dominion. Church members desired the restoration of the Charter of 1629, while the others did not wish to see the puritan oligarchy restored.

The Glorious Revolution was the last violent threat to political stability in the colonies before the reign of George III. Although the turbulence it created took time to calm down, especially in New York, British America entered the eighteenth century as a settled society rapidly acquiring maturity.

4. The Colonies in the Eighteenth Century

As the colonies grew and society in them became more complex, so they developed comparable characteristics. The differences between them persisted, but similarities also developed, so that on the eve of the Revolution an American society is discernible.

The numbers of the colonists grew dramatically from zero in 1600 to about 223,000 by 1700, 934,000 by 1750 and 1,688,000 by 1770. Contemporaries became aware that the population was doubling roughly every twenty-five years, which was a much faster rate of growth than obtained in England during these years. Its distribution was, however, uneven. New England did not grow as rapidly as it had done in the seventeenth century, for the mean age of marriage rose and the birth rate correspondingly declined. The middle colonies filled up with immigrants, Ulstermen and Germans dominating a new influx from Europe. The communities which they created differed markedly from the settled, almost static, townships of New England. In Germantown, for instance, there was a staggeringly high turnover of population as immigrant families moved in, stayed a few years, then split up and moved on, to be replaced by newcomers from Europe.41 Those who moved on poured down into the backcountry of the Carolinas, creating whole communities in the piedmont in a few years around mid-century. The longer-settled tidewater strip in the south also saw a natural increase of the white population, especially around the Chesapeake Bay when the ecology became much healthier for human life.
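
The doubling-time claim can be checked against the totals just quoted. A minimal sketch (the 1700 and 1770 figures are those cited above; the assumption of steady exponential growth is mine, introduced for the calculation):

```python
import math

# Population totals cited in the text
pop_1700 = 223_000
pop_1770 = 1_688_000

# Assuming steady exponential growth over the 70 years (a simplification),
# the number of doublings and the implied doubling time follow directly.
years = 1770 - 1700
doublings = math.log2(pop_1770 / pop_1700)  # roughly 2.9 doublings
doubling_time = years / doublings           # roughly 24 years

print(round(doublings, 2), round(doubling_time, 1))  # → 2.92 24.0
```

A doubling time of about twenty-four years is consistent with the contemporaries' estimate of roughly every twenty-five years.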

Signs that land along the eastern seaboard was becoming relatively less abundant began to emerge by the eighteenth century. The average size of holdings in the longest settled areas diminished while their value rose. Thus Suffolk county in eastern Massachusetts had only thirteen men out of three hundred with estates worth more than £900 in 1660, compared with fifty-three out of three hundred and ten by 1765. Over the same century the number of men with estates worth less than £100 also increased from fifty-seven to seventy-one. As Kenneth Lockridge concluded from this analysis, “not only were the rich becoming more numerous and relatively more rich, but the poor were becoming more numerous and relatively poorer.”42 A similar pattern is discernible in Maryland. In the 1690s only 1.6 per cent of colonists had estates worth £1000 or more. By the 1750s the proportion had risen to 3.9 per cent. Poorer planters also acquired more wealth between 1690 and 1740.43 By the middle of the eighteenth century, however, many white farmers in Maryland were not planters but leaseholders.

Although leaseholding was less extensive in the colonies than in Britain it did exist, and apparently on an increasing scale. In New York the vast manors of Rensselaerswyck and Livingston had only 115 tenants between them in the second decade of the eighteenth century. By the 1770s they had 1,460 out of a total of between six and seven thousand tenant farmers in the whole colony. By then, too, a majority of farmers in eastern Maryland were tenants rather than owner occupiers. Whether men became leaseholders because they could not afford freeholds is debatable. It appears that in colonial New York many tenants chose that status, either as a stepping stone to a freehold or even for life, while in Maryland most had no choice.44 In general, however, it would seem that opportunities for indentured servants to acquire land were diminishing, so that perhaps three quarters of them never became landowners. Some of these sought their livelihoods not on the land but in the growing towns.

Urban growth occurred along the seaboard, with Boston, Newport, New York, Philadelphia and Charleston becoming major ports. All had several thousand inhabitants by 1760, Philadelphia, with a population of 18,000, being the largest. These ports generated an elite of merchants and professional men. Merchant princes like the Hancocks of Boston, the DeLanceys of New York and the Pembertons of Philadelphia were true plutocrats. As elsewhere in the colonies, the rich were getting richer and the poor were getting poorer. From the evidence of inventories it emerges that the distribution of wealth in Boston and Philadelphia changed for the worse between 1684-1699 and 1756-1765. In the first period the top four per cent owned 25.9 per cent of personal wealth in Boston and 21.7 per cent in Philadelphia, while by the second period they commanded 46.4 and 55.8 per cent respectively. Meanwhile the share of the poorest 30 per cent in both cities had declined from 3.3 to 2.0 per cent in Boston, and from 4.5 to 1.0 per cent in Philadelphia. Average annual expenditure on poor relief in the three ports over the century also indicates a substantial increase in the numbers of paupers in them all. From the 1700s to the 1750s it rose from £173 to £1204 in Boston and from £119 to £1803 in Philadelphia, while in New York it increased from £249 in the second decade of the century to £1667 by the seventh.45

The changing social structures of these towns heralded the trend of social change in the colonies as a whole between 1607 and 1760, away from a hierarchy of orders towards a class system. The ideal of a hierarchical society upheld by John Winthrop in his celebrated lay sermon on board the Arbella gave way to Benjamin Franklin’s world of “great and rich men, merchants and others,” “middling people, the farmers, shopkeepers and tradesmen,” and the poor.

How far these developments led to the rise of class tensions in eighteenth-century America is a question upon which historians are currently very much divided. Some claim that class played no significant part in the major conflicts which arose between the colonists, and that these can be better explained in terms of other divisions: for example, imperial, colonists against the mother country and its agents; religious, New Lights and Sides against Old; and sectional, the frontier against the east. Others insist that class tensions underlay all these struggles.

During the eighteenth century the colonies came to be regarded as constitutional microcosms of the mother country. It was a commonplace in contemporary England that the Glorious Revolution had either restored or secured the finest constitution conceivable. Where in the classical Polybian theory there were only three types of polity: monarchy, aristocracy or democracy, Englishmen enjoyed a mixture of all three in the institutions of the Crown, the Lords and the Commons. Moreover, while the pure types tended to degenerate, monarchy becoming tyranny, aristocracy turning to oligarchy, and democracy sinking into anarchy or mob rule, the mixture could preserve the original purity of all three by keeping them in balance. Thus the Lords and Commons could check the Crown, Crown and Commons the Lords, and Crown and Lords the Commons.

This balanced constitution was allegedly reflected in the colonies by the trinity of Governor, Council and Lower House of Assembly. However, the Glorious Revolution did not in fact secure for the colonists this enviable balance. For one thing, the whole analogy between the King and the Governors, the Lords and the Councils, and the Commons and the Lower Houses of Assembly was illusory. Where the Kings, at least in the eighteenth century, ruled by hereditary succession, Governors were appointed at the pleasure either of the Crown or the proprietors. In Crown Colonies the average tenure of office was a mere five years. The Lords, too, sat in the Upper House of Parliament by virtue of their hereditary titles, whereas Councils were chosen by the Governors. Only the Lower Houses of Assembly could legitimately claim to derive their authority from a source similar to that which upheld the House of Commons in England.

Another flaw in the argument that the colonies constitutionally mirrored the mother country was that many of the benefits which Englishmen claimed to enjoy by virtue of the Glorious Revolution were not exported across the Atlantic. Even those colonies which rose up against their rulers in 1689 did not benefit from their actions as much as they might have hoped.

The reactions of William III to the upheavals in Massachusetts and New York at the time of the Glorious Revolution revealed that the new king was not going to allow the colonies to benefit from the downfall of his predecessor as much as their English cousins benefitted. Even in England William did his best to minimise the curtailment of royal authority in the Revolution Settlement. In Massachusetts he insisted on retaining as much of the powers of the Dominion of New England as he could, and by the new charter nominated the chief executive positions in the colony. In New York his agents overrode the claims of those who had led resistance to the Dominion there, and executed their ringleaders. Although in Maryland the Protestant Association was more successful in achieving its aims than were the revolutionaries in Boston or New York, these coincided with the King’s aspirations for that colony to become a Crown Colony.

In other ways also the colonies as a whole failed to benefit from the fruits of the Glorious Revolution. As Bernard Bailyn has shown, three aspects of the Revolution Settlement in England were not extended to North America. First, the limitations on the duration of parliaments achieved in the Triennial Act of 1694, albeit subsequently modified in the Septennial Act of 1716, acted as a check on the prerogative of summoning and dissolving the Houses of Parliament at pleasure. Only New Hampshire and South Carolina had Triennial Acts, while New York obtained a Septennial Act in 1743. Otherwise colonial assemblies could be summoned, prorogued and dissolved at the pleasure of the governors. Second, the Act of Settlement of 1701 established the independence of the English judiciary by stipulating that judges should be appointed upon good behaviour and could not be dismissed at the pleasure of the Crown. No such restriction was placed upon judicial appointments in the colonies. Justice in North America, from the creation of Vice Admiralty courts to the appointment of JPs, remained very much subordinated to the prerogative powers of the executive. Third, the royal veto of parliamentary bills lapsed in England in Anne’s reign. The vetoing of measures passed by colonial assemblies, however, was vigorously maintained, both by governors and by the Privy Council. Indeed it actually increased towards the end of the colonial period. Governors were instructed to veto a whole range of potential measures on the grounds that they were incompatible with the interests of the mother country. They were also required to insist, even in bills involving apparently acceptable legislation, upon clauses which suspended their implementation until they had been approved in England.

The Crown in England found a way round these restrictions of its prerogative partly by the exploitation of its patronage. The curtailing of its power to prolong the life of parliament left it potentially vulnerable to the return of a hostile majority to the Commons after elections brought about by the constraints of the Triennial or Septennial Acts, rather than by the choice of the monarch. Yet in practice the election results even under the Triennial Act rarely went against the royal wishes, while under the Septennial Act no government lost an election in the eighteenth century. This was because the evasion of the electorate, the growth of oligarchy and above all the judicious exercise of patronage brought the electoral system increasingly under the control of the executive. The veto was not taken away from the Crown by statute. It atrophied because the influence of the Crown ensured that objectionable legislation never got as far as requiring the royal consent under the Hanoverians.

In the colonies, as Bailyn has again demonstrated, this was very far from being the case. Gubernatorial patronage was very restricted, while the electoral system was more representative.46

Where royal patronage in England grew to the point that opposition politicians could seriously maintain that “the influence of the crown has increased, is increasing and ought to be diminished,” colonial governors actually lost out as patrons during the course of the eighteenth century. Some of the posts at their disposal earlier were appropriated by the imperial authorities, while others were acquired by the assemblies. Their ability to influence elections was thus shrinking at a time when the number of constituencies in the colonies was increasing as population became more dense in the east and expanded in the west. There were also no rotten boroughs in colonial North America, as the franchise was wider there than in Britain. How much wider is a matter of some dispute. Consensus historians have claimed that effectively all white adult males could vote in the elections for the lower houses of colonial legislatures, though this has been strongly challenged.47 The debate involves calculations of the extent of the various franchises which obtained in the different colonies. Virginia’s House of Burgesses could boast that it was the oldest representative institution in North America, having first met in 1619. Then it seems likely that all adult males did vote for burgesses, but by the eighteenth century only owners of twenty-five settled acres, or one hundred unsettled acres, were allowed to vote. Massachusetts, too, elected representatives early in its existence, and vied with Virginia for the honour of being the cradle of American democracy. The Puritans who controlled early Massachusetts, however, were far from being democrats, and confined the franchise to church members. How exclusive this made the electorate is also a subject of dispute, though as church membership fell in the seventeenth century the proportion of electors also shrank, perhaps to as few as one in five of the adult male population. 
In 1691 the British government insisted that a property qualification should prevail, and fixed it at the English county franchise of a forty-shilling freehold. Units of property also conveyed the right to vote elsewhere in the colonies, being fifty acres in Delaware, Georgia, Maryland, North Carolina and Pennsylvania, and freeholds worth £40 and £50 in New York and New Jersey.

These qualifications appear to have been more exclusive than the English franchise, not less; but the extent of land ownership was far wider in the colonies than in England. Land was so much more abundant and available in America that estimates of the electorate range from fifty to as many as eighty per cent of white adult males. Even the most conservative estimate, therefore, makes the right to vote at least twice as extensive in the colonies as it was in the mother country.

One result of what Bailyn has summarised as “swollen claims and shrunken powers” was the rise of the assemblies at the expense of the governors.48 Jack P. Greene has shown how they eroded gubernatorial powers by gaining control of a whole range of executive activities.49 Perhaps the most crucial were their successful bids to appoint the colonial treasurers. This transferred from the governors to the assemblies considerable financial leverage. As James Glen, Governor of South Carolina, complained to the Board of Trade in 1748, “the political balance in which consists the strength and beauty of the British Constitution, being here entirely overturned… all the weights that should trim and poise it [are] by different laws thrown into the scale of the people… Almost all the places of either profit or trust are disposed by the General Assembly.” Having investigated the southern Lower Houses Greene claimed that the phenomenon was general throughout the South, and even applied the model he had established to most of the colonies by the middle of the eighteenth century.

In a more recent investigation of North Carolina, however, Roy Clayton has questioned the assumption that the Lower House rose at the expense of the Council as well as of the Governor.50 On the contrary, he maintains that the Council was not a mere extension of the executive, but functioned independently. Councillors were chosen from the colonial elite, and they stood up to the pretensions of Governors on behalf of their own interests no less staunchly than did representatives in the assembly. Indeed the leaders of opposition to the executive tended to dominate the Upper House, and to head colonists of like views and interests who sat in the Lower. Thus “the quest for power” did not pit Lower House against Council and Governor but both houses against the executive, with the lead being taken by the Council. Clayton too argues that his model, though based largely on North Carolina, is generally applicable.

In fact the situation varied from colony to colony. After 1701, for instance, the Council in Pennsylvania exercised no legislative authority, the legislature being unicameral. Here the Quaker party established itself in the assembly against the proprietorial party. Elsewhere it could depend upon whether the Councils really represented the colonial elites, or were English placemen foisted on the colonies by such imperial agencies as the Secretaries of State or the Board of Trade.

In Greene’s view the assemblies won their contest with the executive by insisting that they possessed the same powers as the House of Commons. They thus rested such claims as the right to initiate money bills on precedents established by parliament in its resistance to the Crown in the seventeenth century.

According to Bailyn, however, the rhetoric of resistance to executive claims in the eighteenth century derived not from struggles with the Stuarts but from Country propaganda against the Court under the Hanoverians.51 The colonial press repeated the arguments of John Trenchard and Thomas Gordon in Cato’s Letters and Lord Bolingbroke and William Pulteney in The Craftsman. These had developed the notion that the balanced constitution was in constant danger of being thrown off balance by the Crown’s attempts to increase its power at the expense of Lords and Commons. The Crown threatened the perfect equipoise reestablished in the Glorious Revolution directly, by building up a standing army, and indirectly, by the use of corruption. Armies were always a threat to liberty, and had been used to stifle representative institutions and establish absolutism in Ancient Rome and in contemporary Europe. Corruption was even more insidious, since it eroded the independence of both Houses of Parliament and of the electorate. It was the duty of the virtuous citizen to be vigilant and to resist these threats.

Whether the rhetoric of seventeenth century precedent or eighteenth-century Country ideology was the more appropriate again depended to some extent on how successful assemblies were in their struggles with the executive. By and large those which significantly curtailed the authority of the Governors cited Stuart history in support of their claims, while those which still encountered resistance borrowed the arguments of the English opposition to the Whig oligarchs. The problem of establishing a typology which fits the political experience of all the colonies in the eighteenth century is that politics in each of the thirteen developed different patterns.

Yet an overall view which seems to command wide acceptance amongst historians is another model proposed by Jack P. Greene, the concept of political stability being established throughout at least the major colonies during the early eighteenth century.52 As he summarises his thesis, “the political development of each of the major colonies followed a generally similar pattern with a relatively long period of drastic, almost chronic, political disorder and flux, which, in most cases, began early in the period of settlement, and lasted through the first decades of the eighteenth century, being followed, beginning in the 1720s and 1730s, by an era of extraordinary political stability and in some places relative public tranquillity that continued at least into the 1750s and 1760s in most colonies.” This relative social harmony was achieved by the deference of the lower orders, who accepted the hegemony of the colonial elites.

This concept appears to find support even in the works of scholars who otherwise would not accept Greene’s consensus approach to colonial history, but on the contrary regard social conflict as a major dynamic of change in the colonies. Thus Gary Nash concluded from an analysis of Pennsylvanian politics between 1681 and 1726 that, towards the end of that period,53

Slowly, tentatively… society began to crystallise, to assume a more structured appearance. Hesitantly, and not without interruptions, the wheels of government began to turn again as the management of politics became once more the concern of select groups whose members did not welcome the participation of the lower or middle classes in public affairs. Opposition to political power exercised from above was by no means dead in Pennsylvania, however. A strong tradition of dissent to prescriptive authority remained. But only under special circumstances did it manifest itself, usually during periods of economic difficulties or external threats.

Rhys Isaac has also shown how the leading families of the Virginian gentry established themselves as a patriarchal elite by the eighteenth century. They formed a ruling class which held sway, not by crude repression, but by a subtle cultural blend of authority and deference, which made Virginia one of the most politically stable of all the colonies.54

Yet both Nash and Isaac see the Great Awakening as a movement which shattered any kind of religious consensus, and divided the colonies along class lines. New Light Congregationalists in New England, and New Side Presbyterians in the middle colonies, both appealed, according to Nash, largely to the lower orders, tradesmen, craftsmen and the labouring poor, especially in the major ports which to him were the crucibles of social change. They were accused by their opponents, the Old Lights and Sides, of being levellers.55 Rhys Isaac’s examination of the Baptists in Virginia during the central decades of the eighteenth century led him too to conclude that they made conversions among socially inferior groups, including women, servants and even slaves, and were criticised by the Anglican elite for subverting the social order. One magistrate actually accused some Baptist preachers of “carrying on a mutiny against the authority of the land.”56

Yet other historians have represented the Great Awakening as primarily a religious movement which cut right across social, economic and even family groupings. In so far as it has been traced to secondary causes rather than to the inscrutable workings of Providence, it has been seen as a reaction to the apostasy of Calvinist churchmen from the traditional teachings of Calvin. The established clergy were condemned by their critics for succumbing to Arminianism, Arianism and, perhaps worst of all in some eyes, Anglicanism. The critics in turn were attacked as enthusiasts when they emphasised the need for regeneration in a traumatic conversion by the saving grace of God.

Some historians have even sought secular explanations of the phenomenon. An outbreak of diphtheria has been cited as one contributory factor, the resultant deaths of small children being interpreted as a providential visitation, warning against the cooling of religious zeal in the former godly colonies. Another possible cause has been discerned in the fact that, in New England at least, the colonists began to marry later in life. The theory goes that more and more frustrated young people anticipated their marriages, and felt guilty as a result of breaking the strong religious taboo against pre-marital intercourse. This guilt induced a psychological state which made them ripe for religious conversion.57

Perry Miller saw behind the Awakening tensions between traditional societies, increasingly confined to the frontiers, and the more secular communities emerging along the seaboard. As he expressed it, “the Great Awakening was the point at which the wilderness took over the task of defining the objectives of the Puritan errand.”58

Certainly sectional conflict between east and west seems to have become more pronounced towards the end of the colonial period. In 1755 John Hambright led about 700 men to Philadelphia to protest against the poor protection offered to settlers on the frontier of Pennsylvania. Nine years later his example was followed by the more notorious Paxton boys. In the late 1760s the backcountry of both North and South Carolina witnessed Regulator movements protesting against their treatment by the eastern establishments.

The North Carolina regulation has been the most closely studied, and detailed investigation has brought out vividly the differences amongst historians about the nature of colonial society on the eve of the American Revolution. There are at least three major interpretations of the conflict within the colony. One sees it as a sectional or regional dispute purely and simply, being an issue between the settlers newly arrived in the piedmont and the longer established colonists in the tidewater. A second interprets it as a conflict of interests in the eighteenth-century sense of the term: a struggle between the landed interest of the piedmont counties and the legal and commercial interests which began to threaten the ascendancy of land during the 1760s. The third regards it as a class struggle between poor farmers and the rich merchants and officials who were exploiting them.59

The most recent study of the Regulation, by Roger Ekirch, challenges all three interpretations, and suggests that it arose because of the peculiar circumstances facing the backcountry provinces.60 His stress on the frontier situation refutes the sectional theory, since he sees the tensions as being paramount inside the west and not between the west and the east. He denies it was a class movement, since many of the regulators were substantial farmers. At the same time he disagrees that it was a conflict of interests, insisting that the regulators were anxious to attract businessmen into the backcountry. Hermon Husband wanted to encourage “men of public generous spirits, who have fortunes to promote trade” to settle in the piedmont.

Instead Ekirch stresses the volatile nature of a rapidly emerging society. Officials in the backcountry were nouveaux riches on the make. They really were corrupt. Between 1754 and 1768 the western counties’ taxes were embezzled far more persistently than were those of the tidewater. At the same time the shortage of currency in the west meant that the settlers were not only unwilling but unable to pay their taxes, and either got into hopeless debt or had their goods distrained. Thus the regulation arose out of specific and genuine grievances experienced in the western counties.

The differences between the frontier and the east, and the even sharper distinction to be drawn between North and South, militate against easy generalisations about an American society emerging by the middle of the eighteenth century. Yet the thirteen colonies clearly felt that they had sufficient in common to unite against Great Britain, to resist British attempts to subdue them, and to forge a new nation out of that conflict. Paradoxically perhaps what held them together was not only the differences they had with England but also what they had in common with the mother country.

5. Conclusion

The effect of recent work on early modern Britain and her North American colonies has been to emphasise the similarities rather than the differences between them. England at least no longer appears to have been feudal, or as rigid and static as was previously believed. At the same time colonial America seems less egalitarian, flexible and fluid than it was once represented. Moreover the similarities were apparently becoming more rather than less marked. Paradoxically, on the eve of the confrontation which was to divorce them, the two societies were developing along parallel if not along converging lines.

The social structures of the original colonies in the Chesapeake and New England were distinctly different from those of the mother country. In Virginia a highly unstable and predominantly male society created conditions which could not be paralleled in the British Isles. New England was more stable and based on the family, but the upper and lower strata of English society were not reproduced there. By the mid eighteenth century, however, social stratification in North America resembled that of Britain much more closely. On both sides of the Atlantic the breakdown of traditional hierarchies and the emergence of social classes, with the consequent replacement of vertical by horizontal loyalties, are increasingly discernible.

The two communities were also becoming more and more enmeshed economically, in a system of imperial trade. This system contributed to what has been called the first commercial revolution. The pattern of English exports experienced a significant reorientation during the course of the colonial period. In 1600 far and away the biggest export item was woollen cloth, while by 1700, though it still dominated exports, it had shrunk in relative significance, the buoyant element in overseas trade being the re-export of colonial produce to Europe. By the eighteenth century, therefore, the colonies were involved in a complex international as well as imperial trade.

During the eighteenth century a further realignment of British exports occurred in which the American colonies again played a significant part. This second commercial revolution involved the export of British manufactured goods to overseas customers, many of them colonists. The resultant interlocking of British and colonial economies did more than anything to bring the two societies closer together. American landed proprietors equipping their Palladian houses with furniture and fittings from England were consciously aping English country gentlemen. The tradesmen and professional men below them purchasing Sheffield cutlery and Wedgwood pottery were indistinguishable in their tastes from their counterparts across the Atlantic. Even craftsmen utilising metal utensils made in Birmingham were contributing to the creation of a homogeneous culture.

The development of an Atlantic economy pulled the two societies closer together. Boston, Winthrop’s “city upon a hill,” was a unique town. Within decades, however, it had acquired the characteristics of a commercial entrepot. By the middle of the eighteenth century, Boston, Newport, New York, Philadelphia and Charleston were linked in complex trading patterns with London, Bristol, Liverpool and Glasgow. These links forged together an Atlantic community, making the colonies truly British America.

The closest interlocking was achieved during the Seven Years’ War, or, as it is perhaps more accurately called by American historians, the French and Indian War. Britain spent more in money, men and resources ridding North America of the French than she had expended on the colonies since the beginning of colonisation.[61] It has been claimed that the co-operation of British troops and colonists produced friction rather than harmony, though against this has been cited the boost to colonial morale of working for a common imperial objective.[62] The war also channelled ideological energies against French absolutism which might otherwise have been directed against the British government, and which were to be so directed shortly after the Peace was signed in 1763. Thus the religious enthusiasm aroused by the Great Awakening saw the great enemies of Protestantism and Liberty in Popery and France, the twin representatives of Antichrist and Tyranny. Only after the war, when they could no longer be thus identified, were these roles projected on to Great Britain.

Indeed the Great Awakening cannot be considered as a purely American phenomenon. It was part of a great wave of religious enthusiasm which engulfed the Protestant world during the mid eighteenth century. In Britain it took the form of Methodism. George Whitefield, the Methodist minister, played a prominent role in the Awakening too. So far from dividing the colonies from the mother country, the religious revival was in many respects a shared experience. It was only when conflict broke out between them that the religious movements diverged, until John Wesley became an outspoken critic of the colonial rebellion.

Before then the Great Awakening and Methodism were remarkably similar phenomena, appealing to the same kinds of people on both sides of the Atlantic. The form religious enthusiasm took in Britain and in her American colonies bore testimony to how similar the two societies had become.

Of course there were obvious differences between them. Among the most glaring was the racial and ethnic diversity of the colonies in comparison with Britain. Yet these centrifugal forces pulling them apart did not overcome the centripetal attraction drawing them together during the colonial period.

Indeed, as far as native North Americans are concerned, they were another fact of life in colonial America which made the colonies more unlike the mother country in the seventeenth century than they became during the eighteenth. The early settlers in the Chesapeake and New England had to face daily contact with the Indians, and the very real threat of being thrown back into the sea by them. Without help from natives the first arrivals in both areas might well have starved. At the same time the Powhatan uprising of 1622 could have wiped out the Jamestown colony, while the Pequot and King Philip’s wars (1636-7; 1675-6) seriously disrupted life in New England. By the eighteenth century, however, the main centres of Indian habitation had moved westward, either willingly, as they searched ever further for furs to trade with the white settlers, or reluctantly, as a result of being deprived of their homelands by the expanding colonies. Those who remained became absorbed into colonial society, some as neighbours, others as servants or even as slaves. Hostilities were more and more confined to the frontiers. Upper New York and the Connecticut river valley experienced attacks from Indians allied to the French during the wars between Britain and France. Even after they ended, Pontiac’s uprising in 1763 reminded the northern and middle colonies that their westward expansion encroached on the territories of natives who would fiercely defend them, as the Paxton Boys discovered. Further south the backwoods of the Carolinas were also disturbed by Indian attacks as late as the 1760s, to the annoyance of the Regulators. But to the long settled communities along the eastern seaboard, and especially to the townspeople of Boston, New York and Charleston, Indians were by then a remote frontier folk.

Something of the way in which native North Americans became assimilated into the culture of the more civilised parts of the colonies can perhaps be gauged from changing perceptions of them on the part of the colonists. Englishmen who went to North America took with them two stereotypes of the natives, formed from folklore and long European acquaintance with the New World. One portrayed them as savage infidels who fought with inhuman ferocity and had no mercy for men, women or children, whom they butchered horribly. Another depicted them as noble savages, cultured if unlettered, who had much to teach white men about how to live in harmony with their new environment. In the course of the seventeenth century experience emphasised the first image. Interaction between the two races was marked by hostilities in which atrocities were perpetrated by both. This reinforced the notion that the natives were devilish, filled with bloodlust for slaughter. As warlike relations receded more and more to the frontier, however, so the second stereotype became predominant along the settled seaboard, reinforced somewhat by the Enlightenment’s stress on the nobility of savages.

The historiography of Colonial North America in this century has followed a curiously similar course. Insofar as historians of the colonies dealt with the natives at all, then, apart from romantic tales of Pocahontas, they tended to treat them as little more than a threat to the survival of white civilisation in the wilderness. They remained largely offstage, to be brought on only when their hostile activities disrupted life for the colonists. Recently, however, they have been studied in their own right. These studies have stressed that those whom the Europeans encountered were not nomadic tribes living primarily by hunting, but were settled communities who lived mainly by agriculture. The more the Indians are investigated by historians the less they appear to have been savages and the more they seem to have had in common with the colonists. Thus recent research, by playing down the differences between the two races, also detracts considerably from the view that coexistence with the Indians made the colonies significantly different from Britain. They may not have reminded Englishmen of England, but they certainly led them to draw comparisons between native North Americans and highland Scots, and above all between Indians and Irish peasants.[63]

While natives became increasingly less prominent in colonial society, blacks by contrast came to be an ever more significant part of the population. They undeniably constituted a major difference between the colonies and the mother country, as did the institution of chattel slavery. Although blacks were not totally unknown in early modern England, they formed but a tiny proportion of the total population. By 1760, however, there were some 325,000 in North America, most of whom were slaves. The majority lived in the South, forming a third of the population in Maryland, North Carolina and Georgia, forty per cent in Virginia, and a majority, sixty per cent, in South Carolina. They were nevertheless scattered throughout the colonies. Between the Mason-Dixon line and the Hudson river, a region of some 427,900 souls, they numbered about 29,000, or roughly 6.7 per cent. In New England they were far fewer, perhaps only 12,700 in an area with an estimated population of 449,600. With blacks comprising fewer than three per cent of its inhabitants, New England differed as much from the South in the mid eighteenth century as it did from the mother country.

Although the lot of blacks, even in New England, was very different from that of whites, it exaggerates the differences between the races to attribute them all to race. Stressing the racial distinctions overlooks many social and even class divisions which separated them. Characteristics which whites attributed to blacks, such as idleness, insolence, insubordination, and physical and sexual potency, reflected images of the lower orders in England as they were seen by their superiors. Also the way in which religion was used as an instrument of social control to suppress the rebellious instincts of the blacks was virtually identical with its exploitation to curb discontent amongst the English poor.

Again black historiography has followed a trend which has narrowed racial distinctions in colonial North America. Like Indians, blacks were rarely mentioned in histories of the colonies except as problems for whites. If they were discussed it was generally in terms of an amorphous mass of slaves. The model of race relations was largely based on the great plantations of the South, emphasising the gulf which stretched between rich planters on the one hand and huge gangs of field labourers on the other. Recent research, studying blacks as individuals in their own right, has corrected misconceptions perpetuated by earlier treatments. For one thing, the large plantation has been shown to have been exceptional even in the South. It was much more normal to own slaves numbered in single rather than double, let alone triple, figures. Moreover slaves were not just field hands or even domestics, but acquired a wide range of skills too. This was another respect in which colonial societies moved from the seventeenth to the eighteenth century in a direction bringing them more rather than less into line with British society. Where in 1619 the first recorded blacks in Virginia were almost inevitably destined for work in the fields, by 1760 there was a black hierarchy ranging from labourers to luxury craftsmen. Free blacks could rise even higher in the social scale. Blacks not only acquired a wide range of economic functions, they also developed their own social and even family life. North American slavery was unique in sustaining a black community which actually grew spontaneously, unlike other slave societies which required a ceaseless flow of captive Africans to sustain the workforce.[64]

As black society became increasingly complex and structured, while ever larger numbers were born and bred in the colonies, so it acquired many characteristics of white social structure and even culture. The gap between field hands freshly arrived from Africa, and craftsmen descended from several generations of American ancestors, was far wider than that between skilled slaves, free blacks, and whites. This was manifested in the different methods of resisting slavery adopted by slaves on different rungs of the black hierarchy. Unacculturated field hands tended to try to escape in a body, unaware that the facts of geography made this a futile form of protest. Those nearer the top of the ladder were more inclined to escape individually into free society, often with success.[65]

The very existence of slaves in their midst led colonists to cherish their own freedom. This too caused them to stress their similarities with England. They constantly compared themselves to freeborn Englishmen and boasted that they enjoyed English liberties. Whenever these seemed to be challenged, either by proprietors or by the Crown, they were quick to accuse these agencies of being tyrants intent on enslaving them. It was this argument which was ultimately to lead to separation and Independence.

It was not, therefore, the development of a distinctly American society which brought about the conflict with Britain. Rather the reverse, the quarrel between Britain and her colonies was to create an American society. Since the establishment of Virginia and Massachusetts the two societies had developed along parallel if not converging lines, their social structures, economies and cultures becoming more and more similar. By the accession of George III the expression “British America” was more rather than less appropriate than it had been before. Perhaps paradoxically separation came about more because the colonies were too like Britain than because they were not British enough.

6. Guide to Further Reading

Two older surveys of the kind of society in England which supplied North America with colonists have been largely superseded by the new social history: Wallace Notestein’s classic The English People on the Eve of Colonisation (London, Hamilton, 1954) and Carl Bridenbaugh’s Vexed and Troubled Englishmen (Oxford, Clarendon, 1968). The ‘cliometric’ approach was heralded by Peter Laslett, The World we have lost (second edition, 1971).[12] The best recent survey of the seventeenth century is Keith Wrightson, English Society 1580-1680 (1982).[13] An attempt was made to synthesise current work on the first half of the eighteenth century in W.A. Speck, Stability and Strife: England 1714-1760 (London, Arnold, 1977).
There has been an enormous number of studies on the colonies inspired by the new approach. For a pioneering synthesis of those for the eighteenth century see James A. Henretta, The Evolution of American Society 1700-1815 (New York, Heath, 1973). A superb scholarly overview of all aspects of colonial life is to be found in R.C. Simmons, The American Colonies (London, Longman, 1976). Simmons’ comprehensive and well-organised bibliography is the best starting point for further explorations of particular themes, though a more recent analysis of some aspects is in Jack P. Greene and J. R. Pole, eds., Colonial British America (1984).[31]

7. Notes

  1. “Whig” historians of the nineteenth century tended to see a consensus behind American resistance towards Britain, while “Progressive” historians of the early twentieth century detected underlying social conflict. See Edward Countryman, The People’s American Revolution (BAAS pamphlet number 13; 1984).
  2. Daniel J. Boorstin, The Americans: The Colonial Experience (New York, Vintage Books, 1958), winner of the Bancroft Prize in 1959.
  3. Sumner Chilton Powell, Puritan Village: The Formation of a New England Town (New York, Doubleday, 1965), pp. 182-3; winner of the Pulitzer Prize in 1963.
  4. W. A. Speck, Society and Literature in England 1700-1760 (Dublin, Gill and Macmillan, 1983), pp. 45-53.
  5. Herman Wellenreuther, “A View of the Socio-Economic Structures of England and the British Colonies on the Eve of the American Revolution,” in Erich Angermann et al., eds., New Wine in Old Skins: A Comparative View of Socio-political Structures and Values affecting the American Revolution (Stuttgart, Ernst Klett Verlag, 1976), p. 20.
  6. E. C. Johnson, “The Bedford Connexion: The Fourth Duke of Bedford’s Political Influence 1732-71” (unpublished Cambridge Ph.D. thesis, 1980).
  7. W. A. Speck, Tory and Whig: The Struggle in the Constituencies 1701-1715 (London, Macmillan, 1970), pp. 26-7, 45.
  8. Geoffrey Holmes, The Electorate and the National Will in the First Age of Party (Lancaster, 1976).
  9. J. H. Plumb, The Growth of Political Stability in England 1675-1725 (London, Macmillan, 1967), p. xvi.
  10. E. P. Thompson, Whigs and Hunters: The Origin of the Black Act (London, Allen Lane, 1975) and D. Hay et al., eds., Albion’s Fatal Tree: Crime and Society in Eighteenth-Century England (London, Allen Lane, 1975).
  11. E. P. Thompson, “Patrician Society, Plebeian Culture,” Journal of Social History, 7 (1974), pp. 382-405; “Eighteenth-Century Society: Class Struggle without Class?”, Social History, 3 (1978), pp. 133-165.
  12. P. Laslett, The World we have lost: England before the Industrial Age (second edition, London, Methuen, 1971), pp. 23-54.
  13. Compare Keith Wrightson, English Society 1580-1680 (London, Hutchinson, 1982), pp. 17-38, and Roy Porter, English Society in the Eighteenth Century (London, Allen Lane, 1982), pp. 63-112.
  14. Penelope Corfield, The Impact of English Towns (Oxford, OUP, 1982), p. 6.
  15. E. A. Wrigley and R. Schofield, The Population History of England 1541-1871 (London, Arnold, 1981), pp. 208-9.
  16. Geoffrey Holmes, Augustan England: Professions, State and Society 1680-1730 (London, Allen and Unwin, 1982), p. 16.
  17. J. S. Morrill, The Revolt of the Provinces (London, Longman, 1980).
  18. E. A. Wrigley, “A Simple Model of London’s Importance in Changing English Society and Economy 1650-1750,” Past and Present, 37 (1967), pp. 44-70.
  19. David Galenson has persuasively argued that the system of indentured servitude was an adaptation of service in husbandry, whereby “instead of moving from one village to another to enter service, after 1607 English youths frequently moved to another continent”. David W. Galenson, White Servitude in Colonial America (Cambridge, CUP, 1981), p. 9.
  20. Mildred Campbell, “Social Origins of some Early Americans,” in James Morton Smith, ed., Seventeenth Century America (New York, Norton, 1972), pp. 63-89.
  21. David W. Galenson, “‘Middling People’ or ‘Common Sort’? The Social Origins of some Early Americans Re-examined,” William and Mary Quarterly, 35 (1978), pp. 499-524. See also Mildred Campbell’s spirited reply, ibid., pp. 525-540, and their further exchange in ibid., 36 (1979), pp. 264-286. Galenson’s final position, which is accepted here, is in his White Servitude, pp. 34-50.
  22. James Horn, “Servant Emigration to the Chesapeake in the Seventeenth Century,” in Thad W. Tate et al., eds., The Chesapeake in the Seventeenth Century (Chapel Hill, University of North Carolina, 1979), p. 65.
  23. T. H. Breen, Puritans and Adventurers: Change and Persistence in Early America (Oxford, OUP, 1980), pp. 46-67.
  24. K. Lockridge, Literacy in Colonial New England (New York, Norton, 1974), p. 46. Literacy in Virginia before 1650 seems to have been lower: ibid., p. 79; Galenson, White Servitude, pp. 65-78.
  25. John Demos, A Little Commonwealth: Family Life in Plymouth Colony (New York, OUP, 1970); Philip J. Greven, Four Generations: Population, Land and Family in Colonial Andover, Massachusetts (Ithaca, Cornell UP, 1970); Kenneth A. Lockridge, A New England Town: The First Hundred Years; Dedham, Massachusetts (New York, Norton, 1970).
  26. Jack P. Greene, ed., Settlements to Society: A Documentary History of Colonial America (New York, Norton, 1975), p. 38.
  27. Edmund S. Morgan, American Slavery, American Freedom: The Ordeal of Colonial Virginia (New York, Norton, 1975), pp. 44-91.
  28. Carville V. Earle, “Environment, Disease and Mortality in Early Virginia,” in Tate et al., eds., The Chesapeake, pp. 96-125.
  29. Kevin Sharpe, “Archbishop Laud,” History Today, xxxiii (1983), pp. 26-30.
  30. David Grayson Allen, In English Ways: The Movement of Societies and the Transferal of English Local Law and Custom to Massachusetts Bay in the Seventeenth Century (Chapel Hill, University of North Carolina, 1981).
  31. Jim Potter, “Demographic Development and Family Structure,” in Jack P. Greene and J. R. Pole, eds., Colonial British America: Essays in the New History of the Early Modern Era (Baltimore, Johns Hopkins, 1984), p. 138.
  32. Darrett B. Rutman, Winthrop’s Boston: A Portrait of a Puritan Town 1630-1649 (Chapel Hill, University of North Carolina, 1965).
  33. Michael Garibaldi Hall, Edward Randolph and the American Colonies 1676-1703 (New York, Norton, 1960); Stephen Saunders Webb, 1676: The End of American Independence (New York, Knopf, 1984).
  34. Viola F. Barnes, The Dominion of New England: A Study in British Colonial Policy (New Haven, Yale UP, 1923; republished New York, Ungar, 1960), p. 229.
  35. Stephen Saunders Webb, The Governors-General: The English Army and the Definition of the Empire 1569-1681 (Chapel Hill, University of North Carolina, 1979), p. 447.
  36. J. M. Sosin, English America and the Revolution of 1688 (London, University of Nebraska, 1982), p. 89.
  37. David Lovejoy, The Glorious Revolution in America (London, Harper, 1972).
  38. Bernard Bailyn, “Politics and Social Structure in Virginia,” in James Morton Smith, ed., Seventeenth-Century America, pp. 90-115.
  39. Lois Green Carr and David William Jordan, Maryland’s Revolution of Government 1689-1692 (Ithaca, Cornell UP, 1974).
  40. Thomas J. Archdeacon, New York City 1664-1710 (London, Cornell UP, 1976), pp. 97-122.
  41. Stephanie Grauman Wolf, Urban Village: Population, Community and Family Structure in Germantown, Pennsylvania 1683-1800 (Princeton, Princeton UP, 1976).
  42. Kenneth A. Lockridge, “Land, Population and the Evolution of New England 1630-1780,” Past and Present, 39 (1968), pp. 62-80.
  43. Aubrey C. Land, “Economic Base and Social Structure: The Northern Chesapeake in the Eighteenth Century,” in T. H. Breen, ed., Shaping Southern Society: The Colonial Experience (New York, OUP, 1976), pp. 244-5.
  44. Patricia Bonomi, A Factious People: Politics and Society in Colonial New York (New York, Columbia, 1971), pp. 179-228; Gregory A. Stiverson, Poverty in a Land of Plenty: Tenancy in Eighteenth-Century Maryland (Baltimore, Johns Hopkins UP, 1977).
  45. Gary B. Nash, The Urban Crucible: Social Change, Political Consciousness and the Origins of the American Revolution (Cambridge, Mass., Harvard UP, 1979).
  46. Bernard Bailyn, The Origins of American Politics (New York, Vintage, 1970).
  47. See especially Robert E. Brown, Middle-Class Democracy and the Revolution in Massachusetts 1691-1780 (Ithaca, Cornell UP, 1955); J. R. Pole, Political Representation in England and the Origins of the American Republic (London, Macmillan, 1966).
  48. Bailyn, Origins of American Politics, p. 96.
  49. Jack P. Greene, The Quest for Power (Chapel Hill, University of North Carolina, 1963).
  50. T. R. Clayton, “A Study of the Colonial Council in America” (unpublished Cambridge Ph.D. thesis, 1982).
  51. Bernard Bailyn, The Ideological Origins of the American Revolution (Cambridge, Mass., Harvard UP, 1967).
  52. Jack P. Greene, “The Growth of Political Stability: An Interpretation of Political Development in the Anglo-American Colonies 1660-1760,” in J. Parker and Carol Urness, eds., The American Revolution: A Heritage of Change (Minneapolis, University of Minnesota, 1975), pp. 26-52.
  53. Gary B. Nash, Quakers and Politics: Pennsylvania 1681-1726 (London, Princeton UP, 1968), pp. 306-7.
  54. Rhys Isaac, The Transformation of Virginia 1740-1790 (Chapel Hill, University of North Carolina, 1982), pp. 11-138.
  55. Gary B. Nash, Urban Crucible, pp. 198-232.
  56. Isaac, op. cit., pp. 161-269.
  57. Richard Hofstadter, America at 1750 (New York, Vintage, 1971), pp. 217-268.
  58. Perry Miller, “Jonathan Edwards and the Great Awakening,” Errand into the Wilderness (London, Harvard UP, 1956), p. 153.
  59. Hugh T. Lefler and William S. Powell, Colonial North Carolina: A History (Chapel Hill, University of North Carolina, 1973); James P. Whittenburg, “Planters, Merchants and Lawyers: Social Change and the Origins of the North Carolina Regulation,” William and Mary Quarterly, 34 (1977), pp. 215-238; Marvin L. Michael Kay, “The North Carolina Regulators 1766-1776: A Class Conflict,” in Alfred F. Young, ed., The American Revolution: Explorations in the History of American Radicalism (DeKalb, Ill., Northern Illinois UP, 1976), pp. 73-123.
  60. A. Roger Ekirch, “The North Carolina Regulators on Liberty and Corruption, 1766-1771,” Perspectives in American History, xi (1977-8), pp. 199-256.
  61. Julian Gwyn, “British Government Spending and the North American Colonies 1740-1775,” in Peter Marshall and Glyn Williams, eds., The British Atlantic Empire before the American Revolution (London, Cass, 1980), pp. 74-84.
  62. Jack P. Greene, “The Seven Years’ War and the American Revolution: The Causal Relationship Reconsidered,” ibid., pp. 85-105; Alan Rogers, Empire and Liberty: American Resistance to British Authority 1755-1763 (Berkeley, University of California, 1974).
  63. See especially Francis Jennings, The Invasion of America: Indians, Colonialism and the Cant of Conquest (Chapel Hill, University of North Carolina, 1975).
  64. See for example Peter H. Wood, Black Majority: Negroes in Colonial South Carolina from 1670 through the Stono Rebellion (New York, Knopf, 1974).
  65. Gerard W. Mullin, Flight and Rebellion: Slave Resistance in Eighteenth-Century Virginia (New York, OUP, 1972).


Michael Allen, Emily Dickinson as an American Provincial Poet

BAAS Pamphlet No. 14 (First Published 1985)

ISBN: 0 946488 04 5
  1. Introduction
  2. The Provincial Stigma: Acceptance and Transformation
  3. British Models and American Attitudes
  4. The Uses of Provincial Society
  5. Reader as Collaborator: Emily Dickinson’s Alternative Audiences
  6. The Poetry and its Tensions
  7. The Poetry and its Genres
  8. Conclusion
  9. Guide to Further Reading
  10. Notes

1: Introduction

Reading T. H. Johnson’s one-volume edition of Emily Dickinson’s Complete Poems is a unique experience, disturbing as well as satisfying. Even here [1] (let alone in the same editor’s three-volume variorum edition), the 1775 untitled poems suggest her solitary creative activity in her bedroom in Amherst and the build-up of packets of poems in drawers and cupboards to await her death. The insistent originality of her rhythms, imagery and notation reminds us of her intransigent refusals to emend as friends suggested or to deliver poems into the hands of newspaper editors who might not respect her punctuation.[2] The poems are a monument to her faith in her own ultimate ascendancy (“If fame belonged to me I could not escape her”[3]): the vigorous endeavours of successive editors to put them before the reading public are as nothing compared to that enormous faith. Indeed that faith (and all its attendant doubt) is ultimately her proposed and enacted subject.

No single poem can adequately represent a poet whose work is so various. But her best-known poem (712) can illustrate some of her characteristic effects and show how that ultimate theme emerges through them. Here is the first stanza:

Because I could not stop for Death –
He kindly stopped for me –
The Carriage held but just Ourselves –
And Immortality.

The poet’s use of dashes is, as Johnson says in introducing the Complete Poems, “a musical device,” a way of establishing a rhythmical staple which can be subsequently varied. But at the same time it defines a speech-rhythm. It is because of the dash at the end of line one that the iambic jog-trot of line two is idiomatically transformed, giving “kindly” its heavy stress, its irony and its precise social nuance. The last line combines a flicker of inevitability (derived from the rhyme) with the impromptu quality of an afterthought (because of the dash after “Ourselves”). The speaker is setting out on her journey with a perfunctory nod towards “Immortality” (the chaperon, an appendage to be taken for granted). The speaker is allowed a full-stop (a punctuation mark which never appears again in the poem) and later on we will recognise this as an indication that such easy confidence is inappropriate to the excursion.

All the variations on the poem’s initial rhythm reinforce this combination of the casual and the preordained. The dash in the first line of stanza two throws the stress firmly onto “He,” giving the driving initiative to Death:

We slowly drove – He knew no haste
And I had put away
My Labor and my Leisure too,
For His Civility –

The line-end position of “put away” gives a great intensity to the speaker’s counter-strategy which reaches beyond effort and relaxation to a single attentive and creative purpose. This we can only associate with the stylistic progress of the poem, establishing as it does an idiom so precisely and giving such an epigrammatic equality of dismissal (enforced by alliteration) to “Labor” and “Leisure.” In the next two stanzas an almost automatic rehearsal of childhood and maturity seems to be contained by an expected stanza-break and then is thrown into question by the overspill effect at the beginning of stanza four:

We passed the School, where Children strove
At Recess – in the Ring –
We passed the Fields of Gazing Grain –
We passed the Setting Sun –

Or rather – He passed Us –
The Dews drew quivering and chill –
For only Gossamer, my Gown –
My Tippet – only Tulle –

This disturbing reassessment of the situation is not entirely unexpected. There was a change into half-rhyme after the too easily assured first stanza to prepare us for it. Its reverberations now move in two directions: back through “sun” to “gazing grain” (surrealistically accusing) to the complacent mood of setting out; forward (the nightdress introducing dream-like erotic overtones, expectant and fearful) to the final tableau:

We paused before a House that seemed
A swelling of the Ground –
The Roof was scarcely visible –
The Cornice – in the Ground –

Since then – ’tis Centuries – and yet
Feels shorter than the Day
I first surmised the Horses’ Heads
Were toward Eternity –

The grotesque substitution of repetition for rhyme in the penultimate stanza is such a break with normal poetic decorum that at first it seems as unacceptable as the choked horror at living burial it enacts. Whether the poem’s excursion, so delicately balanced between the provisional and the permanent, can survive such bathos seems in doubt. But there is just enough momentum in the developing play of dashes against hymn-book iambic to effect the transition from stanza to stanza (a two beat becoming a three beat):

The Cornice – in the Ground –
Since then – ’tis Centuries – and yet

It is because the poem’s characteristic movement (and half-rhyme) survives such an assault on decorum that we can accept the speaker’s emergence into suspended animation. The rhythmical triumph guarantees both her hard-won politesse and the traditional literary figure (of the chariot of the Muses?) with which the poem closes.

What might puzzle a literary historian here is the combination of a mannered idiom (which might almost be seventeenth- or eighteenth century) with highly individualistic modes that read like modernism. How could something at once so socially precise and so idiosyncratically subjective emerge in mid-nineteenth century New England? Even with a poem as autonomous as this one the circumstances which brought it into existence ask to be considered. And this is much more the case as one registers Emily Dickinson’s qualities over a wide range of poems and accepts how many interesting but uncertain and fragmentary pieces one has to deal with. They all demand an extension of our curiosity towards her life. More than that, they indicate the need for a view of nineteenth century literary culture which can explain their genesis.

For me the essential clue is provided by one of Morse Peckham’s powerful insights. The “cognitively estranged” writer in the later nineteenth century was, in Peckham’s view, bearing witness to a cultural crisis; he or she found it necessary to assert continually the invalidity and inauthenticity of the dominant and central values and patterns of social interaction in that age.[4] Peckham says that such writers achieved this subversive purpose by withdrawing into isolation from the social and cultural mainstream which enforced the values they could only oppose and by constructing and playing in life and art what Peckham calls an anti-role. Emily Dickinson’s case suggests that we should add to the list of such anti-roles (the artist as Dandy, the artist as Bohemian, the artist as Virtuoso) another strategic construct, the artist as Provincial.

In all such cases it is important that the alienated creative individual is embracing a role which the discourse of the cultural mainstream has invented as a focus of scorn, derision or disapproval; and embracing it in order to demonstrate his supremacy through it. He is saying with Shylock “the villainy you teach me I will execute; and it shall go hard but I will better the instruction.” All writers with literary ambition (even if they claim to have no care for publication or eschew publication altogether) are sitting down alone at their writing-tables to cope with the rules and assumptions of address within their actual society (though not necessarily to obey or respect them). As Peckham says, completely intransigent opposition to the dominant social modes of one’s age could achieve an authentic triumph neither culturally nor artistically because the writer has to cope not only with immense pressure coming from society but also with the attendant pressure emanating from within himself, all kinds of situations eliciting modes in himself which he finds unacceptable. This is because he has, inevitably, internalised his society’s rules and assumptions, the things that Lionel Trilling (rather extending the term’s usual meaning) calls “Manners” and which as Trilling says “draw the people of a culture together and … separate them from the people of another culture.” (Trilling goes on to describe these crucial codes very carefully as “the part of a culture which is not art, or religion, or morals, or politics, and yet it relates to all these highly formulated departments of culture.”)[5]

By embracing the role pejoratively defined by the central culture and playing it to the full, Bohemian or Dandy could both acknowledge and defy the mainstream rules and assumptions, thus dramatising the tensions which accompany the assertion of a new and subversive social and artistic identity. And so could the “Provincial,” though the deviant role he or she has chosen is less flamboyant. The appeal of Emily Dickinson’s Complete Poems seems to me inseparable from the appeal of precisely such a sense of identity, emerging, as all anti-roles do, out of its opposite, transforming a weakness into a strength through creative activity.

2: The Provincial Stigma: Acceptance and Transformation

Like her fellow New-Englander, James Russell Lowell, Emily Dickinson saw the literary tradition she was to contribute to as centred in Britain and going back to Shakespeare.[6] The pressures against which she would have to fight in the course of cultural and artistic self-definition were powerful both in the (at the time fairly coherent) Anglo-American literary establishment and in middle-class society at large.

As a nineteenth century social idea, the pejorative notion of “provincial,” “pertaining to a narrow and limited environment,” appealed to that sense of superiority on grounds of mobility and wide acquaintance with the best people on which middle and upper class people increasingly thrived socially. Their own presumably broad and expansive environment they could see as “cosmopolitan” or “metropolitan,” their way of life as “urbane.”[7] The terms appropriate to this way of thinking were self-consolidatingly (or condescendingly) relative and offered their users a kind of sociological myth in which “la cité des lumières” and “le désert provincial” figured as the worldly equivalents for paradise and purgatory (or hell for those whose provincial immobility was complete).[8] For intellectuals and artists, such socially accepted, opposed or overlapping “catchwords” (A.O. Lovejoy)[9] offered the idea of the whole world as a structured hierarchical system of places. One’s art, one’s style, one’s flow of thought, it was assumed, would profit from location in, or free access to, places high up in the hierarchy like Paris or London. Those doomed to places lower down could be pitied or patronised. The fatal effects of their unfortunate location (or their lack of travel funds) could be detected in their art. And in the constantly interacting communion between artist and culture, writer and reader, the obsession with advantageous location was a ready bond.

Quentin Skinner has suggested that while we can trace the interplay of such assumptions within a culture and from part to part of that culture, it is very difficult to know how widespread and powerful they were at a particular time.[10] But any reader sensitive to the tones with which “manners” are invested in nineteenth century novels, magazines, journals, letters and diaries will have little doubt that notions of “provincialism” (or “provinciality,” attempts to maintain this distinction seem never to have succeeded) were important to literary minded writers and readers as well as, beyond them, to the whole classes to which they belonged. Such usage would be largely (though not exclusively) that of people rising in the social scale who wanted to ally themselves with a longer established upper class and consolidate themselves against those who came beneath. It grew from what Trilling calls “an uneasy pride of status. It always asks ‘Do I belong – do I really belong? And does he belong?’”[11] “Provincial” was, as Carl Amery[12] puts it, a collective prejudice like “nigger,” the result of and the excuse for discrimination. It was a way of choosing a scapegoat and unloading on him or her some of the burden of social tension.

One can see too why this line of demarcation would be important to intellectuals and particularly to literary intellectuals. A new kind of cosmopolitan literary intellectual (J.R. Lowell, Arnold and James are the first important ones) was asserting the need to maintain “standards” in literary culture. Such critics wanted to promote the values of a literary elite in the face of the relativity and diversity of the situation in which “a free intelligentsia … recruited from constantly varying social strata and life-situations” plays off various possible modes of thought and experience in order to compete for the favour of the public. (Karl Mannheim)[13] They wanted to strengthen their own hands as writers and critics by insisting that the art that they felt to be good was objectively so. But they were in fact also looking to their potential audience’s cruder impulses for support, bringing to the aid of their literary standards the latent snobbishness of the European (and East-Coast American) class system.

The terms I am concerned with were particularly suited to such a purpose at the time since “provincial” and “urbane” had been used in English to describe crudity or polish of a literary style for a hundred years before they acquired their nineteenth century social resonances. Arnold (with his gift for “finding a single convenient name for a complex of features plainly listed”[14]) produced the key formulation for English speaking readers in the course of popularising the critical terminology of Sainte-Beuve.[15] The provincial note, he said, occurred in the writer “left too much to himself” with “ignorance and platitude all round him,” too far from a “supposed centre of correct information, correct judgement, correct taste.” Writing produced in such circumstances would, he said, exaggerate “the value of its ideas” or rather “give one idea too much prominence at the expense of others.”

The essay which expounded these views, “On the Influence of Academies,” appeared in Britain in the Cornhill Magazine for August, 1864 and in America in Littell’s Living Age for September of that year, and then in Arnold’s first series of Essays in Criticism (1865), which circulated widely in both countries. The terms of this essay were powerfully influential. It was one of the “theoretically sophisticated legitimations” that emerge at particular times (according to Peter L. Berger and Thomas Luckmann)[16] to integrate the loose assemblages of codes, manners, myths and values which make up what most people “know” about their own society. Before Arnold’s essay set out on its way to influence, a well-travelled visitor to the Brontes in Haworth can be found suggesting that their fiction-making was “like growing potatoes in a cellar.” It was the activity of those who are provided with no interests “in actual life.”[17] In the wake of Arnold’s essay this extraordinary (and very common) idea that the non-mobile provincial was somehow cut off from “actual life” can be given all the confidence and precision which comes with a well-made theory: a reviewer in an 1885 Edinburgh Review found in George Eliot’s letters from Geneva:

for the first time a glimpse of her, apart from the everlasting thinkings which make her letters to the Bray family and the other intellectuals of Coventry read like so many little essays. Here, in a strange place and new atmosphere, she has life itself and other living creatures to think of and the change is extremely agreeable. (161, 532)

Here is Arnold’s characteristic linking of the ponderous over-abstract style with the situation too far from cosmopolitan centres, lively writing with the move towards them. Of course the native English metropolitan way of thinking about such matters was still powerful. For Thackeray and his circle, for instance, it was London that was important: they operated similar but less sophisticated distinctions to dignify their own ethos and patronise a Charlotte Bronte[18] or a Thomas Hardy. And what the Brontes’ visitor meant by “actual life” and the Reviewer by “life itself and other living creatures” is illuminated by one of Hardy’s early experiences. He tried to explain to Thackeray’s daughter that his interest in London fashionable society was so slight that he felt unsure of his vocation as a novelist. “Certainly,” Miss Thackeray responded discouragingly, “a novelist must necessarily like society.”[19] The assumption that metropolitan (or cosmopolitan) life and society were the only life and society (whether crudely or elegantly formulated) gave particularly flamboyant expression to the discriminatory mythology of mobile urban literary people.

The destructive and undermining energies that threatened an ambitious locally rooted writer (whose art might be uniquely dependent on its growth within a particular terrain, a provincial society) were thus rather overwhelming, and Arnold’s formulation expressed them and fed on them. Emily Dickinson was alert to the discriminatory pressures we are concerned with six years before she acquired her copy of Essays in Criticism (First Series). (Sewall, Life, p.678n.) In a poem written in 1860 the daisy which she consistently used to image her own most modest poetic stance (a modified equivalent of Gray’s flower “wasting its sweetness in the desert air”) is “Except for winds – provincial.” (154) Her letters provide frequent prose equivalents of this trope in the period in which her overriding creative drive became clear and strong. She is painfully self-deprecating in explaining her literary ambitions to the well-travelled Dr. and Mrs. J.G. Holland in 1861: “Perhaps you laugh at me! Perhaps the whole United States are laughing at me too! I can’t stop for that!” (Letters, p.413) And a few days later she used almost identical words (omitting the reference to the United States and confessing her ignorance of the English literary scene) in asking T.W. Higginson, the Boston literatus, to be her artistic “preceptor.”

It was at about this time that she wrote the lines (441) in which her “countrymen” are entrusted with the delivery of her “letter to the World” and she had recently explained to her brother and sister-in-law that she wanted her writing to make them proud “sometime – a great way off.” (Letters, p.380) Higginson, the representative of the “World” from which she felt she was excluded, saw himself as a cosmopolitan. He thought himself, in fact, a better cosmopolitan than James since “to be really cosmopolitan a man must be at home even in his own country.”[20] He found Amherst, the small provincial town, stifling. (Sewall, Life, p. 171) (James, writing at about the same time in similar terms, surmised that the smallness and sameness and dullness of provincial Lichfield, Samuel Johnson’s birthplace, would turn an intellectual appetite “sick with inanition” and probably occasioned Johnson’s subsequent “almost ferocious fondness for London.”)[21]

Higginson was always brandishing before Emily Dickinson the cultural advantages of Boston (the characteristic assumption being that she was in the wrong place). She resolutely and repeatedly refused his invitations (Letters, pp.450, 453, 460) and Higginson seemed for one moment to admit that his premise might be wrong (though the very form of the admission bears witness to the importance of the premise): “it isolates one anywhere to think beyond a certain point or to have such flashes as come to you – so perhaps the place does not make much difference.” (Letters, p.461) Most of Higginson’s letters to her at this time do not survive but he seems, on the strength of his cosmopolitan credentials, to have been urging on her the advantages which would accrue to her style and rhythm if she would at least take guidance from the literary custom and tradition of which he was a custodian. His situation anticipates that of Bridges in his correspondence with Hopkins, fascinated by this original talent yet unable to forget his role as a representative of “those who love a continuous literary decorum and are grown to be intolerant of its absence.”[22] And her resistance to urbane conventionalism was quite as intransigent as that of Hopkins (despite her apparent humility in her original approach to Higginson).

She refuses to alter her rhythms (which he complains are spasmodic) and explains that what is involved is not in the least the ignorance of “Customs” that he imputes to her (Letters, pp.409, 412): she sees the decorum he wants to impart as irrelevant to her purposes. However quaint and whimsical their expression, her letters to him insist over and over again on the overriding authority of her own gifts. It is fairly clear where Hopkins found the certainty to resist the pressures of the Anglo-American urbane poetic; it is much less clear where Emily Dickinson found a similar (if less invincible) confidence and this is a question I hope to answer in this pamphlet. When Higginson wrote about their encounters late in the century he still assumed (with Arnold and Bridges) that there was such a thing as “correct judgement, correct taste”: “I tried a little – a very little – to lead her in the direction of rules and traditions; but I fear it was only perfunctory, and that she interested me more in her – so to speak – unregenerate condition.”[23] The patronisingly tolerant note here is that of a man sure of the evaluative superiority his mobility invests him with. James saw the “cosmopolitan spirit” as breeding (in the cosmopolitan) a tolerant awareness of the merits of a wide variety of different ways of life. It made, he said patronisingly, “downright preference really very hard.”[24]

Almost from the first, Emily Dickinson’s creative confidence was rooted in her immobility. She would become a fixed point around which the daily light and shadow, the recurrent seasons, of her native terrain could pivot. This central imaginative impulse was obviously in stark opposition to the dominant and central patterns of social interaction of the age, with their heavy emphasis on centralisation and mobility in the direction of the centres. In her youth we find her excitedly writing to her brother that, “The world is full of people travelling everywhere” (Letters, p. 137); a very early poem has a telltale reference (unique in its openness to such a possibility) to “Fern odors on untravelled roads.” (140) But “untravelled roads” soon became part of the world of social and sexual achievement which is renounced by her mature poetry. The snow which is always such a powerful symbol of her own voluntary or necessary (the issue is usually ambiguous) exclusion from physical fulfilment and social interaction obliterates all hint of possible mobility:

… It fills with Alabaster Wool
The Wrinkles of the Road –
It makes an Even Face
Of Mountain, and of Plain –
Unbroken Forehead from the East
Unto the East Again – (311)

The lover, desired yet banned, who is so central a figure in her poetry is not only distant but mobile and this mobility becomes poetically equated with all his other forbidden attractions:

I envy Seas whereon He rides –
I envy Spokes of Wheels
Of Chariots, that Him convey –
I envy Crooked Hills
That gaze upon His journey –
How easy All can see
What is forbidden utterly
As Heaven – unto me! (498)

And throughout her work the symbolic distinction between migratory and non-migratory birds emphasises her acceptance of immobility:

The Southern Custom – of the Bird –
That ere the Frosts are due –
Accepts a better Latitude –
We – are the Birds – that stay. (335)

Her serious commitment to poetry and her serious commitment to fixed location seem to have twined together by 1861:

I shall keep singing!
Birds will pass me
On their way to Yellower Climes –
Each – with a Robin’s expectation –
I – with my Redbreast –
And my Rhymes… (250)

She is clearly comparing herself with more geographically favoured poets (the “Redbreast,” an embarrassingly mawkish fumble, is her own “freckled Bosom” of poem 1737). In a considerable number of poems cosmopolitan mobility (associated with birds, sea, winds, long journeys, Elizabeth Barrett Browning) is contrasted with this determined fixity. One can see how the combination of poetic vocation and commitment to place made her vulnerable to discriminatory pressures from the mainstream culture. A poem written shortly after number 250 begins as a self-consolidating defence against the pejorative force of the word “provincial” and ends by rescuing the word from its pejorative meaning and insisting on the anti-role she intends to play in art and life:

The Robin’s my Criterion for Tune –
Because I grow – where Robins do –
But, were I Cuckoo born –
I’d swear by him –
The ode familiar – rules the Noon –
The Buttercup’s, my Whim for Bloom –
Because, we’re Orchard sprung –
But were I Britain born,
I’d Daisies spurn –
None but the Nut – October fit –
Because, through dropping it,
The Seasons flit – I’m taught –
Without the Snow’s Tableau
Winter, were lie – to me –
Because I see – New Englandly –
The Queen, discerns like me –
Provincially – (285)

This is as theoretical as Emily Dickinson ever became and her way of making her case here is dependent on a use of language which is concerned less to represent concepts than to enact them. Poetic originality is seen as stemming from fidelity to the local terrain and community tradition (“we’re Orchard sprung,” “New Englandly”) which condition its exact perceptions. If this is a limitation then a similar limitation governs those closer to the cultural centres even at the highest point of the social hierarchy there. (Notice how unselfconsciously the definite article in the penultimate line places Emily Dickinson’s work within the ambit of Anglo-American Victorian literature.) Like Hawthorne in Our Old Home[25] she obviously felt that if any centre were even to be envisaged it must be in Old England rather than Higginson’s Boston. So far one can trace the “argument” of the piece. But it is only by reading the lines as poetry that the reader can see how a naive unconfidence of tone, which knows itself to be provincial in the pejorative sense, generates its own transformation from its unpromising materials until the older, more confident meaning of “provincially” is restored.

The quaintness of the idiom and the arbitrariness of the “natural history” should not be allowed to conceal the modernity of the point of view. In terms of social mythology Emily Dickinson’s modulation of tone is expressing the same kind of ambiguous provincial’s self-assertiveness as Hardy was when he sent Clym Yeobright to Paris. (Expecting to find there a richer and higher culture than his own, the hero of The Return of the Native came to the conclusion that the cosmopolitan life was “not better than the one he had known before, it was simply different.”) (Book 3 Ch. 1) Later in his literary career Hardy provided a more confident and passionate exposition of the adversary role out of which he wrote, explicitly confronting establishment assumptions (though with in-built concessions to them):

Arnold is wrong about provincialism if he means anything more than a provincialism of style and manner in exposition. A certain provincialism of feeling is invaluable. It is of the essence of individuality and is largely made up of that crude enthusiasm without which no great thoughts are thought, no great deeds done.[26]

This is fighting talk by the most powerful proponent of the anti-role that we are concerned with before William Faulkner.[27] But Emily Dickinson’s pioneering piece is more subtle. It was obviously crucially important for her future achievement that the Amherst poet had the confidence to articulate such a rationale before she entered into dialogue with the “cultural centres” in the person of a Boston cosmopolitan.

3: British Models and American Attitudes

Where had she gained this confidence? Ultimately, as we will see, she found it in her own local society which, as poem 285 rightly insists, conditioned her growth, her learning, her basic perceptions. But as an aspiring writer (and in the context of nineteenth century Anglo-American literary culture, which at least since Byron had encouraged aspiring writers to combine literary performance with a literary lifestyle) she found valuable models for her developing stance in the work and life of the Brontes. As with all models for a literary life-style, how far the models occasioned self-recognition in the Amherst spinster and how far imitation is the kind of chicken-and-egg question it is very difficult to decide precisely. All one can do is set down the correspondences. On the evidence of her poems, her letters and her friends (who approved Higginson’s reading of “No Coward Soul is Mine” at her funeral[28]) the Brontes (especially Emily) were of central importance to Emily Dickinson. In 1858 (when she was just beginning to devise what was to become her own characteristic life-style) she gave a copy of Mrs. Gaskell’s Life of Charlotte Bronte to her closest Amherst friend, her sister-in-law Susan Dickinson.

Now this book (published in 1857) was a revolutionary document in Victorian literary history because it surreptitiously but powerfully challenged the metropolitan or cosmopolitan assumptions which so much of Victorian literary culture casually took for granted. Of course, Mrs. Gaskell is careful to do conventional obeisance to these assumptions: if there is coarseness in Charlotte’s novels one is to blame the outspoken provincial society she grew up in, of which the few men she talked with were unfortunately representative. But an inference about the effect of the sisters’ provincial milieu which touches on Charlotte Bronte herself and not just her characters, can be made by the reader when Mrs. Gaskell says that “people in London, smooth and polished as the Athenians of old” were astonished to be confronted with “the uprising of an author capable of depicting with accurate and Titanic power the strong, self-reliant, racy, and individual characters which were not, after all, extinct species but lingered still in existence in the North.”

Emily Dickinson certainly took this inference: the year after penning her own commitment to seeing “New Englandly” she was referring to the Brontes enthusiastically as “the Yorkshire girls” (Letters, p.437) (Mrs. Gaskell’s antitheses still seem to haunt her phrasing when, in apologising to Higginson for expressing her idiosyncratic views, she asks him to blame “the bleak simplicity that knew no tutor but the North.”) (Letters, p.491) And many of the elements of her own life-style (and strategy for creation) were very similar to those of “gigantic Emily Bronte” (Letters, p.721) whose poetry, as Mrs. Gaskell reported Charlotte’s words, was “not at all like the poetry women generally write” but “condensed and terse, vigorous and genuine.”[30] In preferring her dog to other companions, in her dislike of travel and town circles and her withdrawal from local society, in her reluctance to publish the poems she had secretly written, the Emily Bronte of Mrs. Gaskell’s portrait certainly seems to have had an influence. And what is closer to the nub as far as Emily Dickinson’s intransigence is concerned, Emily Bronte in Mrs. Gaskell’s account repudiated the “rules and traditions” of a French literary education when they were presented to her by M. Heger in Belgium. She said that she saw no good to be derived from the method of imitating the style of master works; by adopting it, in her view, she and her sister would “lose all originality of thought and expression.”

Emily Dickinson responded to Higginson’s advice with similar astringency: “I haven’t that confidence in fraud which many exercise… I never consciously touch a paint, mixed by another person.” (Letters, p.415) It is important here to notice that Mrs. Gaskell’s Life presents two versions of the provincial anti-role, one (Charlotte) more tractable than the other, and that the Amherst writer had been “ecstatic” about Jane Eyre as early as 1849. (Letters, p.77) Her strategy of withdrawal was both more ambivalent and more histrionic (the white veil, the communication through a screen or from a landing) than that of her Haworth namesake. It is reminiscent of the way Jane Eyre isolates herself from Rochester’s visitors, remaining in her room, sitting at windows or in alcoves, preserving herself for her art (which Rochester recognises to be original) and for the growing relationship with him. The Bronte life-style allowed Emily Dickinson to identify in herself not only an intransigence like Emily Bronte’s but also a vein of feminine subordination like that in Charlotte’s novels. “I had no Monarch in my life, and cannot rule myself,” she explained to Higginson in August 1862 (Letters, p.414); he was only one of several men whom she cajoled into playing the role of “Master” (Letters, pp.333, 373) which Rochester and M. Paul play for Charlotte’s heroines.

It is not part of my purpose to explain the unresolved psychological conflict which lay deep in Emily Dickinson’s personality. My interest begins at the point where the conflict manifests itself in her literary life-style with its combination of unconfidence and intransigence. Some such strategy of creative withdrawal was called for by her own situation. She was impatient with the obviousness and longwindedness of most of Amherst social intercourse (“I don’t speak things like the rest”)[31] and the literal-minded religiosity of her family (“They are religious – except me – and address an Eclipse, every morning – whom they call their ‘Father’”). (Letters, p.404) Company seemed to exhaust the energy she needed for her literary enterprise.[32] But the British model gave shape to her own sense of artistic self-realization. When she gave Mrs. Gaskell’s Life to Sue Dickinson (her most respected and favoured reader as we shall see) she was to some extent begging her mutely to notice the precedent it offered for her own prospective literary strategy.

It is that strategy that makes her stand out like a sore thumb from the main lines of American literary history. She was born too soon (1830) to be contemporary with “Local Colorists” like John Hay (b. 1838), Edward Eggleston (b. 1837), G.W. Cable (b. 1844), Alice Brown (b. 1857) and Sarah Orne Jewett (b. 1849). She was born too late to belong with the writers to whom F.O. Matthiessen in his book of that name attributed the “American Renaissance”: Emerson (b. 1803), Hawthorne (b. 1804), Melville (b. 1819), Whitman (b. 1819), Thoreau (b. 1817). The group of writers she was closest to in place and time (and they were older than her) were the Boston literary men who contributed to the Atlantic Monthly and other journals: people like James Russell Lowell, Oliver Wendell Holmes, William Ellery Channing and, of course, Higginson (who probably mattered to her, as we shall see, more for what he represented than for who he was).

In some ways her regionalism anticipated that of the “Local Colorists.” But with exceptions (like Sarah Orne Jewett) these writers presented their local terrain not in terms of a post-colonial relationship between Old and New England but as part of a thriving, growing and above all integrated whole. (The Civil War was not far behind them.) As Martin S. Day says, the wedding of Frowenfeld and Clotilde in Cable’s novel The Grandissimes (1880) is given a characteristic representative quality: in it we are to see the union of new Americanness with old New Orleans regionalism.[33] How widespread among the wider American reading public such nationalistic literary attitudes were is open to debate. What is clear from the content of a successful mass-journal like Harper’s is that the popular audience was now avid for picturesque local detail. Whether it was of Lincolnshire, the Isle of Wight, Upper New York State, Virginia, the Scottish Highlands, Haverhill, Mass., the upper Thames, Canada, St. Louis or Yorkshire (places that on various pretexts were described and sometimes pictured in Harper’s for 1883-4) did not seem to matter. It is not surprising then that the first published selection of Emily Dickinson’s poems, their local flavour emphasised in a section called “Nature,” had a considerable success (with readers rather than reviewers) in 1890.

In the writers of the “Renaissance” generation, artistic engagement with the local terrain was subordinated either to the search for a Transcendentalist ethic and metaphysic (as in Emerson) or for a regional historical and cultural tradition (as in Hawthorne). Thoreau comes closest to Emily Dickinson in his thinking:

If these fields and streams and woods, the phenomena of nature here, and the simple occupations of the inhabitants should cease to interest and inspire me, no culture or wealth would atone for the loss… If Paris is much in your mind, if it is more and more to you, Concord is less and less, and yet it would be a wretched bargain to accept the proudest Paris in exchange for my native village.[34]

The pragmatism and particularity of Walden (1854) grows out of such a view. But a sentence I left out above reveals by its moralistic tenor Thoreau’s kinship with Emerson, his distance from Emily Dickinson: “I fear the dissipation that travelling, going into society, even the best, the enjoyment of intellectual luxuries, imply.” These would not have been Emily Dickinson’s reasons for avoiding mobility and society. She thought her own society “the best,” as we shall see, and her irony would have triumphantly embraced words like “dissipation” and “luxuries” to express her own imaginative appetites.

What about the literary group to which she was closest? Apart from the dialogue with Higginson, and her acquaintance with some occasional participants in the Boston scene like Josiah Holland and Helen Hunt Jackson, Emily Dickinson’s main link with the neighbouring intellectual centre was the regular arrival in the house of the family copy of the Atlantic Monthly. Not only the first contact with Higginson but strange enthusiasms for little known American writers like Harriet Prescott Spofford emerged from her reading in its pages. But when one sees Emily Dickinson in the context provided by contemporary Boston one becomes conscious of superficial affinities and underlying contrasts. And it is the difference of her strategic thinking from Lowell’s that counts.

In the first place, Lowell’s sense of the relationship between England and America was cosmopolitan: as Henry James pointed out in an obituary,[35] Lowell saw the two nations as a single community of language and manners. Emily Dickinson saw her own local community as defining an identity quite different from that which would obtain “. . . were I Britain born.” Lowell’s sense of regional identity, while it can espouse a Yankee humour akin to hers in the Biglow Papers, often comes to rest in “the idea of the great puritan effort” which New Englanders had embodied “in a living commonwealth.”[36] This was a frequent stress among the Boston writers. “There was a Puritan spirit as well as a Puritan commonwealth” says a reviewer in the North American Review for 1857. From both, he claimed fervently, had come “the results in which we glory.” (84, 428) Now, despite her deep apprehension and dramatisation of the spirit of Connecticut valley religion, Emily Dickinson was deliberately cool on this. She spoke of Puritanism as “old fashioned” (Letters, p.699) and said of her own inheritance of ancestral traits: “My Puritan Spirit ‘gangs’ sometimes ‘aglay.'” (Letters, pp.797-98) The coy echo of Burns (an insistently different regional adjunct to the English literary tradition) suggests stylistically how she saw herself in this respect.

Regional allegiance for the Boston writers was a way of reconciling the opposing pulls of the Westward nation and Europe. Their literary nationalism was half-hearted. The Crevecoeurian resonances of an article in the Atlantic titled “The New World and the New Man” diminish considerably when the writer takes the Rev. Francis Higginson (Emily’s ancestor) as its hero. (He appears proclaiming that “a sup of New England air is better than a whole flagon of old English ale.”) Halfway through the article the author admits that “we are still dwelling chiefly on the New England type”. (2, 1858, 521)

The reason for such hugging of the eastern seaboard becomes clear when another Atlantic contributor in 1860 suggests that “the American character is now generally acknowledged to be the most cosmopolitan of modern times.” (5, 257) A capacity to combine discrete national identity with cosmopolitanism is at a premium in the Atlantic. Matthew Arnold is praised there in 1865 for his ability to be cosmopolitan and characteristically English at once. Neither the Atlantic nor the North American Review showed any hesitation about accepting Arnold’s definition and approving his pejorative sense of “provincial” when reviewing Essays in Criticism. Such ready acceptance of the discriminatory attitude in the journals of her closest literary centre must have reinforced for Emily Dickinson the sense of inferiority instilled not just by her own copy of Essays in Criticism but by a whole culture; it must also have emphasised the need for a stand of her own.

The salutary vitality of that stand is most clearly visible if one sets it alongside the position taken by Oliver Wendell Holmes in his articles for the Atlantic, “The Autocrat of the Breakfast Table” (and later “the Professor at the Breakfast Table”). The dialogue form of these pieces is useful to their author: other participants at breakfast can present stock positions – America is the only place where man is full-grown, Boston is “the remote provincial corner of a provincial nation” – before the Professor or the Autocrat gives forth. In the most widely applauded and locally quoted of these articles (1, 1857-8, 734-44) two such minor speakers introduce the extreme positions of the debate: Paris, says the first, “is a heavenly place after New York and Boston.” The second airs the notion that Boston State House is the hub of the solar system. The Autocrat then enters the discussion and by suggesting that all towns and cities have the latter kind of self-centred view of themselves establishes that his own is to be a moderate and common-sense attitude to the issue. Boston “has some right to look down on the mob of cities” by virtue of its superior monthly publications, command of spelling, its fish-market and its fire department. Under cover of this kind of irony the speaker presents a model of literary culture as Boston displays it. Boston “drains a large water-shed (New England) of its intellect and will not itself be drained” (by New York). Whether this is a good thing or not is not decided: “There can never be a real metropolis in this country until the biggest centre can drain the lesser ones of their talent and wealth.” One can only assume that the Autocrat regards this process as healthy as far as Boston is concerned, unhealthy if that city is to be suborned by New York. (At this point the argument seems characterised less by irony than by evasion.) 
In all events he hates “the little toad-eating cities” that resist the process (even if quiet can be found there). Emily Dickinson, living in the watershed and refusing to bring her “talent” to the centre on Higginson’s invitation, focusing her art in the life of one of “the little toad-eating cities” was preparing to exemplify a very different approach to the literary possibilities of the region.

4: The Uses of Provincial Society

Her attraction to the strategies of writers like the Brontes and George Eliot is, of course, susceptible of a different explanation. Her “interest in English writers of her time,” says Karl Keller, is largely “an interest in the recognition of female accomplishment.”[37] Of course, her femininity is a crucial aspect of Emily Dickinson. (See Guide to Further Reading.) But I suspect that my account of the influences of the Brontes on the Amherst poet has already demonstrated how inseparable are femininity and provincial rootedness in these cases. Indeed, the concurrence of female accomplishment and creative use of the provincial milieu to oppose the values of the “centres” (in Jane Austen, the Brontes, George Eliot, Mrs. Gaskell) may well be the English literary phenomenon we really need to acknowledge here.

A common way for provincial writers to outface snobbish discrimination from the cultural centres was to identify themselves with a local gentry. When Charlotte Bronte was attacked by London reviewers she appealed from their judgement to that of “ancient East Lancashire families”: “the question arises, whether do the London critics, or the old Northern squires, understand the matter best?”[38] The words “ancient” and “squires” here suggest validation for her values from old provincial codes; so, in the novels, do the rooted vitality and sophistication she gives to her invented local squires Rochester and Yorke when seen in contrast to the enervate and unoriginal spirit that people like the Sympsons (Shirley, Ch.26) bring from the South of England. Even Hardy, in the Life he wrote for publication under his second wife’s name, was to appeal to the notion of a provincial aristocracy, tracing his own parental line to the le Hardys of Jersey, “an old family of spent social energies”[39] (and concealing beneath this fraudulence his real credentials as a representative of rural society).

Behind all such defences lurks the nostalgic idea of “a rural society separate from the urban social community” and still to be found intact in England or the Old New England town or the Southern slave plantation (Max Weber[40]). James, visiting Ludlow, was conscious of traces of a “society good of its kind on which a provincial aristocracy had left so sensible a stamp as to enable you to measure both the grand manners and the small ways.”[41] The components of such a society were disappearing at different times in different places and as Raymond Williams shows[42] there was always an element of nostalgic fiction making in accounts of it. James, from his thoroughgoing cosmopolitan perspective, could discount its attractions: it was probably very boring, he said. But the notion of such a society offered an honorific self-image to the provincial middle and upper middle classes. For literary minded country people, Jane Austen’s novels would dramatise such a self-image showing this society, “good of its kind,” resisting the mobile bourgeoisie like Mrs. Elton and superficial metropolitans like the Crawfords.

There is no doubt that Emily Dickinson’s Amherst offered itself for interpretation in such terms. A commentator in the 1860’s found it “really charming in its simplicity, geniality and intellectuality. It was, of course, a somewhat provincial society, but it was sound to the core, educated, refined and at times approached brilliancy. When I went to New York later to reside, I found nothing better.”[43] John W. Burgess was an American Southerner and so had his own cultural reasons (like the Nashville Agrarians later) for promoting an honorific view of gentry-led rural societies. He must have responded favourably to the way the New England town observed rural pieties. (Emily Dickinson’s grandfather, father and brother, for instance, were successively known as “the Squire.”) (Sewall, Life, pp.8, 41-42, 295-99) He portrays her sister-in-law, Susan Dickinson, as “a really brilliant and highly cultivated woman of great taste and refinement” with “a very keen and correct appreciation of what was fine and admirable”.

As “the social leader of the town,” Susan, he says, was “decidedly aristocratic in her tastes”: her friends were “generally scions of the best American families or men who had distinguished themselves highly.”[44]

His sense of the social potential of the role of provincial aristocrat, the dignity and elegance it could give to an immobile life-style, was shared by his hostess. Before settling in Amherst Susan Dickinson had enjoyed to the full the theatre and opera in Baltimore, aware that she might “not always be so situated, that I can see much of this big world of ours.”[45] Once at the centre of Amherst society, however, she attributed to its social rituals (and thereby to her own central part in them) a great deal of dignity and some sophistication:

When the evening came an interesting dinner party of men, at our own house, caught the spirit of the wicked precedent about to be established, and insisted upon strolling up to overlook the dancing … As this happened also to be the night of the President’s [of Amherst College] reception, is it strange that there were wonderings with aspersions cast on the distinguished men who failed to appear on that honourable occasion?[46]

She does her best (by means of a style which it would be unfair to fault because it doesn’t quite achieve the measured elegance it is looking for – her talents, after all, were social –) to give the Amherst status-hierarchy equivalence with that of the “big world” beyond. She had been pleased, obviously, to meet in Paris “another old habitué of our house and the mansion,” she tells one of her special friends, Samuel Bowles Jr., in 1870.[47] (He had earlier written, perhaps tactfully, of finding Boston society inferior to that of Paris on the one hand and on the other to that of her own Amherst circle.)[48] And from her sense of her own established social position came her frequent clear and assured directives on the correct relation between the sexes. Her daughter is advised, for instance, in dealing with a suitor:

the part of his letter which was meant to be rude was the emphasis he laid on his evening with Mrs. Todd after what I had said to him of her. You would flatter him by showing pique or in any way – so two weeks hence write him a brilliant letter such as he could not write.[49]

Leyda’s selection from her letters and occasional writings does give a vivid impression of her activities in maintaining in almost the “aristocratic” way envisaged by the Southern observer the idea of Amherst as a society “good of its kind.” Only an occasional shrillness (in the last quotation one should note in her defence that her husband’s mistress is involved) reminds one of Mrs. Elton rather than Emma, reveals that Susan climbed socially to become a Dickinson, hints at what Trilling called “an uneasy pride of status.”[50]

To what extent did Emily Dickinson share her sister-in-law’s sense of participation in “good society,” and gain artistic confidence from her own inherited place in it to stand up to Higginson and his cosmopolitan standards?

Her letters to members of the local community have none of the kind of appeal to a shared code of elegant manners which Sue Dickinson fostered in Amherst. Emily’s aunt Elizabeth was at the age of nineteen practising in her sentences the arch assertiveness of manner to which the family status entitled them all: “I am not in the habit of writing to gentlemen more than once if I do not get an answer. However I will not censure you …” (Letters, p.6) In contrast the tenor of the poet’s epistolary style is always purely personal, involves the “I” and the “Thou” of the correspondence and no one else. Yet there is no doubt that the elegance and assurance of a “society good of its kind” emerges in her poetry. I am not thinking only of the satiric poems which note “What Soft – Cherubic Creatures – / These Gentlewomen are -” (401) or see that

She dealt her pretty words like Blades –
How glittering they shone –
And every One unbared a Nerve
Or wantoned with a Bone – (479)

In her poem about the snake, for instance, drawing-room wariness of an attractive but untrustworthy gentleman (like that Sue recommended to her daughter) is enacted by the precise social language, the placing of “Fellow” at the end of the line:

But never met this Fellow
Attended, or alone
Without a tighter breathing…

The analogy with polite society is already implicit when more acceptable, harmless (but boring) animal companions are greeted, the speaker’s relief at the freedom to patronise emerging in the verbal extravagance:

I feel for them a transport
Of cordiality … (986)

The social resonance of sexual challenge can haunt her landscape. The perfidious “Light” in poem 812 is personified ambivalently at first: it may be a suitor and possible lover lurking about the house, it may be an intruder of distinctly lower class:

It waits upon the Lawn…
It almost speaks to you.

And the social nuance here anticipates the trenchantly superior yet infinitely bereaved conclusion: “As Trade had suddenly encroached / Upon a Sacrament.” In one of her best-known poems the gaucheness of a phrase she uses twice in her letters (“I can’t stop for that”) (Letters, pp.412, 413) is transformed and dignified by the conventions and courtesies of a mannered society:

Because I could not stop for Death –
He kindly stopped for me –
The Carriage held but just ourselves –
And Immortality – (712)

(“Immortality” is, of course, the chaperon.) Her most intimate disclosures may generalise outwards from the furnishings of her father’s house towards ancestral society:

We outgrow love, like other things
And put it in the Drawer –
Till it an Antique fashion shows –
Like Costumes Grandsires wore. (887)

Even in depressed and purposeless isolation her protagonist remains part of a carefully mannered community:

I tie my Hat – I crease my Shawl
Life’s little duties do – precisely –
As the very least
Were infinite – to me –

I put new Blossoms in the Glass –
And throw the old – away –
I push a petal from my Gown
That anchored there – I weigh
The time ’twill be till six o’clock
I have so much to do –
And yet – Existence – some way back –
Stopped – struck – my ticking – through – (443)

The very rhythms in all these examples are redolent with strong social assurance. They are at the opposite pole from those in which a consciously naive jauntiness accompanies “The simple News that Nature told – With tender Majesty” on its way to Higginson and his kind. (Some of her best poems of course achieve the impact of primitive painting with such coy rhythmical effects by placing a naive child-speaker at the centre of a recognisably formal social situation.)

The absence from the letters of such social resonances is not difficult to explain. The letters were only one remove from the intensity and unpredictability of feeling which left her vulnerable to the point of emotional nakedness in proximity to people.[51] She could just master the necessities of relationship in such oblique, idiosyncratic and ambiguous prose. A letter was “the mind alone without corporeal friend.” (Letters, p.460) Poems, however, were at two removes, were constructed fictions:

The Vision – pondered long –
So plausible becomes
That I esteem the fiction – real –
The Real – fictitious seems – (646)

The “I” of her poetry was, as she said, not “Herself” but a formally necessary construct, a “supposed person.” (Letters, p.412) What is more, her fictions were metrical fictions. Everything that went on behind her closed bedroom door (or in her head as she walked in the garden) when poetry was under construction was governed by the animating order and vitality of rhythm. And once her apprentice days were over her staple metres were not the “mainstream” rhythms of Mr. and Mrs. Browning or Keats (though her capacity for versatility and experiment must not be ignored): they were to be found in her hymnal (if they did not echo through her head from a childhood’s regular attendance at “Meeting”). They were eighteenth century rhythms which could be refined when she wished to invoke the balance and antithesis of a mannered society; yet they could be simple when simplicity was required, implicitly recalling the religious tradition which had bound the whole people of her region together at a deeper level than that at which the upper class now expressed its superior status. They could thus be used to undermine such superiority as well as to share it.

5: Reader as Collaborator: Emily Dickinson’s Alternative Audiences

Social assurance idiomatically expressed enters poetry through something that might be called collaboration. It may be little more than envisaging an audience and choosing a sympathetic reader. Or it may be a great deal more, and, as Raymond Williams says, go “beyond conscious cooperation . . . to effective social relations in which, even while individual projects are being pursued, what is being drawn on is trans-individual, not only in the sense of shared (initial) forms and experiences, but in the specifically creative sense of new responses and formation.”[52] The mainstream assumption about this was that social confidence was transmuted into stylistic confidence in the salons and coteries of the centres where the socially prestigious and the artistically talented could congregate while their distinctive graces rubbed off on each other. Even the peasant poet, according to this way of thinking, while he might bring to his work a strong folk inheritance, would perfect his style under the aegis of central elite standards as Clare did in the Fitzwilliam residence at Milton.[53] But the growth of the rebellious anti-role we are concerned with, the provincial literary life-style, was accompanied by collaboration in smaller, tighter, semi-incestuous groupings: Wordsworth and Dorothy, perhaps; the Brontes; Hardy and Emma (whose mutual influence is charted by Robert Gittings[54] and evidenced in Emma’s Some Recollections[55]). In all these cases an early intense familial and imaginative bonding was sabotaged by later events yet laid a collaborative foundation for later artistic practice.

Any attempt to explain how the “I” of Emily Dickinson’s poetry functions at times in a context of strong traditional social awareness has to recognise the collaborative importance of her relationship with Sue Dickinson. Apart from Higginson, only Sue was given any kind of real entrée to the poet’s imaginative life. (Samuel Bowles, the other serious contender, she kept at arm’s length because of his inclination to tamper with her poems and print the revised versions in his newspaper, The Springfield Republican.) (Sewall, Life, pp.475-6) And while Higginson received altogether 102 poems, Sue Dickinson received 276 in a constant flow throughout the poet’s creative life. (Sewall, pp.659, 430) Sue, “the social leader of the town,” was thus Emily Dickinson’s most steadily envisaged audience.

We begin to understand why she rated so highly when we understand something of her initial significance for the poet. Emily herself, the Squire’s daughter, had been notable among the other Amherst girls for her brilliance and wit before her withdrawal into creative seclusion began. (Sewall, p.8) Her original intensity of feeling towards Sue Dickinson met some kind of obstacle in its early years (during Sue’s engagement to and immediately after her marriage with Emily’s brother Austin). Whether Sue’s now foreordained status as the Squire’s lady had anything to do with it or not, about this time Emily began to send her poems which suggest that she saw their roles as related, Sue being the one who would triumph in life while she, the “I” of the poems, triumphed in art. The first such justification of her ascetic strategy, “Success is counted sweetest” (67) ends:

Not one of all the purple Host
Who took the Flag today
Can tell the definition
So clear of Victory
As he defeated – dying –
On whose forbidden ear
The distant strains of triumph
Burst agonized and clear!

The implication of the poem (as part of a continuing correspondence) is that the poet will “define” the social and sexual triumph which she imagines her ex-classmate merely living. Another poem (299) addressed as a letter to Sue, “Your Riches – taught me – Poverty” (Letters, p.400), contrasts with Sue’s “Dominions,” “Queen”-like “Glory” and “Wealth” the poet’s ascetically derived and consolatory “knowledge”:

That there exists – a Gold –
Altho’ I prove it, just in time
Its distance – to behold –

A third piece was sent to Sue and signed “Emily.” (Sewall, p.211) It begins “I showed her Heights she never saw -” (446) and presents the poet as having believed that imaginative ascendency would justify her own supremacy in the relationship with Sue. It was, the poem suggests, when she was rejected on these terms that she imposed on herself the ascetic penalty which released her poetry:

And then I brake my life – and Lo,
A Light, for her, did solemn glow,
The larger, as her face withdrew –

Only the first of these pieces is an objectively realised poem: the other two are half-coherent personal documents. But taken together they do explain how Sue’s continued availability as a reader could have been crucial to Emily’s literary enterprise. They suggest that there was some kind of complementarity between Emily’s creative vocation and the social ascendency of her sister-in-law which laid the foundations of an effective literary collaboration.

Sue Dickinson was the poet’s only discriminating critic and audience until 1861. But in that year Sue, rejecting two successive drafts of a final stanza for poem 216, “Safe in their Alabaster Chambers,” said of the first stanza “You never made a peer for that verse and I guess your kingdom doesn’t hold one.” Emily’s reply is characteristically ambiguous, grateful for the praise yet chafing a little at the critical reservations. In it she says “Could I make you and Austin – proud – sometime – a great way off – ’twould give me taller feet” (Letters, pp.379-380) and shortly after this she wrote the first of many letters to her cosmopolitan mentor Higginson. Accepting in the light of his (probably quite as discouraging) comments that she should not try for immediate publication, she entered, under the influence of conflicting forces, her period of most intense and powerful creation.

Higginson, then, entered her life as the representative of the literary centres “a great way off” and he offered as a correspondent the prospect which she subsequently kept in view of an alternative audience for her work. Susan’s continued acceptability as an audience, however, if measured by the steady flow of poems and, as Richard Sewall says (Life, p.201), the virtuosity of Emily’s messages to her, was not in question. “To Miss You Sue is power. The stimulus of Loss makes most Possession mean,” she wrote in a letter of 1871. (Letters, p.489) “Power” and “mean” are the two crucial words here, focusing as they do on the imaginative enterprise which was fed by the relationship. But there is no suggestion with Sue (as there is with other correspondents) that letters are preferable to visits. “To see you unfits for staler meetings” she told Sue in 1870. (Letters, p.477) The keynote of Emily’s later letters to Susan was, as Sewall says, “admiration and gratitude” on Emily’s part for the privilege of “living near Sue.” (Life, p.202) Her creative retreat (she did not go even to Susan’s house next door for fifteen years) did not preclude an interest in people:

The Show is not the Show
But they that go –
Menagerie to me
My Neighbor be – (1206)

So her sister-in-law’s visits would be doubly welcome. Susan Dickinson would bring with her her lively commitment to social occasions and social ascendency: and as Sewall says “The qualities that irritated many people in town … Emily apparently cherished as welcome change from the usual Amherst fare.” (Life, p.202) But this very infusion of social energy made its creative contribution (like everything in her life). According to John Burgess, Sue had a very “vivid imagination” and narrative gifts with which “if she had had sufficient application” she could have “rivalled Cervantes”[56] and Emily Dickinson recorded a similar tribute. “With the exception of Shakespeare”, she quaintly and concisely put it to Sue, “You have told me of more knowledge than anyone living.” (Letters, p.755)

Karl Keller’s account of the poet’s circle of literary friends[57] confirms the unique importance among them of Sue and Higginson while recognising some of the ways she learned to keep her distance even from them. Social and literary responsiveness were combined in Sue’s stimulus rather as they were in Charlotte Bronte’s contribution to the social dimension of Wuthering Heights:

Emily would never go into any sort of society herself, and whenever I went I could on my return communicate to her a pleasure that suited her, by giving the distinct faithful impression of each scene I had witnessed. When pressed to go, she would sometimes say, “What is the use? Charlotte will bring it all home to me.”[58]

While the Amherst relationship lacked the warmth and mutuality the Bronte sisters could depend on, Sue’s creative awareness and her tone of social and moral superiority would reinforce at once in the poet the sense of social and the sense of imaginative status. At the same time it is worth noting that after the instances mentioned earlier there is no record of Emily asking Sue for specific advice about her work. Finally, as T.H. Johnson says, she had to trust her own judgement just as she had with Higginson.[59]

In both cases it seems to have been mainly the stimulating intellectual focus, the audience that could be personalised in the individual, which made her alternative readers important to Emily Dickinson. Both readers were capable of the specialised literary response that can be questioning as well as approving. Both therefore offered her the elements of what Raymond Williams calls collaboration. Johnson[60] says that once Higginson had discounted the possibility of publication and shown his timidity and imperceptiveness the relationship with him became a game which she played methodically until her death. But this is to reckon without the vulnerability of the writer, the writer’s need for an audience, however defective. Emily Dickinson told Higginson later that when he responded to her first letter he “saved her life.” (Letters, pp.460, 649) Such was her need at that point for the alternative audience. Its availability was, as we shall see, from then on reflected in the poetry. The poet’s steady attentions to Higginson represented for her the necessities of competing within the Anglo-American literary community, however painful the sense it sometimes gave her of her own inadequacy. The alternative pole of her creative life was personalised in her relationship with Sue Dickinson: through her she was able to maintain her access to the drive and confidence which status in a society “good of its kind” could instil.

6: The Poetry and its Tensions

That she was emotionally predisposed to look in these two directions for attention is beyond dispute. Two polar attitudes can be detected in the poetry she wrote or preserved before the crucial letter to Higginson, one of anxious humility and one of pert superiority. From about 1856 the latter tendency sought its confirmation in verbal felicity as the poet withdrew from society to make her lexicon, as she told Higginson, her “only companion.” (Letters, p.404) The effect in the spasmodic and very uneven poetry of these years is to present us with achieved moments of elegant superiority, sharp polysyllabic words embroidering grace-notes on an insistent first-person singular:

…..My departing blossoms
Obviate parade. (18)
Wherefore my baffled fingers
Thy perplexity? (69)
….Almost thy plausibility
Induces my belief. (130)

“I,” “Me” and “my” are the forms usually assimilated to such effects: the plural (a kind of royal “we”) can be wittily used to modify polysyllabic assurance with the broader social verities implicit in hymn-book rhythm and rhyme:

To hang our head – ostensibly –
And subsequent to find –
That such was not the posture
Of our immortal mind -… (105)

The alternative note of humility comes to birth in early poems addressed to her brother. At first it is untroubled and genuinely childlike. It is the natural voice of communication within the family clan: “We’re all unlike most everyone,” she had written to Austin at the age of 22, “and are therefore more dependent on each other for delight,” and shortly afterwards: “I wish we were always children, how to grow up I don’t know.” (Letters, pp.239, 241) This is the tone that flows unselfconsciously into her poems addressed to Austin, and the earliest ones she wrote for Sue. By 1861 she is sufficiently the selfconscious artist to use it in constructing a fiction:

Over the fence –
Strawberries – grow –
Over the fence –
I could climb – if I tried, I know –
Berries are nice!

But – if I stained my Apron –
God would certainly scold!
Oh, dear, – I guess if He were a Boy –
He’d – climb – if He could! (251)

And this kind of child-persona was to offer a mask for similarly powerful subconscious materials from time to time from then on. But when she speaks now in less of an assumed voice her tone begins to bear the anxious weight of her sense that she should apologise to genteel readers for the distance between the world of her imagery and the cultural centres:

I’ve known a Heaven, like a Tent –
To wrap its shining Yards –
Pluck up its stakes, and disappear –
Without the sound of Boards
Or Rip of Nail – or Carpenter –
But just the miles of Stare –
That signalize a Show’s Retreat –
In North America – (243)

Compare with the uneasy cadence of that phrase the concluding line of a poem about an Amherst funeral, “There’s been a Death, in the Opposite House”:

…There’ll be that Dark Parade –
Of Tassels – and of Coaches – soon
Its easy as a Sign –
The Intuition of the News –
In just a Country Town – (389)

What is involved is the coy address of the provincial to the cosmopolitan which becomes explicit at about the time of her first letter to Higginson (1862):

This is my letter to the World
That never wrote to Me –
The simple News that Nature told –
With tender Majesty
Her Message is committed
To Hands I cannot see –
For love of Her – Sweet – countrymen –
Judge tenderly – of Me (441)

A tone of voice has here expanded to fill a poem, a cadence has achieved appropriate content. It is the tension between these two polar impulses of style that gives Emily Dickinson’s poetry from about this time its capacity for irony, ambiguity and unconventional syntax. Here, for instance, is poem 303 (which Johnson dates sometime in 1862):

The Soul selects her own Society –
Then – shuts the Door –
To her divine Majority –
Present no more –
Unmoved – she notes the Chariots – pausing –
At her low Gate –
Unmoved – an Emperor be kneeling
Upon her Mat –
I’ve known her – from an ample nation –
Choose One –
Then – close the Valves of her attention –
Like Stone –

There are no extremes of verbal sophistication and naivety. But one can hear in stanzas two and three a wishful consciousness of the “World” and the “countrymen” of “This is my letter…”:

… the Chariots – pausing –
At her low Gate –
… an Emperor be kneeling
Upon her Mat –
I’ve known her – from an ample nation –
Choose One…

The alternative impulse towards superiority and exclusiveness becomes the major emphasis of the poem:

The Soul selects her own Society –
Then – shuts the Door
Unmoved – she notes the Chariots…
Unmoved – an Emperor be kneeling
[i.e. though an Emperor should be kneeling]

But the knotted intensity of the poem grows from its use of language and imagery which can accommodate major and minor emphases. The first stanza, for instance, goes on

Then – shuts the Door –
To her divine Majority –
Present no more –

Is this, we have to ask, “Majority” as in poem 435

(‘Tis the Majority
In this, as All, prevail –
Assent – and you are sane -)?

If it is, then the “divine Majority” is the (democratic) “ample nation” of stanza 3, and the soul is assumed to have already captured the nation’s heart (“her divine Majority”) even though she denies her queenly presence. “Present” in this reading is the participle. An alternative meaning for “majority” however provides a reading which is more in keeping with the poem’s peremptory opening: “Do not try to introduce other candidates for the soul’s approval once she has come of age as a divine individual!” “Present” has now become imperative, the rhythm of the line much more forceful. The poet’s ready resort to the ambiguities her lexicon primed her with (a habitual practice) guarantees that while we prefer this second reading we cannot entirely exclude the first, especially when we find that it anticipates the poem’s final image:

I’ve known her – from an ample nation –
Choose One –
Then – close the Valves of her attention –
Like Stone –

“Like Stone -” suggests the heroic intransigence of the individual act of choice; along with “Valves” however it allows the subordinate recognition that petrified or fossilised valves could not be opened even if the decision (and flow of human sympathy) were reversed. In this context the repeated “Unmoved” in stanza two becomes less ritually portentous, taking on the same kind of ambiguity as that in poem 216 (“Safe in their Alabaster Chambers”) – not only “resolute” but also “incapable of being touched by human feeling”:

Unmoved – she notes the Chariots – pausing
At her low Gate –
Unmoved – an Emperor be kneeling
Upon her Mat –
(Compare
Safe in their Alabaster Chambers –
Untouched by Morning –
And untouched by Noon –
Lie the meek members of the Resurrection – …)

Thus the tense little poem accommodates and reintegrates the poet’s two primary social stances.

It was this poetic method that allowed the poet to give structured verbal play to the provincial anti-role without simplifying the internal conflict between the local culture which secured her status and the cosmopolitan culture which belittled it. Karl Mannheim sees as a classic and often repeated cultural situation the one in which the mythology of the dominant caste of a static rural society (doomed to decline) encounters that of a larger more mobile urban stratum: for a member of the older group, he says, “two modes of explanation will collide in thinking about every object.”[61] This was Emily Dickinson’s predicament. She escaped from the traumatic social pressures of her marginality into the creative isolation in which she could write her poetry: her practice of the latter on her own terms provided what Berger and Luckmann call a “finite province of meaning,” an enclave within the “paramount reality” of common experience.[62] But, as Mannheim says,

The fact that we give names to things which are in flux implies inevitably a certain stabilisation created along the lines of collective activity. The derivation of our meanings emphasises and covers up, in the interest of collective action, the perpetually fluid process underlying all things.[63]

The poet’s withdrawal into isolation, his facility within the poetic “enclave” does not mean immunity from social pressures. His language, imagery, syntax have to envisage a collectivity, an audience however small, by which they can be understood. For Emily Dickinson, constructing in what so many of her poems describe as “silence” the fictions which gave order to her experience, there were two overlapping “collectivities,” that represented by the good society of her locality and that of the genteel literary audience. Each could be summed up in the person of a representative reader. There are a number of extraordinarily alienated poems written in this period of tension which will be discussed later (Chapter 7 (iv)); and it is impossible to decide whether they are written out of damaged or renounced personal relationships, neurotic mental states or the struggle with words itself. A fair copy of 1864 which revises and amends part of one of them (No. 280, “I felt a Funeral in my brain”[64]) is less puzzling and could easily be the poet’s retrospective analysis of the way her cultural dilemma affected her attempt to write poetry:

I felt a Cleaving in my Mind –
As if my Brain had split –
I tried to match it – Seam by Seam –
But could not make them fit.
The thought behind, I strove to join
Unto the thought before –
But Sequence ravelled out of Sound
Like Balls – upon a Floor. (937)

All one can say in this connection, of course, is that the poem could well be a description of her epistemological predicament. There can be little doubt, however, that her experimental language was an attempt to accommodate such a double perspective to her own satisfaction.

Though “The Soul Selects Her Own Society” undeniably reflects the tensions I am concerned with, it could be argued that the poem is flawed because of them. The ambiguity of the fourth line of the poem becomes ineffective when the poem is read aloud, the primary meaning scanning “Present/no more” and the secondary meaning “Present/no more.” Two more completely successful later poems show the same opposed cultural forces producing a richer and more coherent structural equilibrium. Poem 742, “Four Trees – upon a solitary Acre,” (c.1863) seems to find in the trees an emblem of a detached and stoical life-style rather like that of the poet herself. The poem has elements of the kind of deference to the cosmopolitan reader which we know Emily Dickinson was now capable of. But they are balanced in its structure against an extraordinary stabilising confidence which has other sources. It is worth noting the human and social reciprocities that are dramatised in stanza 3:

The Acre gives them – Place –
They – Him – Attention of Passer by –
Of Shadow, or of Squirrel, haply –
Or Boy –

Considerable dignity in the first line is weighed against the fussy rhythms of the second and third lines, the awkwardly mumbled “haply.” But “Boy,” which seems to receive inferior status from the syntax even to “Squirrel,” gains renewed weight from its place in the curtailed final line. (We remember that the “Boy” is a frequent alter-ego for this poet as in poem 986.) Everything here is dramatically alive, and although the life grows from the writer’s feeling of inferiority in relation to her sophisticated reader, a larger poetic movement accommodates the spasm of unconfidence about the worth of her local materials and (by implication) her art. (The mode has none of the predictability of allegory yet the “Acre” is not too far from representing the first of these and the trees the second.)

The sense of balance in this stanza is derived from that which is present in the poem as a whole. The poem can be read in two ways – as if there were a full stop at the end of the first stanza (in which case it is the poet’s naive and charming plea for the significance of the trees and everything they may represent) or as if there were a colon at that point (in which case the last three stanzas summarise an argument, are what the trees, in their mute existence, “Maintain”):

Four Trees – upon a solitary Acre –
Without Design
Or Order, or Apparent Action –
Maintain –

The Sun – upon a Morning meets them –
The Wind –
No nearer Neighbor – have they –
But God –

The Acre gives them – Place –
They – Him – Attention of Passer by –
Of Shadow, or of Squirrel, haply –
Or Boy –

What Deed is Theirs unto the General Nature –
What Plan
They severally – retard – or further –
Unknown –

The second of these possible readings eliminates the powerful but idiosyncratic use of “Maintain” as an intransitive verb and reconciles “Without Design” in stanza one with “Plan… Unknown” in stanza four. It allows a mocking and enlivening social awareness to play over stanzas two and three. Yet the tentative movement of the individual stanzas, the firm pause at the end of each, are too strong to let the second reading completely overpower the first and one is glad that the final stanza with its exclamatory “What”s and the portentous and verb-free “Unknown” fails to clinch the more schematic possibility. We marginally prefer one reading yet cannot entirely exclude the other: necessities deep in the poem itself work to exclude both the ingratiating naivety of the first possibility and the over-schematic sophistication of the second.

The poet’s mannered society and its norms operate in “The Soul Selects her Own Society” and “Four Trees…” as a counterweight to the pressures of the discriminatory cosmopolitan literary situation. These are technically lyric poems whose angular and uncompromising language is given density and irony by the need to accommodate these alternative impulses. Poem 1100, “The Last Night that she lived,” (c. 1866) because it takes its essential shape from a social situation, accommodates the tensions rather differently, though their effect is again visible in the language. Implicit in the tone of the poem’s opening is the deferential assumption that its occasion, a country death bed, is only deserving of the attention of sophisticated readers because certain special insights into “Nature’s” ways ensue:

The last Night that She lived
It was a Common Night
Except the Dying – this to Us
Made Nature different
We noticed smallest things –
Things overlooked before
By this great light upon our Minds
Italicized – as ’twere.

There is a hint here of the “simple News that Nature told” which the poet elsewhere feels is all she has to pass on to the “World.” The freshness and originality of perception which are attributed to the “We” of the poem by virtue of a universalising occasion, however, are die-stamped with considerable verbal authority and poise by that daring and synthesising “Italicized.” The apologetic cough with which the stanza ends does not (and is not intended to) entirely re-establish the speaker’s humble bearing, her stipulated unassuming approach to the exacting reader. The “We” of the poem incorporates an “I” whose shaping awareness promises to undercut and transcend the naive traditionally religious terminology (“this great light upon our Minds”) which would normally be common to the group. As in the other two poems examined, alternative verbal possibilities which have to be actively excluded emerge from the writer’s marginal position and intensify the poetic effect:

As We went out and in
Between Her final Room
And Rooms where Those to be alive
Tomorrow were, a Blame
That Others could exist
While She must finish quite
A Jealousy for Her arose
So nearly infinite –

The ghost of a conventional pious response (to be jealous of rather than “for Her”) is raised and given body by the resonance of a popular rhyme-word in hymns, “infinite,” and the arrangement of the sentence over the two stanzas. It is, however, excluded as we follow the syntactically necessary equation between the “Blame” and the “Jealousy” which insists that the latter is on behalf of the dying person, while the (almost) tri-syllabic rhyme linking “finish quite” with “infinite” pushes the latter word away from religious conventionality and towards its root meaning. Our informant, the “we” she is uneasily part of, the outsiders (“Those to be alive/Tomorrow”), the dying woman are elegantly disposed in a structured ironic relationship to each other which leaves no space for a condescending or patronising response. The energy released by this balancing act is carried forward by the action to be dispersed in the fluid syntax of the final stanza:

And We – We placed the Hair –
And drew the Head erect –
And then an awful leisure was
Belief to regulate –

The last two lines can in fact only mean one thing in the context of this return from the frontiers of experience but the placing of “was” in the penultimate line gives to the preferred reading of the last line the tenor of irrelevance.

7: The Poetry and its Genres

Poems with these tense linguistic qualities punctuate the Complete Poems at regular intervals from 1861 on. They help to explain why Emily Dickinson has been seen as a precursor of twentieth century “modern” poetry: and I have suggested reasons for thinking that they emerged as she countered an awareness of the daunting Anglo-American genteel audience with her sense of alternative sustenance within the local community. Such language is, of course, the exception rather than the rule. Its regular recurrence is part of the strange cyclic pattern of development in the course of which about half a dozen different kinds of poem appear and reappear throughout the Complete Poems. Both the nature of these (roughly distinct but overlapping) genres of poem and their incidence offer further witness to the tension in Emily Dickinson’s audience-attitudes with which I am concerned. I shall look first at the genres:

(i) Naive Pastoral.

These poems are limpidly lucid. They essentially defer to the cosmopolitan reader’s discriminatory impulse and ask for special terms from the reader, claiming that their lack of sophistication is compensated for by the special local knowledge of “Nature” they have to offer. The flavour of this appeal (to precisely the kind of reader Higginson represented) can be gauged from a selection of first lines:

I’ll tell you – how the Sun rose – (318)

I know a place where Summer strives… (337)

There is a flower that Bees prefer – (380)

This is my letter to the World (441)

Nature – the Gentlest Mother is (790)

The Only News I know… (827)

This group of poems represents Emily Dickinson’s most extreme concession to her contemporary readers: both the popularity of the genre when the poetry was first published and its lack of linguistic idiosyncrasy, humour or irony make it difficult to understand Higginson’s timidity about championing these poems at least, when Emily’s sister, Lavinia, first approached him about publication. In the process whereby (as with Clare, Emily Brontë and Hopkins) “provincialisms” were edited out of the poems so as not to disturb the genteel reader, these poems probably suffered least.

(ii) Love Poems.

Another recognisably Victorian mode is that of related love lyrics, all hinting obliquely at a semi-fictional and incompletely explained personal situation which prevents the consummation of the speaker’s love. A study of the various drafts of Emily Dickinson’s love poems in the three-volume edition of the poems shows that she was steadily converting her affectionate, dependent or passionate feelings towards many people, friends, acquaintances, relatives and possible lovers into semi-fictional poems of this genre. Emily Brontë and the Brownings would have offered precedents here. Many of these poems would not have been out of place in a Victorian anthology. Some of them rather unsatisfactorily use a Puritan theological framework to amplify their romantic feeling (No. 664, “Of all the Souls that stand create -”) or to justify the permanent separation she needed to impose on her lovers (No. 640, “I cannot live with You -”). The best of them allow a precise social awareness to puncture the tendencies to pathetic fallacy or mawkish posturing common to the mode. In poem 348, for instance, the inevitable seasonal behaviour of natural creatures is compared to social “good manners” which, if lacking special sensitivity to the sufferer, are ultimately protective of the individual (and stabilise by suggesting a scale of maturity and immaturity):

They’re here, though; not a creature failed –
No Blossom stayed away
In gentle deference to me –
The Queen of Calvary –
Each one salutes me, as he goes,
And I, my childish Plumes,
Lift, in bereaved acknowledgement
Of their unthinking Drums –

(iii) Naive-voice fictions.

A deferential “little girl” or “simple narrator” voice is assumed (as though the recalcitrant anti-Victorian muse is pretending to be good and to know its provincial place). The “tale” however, a visit to the sea (520), a carriage-drive with Death (712), a dying person’s account of her death (465), an encounter with a snake (986), a self-undermining argument for the existence of God (338), is the vehicle for a grimly ironic vision the full force of which escapes the narrator but not the reader. (It often embodies powerful unconscious materials.) The ambiguities of these pieces reside in their plot rather more than in their language. In poem 465, for instance, the “King” awesomely awaited by the narrator and the mourners may, after all, be not Christ but Death:

The Eyes around – had wrung them dry –
And Breaths were gathering firm
For that last Onset – when the King
Be witnessed – in the Room –

The author’s verbal superiority, her analytic control of life is withdrawn from the scene; in its absence these poems are strongly visual in the way of some primitive or surrealist paintings: the exaggerated versions of powerful and fixed social patterns within which the uncomprehending narrator is suspended give the poems an extraordinary nightmare quality.

(iv) Crisis poem. (1861-4)

These poems can be seen as pre-modern in their symbolist exploration of some kind of living death, emotional catastrophe or breakdown of epistemological categories. The essentials of the mode go back to Coleridge and De Quincey and can be found in Poe or Thomson’s “City of Dreadful Night”: but the prosaic speech rhythms and half-rhymes distinguish Emily Dickinson’s tentative pieces from the more usual melodramatic and sonorously metrical treatments of such material. Clusters of shifting symbols (bells, clocks, voyages, shipwrecks, voices, silences) embody the mind’s attempt to come to terms with total flux; but what appeals are the precisely mannered matter-of-fact voice in which they are presented and the homely concrete metaphors to which they are wedded:

I saw no Way – The Heavens were stitched –
I felt the Columns close –
The Earth reversed her Hemispheres –
I touched the Universe –
And back it slid – and I alone –
A Speck upon a Ball –
Went out upon Circumference –
Beyond the Dip of Bell – (378)

This precision of tone and image adds psychological authenticity

(But, most like Chaos – Stopless – cool –
Without a Chance, or Spar –
Or even a Report of Land –
To justify – Despair.) (510)

and psychological intensity:

Then Space – began to toll,
As all the Heavens were a Bell,
And Being, but an Ear… (280)

(v) Extended Metaphor.

There is a sense, of course, in which category (iii) poems and category (iv) poems are also extended metaphors. They achieve this respectively by the construction of a significant story (“I started Early – Took my Dog/And visited the Sea”) and by focusing on a significant psychological situation. Neither proposes itself as an extended metaphor as do restrained and logically developed pieces like “We grow accustomed to the Dark -/ When Light is put away-” (419) or “I Years had been from Home…” (609) The poems I want to group here are those that use a central metaphor in this very conscious way to achieve their own discrete and autonomous life. The best ones tend to elaborate a startling central conceit set up in their first line:

Before I got my eye put out… (327)

One need not be a Chamber – to be Haunted – (670)

My Life had stood – a Loaded Gun – (754)

I read my sentence – steadily – (412)

The first three of these command a combination of surprise and inevitability because of the way the concrete implications of the conceit are teased out as the poem develops. In the fourth, the dramatically heightened image of the court of law is humorously jettisoned to achieve the understatement of the final lines:

I made my soul familiar – with her extremity –
That at the last, it should not be a novel Agony –
But she, and Death, acquainted –
Meet tranquilly, as friends –
Salute, and pass, without a Hint –
And there, the Matter ends –

The power of the poem which is an extended metaphor lies in its independence of explicit statements of value (including those of the conflicting value systems which complicated the author’s creative predicament). Emily Dickinson is at her most versatile in this mode: one last variation which might be mentioned is her use of a neutral and unspecified pronoun to represent the (unidentified) subject of the poem’s metaphor:

(It dropped so low – in my Regard –
I heard it hit the Ground –
And go to pieces on the Stones
At bottom of my Mind –
Yet blamed the Fate that flung it – less
Than I denounced Myself,
For entertaining Plated Wares
Upon my Silver Shelf -) (747)
or its (unidentified) object, e.g. poem 359, “I gained it so-.”

(vi) Extended simile; poems of definition; riddles.

These poems all construct a poetic image which represents an explicitly stated concept. They may rely heavily on simile itself: the soul’s flight is compared to that of a balloon (1630) or “The Leaves”

like Women interchange
Exclusive Confidence –
Somewhat of nods and somewhat
Portentous inference. (987)

A number of poems rely on “As if” as an opening gambit and some of them (for instance, “As if I asked a common Alms,” No. 323) do not disclose what they are about and are not completely distinct from my previous category. But the borderline I am concerned with is clear in a case like this:

As if the Sea should part
And show a further Sea –
And that – a further – and the Three
But a presumption be –
Of Periods of Seas –
Unvisited of Shores –
Themselves the Verge of Seas to be –
Eternity – is those – (695)

The substance of the poem is like a set of Chinese boxes, but it is really meant to indicate what Eternity is like: it reverses the formula but is otherwise identical with the “poems of definition” which begin “Exultation is the going/Of an inland soul to sea…” (76) or “Presentiment – is that long Shadow – on the Lawn-/Indicative that Suns go down -.” (764) The riddles which seem to me to belong here (for many of Emily Dickinson’s poems of all kinds have a riddling quality) are those which have a specific answer (usually drawn from “natural history”): the “answer” to poem 1463 for instance is “a humming bird,” the answer to poem 560 is “Halley’s Comet.”

(vii) Descriptive, meditative, satiric, didactic and aphoristic poems.

It will probably be apparent by now that I see the various genres of poem as constituting a spectrum of types. Those I have placed first come closest to the appropriate appeal a poet like Emily Dickinson could make to her contemporary readers. Those at the centre of the spectrum (Types (iii), (iv) and (v)) are the poems that might be called pre-modernist in character. Their ambiguities reflect the poet’s attempt to write at once for both the envisaged readers like Higginson and for others whose tastes were more like her own and whom she located (by way of Sue Dickinson) in the local community. Categories (vi) and (vii) are increasingly like pre-romantic poetry in their procedures. In category (vii) birds, butterflies, cats, clergymen, society ladies, life’s vicissitudes, love, faith, death, art, time are treated in a style that is generalised, elegant (at best) and either epigrammatic or strongly antithetical. These are the poems which emerged in the mood (touched off by the continuing relationship with her most consistent reader) in which she exercised her ascendancy as the first lady of a largely imaginary hierarchic local community where precise eighteenth century values could be imagined to prevail. Whereas in the middle of the spectrum her rhythmical concern was to introduce by means of her idiosyncratic punctuation a counter-rhythm to the iambic, and rhyme was slackly modified into half-rhyme or intermittently disappeared, here the pointed effect of iambic metre and sharp alternation of full-rhyme and half-rhyme (or insistent full rhyme) rely heavily on conventional expectations:

Upon Concluded Lives
There’s nothing cooler falls –
Than Life’s sweet Calculations –
The mixing Bells and Palls –
Makes Lacerating Tune –
To Ears the Dying Side –
Tis Coronal and Funeral –
Saluting – in the Road – (735)

These are the poems that have their roots most unambiguously in the “good society” whose values and wisdom the speaker is in a position to arbitrate. They offer a sane and subtle but abstract commentary on what one might call “life after the event.” Their capacity for generalisation can treat with wise and humane common-sense broad areas of experience (“After great pain, a formal feeling comes-”) (341) or result in the knotted aphoristic linking of traditional oppositions:

Water, is taught by thirst.
Land – by the Oceans passed.
Transport – by throe -… (135)

In both these cases, however, though the procedures may be akin to those of pre-romantic poetry, they look for their vitality to the narrow but deep reaches of Emily Dickinson’s most authentic experience:

This is the Hour of Lead –
Remembered, if outlived,
As Freezing persons, recollect the Snow –
First – Chill – then Stupor – then the letting go – (341)

… Transport – by throe –
Peace – by its battles told –
Love, by Memorial Mold –
Birds, by the Snow. (135)

8: Conclusion

T.H. Johnson’s editorial work has established a provisional order of composition for the twenty years of the poet’s endeavours so that we are able to see how the different kinds of poem recur (or are taken up again for revision) during that time, and the sense of a continuous creative process is not affected by the fact that we are often dealing with a poetry of false starts, tailings off, jottings, good lines in bad poems. What is striking is that all the genres I have described seem to emerge and reemerge in a cyclic way as though they are the product of some kind of gestation process. This explains why “poems expressing the poet’s more childish and undeveloped characteristics and poems upon which the sentimentality of her time left its mark are often followed or preceded by poems which define and express the very nearly indefinable and inexpressible.”[65] (Louise Bogan)

It is my contention that no purely literary critical account of the pattern of the Complete Poems can do justice to what is expressed (and sometimes finely embodied) there. This is particularly the case when the critic follows the tradition of Arnold in attempting to recreate at the provincial artist’s expense “that sense of absoluteness which seems necessary to a robust culture”[66] to quote one of the foremost twentieth century exponents of that tradition. It is this absoluteness that Yvor Winters is insisting on when he pursues Arnold’s “note of provinciality” (he calls it “countrified eccentricity”) through Emily Dickinson’s poems, wondering if it lingers in even her best poems as a “fine defect.”[67] R. P. Blackmur diagnoses a “playfulness”[68] in her work which he similarly deplores. And both by implication blame a surrounding “ignorance and platitude” too far from a supposed “centre of correct information, correct judgement, correct taste.” (Arnold) “Barren” and “harsh” are Winters’s adjectives for the New England society which according to Blackmur “had no tradition to teach her” that poetry was a “rational and objective art.”[69] It is not enough to say against such critics, as Sewall does (Life, pp.8-9), that there was nothing provincial about Emily Dickinson’s interests or her circle; one must recognise the importance of her creative refusal to accept the subordinate role that Arnold’s terminology (and the social impulses it satisfied) operated to impose, her active rejection of the framework itself. Her value, her heroism if you like, can only be measured by the tension between her vulnerability and her successful moments of accomplished poise. These, the good poems, good stanzas, good lines even, are finally inseparable from the complete sequence of the Poems and are essentially associated with the anti-role she learned to play.

9. Guide to Further Reading

The one volume Complete Poems (1960, 1970) is the basic text but the three volume edition may attract the reader who wonders about variant readings and reworking of poems or is interested in the gestation process of the poetry. (Such a reader might also be interested in R.W. Franklin’s edition of The Manuscript Books of Emily Dickinson (2 vols., Cambridge, Mass.: Belknap Press of Harvard UP, 1981).)

As the reader’s own responses to the poetry accumulate, he may find some of the following volumes of criticism stimulating. Essays by R.P. Blackmur, Yvor Winters and above all Allen Tate (the last two, critics who had not the advantage of Johnson’s text) are conveniently collected in a Twentieth Century Views volume edited by R.B. Sewall. Good studies with their own special emphases are Karl Keller’s The Only Kangaroo Among the Beauty: Emily Dickinson and America (1979) and David Porter’s Dickinson, The Modern Idiom (Cambridge, Mass. and London: Harvard UP, 1981). Porter discusses the importance of Emily Dickinson’s “radical female awareness” (pp.280-290), a concern shared by the other three works of criticism I want to mention: Brita Lindberg-Seyersted’s The Voice of the Poet: Aspects of Style in the Poetry of Emily Dickinson (Uppsala: Almqvist, 1968), Dolores Dyer Lucas’s Emily Dickinson and Riddle (De Kalb, Illinois: Northern Illinois UP, 1969) and Inder Nath Kher’s The Landscape of Absence: Emily Dickinson’s Poetry (New Haven and London: Yale UP, 1974).

Ambiguity, so important to the poetry, is a crucial characteristic also of the Letters (1958),[3] absolutely essential reading as one tries to establish a context for the poetry. A typical example of such ambiguity is Dickinson’s reference in April 1862 to “a terror – since September – I could tell to none.” (Letters, p. 404) A disappointed passion for a man or a woman, anticipation of death or of mental instability, acknowledgement of the awesome responsibility of her art? Many interpretations are possible. And such cryptic morsels readily find their way into the texture of approaches to her life and art which insist (for instance) that she was painfully wakened to homosexual self-awareness or psychologically damaged by her mother’s coldness to her or traumatically conscious of her feminine identity.

The three works I am thinking of, The Riddle of Emily Dickinson by Rebecca Patterson (London: Gollancz, 1953), After Great Pain by John Cody (Cambridge, Mass: Belknap Press of Harvard UP, 1971) and Emily Dickinson: When a Writer is a Daughter by Barbara A.C. Mossberg (Bloomington, Indiana: Indiana UP, 1982), represent perfectly legitimate approaches to literary biography. But the irreducible amount of ambiguity in the materials for a Dickinson “Life” does make particularly welcome the moderation and scrupulousness of the editors of the Letters when they turn to biography: T.H. Johnson, Emily Dickinson: An Interpretive Biography (Cambridge, Mass: Belknap Press of Harvard UP, 1960) and Theodora Ward, The Capsule of the Mind: Chapters in the Life of Emily Dickinson (Cambridge, Mass: Belknap Press of Harvard UP, 1961). It also makes Richard B. Sewall’s biography (1976), with its tolerant plurality of insight and its wise recognition of the limits of factual certainty, an essential tool. Sewall, for instance, weighs exactly the strengths and weaknesses of the arguments of Rebecca Patterson and Cody. In response to a fascinating book like Ruth Miller’s (1968) he is able to reinforce her documentation of Emily Dickinson’s burning desire to publish her poems, while remaining tellingly sceptical about her theory that the poems were bound into little books (“fascicles”) by their author in obedience to their structured arrangement in extended lyric sequences.

The other invaluable biographical aid is Jay Leyda’s The Years and Hours of Emily Dickinson (1960). It is a bedrock source for the realities of the poet’s milieu precisely because it makes no effort to stipulate that this or that component of her day-to-day life actually figured in her consciousness. Jack L. Capps’s Emily Dickinson’s Reading, 1836-1886 (Cambridge, Mass.: Harvard UP, 1966) does make a scholarly approach to an aspect of her consciousness which can be quite unambiguously explored.

A final return to the text. Time and the satisfaction of many readers (as well as my own delight) seem to me to have justified the principles upon which T.H. Johnson produced his editions of the Poems (principles which are explained in his “Introductions”). But I should just mention R.W. Franklin’s The Editing of Emily Dickinson: A Reconsideration (Madison, Milwaukee and London: Wisconsin UP, 1967). It argues, contrary to my own view, that Johnson’s editing of the poems missed a fine opportunity to eliminate Emily Dickinson’s eccentricities of notation and provide a text with orthodox punctuation.

10. Notes

  1. (Boston and Toronto: Little, Brown, 1960; London: Faber, 1970). Quotations are all from this edition and are identified in brackets by the poem numbers there assigned. (Details of the three volume edition are given at note 59 below.)
  2. Richard B. Sewall, The Life of Emily Dickinson (London: Faber, 1976), pp. 583-8, 475-6. (Subsequent references to this work will be given in brackets in the text.)
  3. T.H. Johnson and Theodora Ward, eds., The Letters of Emily Dickinson (Cambridge, Mass: Belknap Press of Harvard UP, 1958), p. 408. (The title of this work will be abbreviated throughout to Letters and references will, whenever possible, be given in brackets in the body of the text.)
  4. Morse Peckham, “Reflections on Historical Modes in the Nineteenth Century,” Stratford-on-Avon Studies, 15: Victorian Poetry (London: Arnold, 1972), pp. 291-292.
  5. “Manners, Morals and the Novel,” The Liberal Imagination (London: Secker & Warburg, 1951), p. 207.
  6. Compare Lowell’s literary pantheon as he describes it in the North American Review (69, 202) with hers as she explains it in the Letters (pp. 401, 491).
  7. In The Century Dictionary, ed. William Dwight Whitney (New York: The Century Co.; London: T. Fisher Unwin, 1889-91), p. 48. The definition “narrow, unenlightened” is illustrated from J.H. Shorthouse’s novel Countess Eve (Edinburgh, 1888): “A society perfectly provincial, with no thought, with no hope, beyond its narrow horizon.”
  8. I have adapted these terms from those employed by J.F. Gravier in his Paris et le Désert Français (Paris: Flammarion, 1947), though his interest in the historical development of the “mythe parisien” (p. 170) and its world-wide equivalents (p. 121) is subordinate to his concern with the present need for decentralisation.
  9. Essays in the History of Ideas (New York: Braziller, 1955), pp. xii-xiii.
  10. “Meaning and Understanding in the History of Ideas,” History and Theory 8 (1969), 39.
  11. Trilling, p. 209.
  12. “The Provincial Person and his Fate,” in Carl Amery, ed., Die Provinz: Kritik einer Lebensform (München: Nymphenburger, 1964), pp. 5-6.
  13. Ideology and Utopia, translated by Louis Wirth and Edward Shils (London: Kegan Paul, 1936), pp. 10-11.
  14. John Holloway, The Victorian Sage: Studies in Argument (London: Macmillan, 1953), p. 224.
  15. R.H. Super, ed., Matthew Arnold: Lectures and Essays in Criticism (Ann Arbor: University of Michigan Press, 1962), pp. 467-68.
  16. The Social Construction of Reality (Harmondsworth: Penguin, 1971), p. 83.
  17. Mary Taylor, quoted in Clement Shorter, The Brontës’ Life and Letters (London: Hodder & Stoughton, 1908), vol. 1, p. 82.
  18. Winifred Gérin, Charlotte Brontë (London: OUP, 1967), pp. 40-45, 431-6.
  19. F.E. Hardy, The Early Life of Thomas Hardy (London: Macmillan, 1928), pp. 137-38.
  20. Quoted Leon Edel, “Introduction,” Bodley Head Henry James, vol. 1 (London: Bodley Head, 1967), p. 8.
  21. English Hours, ed. Alma Louise Lowe (London: Heinemann, 1960), pp. 47-48.
  22. “Preface to Notes,” Poems of Gerard Manley Hopkins, ed. Robert Bridges (London: Humphrey Milford, 1918), p. 97.
  23. T.W. Higginson, “Emily Dickinson’s Letters,” Atlantic Monthly, 68 (1891), 448.
  24. English Hours, pp. 29-30.
  25. “…the central spot of all the world (which, as Americans have at present no centre of their own, we may allow to be somewhere in the vicinity, we will say, of Saint Paul’s Cathedral).” Our Old Home and English Notebooks (Boston and Cambridge, Mass: Houghton, 1883), p. 255.
  26. F.E. Hardy, p. 189.
  27. See Cleanth Brooks, William Faulkner: The Yoknapatawpha Country (New Haven: Yale UP, 1963), Chapter 1: “Faulkner the Provincial.”
  28. Jay Leyda, The Years and Hours of Emily Dickinson (New Haven: Yale UP, 1960), vol. 2, pp. 474-5. Index entries under Brontë in Leyda, Letters and Sewall provide ample confirmation of the general point.
  29. Leyda, vol. 1, p. 361.
  30. Life of Charlotte Brontë (London, 1857), vol. 1, p. 334, quoting from the “Biographical Notice of Ellis and Acton Bell” attached to the 1850 edition of Wuthering Heights and Agnes Grey. E.D. quotes the “Biographical Notice,” Letters, p. 722.
  31. Clara Newman Turner, “My Personal Acquaintance with Emily Dickinson,” quoted Leyda, vol. 2, p. 481.
  32. T.H. Johnson, “Introduction,” Letters, p. xv.
  33. A Handbook of American Literature (St Lucia, Queensland: Queensland UP, 1975), p. 156.
  34. The Heart of Thoreau’s Journal, ed. Odell Shepard (Boston and New York: Houghton Mifflin, 1927), pp. 236-37.
  35. Atlantic Monthly 69 (1892), 35-50.
  36. North American Review 69 (1849), 202.
  37. The Only Kangaroo Among the Beauty: Emily Dickinson and America (Baltimore and London: Johns Hopkins UP, 1979), p. 328.
  38. Gaskell, vol. 2, p. 151.
  39. F.E. Hardy, Early Life of Thomas Hardy, P.S.
  40. “Capitalism and Rural Society in Germany,” From Max Weber, ed. H.H. Gerth and C. Wright Mills (New York: OUP, 1958), p. 363.
  41. English Hours, p. 149.
  42. The Country and the City (London: Chatto, 1973), pp. 9-12.
  43. John W. Burgess, Reminiscences of an American Scholar (New York: Columbia UP, 1934).
  44. Burgess, pp. 60-61.
  45. Leyda, Years and Hours of Emily Dickinson, vol. 1, p. 221.
  46. Leyda, vol. 2, p. 276.
  47. Leyda, vol. 2, p. 150.
  48. Leyda, vol. 2, p. 75.
  49. Leyda, vol. 2, p. 445.
  50. Trilling, Liberal Imagination, p. 207.
  51. T.H. Johnson, “Introduction,” Letters, p. xv.
  52. Raymond Williams, Marxism and Literature (Oxford: OUP, 1977), p. 195.
  53. Eric Robinson and Geoffrey Summerfield, “Introduction,” Selected Poems and Prose of John Clare (London: OUP, 1967), pp. xviii, xxiv.
  54. Young Thomas Hardy (London: Heinemann, 1975), pp. 145-7, 151-2, 192-3, 198.
  55. Some Recollections by Emma Hardy, ed. Evelyn Hardy and Robert Gittings (London: OUP, 1961).
  56. Burgess, p. 60.
  57. The Only Kangaroo Among the Beauty, pp. 184-221.
  58. Shorter, Brontës’ Life and Letters, vol. 2, p. 85.
  59. “Introduction,” The Poems of Emily Dickinson (Cambridge, Mass: Harvard UP, 1958), vol. 1, p. xxvii.
  60. “Introduction,” Poems, vol. 1, p. xxviii.
  61. Ideology and Utopia, p. 8.
  62. The Social Construction of Reality, p. 39.
  63. Ideology and Utopia, p. 20.
  64. Ruth Miller, The Poetry of Emily Dickinson (Middletown, Conn: Wesleyan UP, 1968), p. 128 demonstrates this connection.
  65. Louise Bogan, “A Mystical Poet,” Emily Dickinson, a Collection of Critical Essays, ed. R.B. Sewall (Englewood Cliffs: Prentice-Hall, 1963), p. 141.
  66. F.R. Leavis, New Bearings in English Poetry (London: Chatto, 1954), p. 91.
  67. In Defense of Reason (Denver: Swallow, 1947), pp. 284-286.
  68. “Emily Dickinson’s Notation,” Kenyon Review, 18 (1956), 235.
  69. Language as Gesture (London: Allen & Unwin, 1961), pp. 49-50.
