Chapter 16: Film, Representation, and Society

The cultural norms that shape cinematic content, much like cinematic language, are largely invisible or unconscious.[1] Cinema, like any other art form, is created by artists who are themselves bound up in a given historical and cultural context. No matter how enlightened and advanced they may be, they cannot possibly grasp every aspect of how that historical and cultural context shapes their view of the world. Inevitably, the unexamined norms and values that make us who we are filter into the cinematic stories we tell.

Chapter Objectives

  • Discover how film is shaped by and helps shape cultural norms
  • Learn about how film has represented a variety of marginalized groups
  • Explore the power of “social problem” films
  • Review the impact of censorship on cinema

 

The result is a kind of cultural feedback loop where cinema both influences and is influenced by the context in which it is created.

Because that process is largely invisible and unconscious, cinema remains more effective at reaffirming a particular view of the world than challenging or changing it. That is to say, it is an inherently conservative medium, not in the partisan sense, but in the sense of maintaining or “conserving” the status quo. Part of the problem (if you accept that this is a problem) is the economic reality that cinema must appeal to the masses to survive. It costs a LOT of money to make a feature film or television series. Because of this fact, filmmakers and their financiers tend to avoid offending our collective sensibilities. They want us to buy more tickets and pay more streaming fees, so they’re going to err on the side of making us feel better about who we already think we are.

There’s another really important reason why cinema does not tend to challenge the status quo. The reality is that the people who have historically had access to the capital required to produce this very expensive medium… well, they tend to all look alike: mostly white and mostly cis male. When the same kind of people with the same kind of experiences tend to have the most consistent access to the medium, we tend to get the same kinds of stories, reproducing the same, often unexamined, norms, values, and ideas.

This cultural and economic dynamic has shaped cinematic content from the beginning. By pulling our focus from form to content, from cinema as a technical medium to cinema as a cultural document, we can better understand what cinema has to say about who we think we are.

This emphasis on how culture shapes content (and vice versa) inevitably leads to the issue of representation, not only in the sense of who is on screen and how we see them, but perhaps even more important, who is behind the camera. After all, whoever controls the means of communication controls the message.

Women and Cinema

The first concept we need to understand in discussing the representation of women in cinema is hegemony. In the most basic sense, hegemony refers to any political or social dominance of one group over another. A powerful enough army, of course, can exert a certain amount of control over another nation or region simply by brute force. Violence, though, will only get you so far. Sooner or later, the oppressed will resist violence with violence. Instead, what if you could convince them their oppression is somehow beneficial, or even divinely ordained? What if you could establish a set of cultural institutions – politics, education, religion, the arts, etc. – that built this narrative of colonization as the “natural order of things” from the ground up? What if the oppressed internalized this narrative and actually participated in their own oppression? Well, then you’d have a hegemony on your hands. Perhaps the most insidious aspect of this form of power is that it works both ways. Those in power also internalize that narrative, believing that their oppression of whole nations and regions is somehow divinely ordained and, ultimately, for everyone’s own good.

The second concept we need to understand is patriarchy. The term refers to any cultural system in which men hold the primary power (political, social, economic, moral), including authority over women. To be clear, just because a nation elects a woman as head of state doesn’t mean they’ve suddenly ushered in a matriarchy. Patriarchies are complex, historically produced, and institutionally affirmed systems where power is multi-layered and distributed unevenly throughout society. A woman president of the United States, for example, would not instantly shift that balance of power (any more than an African American president instantly eradicated racism).

Take two big ideas – hegemony and patriarchy – put them together and what do you get? That’s right, hegemonic patriarchy: a cultural system in which men hold the primary power to manipulate meaning and values such that even women perceive their own subjugation as the natural order of things. This is a system in which men wield extraordinary power over women, not through physical violence (though all too often that is employed as well), but through an array of cultural institutions that convince women that their oppression is somehow for their own benefit, divinely ordained, or the “natural order of things.” Convinced by that same narrative, men come to believe it, too. The result is that no one, neither the oppressed nor the oppressor, recognizes it as a cultural invention, a product of history. They all believe it to be a truth that exists outside of themselves, and few think to challenge it.

What does hegemonic patriarchy have to do with cinema? Well, it turns out, mass media is one of the most effective means of communicating the values and ideas that prop up hegemonic systems. Cinema, a medium that has historically been controlled almost entirely by heterosexual white men, is one of the most effective examples of mass media. In fact, in many of the previous chapters we’ve discussed in great detail how cinema manipulates meaning through a host of tools and techniques that remain largely invisible, by design, to most viewers. That’s exactly how many of us like it. We want to be manipulated, but maybe we should think a little more deeply about what is being manipulated and just who is doing the manipulating.

In film after film, from the earliest narrative cinema, through the Golden Age and the New Hollywood, and arguably into the modern era, the Madonna-Whore Complex, a binary construct whereby women are represented as either “good” or “bad,” has shaped how cinema, and by implication we the audience, see women. For the men in that audience (and behind the camera), that has meant decades of objectifying women as either virginal or villainous. For the women, it has meant decades of internalizing that same paradox. A classic example of the “good girl”/“bad girl” dichotomy occurs in Metropolis (1927), in which Brigitte Helm plays two characters, one virginal and one lascivious. In the second clip, note the racist stereotype as well.

 

“Good” Maria in Metropolis

 

“Evil” Maria dances in Metropolis

 

Film theorist Laura Mulvey wrote “Visual Pleasure and Narrative Cinema” (1975), a pivotal essay that helped clarify how hegemonic patriarchy worked, specifically in cinema. It was like pulling back the curtain to see how meaning was manipulated and by whom, and she did it by giving it a name: the male gaze (note the literal male gaze in the clips from Metropolis above).

 

Laura Mulvey on the male gaze

Mulvey’s premise is fairly simple. First, she suggests we’re all inherently narcissistic. That is, we tend to think of ourselves as the center of the universe. So, when we see the (male) hero in a film, all of us, male and female, tend to identify with that hero. Second, she suggests we are also all inherently voyeuristic. That is, we like to watch others but remain unobserved ourselves. This is, essentially, what cinema offers. As we’ve suggested several times in the preceding chapters, the camera is our only way into the cinematic world. We watch events unfold through the frame, which can suggest the frame of a painting in terms of composition, but also a window frame in terms of our fascination with watching the private lives of others. Put those two together, and you get two mutually reinforcing phenomena. We unconsciously identify with a male hero in his objectification of female characters (as “Madonnas” or “whores”), and we identify with the camera as it mirrors that objectification. Put more simply, the male gaze suggests the camera is never a neutral observer, but rather it forces all viewers to assume a heterosexual male point of view.

Obviously, the male gaze is alive and well in contemporary cinema, as is its corollary, the Madonna-Whore Complex. Women continue to be objectified and marginalized in mass media entertainment, and cinema, whether in the multiplex or streaming across the internet, continues to be a powerful tool in perpetuating hegemonic patriarchy. Fortunately, there has been more resistance and critique in the years since Mulvey’s essay.

One important strand of that contemporary critique came from an unexpected source: a 1985 installment of Alison Bechdel’s LGBTQ+ comic strip Dykes to Watch Out For. In one ten-panel strip, Bechdel shows two women contemplating a trip to the theater. One of them explains that she has a rule about going to the movies. Basically, the movie has to satisfy three requirements: 1) It has to have at least two women in it; 2) Those two women have to actually talk to each other; and 3) They have to talk about something besides a man.

That’s it. Pretty simple, right? Now think about the last few movies or even television series you’ve watched. How many of them could pass that test? In fact, by the early 2000s, that’s exactly what this became, a test, the Bechdel Test, to be more precise: a basic way to see if a piece of filmed entertainment could muster even the absolute bare minimum of equal representation.

 

The Bechdel Test

 

It is astonishing how little of even contemporary cinema can pass this test. Don’t believe us? Check out this relatively depressing running tally.

In addition to problematic representations of women on screen, another enduring issue involves representation behind the camera. A recent study showed that out of 1,335 entertainment professionals surveyed, only 14.4% of the writers were women, 21.1% of the producers were women, and most startling, only 4.5% of the directors were women. Even when a woman finds herself in a position of power or influence in the entertainment industry, she is often paid much less than men in the same position. This wage discrimination has been highly publicized when it affects well-known movie stars, like when Mark Wahlberg was paid eight times more than Michelle Williams for the film (ironically titled) All the Money in the World (2017), but it affects women at every level of the industry. Add to this humiliation the rampant sexual misconduct, harassment, and outright assault suffered by women throughout that same industry, as exposed by the #MeToo and Time’s Up movements, and it may seem like an absolute miracle that there are still women willing and able to work within the system to tell their stories.

From film’s inception, though, women made important contributions to the medium. Alice Guy-Blaché, for example, created one of the earliest narrative films and would go on to establish her own studio, Solax Pictures, in the United States in 1910, where she would make as many as 1,000 films. Many of those films featured women in principal roles or gave women at least equal weight to men. She experimented with synchronized sound and color cinematography, and pushed the boundaries of what was possible in cinematic storytelling. Other important early women directors included Lotte Reiniger, Lois Weber, Germaine Dulac, and Dorothy Arzner.

 

The Cabbage Fairy (1896)

 

The Adventures of Prince Achmed (1926)

Racist Representation of African Americans in Film

As it does with representations of women, film has a problematic history of depicting race. Film scholars have identified five broad categories of Black stereotypes in early American cinema (even if these characters were not always played by Black actors). Among the most prevalent of these stereotypes were the Black man colluding with white hegemony and the corollary role for Black women. Colloquially known as the “Uncle Tom” and “Mammy” roles respectively, these were characters who upheld and even celebrated the idea of white superiority: the slave who actually seemed to enjoy life on the plantation. Hattie McDaniel played the most famous version of the Mammy character, Scarlett O’Hara’s loyal slave in Gone with the Wind (1939), for which she won the Academy Award for Best Supporting Actress. The most infamous example of the Uncle Tom stereotype would have to be James Baskett’s performance as Uncle Remus in Walt Disney’s Song of the South (1946).

 

History of racist stereotypes in cinema

Another common stereotype was the ineffectual and lazy simpleton. Slow-witted and easily fooled, this role was often used as comic relief, a foil for white protagonists to ridicule. Lincoln Perry played the most famous version of this stereotype as the recurring character, Stepin Fetchit. A dim-witted fool who was often billed as “The Laziest Man in the World,” Stepin Fetchit appeared as comic relief in dozens of films. His popularity would earn Perry the distinction of being the first African American actor to earn a million dollars, but Perry would eventually step away from acting, frustrated that he could not get equal billing and pay with his white co-stars. Some have argued that the Stepin Fetchit character was actually a crafty trickster figure, subtly subverting white power in his films, but it’s a hard argument to sustain when you place it in the larger context of how that stereotype framed all African Americans as lazy and unintelligent.

A fourth stereotype of African Americans prevalent in early cinema (and literature) was the “tragic mulatto/a,” a character of mixed-race ancestry who was inevitably doomed. Not quite as prevalent as the others, the tragic mulatta would appear now and again, almost always as a female character. In Birth of a Nation (1915), for example, Lydia, a mixed-race housekeeper, becomes the object of her white employer’s desire. Griffith even gives us a title card describing her as the “weakness that is to blight the nation.” There’s also an echo of the “whore” side of the Madonna-Whore Complex here in that the mixed-race character represented a direct challenge to the myth of racial purity and therefore had to be destroyed.

The most enduring of the five stereotypes, the one that seems to have never quite disappeared entirely, is the Black male as hypermasculine and dangerous. You see it throughout Birth of a Nation, not surprisingly, but also in just about every film in the classical era depicting Black men as violent, unpredictable, and overtly sexualized. It was a thinly veiled projection of white fear, a subconscious awareness of their own vulnerability, an awareness, on some level, that the only thing keeping them in power was the idea of power itself, the hegemony of ideas. Maybe that’s why this stereotype has taken the longest to die.

Those tired, old stereotypes were grotesque, but ultimately, they were intended to promote assimilation, which is really just another way of saying submission. The Uncle Tom and Mammy roles, along with the Stepin Fetchit character, were held up as positive images, appropriate behavior for African Americans. The tragic mulatto and dangerous Black male were the cautionary tales. All of them were a part of a narrative of assimilation, of submission to white hegemony. As we move into the modern era, we can identify a few new stereotypes designed to promote a similar agenda.

One of the most commented upon new stereotypes is the so-called Magical Negro. This is a recurring character, usually male, often with mysterious, supernatural powers, whose only role is to help the white protagonist achieve their goal or avoid some terrible predicament. There’s a long list of these characters in popular movies: Michael Clarke Duncan in The Green Mile (1999); Will Smith in The Legend of Bagger Vance (2000); Djimon Hounsou in In America (2002); Morgan Freeman in Bruce Almighty (2003); Samuel L. Jackson in The Unicorn Store (2019). These characters rarely have any inner life of their own, no motivations aside from helping the white characters. A recent film, The American Society of Magical Negroes (2024), satirizes this destructive stereotype:

 

The American Society of Magical Negroes

 

Another prominent modern stereotype is the “Thug” character, which, of course, is really just an updated version of the old “dangerous Black male” stereotype of early cinema. The Thug stereotype is arguably the most common of the new/old stereotypes, appearing in more films and television series than are worth mentioning here. There are others as well, such as the Angry Black Woman, defined by her unmotivated aggression and little else; the Domestic, essentially the Mammy role for the modern era; and the Sassy Best Friend.

Starting roughly around the same time as the birth of Hollywood cinema, there was an alternate film industry, a Black Cinema produced by African American filmmakers for African American audiences. Its productions, known as “race films,” had their own movie stars, their own luminary directors, and their own movie houses scattered throughout the United States. And as more and more African Americans left the South for the industrialized North in the Great Migration, creating centers of Black culture in New York City, Detroit, and Chicago, the demand for content that rejected the offensive stereotypes of Hollywood only grew. By the 1940s, there were hundreds of theaters in cities from New York to Los Angeles screening films with Black characters portrayed by Black actors (what a concept!) that were nuanced, heroic, tragic, comic, and human.

One of the most famous and most successful filmmakers in early Black Cinema was Oscar Micheaux. Micheaux would produce more than forty films over his career, spanning the transition to sound, and challenging the prevailing stereotypes with every one of them. His first film, The Homesteader (1919), directly confronts the tragic mulatto stereotype by having the protagonist, an African American, fall in love with and marry a woman who “passes” as white but is discovered to be of mixed race, a storyline that actually celebrates rather than denigrates the revelation of African heritage in someone presumed to be white. He formed his own production company in 1919 and produced Within Our Gates (1920) as a direct response to D. W. Griffith’s Birth of a Nation. In that film, a white landowner attempts to rape a Black tenant until he realizes she is his own biological daughter. The revelation actually causes him to repent and turn away from his racist ideas. Other early Black directors include Tressie Souders, Maria P. Williams, and James and Eloyce Gist.[2]

 

Within Our Gates

 

Hellbound Train (1930)

LGBTQ+ Representation in Hollywood

LGBTQ+ characters have appeared in films from the earliest days. Because of cultural taboos and potential censorship, however, filmmakers and actors–many of them a part of the LGBTQ+ community–needed to encode their characters using predominant stereotypes of their time, such as overly “fussy” men or women in “masculine” attire. Writers and directors often employed such characters as secondary comic relief or else as devious villains, rarely making LGBTQ+ characters the protagonists.

 

The question of what counts as LGBTQ+ film and media is anything but straightforward.[3] Many have debated what makes a gay film gay, a queer film queer, and so on. Must the plot revolve around someone’s emergent sexuality, as in Todd Haynes’s Carol (2015) or Donna Deitch’s Desert Hearts (1985)? Does an LGBTQ+ character suffice? How do we know a character’s sexuality unless it is explicitly stated? Must we assume all film characters are straight until proved queer? What about Charles Herman-Wurmfeld’s Kissing Jessica Stein (2001), in which the title character dates a woman and comes out before finally finding the right man? Are films made by queer-identified directors intrinsically queer?

Representation is important for marginalized groups, but applying labels to individuals and content raises ethical issues. With the aim of advocacy and comprehensibility, this chapter makes provisional use of categories such as gay and trans while remaining sensitive to historical contexts. Elsewhere, queer operates as a catch-all for nonnormative sexual identities, behaviors, and aesthetics.

A trope is a literary or cinematic convention often identified with a genre. When overused, a trope can become a cliché. The frequent trope of dramatic death in LGBTQ+ film, commonly called Bury Your Gays, includes suicide (e.g., William Wyler’s 1961 The Children’s Hour, Léa Pool’s 2001 Lost and Delirious, Atom Egoyan’s 2009 Chloe), homicide (e.g., Anthony Minghella’s 1999 The Talented Mr. Ripley, Kimberly Peirce’s 1999 Boys Don’t Cry, Ang Lee’s 2005 Brokeback Mountain, Patty Jenkins’s 2003 Monster), and HIV/AIDS (e.g., Jonathan Demme’s 1993 Philadelphia, Ryan Murphy’s 2014 The Normal Heart, Bryan Singer’s 2018 Bohemian Rhapsody). These tragic plotlines are so ubiquitous that B. Ruby Rich wryly noted that, in 1999, film’s “only lesbian happy ending involve[d] a portal into John Malkovich’s brain” (xxv).

Films involving a queer character’s tragic death aren’t necessarily bad or homophobic, but the persistent, minimally varying association of queerness with unnatural death is reductive and harmful in much the same way that the automatic association of HIV/AIDS with male homosexuality is reductive and harmful. Historically, moreover, these tropes have been cultural or legal requisites for representation to exist at all. To understand the reasons why the definition, production, and consumption of LGBTQ+ film and media remain so complicated today, this chapter devotes significant attention to socio-historical contexts. Because such context is essential to understanding the contemporary conditions and manifestations of LGBTQ+ film and media, the chapter focuses almost exclusively on the United States.

In the 1930s, the Motion Picture Production Code, often called the Hays Code (see below), established moral guidelines that films produced for public consumption had to follow. These guidelines prohibited or restricted the depiction of subject matter such as profanity, drug trafficking, religious effrontery, and childbirth scenes. Before the Code was imposed, however, films featured more LGBTQ+ content than one might expect. See, for example, Harry Beaumont’s The Broadway Melody (1929) and Cecil B. DeMille’s The Sign of the Cross (1932).

Pre-Code depictions of gay and lesbian characters were often caricatured and insulting: mincing, dissolute men and unflatteringly mannish women. These stereotyped conceptions of queerness reflect the era’s prevailing notions of inversion—the idea that queerness equated to femininity in a male body or vice versa. In sexologist Richard von Krafft-Ebing’s words, an invert possessed “the masculine soul, heaving in the female bosom” (399). Though these stereotypes persist today and have been explored in such venues as David Thorpe’s Do I Sound Gay? (2014), queer and feminist theory have helped dispel the assumption that biological sex (male or female) is inherently connected to gender (masculine or feminine), or indeed that there are only two sexes or two genders. During this period (and to the present day), Hollywood would often practice “straight-washing,” the systematic portrayal of queer characters and historical figures as heterosexual. For example, Night and Day (1946) depicted real-life composer Cole Porter as unproblematically heterosexual, as is the fictional Don Birnam in The Lost Weekend (1945).

The Production Code’s later years dovetailed with the Red Scare of the 1940s and 1950s and Senator Joseph McCarthy’s anti-Communist smear campaigns. Josh Howard’s documentary The Lavender Scare (2017) explores the wave of homophobia that arose in conjunction with the Red Scare. The 1950s were a time of extreme scrutiny for gay men and lesbians, leading to firings and other forms of discrimination against individuals suspected of same-sex inclinations. Homosexuality was viewed as dangerously subversive and associated with communist activity—a huge stigma during the Cold War years.

 

The Lavender Scare

“Camp” is an aesthetic that privileges poor taste, shock value, and irony, intentionally challenging the traditional attributes of high art. It is often characterized by showiness, extreme artifice, and tackiness—such as the popular pink flamingo lawn ornaments from which John Waters’s iconic film takes its name. Although largely ironic, camp can also emerge from earnestness gone awry, as in attempts at profundity that fall absurdly short of their targets. Paul Verhoeven’s Showgirls (1995) and Steven Antin’s Burlesque (2010) exemplify the latter. In “Notes on ‘Camp,’” the cultural critic Susan Sontag suggests that nothing in nature can be campy.[15]

Since the 1960s, the camp cinema of John Waters has delighted some audiences while repulsing others. Pink Flamingos (1972), Polyester (1981), and Hairspray (1988) lampoon the strictures and hypocrisies of the suburban United States, featuring the drag queen Divine and innumerable acts of subversion. Divine’s influence went far beyond Waters’s films, too. Legend holds Divine to be the inspiration for the villainous sea witch Ursula in Disney’s The Little Mermaid (1989). More recently, Liz Flahive and Carly Mensch’s comedy series GLOW (2017–2019) has embraced the campy 1980s phenomenon of the same name, giving fictional life to the erstwhile women’s wrestling venture full of caricatured personae and self-consciously over-the-top storylines.

 

Ben Saunders, “What Is Camp?”

The rise of independent film festivals such as Sundance and Telluride in the 1970s and 1980s spotlighted smaller productions that lacked the financial backing of major studios, from avant-garde work to indie narrative cinema. Following the liberation-oriented activism of the 1970s–1980s and then the HIV/AIDS crisis, a movement of unconventional, experimental, and unapologetic films emerged in the early 1990s. Rich termed this movement “New Queer Cinema,” describing it as one “favoring pastiche and appropriation, influenced by art, activism, and such new entities as music video. . . . It reinterpreted the link between the personal and the political envisioned by feminism [and] restaged the defiant activism pioneered at Stonewall” (xv).

New Queer Cinema films such as Gus Van Sant’s My Own Private Idaho (1991) and Derek Jarman’s Edward II (1991) featured overtly queer content, often focalized through outsider characters. Many also engaged with or alluded to the AIDS crisis, including Richard Fung’s 1991 Chinese Characters, Marlon Riggs’s 1989 Tongues Untied, Todd Haynes’s 1991 Poison and 1995 Safe, and Gregg Araki’s 1992 The Living End.

Cheryl Dunye’s mockumentary The Watermelon Woman (1996) calls out the erasure of Black lesbians in Hollywood and the persistence of racist film tropes over the years. The film follows Dunye’s character as she stages interviews with both fictitious and real-life lesbian activists, including Sarah Schulman and Camille Paglia. Jennie Livingston’s Paris Is Burning (1990) documents New York City ball culture, foregrounding Black and Latinx lives and communities involved in the dance vogue scene. Iconic as it has become, scholars including bell hooks and Judith Butler have questioned the film’s racial politics: Livingston, who is white and from a privileged background, arguably profits off a marginalized community, and critics have challenged the film’s unambivalent celebration of drag as a means of subversion and liberation.

Whereas New Queer Cinema was defined largely by the queer-identified directors, writers, and producers creating its films, LGBTQ+ films began to enter bigger markets in the early years of the 2000s. Ang Lee’s Brokeback Mountain (2005), for example, featured the (straight) A-list stars Heath Ledger and Jake Gyllenhaal as covert lovers. Films such as Julie Taymor’s Frida (2002), Lisa Cholodenko’s The Kids Are All Right (2010), Ryan Murphy’s The Normal Heart (2014), Morten Tyldum’s The Imitation Game (2014), and Barry Jenkins’s Moonlight (2016) have likewise featured well-known (and disproportionately straight) actors and achieved mainstream prominence, including major award nominations. Some critics have suggested that straight people playing LGBTQ+ roles is equivalent to white actors using black, yellow, brown, or red face to play BIPOC characters.

One of the predominant tropes in LGBTQ+ film and media is the Coming Out Story, exemplified by Lianna (1983), Saving Face (2004), Pariah (2011), Call Me By Your Name (2017), and Love, Simon (2018). These films focus primarily on the protagonist’s realization or disclosure of their queerness. Sexuality is framed as a confession or disclosure, something that a closeted character hides or denies until a dramatic outing scene, often the plot’s climax. Coming out stories are important, but it is also important to challenge the status of heterosexuality as the assumed default until a different orientation is declared.

Homonormativity establishes the bounds of acceptable queerness and that which deviates from it, often replicating other dominant social norms with regard to race, sex, class, and ability. For example, ABC’s popular Modern Family presents gay men (a married couple played by Jesse Tyler Ferguson and Eric Stonestreet) positively, but they are rendered respectable through other aspects of their identity: white, wealthy, monogamous, and constituents of a more or less traditionally structured nuclear family. The show’s message about queerness may therefore be read as “Look, we’re just like heterosexuals,” overriding rather than embracing difference.

Debates over homonormativity in film and television abound. For example, Glee provides numerous queer characters and storylines. Yet the show ultimately portrays an abundance of queer characters who are routinely victimized but nonetheless overarchingly happy and conformist, as though simply rolling with the punches eventually yields contentment. Moreover, LGBTQ+ people of color are still dramatically underrepresented.

Maria San Filippo and others have critiqued bisexual erasure or invisibility within LGBTQ+ cinema. Even when bisexual themes, characters, and storylines are present in film, San Filippo observes, they are typically referred to as gay, queer, or lesbian, terms that fail to acknowledge bisexuality as its own entity. Kevin Smith’s Chasing Amy (1997), Alfonso Cuarón’s Y tu mamá también (2001), David Lynch’s Mulholland Dr. (2001), Charles Herman-Wurmfeld’s Kissing Jessica Stein (2001), Ang Lee’s Brokeback Mountain, and Luca Guadagnino’s Call Me by Your Name all unambiguously depict both same-sex and different-sex relationships, yet they are seldom framed in terms of bisexual identity or desire.

Trans people are often excluded from mainstream (and independent) media, even from narratives specifically about trans lives. Among the films focused on trans individuals that have found commercial and critical success, many feature cisgender actors exclusively. Some examples include Hilary Swank in Boys Don’t Cry (1999), Felicity Huffman in Transamerica (2005), Jared Leto in Dallas Buyers Club (2013), and Eddie Redmayne in The Danish Girl (2015).

Laura Horak observes, too, that much writing on trans media focuses on representations of trans individuals rather than on trans authorship. Because being out in Hollywood has always posed professional and personal risks—from pigeonholing and blacklisting to physical violence—it’s impossible to know the full extent of sexual and gender diversity that has existed among filmmakers, performers, writers, and others.[4]

Hollywood Representation of Asians

As with other marginalized groups, Hollywood represented Asians via numerous cinematic stereotypes throughout film history. Arising from fears surrounding immigration from China after the Civil War, the concept of “yellow peril” reflected a backlash against Asians and led to legislation such as the Chinese Exclusion Act (1882). The yellow peril ideology held that Asians would “infect” the United States with drugs, prostitution, and gambling. It also paradoxically suggested that Asian countries planned to subjugate the West. Some historians have pointed to recent rhetoric surrounding COVID-19 as evidence that yellow peril ideology persists. Harry M. Benshoff and Sean Griffin argue that negative real-life treatment of Asians and Asian Americans found significant cinematic analogues. In particular, they observe that many persistent stock Asian characters found their genesis in this era. Some stock characters include the laundry worker, the “coolie” (a low-wage laborer), the Tong (gangsters often involved in “white slavery”), the inscrutable sage, and the Dragon Lady (exotic and desirable yet villainous) (140-147). Referencing Anna May Wong’s character in The Toll of the Sea (1922), the “lotus flower” (exotic, desirable, and submissive) is another stock character that still persists. Asian men were typically represented as either weak and subservient or mysterious and devious. Asian women generally were exoticized and sexualized. As Jeff Yang notes, until the 1960s Asian characters “were depicted in movies almost exclusively as demented buffoons, vicious thugs, alien outsiders, and servile sidekicks to heroic, charismatic non-Asian protagonists” (viii).

 

Act2EndRacism on yellow peril

 

The Toll of the Sea (1922)

 

Nancy Wang Yuen on the history of Asian representation on film

Despite the increase in Asian actors and storylines in the Hollywood films of the 1960s and 1970s, many of the stereotypes above continue to the present, supplemented with others, including the martial artist, dutiful child, model minority, and the perpetual, and linguistically challenged, foreigner.

 

The Take on the model minority trope

While some Asian and Asian American performers–such as Anna May Wong and Sessue Hayakawa–acted in early films, most starring “Asian” roles went to white actors. Such systemic “yellow face” persisted well beyond the 1970s: white actors from Mary Pickford, Bela Lugosi, and Katharine Hepburn to John Wayne, Mickey Rooney, and Emma Stone have all played Asian characters.

 

Documentary on Asian whitewashing

Cinematic Representations of Muslims

As with its portrayal of most marginalized groups, Hollywood relied heavily on stereotypes in its representation of Muslims. Generally, in its earliest days, cinematic depictions collapsed any distinction between Muslims as practitioners of the Islamic faith and “Arabs.” Typically, such characters were not presented positively, much less three-dimensionally, and Hollywood–relying on colonialist imagery and ideology–portrayed Middle Eastern men as crafty, decadent, greedy, and dangerous. Women, in contrast, generally appeared as exotic temptresses. Representations of faith–if they appeared at all–focused on architecture (minarets). The Sheik (1921) and The Thief of Bagdad (1924)–both of which used non-Middle Eastern actors as protagonists–exemplify this approach, the latter also including flying carpets and other magical objects. One of the few prominent Muslim actors of early Hollywood, Mohammed Hassan (Frank) Lackteen was relegated to playing miscreants (often in non-Muslim roles) like Hamid Bey or “Pawnee Killer” or stereotypes such as “Mustapha the Beggar.”

Mohammed Hassan “Frank” Lackteen (né Yachteen)

Elements of these depictions continued through the 1960s, with films focused more on white, Christian heroes’ experiences in Muslim nations. Most Muslim actors served as part of the background, and (generally Arab) characters were distinguished by their helpfulness to the protagonist. Those opposed to the hero recycled earlier villainous imagery, while those on the protagonist’s side were presented as loyal and willing to sacrifice themselves. We can see such representations in films such as Lawrence of Arabia (1962) and Khartoum (1966), the latter of which uses the decidedly non-Muslim actor Laurence Olivier in blackface as the Sudanese foe. Disney’s Aladdin (1992) and Iron Man (2008) are examples of later films that lean into the false “good” or “bad” dichotomy, and Three Kings (1999) is another entry that depicts the Muslim experience through the eyes of non-Muslim whites.

More recently, Hollywood has added to such stereotypes with the prince or business person with oil-related riches and, most perniciously, the fundamentalist (and often indistinguishable) terrorist motivated by nothing but the single vision of destroying the West. See Operation Thunderbolt (1977), The Delta Force (1986), The Hurt Locker (2008), Traitor (2008), and American Sniper (2014) for some examples of this Islamophobic trope. Terror-related depictions of Muslims tend to spike following geo-political crises such as the Iran hostage crisis, the Gulf War, and 9/11.

 

Excerpt from Reel Bad Arabs (2006)

 

Hollywood misrepresentation of Muslims

While still in dire need of a more diverse approach to Muslim characters, Hollywood has become more aware of its propensity to rely on a limited number of stereotypes. Significantly, more positive–and nuanced–representations of Muslim characters have started to appear, although generally in independent films rather than big-budget ones. For instance, films such as Americanish (2021), The Persian Version (2023), Breaking Fast (2020), and Undefined (2017) have challenged the predictability of mainstream portrayals of Muslims. Additionally, Muslim actor Mahershala Ali won an Academy Award for best supporting actor in 2017, and Riz Ahmed received the first-ever best actor nomination for a Muslim performer in 2020. Nonetheless, the dearth of Muslim-American directors and the lack of funding for Muslim-centric films continue to impact how Muslims appear on screen.

 

Americanish trailer

 

Breaking Fast trailer

 

The Persian Version extended preview

Disability on Film

While cinematic representation of disability no longer exclusively focuses on the shocking spectacles of the silent and early sound era, authentic and nuanced portrayals of disabled people still compete with stereotypes and pathologized representation. When they didn’t erase disability (particularly non-physical disability) altogether, early films tended to portray disabled people as pitiful (as in The Faithful Dog; or, True to the End [1907]), villainous (as in Phantom of the Opera [1925]), or disturbing (as in Freaks [1932]). Disability would often function as a plot device rather than as a reflection of reality, and characters such as Tiny Tim in A Christmas Carol (1938) rarely had any identity beyond their disability.

Possibly because of increased awareness of disabilities as a result of World War II, Hollywood occasionally–but still infrequently–depicted disabled characters in ways that transcended archetypes. The Best Years of Our Lives (1946), for example, includes the struggles of a returning veteran adapting to everyday life after losing his hands. Significantly–and departing from typical practice–the role was played by a real-life disabled veteran, Harold Russell, who won an Academy Award for his performance.

Another film presenting disability in a more rounded way was Johnny Belinda (1948). More typically, though, Jane Wyman, a non-deaf performer, played the deaf protagonist and won an Academy Award. Historically, Hollywood has praised non-disabled actors for taking on “challenging” roles centered on disability. This, of course, resulted (and still results) in fewer parts for disabled actors. Many films of this era focused on “overcoming” disability as an inspiration.

 

The Best Years of Our Lives

While most early depictions of disability focused on the body, from the 1960s on, films explored both the psychological dimensions of disability and non-visible disabilities. In both cases, Hollywood typically did not cast disabled actors in such roles. Further, most films of this type–such as The Three Faces of Eve (1957), Shock Corridor (1963), and One Flew Over the Cuckoo’s Nest (1975)–sensationalized their portraits. Many films (such as 1978’s Coming Home) treated disability as a metaphor for other socio-political topics. This approach–including the casting of non-disabled actors–continued even after deaf actress Marlee Matlin won an Academy Award for her role in Children of a Lesser God (1986). Films such as Forrest Gump (1994) and My Left Foot (1989) (both of which used non-disabled actors) either reprised stereotypes from earlier eras or aimed for “inspirational” messages focused on “overcoming” disabilities rather than living with them.

 

Children of a Lesser God

In recent years, Hollywood–perhaps pressured by activists–has placed more emphasis on both crafting more accurate roles and casting disabled actors, although plenty of exceptions still exist. For instance, while The Theory of Everything (2014) treated Stephen Hawking as a rounded individual with a disability, it still used an able-bodied actor for the part. Sound of Metal (2019) is another recent film that employed a non-deaf actor in the lead role, although it used disabled actors in supporting parts. The film also received some criticism for relying on “super hero” stereotypes of disabled people achieving success only through extraordinary effort. Some films–such as CODA (2021), which earned Troy Kotsur an Academy Award, and My Feral Heart (2016)–combine substantive storylines and disabled lead actors. Nonetheless, limited roles (including supporting roles) and scripts continue to be a challenge despite over one in four Americans having a disability. Disability also still figures prominently in the horror genre, suggesting that Hollywood continues to rely on stubborn stereotypes.

 

The Peanut Butter Falcon

 

Latinx Representations on Film

Given California’s history as a part of Mexico, it’s unsurprising that Latinx actors have been present from the earliest days of Hollywood film. As with many marginalized groups, however, early depictions relied on negative stereotypes, and most roles were minor. In the silent era, the use of intertitles mitigated the accent-related stereotypes and humor that arose with the advent of sound. Nevertheless, U.S. tensions with Mexico before and after World War I allowed filmmakers to portray Mexican characters in hostile ways with little comment. Hollywood also tended to level differences among Latinx cultures and developed a generic representation that ignored geographic and cultural diversity: Mexicans, Cubans, Puerto Ricans, and other groups were often treated as interchangeable. Some Latinx actors, such as Dolores del Río and Ramón Novarro, were able to succeed despite the lack of positive roles.

 

In Caliente (1935)

Charles Ramírez Berg identifies six common representations that manifested in silent films but persisted to the present: the bandido (“dirty … vicious, [and] cruel” [68]), the harlot (“lusty and hot-tempered” [70]), the male buffoon (uneducated, emotional, and ineffective [72]), the female clown (“silly and comical” [73]), the Latin lover (“eroticism … tinged with violence and danger” [76]), and the dark lady (“virginal, inscrutable, aristocratic–and erotically appealing” [76]). Frederick Luis Aldama and Christopher González identify a seventh stereotype that could be merged with some of the others: the “lazy” Mexican (58). Aldama and González point out that while these stereotypes began in films such as the vilely named Broncho Billy and the Greaser (1914), they were updated in succeeding generations and still appear in contemporary films such as Sicario (2015) and Casa de mi Padre (2012) (44).

 

 

 

Genesis of Hollywood’s Latinx stereotypes

 

The Western, in particular, served as the locus of misrepresentation, and many of the portrayals first established in the 1910s persisted through the films of the 1930s-1970s. Directors such as John Ford, Sam Peckinpah, Howard Hawks, Sergio Leone, and many others cemented and recycled the stereotypes discussed above. Nonetheless, Latinx actors such as José Ferrer (Oscar winner), Thomas Gomez (Oscar-nominated), Rita Moreno (Oscar winner), and Anthony Quinn (Oscar winner) received a high level of acclaim. Despite this, Hollywood persisted in offering leading “Latinx” roles to white actors in so-called brown face. Marlon Brando in Viva Zapata! (1952), Charlton Heston in Touch of Evil (1958), and Natalie Wood in West Side Story (1961) are but a few examples of this practice.

Starting in the 1940s, an additional stereotype, the “illegal immigrant,” began appearing in films such as Border Incident (1949). These negative portrayals were at times countered with positive images in films such as A Medal for Benny (1945) and Salt of the Earth (1954). The civil rights movement in the 1970s saw a continuation of these competing trends, but the increase of Latinx writers and directors such as Luis Valdez and Sylvia Morales provided an increased awareness of the effects of destructive media representation.

 

Salt of the Earth

While this trend continued with the high-profile work of directors such as Gregory Nava and Aurora Guerrero, the lack of opportunity persists despite a growing Latinx population and strong demographic trends. As with many other racial and ethnic groups, Latinx actors and directors remain vastly underrepresented in Hollywood. As we have seen throughout this chapter, the most efficient way to combat negative representations is by empowering filmmakers to tell their own stories on screen.

Zoot Suit (1981)

 

Mosquita y Mari (2012)

Indigenous Representation in Hollywood

As Benshoff and Griffin note, many cinematic stereotypes of Indigenous Americans find their sources in works that long pre-date the film industry. Among the most persistent of these are the “blood thirsty savage” and the “noble savage” (120). Early English settlers (or invaders) such as William Bradford and Mary Rowlandson set the tone for the former with descriptions of encounters with barbarous demons bent on destroying God’s people. Over a hundred years later–after Indigenous people were no longer viewed as competitors for land and resources–a competing image appeared in the works of fiction writers such as John Augustus Stone, James Fenimore Cooper, and Lydia Maria Child: the innately good (almost childlike) native co-existing with the natural environment. Both stereotypes continued with one of film’s earliest genres, the Western. Nevertheless, some early films departed from this false binary and established what Benshoff and Griffin term the Indian Story genre, a type of film that sometimes portrayed Indigenous Americans in a more sympathetic light (123). While most such films fetishized native customs, some, such as the films of James Young Deer, provided a more nuanced view.

 

Hollywood stereotypes of Native Americans

 

James Young Deer’s White Fawn’s Devotion (1910)

From the 1930s to the 1960s, however, thousands of Hollywood Westerns (the most popular genre of the time) relied on the false binary of the “noble or ignoble savage” that presented Indigenous Americans as either docile children of nature or irrationally hostile (Marubbio and Buffalohead 4). M. Elise Marubbio and Eric L. Buffalohead observe that such representational shorthand is profitable (3-4). Rather than present Native Americans as individuals, Hollywood treats them either collectively (and anonymously) as attacking warriors or else as stereotypes such as the drunk, the drudge, or the sexually available princess. The negative portrayal–the bloodthirsty, settler-killing savage–predominated and, as Jacquelyn Kilpatrick notes, “achieved lasting fame” (19) as a worthy opponent who was “savage and crafty and endowed with physical prowess” (27).

Kilpatrick also observes that such portrayals suited the national narrative of manifest destiny, as they focused on the ideal of individual, rather than collective, land that became “valuable” only through the work of the (white) settlers as they overcame impediments that included sneak attacks from Indigenous warriors (42, 45). Such depictions also ignored tribal and regional differences, mixing and matching clothing, weapons, tactics, music, and other cultural practices at will (51). Indeed, the widespread practice of “red face” often dispensed with the need for Indigenous actors, save for minor (and usually non-speaking) roles. Even into the 2000s, white actors would often assume pivotal “Indigenous” roles.

Kilpatrick notes that Native Americans often served as an “all-purpose metaphor,” whether as an unfairly persecuted people in the anti-communist 1950s or as a “mystical people” that “revered the earth” during the counter-cultural 1960s and 1970s (58, 65). Even more sympathetic portrayals, such as Dances with Wolves (1990), Little Big Man (1970), or Killers of the Flower Moon (2023), generally used white characters as focal points to tell the story of their Indigenous characters. Further, such films rarely resisted stereotypes completely. For example, in Dances with Wolves a “vanishing” and gentle tribe (Lakota Sioux) contrasts with the hostile Pawnee: noble savage vs. ignoble savage once more. For another example, Killers of the Flower Moon, which did use Indigenous actors and consultants, concentrates mainly on the white, male protagonist and antagonist and decenters its Native American performers.

Starting in the late 1990s with Chris Eyre’s Smoke Signals (1998), Indigenous filmmakers have been interrogating and replacing Hollywood stereotypes, although most such productions are lower-budget independent films. Sherman Alexie (The Business of Fancydancing [2002]), Randy Redroad (The Doe Boy [2001]), Blackhorse Lowe (Fukry [2019]), and Shelley Niro (Café Daughter [2023]) are a few directors who have challenged the Hollywood status quo in order to produce more nuanced representations of contemporary Indigenous people. Kilpatrick notes that in order to overturn centuries of stereotypes, Native Americans need to find roles not only as actors and directors but all across the industry so that they can present Indigenous characters “with a respect that does not preclude laughter, tears, pleasure, or even subversion” (233).

 

Tai Leclaire on what Hollywood gets wrong about Native Americans

 

Smoke Signals deconstructs the “Hollywood Indian”

 

Shelley Niro previews her film Café Daughter

 

Social Problem Films

As we have seen in Chapter Fourteen, documentary films often tackle important contemporary issues. While some documentaries receive public acclaim and reach a wide audience, they rarely see the type of box office receipts that a popular fiction film will earn. Consequently, the power of fiction films to raise consciousness of an issue, start discussions, and effect change often far outpaces that of their nonfictional counterparts. Recognizing this reality, many directors of fiction films choose to engage with social problems. Greta Gerwig, for example, underscored institutional sexism in Barbie (2023), a major hit watched by millions of viewers. Gerwig used comedy to explore the effects of patriarchy on women in the same year that the documentary War on Women (2023) investigated many of the same issues in a more serious vein. While the award-winning documentary no doubt advances more sophisticated and well-researched arguments about sexism than does Barbie, it’s indisputable that America Ferrera’s monologue had more of a societal impact.

 

War on Women

 

Barbie

As Gerwig demonstrates in Barbie, directors can approach social issues even in films with an overall comic tone. Many, if not most, films engaged with social justice topics, however, employ a more serious tone. While documentaries are often straightforward about highlighting a particular injustice or concern, fictional social problem films will sometimes take a more indirect approach by getting their audiences to invest in a character or community that then becomes entangled in a wider societal problem such as racism, pollution, drug use, or political corruption. By having viewers relate to a character, such films can often bypass initial audience resistance to a topic about which they may have strong ideological views. For example, a viewer who claims to be “anti-regulation” might ultimately cheer for Erin Brockovich as she exposes a corporation’s complicity in covering up the carcinogenic effects of its polluted water.

 

Erin Brockovich (2000)

 

Social problem films will often bring hidden issues into the light, which can lead to public dialogue and, potentially, policy or behavioral changes. These films can range widely in tone and tactic, including moral panic (e.g., Reefer Madness, 1936), melodrama (e.g., The Lost Weekend, 1945), comedy (e.g., Guess Who’s Coming to Dinner, 1967), dramedy (e.g., Do the Right Thing, 1989), didacticism (e.g., Moolaadé, 2004), sincerity (e.g., There Is No Evil, 2020), and more. Such films demonstrate how cinema can function as a powerful ideological tool. Indeed, some films, such as JFK (1991) and The Snake Pit (1948), even led to legislation.

 

Reefer Madness

 

There Is No Evil

 

The Snake Pit

Censorship and Cinema

Pre-1934 United States

Once films began to tackle subjects more complex than people exiting buildings, public officials and citizens began to weigh in on what should–and should not–be shown to the public. Because most films don’t require a literate audience, politicians feared that vulnerable audiences might be corrupted or unduly influenced by the movies. In the United States, government regulation of films started at the local level, with regional film boards in places such as Illinois and Pennsylvania. With the rise of film’s popularity–and with directors willing to push the limits of propriety–however, some religious and civic organizations demanded more universal regulation of film content. In particular, some individuals opposed themes and images dealing with sexuality, drug use, race, religion, and unpunished crime. Generally, such calls for censorship infantilized film audiences, implying that less educated viewers would be unable to resist the temptations depicted on the screen.

In 1912, the United States Congress passed the Sims Act, which targeted boxing films. Critics of such films argued that in addition to their depictions of violence, they also promoted an activity that was illegal in most states–and thus violated interstate commerce rules (Biesen 7-8). Additionally, the Supreme Court affirmed the ability of state censorship boards to regulate films with its 1915 Mutual v. Ohio decision. Sheri Chinen Biesen suggests that this decision may have spurred Hollywood to consider self-regulation as a more palatable alternative to state or federal mandates (10). Indeed, by 1921 over thirty states had censorship boards, and the studios responded with a voluntary set of 13 guidelines based on Jesse Lasky’s recommended 14-point restrictions (11).

Despite the guidelines, arbiters of public morality continued to complain about Hollywood’s films throughout the 1920s. Several public scandals involving famous actors sparked public furor, and, afraid of the possibility of official government regulation (a distinct possibility because of the Mutual v. Ohio ruling), executives from various studios decided that a better course of action would be to self-regulate. Initially (in 1927), this involved a set of “Don’ts and Be Carefuls” (prohibitions against “sex perversion” and “miscegenation,” for instance), but many directors and companies ignored them.

With the advent of sound films increasing movie attendance–and, allegedly, film’s ability to corrupt the public–the Motion Picture Producers and Distributors of America agreed to codify a series of prohibited subjects ranging from sex and profanity to graphic violence and the ridicule of religion. From 1930 to 1934, the nominal enforcer of this code, Will Hays, a former postmaster general, was initially unable to dissuade filmmakers from interpreting the Production Code (popularly referred to as the “Hays Code”) as they saw fit. As one can imagine, many directors had very broad interpretations, indeed. Known as the “pre-Code era” despite the Code technically being in force, the years 1930-1934 brought forth a host of film titles that almost openly mocked the Production Code restrictions. Sample titles include Night Nurse (1931), Bad Girl (1931), Freaks (1932), I’m No Angel (1933), Baby Face (1933), and Madam Satan (1930).

[Video: Cinema Cities on Pre-Code Hollywood]

[Video: Baby Face]

[Video: Night Nurse]

1934-1968 United States

With Hollywood virtually ignoring the Code, many self-appointed guardians of public morality saw the Hays Code as little more than a publicity stunt. Various organizations put increasing pressure on Hollywood to deliver on its promise, and with the arrival of Joseph Breen, it did. Breen actively called out screenplays for breaches of the Code and demanded that producers halt filming until their scripts conformed. From 1934 until shortly after Breen’s retirement in 1954, the Production Code ruled Hollywood, although resourceful directors found ways to skirt it with euphemism, symbolism, and creative editing. Breen’s zeal in enforcing the Code, however, resulted in the demise of many of the state and local censorship boards (Biesen 41).

During the 1940s and 1950s, the so-called Red Scare emboldened the House Committee on Un-American Activities (HUAC) to seek out film personnel its members felt were sympathetic to communism, and Hollywood responded by blacklisting a number of prominent writers, directors, and actors. Ironically, some filmmakers were accused of harboring pro-Soviet sentiments because of films they made during World War II, when the Soviet Union was an American ally and the government discouraged movies critical of the USSR. As a result, many post-HUAC filmmakers avoided themes that were critical–or could be perceived as critical–of the United States.


[Video: Joseph Breen on the Production Code]

[Video: Clara and Julia Kuperberg on the history of the Production Code]

[Video: Turner Classic Movies on the Hollywood Blacklist]

With Breen’s retirement, key legal challenges–especially 1952’s Joseph Burstyn, Inc. v. Wilson, the so-called Miracle decision, which expanded First Amendment protections for movies–and a sea change in cultural values, the Code withered away in the 1950s and 1960s, although it did not formally end until 1968, with the adoption of the voluntary MPAA rating system (with its familiar G, PG, and other designations).

Post-1968 United States

Censorship of American films did not end with the demise of the Production Code. After a series of court decisions, including 1957’s Roth v. United States, had loosened restrictions on sexual content, 1973’s Miller v. California clarified that obscenity would be judged by local community standards. Inevitably, this ruling invited renewed attempts at censorship, although such attempts rarely succeeded. A 1973 attempt in Phoenix, Arizona, to ban The Last Picture Show (1971) ultimately failed, for instance, while a federal court overturned an Oklahoma judge’s 1997 ruling that The Tin Drum (1979) was obscene.

In the post-Code era, censorship tends to be local and less systematic, but there are some exceptions, such as Florida’s 2022 Parental Rights in Education law (AKA the “Don’t Say Gay” law), which has intimidated educators into not showing films that could conceivably violate its provisions. Such self-censorship is often the result of these laws: the threat of legal action (and its concomitant expense and anxiety) functions as a proxy for actual censorship.

One should note that boycotts–attempts by private citizens or groups to discourage the screening of a film–do not fall under the umbrella of official censorship. For instance, numerous Catholic and evangelical organizations and clergy led boycotts of Martin Scorsese’s The Last Temptation of Christ (1988), but while some theater chains decided not to run the film, many others freely did so. Similarly, some disability rights groups boycotted Tropic Thunder (2008) for its portrayal of developmental disabilities (surprisingly, the NAACP did not protest the film’s use of blackface), but the film played widely and was a financial success. Boycotts function across the political spectrum as a form of individual speech rather than as a government-imposed mandate.

Global Censorship

As in the United States, nations across the globe imposed censorship on the movie industry, and the qualms leading to restrictions similarly ranged from sex and violence to religion and matters of national and personal identity. With the rise of fiction films in the early 1900s, countries as diverse as Canada, China, and Mexico implemented laws and boards designed to rein in the perceived excesses and threats of cinema. British colonies such as India and Nigeria, moreover, quickly followed the lead of Britain’s 1909 Cinematograph Act and the 1912 formation of the British Board of Film Censors.

Interestingly, as regimes changed in countries such as Russia, Germany, and China, censorship boards and laws often allowed previously prohibited topics only to ban films sympathetic to their opponents. In Russia, for instance, the Tsarist government banned films critical of its government and of the Church, but in the Soviet era censors proscribed films deemed supportive of the earlier regime (Biltereyst and Vande Winkel 97). Similarly, the Weimar Republic and National Socialist (Nazi) governments in Germany had competing views of what constituted an attack on the national character (85-86). As governments succeeded one another, they often repealed censorship laws and replaced them with new ones; post-Mussolini Italy, post-Shah Iran, and post-colonial India serve as but three examples. Topics targeted by current government bans range from how films portray the LGBTQ+ community to how they represent a country’s past. Many countries have also limited films (especially American movies) that conflict with their notions of national identity; China, South Korea, and Spain are examples of countries that still maintain such quotas. Some countries also employ technological means to censor international streaming content. In spite of sometimes vicious reprisals, filmmakers across the international community continue to use the medium to challenge political and cultural norms, even if their films cannot legally be viewed in their home countries.


A note about sources

This textbook reuses, revises, and remixes multiple OER texts according to their Creative Commons licensing. We indicate which text we are adapting with a footnote citation before and after each section of text. We also employ a number of non-OER sources, which we indicate using standard MLA citation. Full source information for both OER and non-OER sources appears in the works cited. Additionally, video clips link to their original sources.

  1. https://uark.pressbooks.pub/movingpictures/part/part-ii/
  2. https://uark.pressbooks.pub/movingpictures/part/part-ii/
  3. https://milnepublishing.geneseo.edu/introlgbtqstudies/chapter/screening-lgbtq/
  4. https://milnepublishing.geneseo.edu/introlgbtqstudies/chapter/screening-lgbtq

License

FILM 110: Survey of Film Copyright © by James Decker. All Rights Reserved.
