“Privacy is rarely lost in one fell swoop. It is usually eroded over time, little bits dissolving almost imperceptibly until we finally begin to notice how much is gone.”*…
… And now, indeed, we’re beginning to notice. Hana Lee Goldin surveys the state of play– who’s buying our personal information, what they’re using it for, and how the system works behind the screen– and considers our options…
Sometime in the mid-2000s, most of us started handing over pieces of ourselves to the internet without giving the exchange a second thought. We created email accounts, signed up for social media, bought things online, downloaded apps, swiped loyalty cards, connected fitness trackers, stored photos in the cloud, and agreed to terms of service that almost none of us have ever read in full. We did this thousands of times over two decades and counting, and each interaction felt small enough to be inconsequential.
But the accumulation is enormous. More than 6 billion people now use the internet, and each one makes an estimated 5,000 digital interactions per day. Most of those interactions happen without our conscious awareness: a GPS ping, a page load, an app opening, a browser cookie refreshing, a device checking in with a cell tower. The average person in 2010 made an estimated 298 digital interactions per day. In fifteen years, that number multiplied more than sixteenfold. Those digital interactions produce records that can persist indefinitely: stored, copied, indexed, bought, sold, and combined with other records to build profiles of extraordinary detail.
If we’ve been online since the late 1990s or early 2000s, our data footprint can include social media accounts we’ve created, online purchases we’ve made, forums we’ve posted in, loyalty cards we’ve used, and apps we’ve installed going back decades. Some of that information lives on platforms we’ve long forgotten. Some of it was collected by companies that have since been acquired or dissolved, with our data potentially passing to successor entities we’ve never heard of. The digital life most of us have been living for 15 to 25 years has produced a layered, evolving archive that only grows more valuable to the people who buy and sell it as time goes on.
Most of us sense that something is off about all of this. In a 2023 survey, Pew Research found that roughly eight in ten Americans feel they have little to no control over the data companies collect about them, 71% are concerned about government data use, and 67% say they understand little to nothing about what companies are doing with their personal information. The concern is real and widespread. And so is the feeling of helplessness: 60% of Americans believe it’s impossible to go through daily life without having their data tracked. The unease is there. What’s missing is a clear picture of what’s happening on the other side of the transaction…
[Goldin explains what data is being collected and shared, and by whom; how the data is managed and trafficked; how it’s being used (by insurance and financial companies, employers and landlords, retailers, AI companies, governments, and criminals); and how “inferred” data is used to augment the “hard” data. It’s chilling. She then puts the issue into context, and discusses what we can– and cannot– do about it…]
… The philosopher Helen Nissenbaum has a framework for what’s happening here: contextual integrity. The idea is that privacy isn’t about secrecy. We share information willingly all the time, when the context fits. We tell our doctor about a health condition because we expect that information to stay within the medical relationship. We search for symptoms on a health website because we assume that search won’t follow us into an insurance application. In the current data economy, that’s exactly the kind of boundary that dissolves, because the company collecting the data and the company buying it are operating in completely different contexts.
This is an information literacy problem as much as a privacy problem. Information literacy is usually framed around consumption: evaluating sources, questioning claims, recognizing bias in what we read and watch. But every time we interact with a digital service, we’re also producing information: generating a record that will be read, interpreted, scored, and acted on by organizations we may never interact with directly. Many of us have gotten better at questioning the information that comes at us: checking sources, noticing bias, and recognizing when something is trying to sell us a conclusion. But we haven’t developed equivalent habits around the information that flows from us: where it goes after we hand it over, who reads the record, what incentives they have, and what conclusions they draw. The gap between what we think we’re consenting to and what we’ve agreed to in practice is where the real exposure lives, and the system is designed to keep that gap invisible.
One of the reasons the “so what” question is hard to answer with action is that opting out of data collection often means opting out of participation. Declining a social media platform’s terms of service means not using the platform. Refusing location permissions can mean losing access to navigation, ride-sharing, weather, and delivery apps. Choosing not to create an account can mean paying more, seeing less, or being locked out of services that have become essential infrastructure for work, communication, healthcare, banking, and education.
The architecture of digital consent treats data sharing as a binary: agree to the terms or don’t use the product. There’s rarely a middle option that allows us to use a service while limiting what data gets collected and where it goes. The result is that the “choice” to share data often functions as a condition of entry into daily life rather than an informed negotiation. We’re not handing over data because we’ve weighed the tradeoff and decided it’s fair. We’re handing it over because the alternative is exclusion from services we rely on.
This is the structural context behind the Pew Research Center finding that more than half of Americans believe it’s impossible to go through daily life without being tracked. For many of us, it isn’t possible, at least not without significant inconvenience or sacrifice. The question isn’t whether we can avoid data collection entirely, because for the vast majority of people who participate in modern life, the answer is no. The question is whether we can make more informed decisions within the constraints we’re operating in, and whether the system can be pushed – through regulation, through market pressure, through better tools – toward something more transparent.
California’s Delete Act, which took effect in January 2026, is the strongest example of what’s emerging. It created a platform called DROP (Delete Request and Opt-Out Platform) that lets California residents submit a single deletion request to every registered data broker in the state. Brokers are required to process those requests, maintain suppression lists to prevent re-collection, and check the platform regularly for new requests. The European Union’s GDPR provides similar individual rights, and a handful of other U.S. states have enacted their own privacy laws with varying levels of protection. But the coverage is uneven: what’s available to a California or EU resident may not extend to someone in a state without comparable legislation.
Some services now automate parts of the opt-out process, submitting removal requests to dozens of brokers on our behalf. These can’t erase the data trail entirely, but they can narrow what’s actively available for sale.
Beyond deletion, there are smaller choices that reduce how much new data we generate. We can audit which apps have permission to track our location or access our contacts, since a surprising amount of behavioral data comes from apps that don’t need those permissions to function. We can treat “sign in with Google” and “sign in with Facebook” buttons as what they are: data-sharing agreements that can link a new service to an existing profile. And we can glance at the first few lines of a privacy policy before agreeing, looking for some version of “we may share your information with our partners,” where “partners” just means anyone willing to pay.
Most of us don’t read privacy policies, and the policies aren’t built to be read. They average thousands of words of dense legal language filled with terms like “legitimate interest,” “data processor,” and “de-identified data.” Studies consistently put them at a late high school to early college reading level (grade 12 to 14), but the difficulty goes beyond reading level: the concepts are abstract, the volume of agreements we encounter is enormous, and the design of the consent process itself pushes us through as fast as possible. Pre-checked boxes, auto-scrolling agreement windows, “accept all” buttons positioned prominently while “customize settings” options sit behind additional clicks. These are dark patterns, design choices that make the path of least resistance the path of maximum data sharing.
The result is a gap between the moment we share a piece of information and the moment that information shapes a decision about our lives. We don’t connect the app to the insurance premium or the loyalty card to the rental application because the chain of custody between them is long, complex, and designed to stay out of view.
The same critical thinking we’ve learned to apply to the information flowing toward us (checking sources, questioning claims, looking for bias) applies to the information flowing from us: who’s collecting this, what will they do with it, who else will see it, and what did we agree to? The difference is that in the data economy, we’re the product being evaluated, and the questions are being asked about us rather than by us.
So can we get it back? Not entirely. Data that’s already been collected, copied, sold, and processed across multiple systems can’t be fully recalled. What we can do is reduce what’s actively available for sale, slow the flow of new data going forward, and take advantage of legal tools that didn’t exist a few years ago. The archive of our past digital lives is too distributed to undo, but the file is still being written, and we have more say over the next page than we did over the last twenty years of them.
So what if they have our data? The tradeoff extends well beyond better ads. It reaches into the prices we’re charged, the credit we’re offered, the jobs we’re considered for, the insurance premiums we pay, the AI systems trained on our behavior, the accuracy of the profiles used to make decisions about our lives, and the degree to which government agencies can monitor our movements without a warrant. Every new service we sign up for, every permission we grant, and every terms-of-service agreement we accept adds another layer to that file. We can’t close the file entirely, but we can make more informed decisions about what goes into it next…
Eminently worth reading in full: “So What if They Have My Data?”
See also: “Why Do We Care So Much About Privacy?” (source of the image above) in which Louis Menand suggests that our concern should be with the “weaponization” of data…
* Daniel J. Solove, Nothing to Hide: The False Tradeoff Between Privacy and Security
###
As we reinforce our rights, we might recall that it was on this date in 1996 that the internet-as-we’ve-come-to-know-it broke big into the mainstream: Yahoo! launched the national campaign that asked “Do You Yahoo?,” advertising its web-based search service on national television. The campaign was created by ad agency Black Rocket and Yahoo marketing head Karen Edwards (whose many honors for the work include induction into the Advertising Hall of Achievement).
An early spot from the campaign…
“The arts are not a way to make a living. They are a very human way of making life more bearable.”*…
… Still, there are bills to be paid. Mathilde Montpetit (and here) on how the young Claude Monet made bank…
At the age of fifteen, Claude Monet was, by his own account, one of the most successful artists in Le Havre. Crowds would gather in the Norman port city to gawk at the pictures he sold through a framing shop: not paintings of haystacks or of the sea or water lilies, but slightly cruel caricatures of local bigwigs and minor celebrities. He had already learned to commercialize, charging his customers 20 francs (around 200€ in today’s money). “If I had continued”, he claimed to an interviewer in Le Temps almost fifty years later, “I would have been a millionaire.”
Spurred by profits, the young Monet was productive, creating up to seven or eight of these caricatures a day; a small collection of them is now held at the Art Institute of Chicago, most donated by the former mayor Carter Harrison IV (1860–1953). The French art historian Rodolphe Walter has claimed that his caricatures constituted a “clandestine apprenticeship”, the first attempts by a son of Le Havre’s bourgeois shipbuilders to make his way in the art world.
The earliest are anonymous: the identities of The Man in the Small Hat or The Man with the Big Cigar are now lost, although the framing shop devotees may well have been able to name them. Some of the works are imitations, like the 1859 drawing of the French journalist Auguste Vacquerie (1819–1895) that Monet seems to have copied from Nadar (1820–1910), probably the period’s most famous caricaturist.
Monet’s own 1858 caricature of Léon Manchon, the treasurer of Le Havre’s Société des amis des arts, captures his subject’s appearance but also, in the background, both his love of the arts and his work as a notary. Most fantastical is the 1858 caricature of Jules Didier (1831–1914), which shows the 1857 winner of the Prix de Rome as a “Butterfly Man” being led on a leash by a dog. Monet scholars remain divided as to the symbolic meaning of the iconography, though more obviously derisive is the drawing of a dejected fellow applicant to an 1858 Le Havre art subsidy, Henri Cassinelli. Monet has captioned it “Rufus Croutinelli”: a slightly forced pun on “croute”, meaning a daub of paint. Monet didn’t receive the subsidy either.
Sixty-year-old Monet’s claims about how he could have made his young fortune probably had more to do with his later difficulties in selling Impressionism than the actual fortunes to be made in portraits-charge, but it was the roughly 2,000 francs (20,000€) from selling these caricatures that allowed him to, against his father’s wishes, move to Paris and begin training as an artist. (He also received a pension from his wealthy aunt Marie-Jeanne Lecadre, with whom he had been living since his mother’s death in 1857.)
Perhaps it helped him in other ways as well. In the Le Temps interview, Monet claimed that it was while admiring his admirers at the framing shop window that he first encountered the work of his mentor Eugène Boudin (1824–1898), whose paintings were also hung there. Boudin would later take him en plein air for the first time. Perhaps, too, there’s something in the quickness of the caricature that speaks to what Impressionism would become — a desire to capture not just the literal appearance of a thing, but its true essence…
“Doing Impressions: Monet’s Early Caricatures (ca. late 1850s)” from @mathildegm.bsky.social in @publicdomainrev.bsky.social.
Re: the other end of Monet’s career, readers in (or visiting) the Bay Area might appreciate “Monet and Venice,” over a hundred works– mostly the fruits of Monet’s only visit to the City of Canals, but spiced with Venetian views from artists including Renoir, Sargent, and Canaletto– on display at the de Young Museum in San Francisco through July 26.
* Kurt Vonnegut
###
As we cherish cartoons, we might send pointedly insightful birthday greetings to Peter Fluck; he was born on this date in 1941. An artist, caricaturist, and puppeteer, he was half of the partnership known as Luck and Flaw (with Roger Law), creators of the epochal British satirical TV puppet show Spitting Image.
The show ran from 1984 through 1996. (It was revived, with a different crew, in 2020.) Here’s a BBC appreciation of the original…
“Two possibilities exist: Either we are alone in the Universe or we are not. Both are equally terrifying.”*…
Happy Charles Dodgson’s (Lewis Carroll’s) Birthday!
Just when we thought that there was nothing else about which to worry, a different kind of “alien” concern: Helen McCaw, an economist and former senior analyst at the Bank of England, has written to her former employer with a warning…
The UK must plan for a financial crisis that would be triggered if the US government announces that aliens exist, a former Bank of England expert has said.
Helen McCaw, who served as a senior analyst in financial security at the UK’s central bank, has written to Andrew Bailey, the Bank of England’s governor, urging him to set out contingencies in case the White House ever confirms the existence of alien life, according to The Times.
Ms McCaw, who worked for the Bank of England for 10 years until 2012, said politicians and bankers can no longer afford to dismiss talk of alien life, and warned a declaration of this nature could trigger bank collapses…
Read on: “Bank of England must plan for a financial crisis triggered by aliens, says former policy expert,” from @the-independent.com.
* often attributed to Arthur C. Clarke (but likely from Stanley Kubrick, quoting Carl Sagan [who was riffing on a Walt Kelly Pogo quote])
###
As we acclimate to chaos, we might recall that it was on this date in 2021 that Resident Alien debuted (on Syfy).
Resident Alien is based on a comic book of the same name [by Peter Hogan and Steve Parkhouse]. Created by Chris Sheridan, the series stars Alan Tudyk as an alien who crash-lands in Patience, Colorado, and immediately goes on a killing spree that includes the town’s doctor, Harry Vanderspeigle.
Taking on Harry’s form, the alien continues killing, thinking that doing so will be good for planet Earth. But then he is overcome with human emotions and starts questioning the morality of it all…
– source

“But I don’t want to go among mad people,” Alice remarked.
“Oh, you can’t help that,” said the Cat: “we’re all mad here. I’m mad. You’re mad.”
“How do you know I’m mad?” said Alice.
“You must be,” said the Cat, “or you wouldn’t have come here.”
– Lewis Carroll
“Everybody experiences far more than he understands. Yet it is experience, rather than understanding, that influences behavior, especially in collective matters of media and technology, where the individual is almost inevitably unaware of their effects upon him.”*…

In the early 1970s, Marshall McLuhan and his son Eric set out to discover whether there might be general principles of technology: attributes and effects common to all products of human innovation, to all of these artificial extensions of ourselves. Eric’s son, Andrew McLuhan, shares their findings…
… Toward the end of Marshall McLuhan’s life (a life that ended before his 70th birthday), Avant Garde magazine asked him what he considered his greatest achievement. His reply?
“I consider my greatest achievement is the discovery that all human artifacts, all the extensions of man, are patterned structurally in the mode of the word. Whether it is a medium like radio, a bull dozer, or a safety pin; whether it is the word or a law of science, all these utterings and outerings of man have a four-part structure which is that of metaphor itself. I will illustrate this discovery from the character of money, which:
(a) enhances the speed of exchange
(b) obsolesces barter
(c) retrieves potlatch (conspicuous waste) and
(d) when pushed to its limits, flips or reverses its character into credit.
A book of these things is due to appear, titled ‘The Laws of the Media’.”
But no one was interested in publishing it. It wasn’t published until 1988, when Eric McLuhan finally got someone – University of Toronto Press – to put it out as ‘Laws of Media: The New Science.’ The subtitle was a deliberate nod to Francis Bacon (Novum Organum) and Giambattista Vico (Scienza Nuova), of which tradition the McLuhans felt their work was part.
I have noticed more people using the laws of media, or the ‘tetrad’ (group of four) as it’s called, lately.
The laws of media can’t tell you everything about any technology, but they give you four reliable places from which to begin to explore what any technology is and what it does – another way of saying ‘the medium is the message.’ In particular, it’s a way of examining the form of a thing and not just its content. The content of a medium, what we do with it and pay attention to, is always both the smaller part of the situation and the less affective area. In Understanding Media, McLuhan brilliantly paraphrases T. S. Eliot when he describes content as the juicy piece of meat carried by the burglar to distract the watchdog of the mind. The content keeps us busy and holds our attention, while the media do their work rearranging us, our lives, our world. To enlist Mary Poppins, content is the spoonful of sugar that helps the medicine go down.
The four things the McLuhans discovered are that:
Any given technology enhances or amplifies some aspect of us. We create tools to do something we already do faster, more easily, or more efficiently. Gloves, to save our hands. Computers, to calculate. The telephone, so that our voice carries across the world.
“It is a persistent theme of this book that all technologies are extensions of our physical and nervous systems to increase power and speed.” (‘Understanding Media: The Extensions of Man’ 1964)
It obsolesces: it upsets, displaces, or disrupts something already in a dominant position. The Linotype machine put 90% of typesetters out of work. Twitter now breaks the news that television and radio networks used to.
“Now today, we speak of the book as obsolete. This means the book is acquiring ever new uses in the age of Xerox and the age of paperbacks.” (Marshall and Eric McLuhan in conversation, 1971)
It retrieves, or brings back something from the past, however near or far, in a new form. Text messaging put a telegraph in your pocket. The man in the car, the knight in shining armour.
“What recurrence or retrieval of earlier actions and services is brought into play simultaneously by the new form? What older, previously obsolesced ground is brought back and inheres in the new form?” (‘Laws of Media: The New Science’, 1988)
When pushed past a point, it tends to flip or reverse its utility or characteristics. A glass of wine or two can make for a good time, relieve stress, grease the social wheels. A few bottles… quite the opposite. Information assists informed, timely decisions; too much information leads to overload and paralysis.
“When pushed to the limits of its potential the new form will tend to reverse what had been its original characteristics. What is the reversal potential of the new form?” (‘Laws of Media: The New Science’, 1988)
For example, here’s a tetrad from Laws of Media:
Xerox:
enhances: the speed of the printing press
obsolesces: the assembly-line book
reverses into: everybody becomes a publisher
retrieves: the oral tradition
Laws of Media: The New Science (Marshall and Eric McLuhan, 1988)
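(For readers who like to tinker, here is a minimal sketch, in Python, of the tetrad as a simple data structure, with the Xerox example above restated. The class and field names are my own labels for the four questions; nothing below comes from the McLuhans beyond the four categories themselves.)

from dataclasses import dataclass

@dataclass
class Tetrad:
    """The four questions the McLuhans ask of any medium."""
    medium: str
    enhances: str        # what does it amplify or intensify?
    obsolesces: str      # what does it displace from a dominant position?
    retrieves: str       # what older form does it bring back?
    reverses_into: str   # what does it flip into when pushed to its limit?

    def describe(self) -> str:
        return (
            f"{self.medium}:\n"
            f"  enhances:      {self.enhances}\n"
            f"  obsolesces:    {self.obsolesces}\n"
            f"  retrieves:     {self.retrieves}\n"
            f"  reverses into: {self.reverses_into}"
        )

# The Xerox tetrad from Laws of Media, restated:
xerox = Tetrad(
    medium="Xerox",
    enhances="the speed of the printing press",
    obsolesces="the assembly-line book",
    retrieves="the oral tradition",
    reverses_into="everybody becomes a publisher",
)
print(xerox.describe())

Nothing deep is happening there; it is just a reminder that the tetrad is a checklist one can fill in for any medium worth examining.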
While media can be complex in nature and do many things, Marshall and Eric found that all media, without exception, do these four things. As remarkable as this discovery is – so remarkable that Marshall McLuhan considered it his most impressive achievement – almost equally remarkable is that so few people know about it.
They found four things which applied in all cases, but never stopped looking for a fifth. I know my father Eric was still keeping an eye or ear out for a fifth common dimension, something that would apply without exception to all media. A few people have ventured one thing or another but they did not satisfy my father’s criteria…
… The ‘laws of media’ can’t tell you everything about any medium, but they do give us something remarkable: predictability. We know that anything we can come up with will do these four things. It will amplify some part of us. It will make something obsolete. It will bring something back from the past in a new form. It will, when pushed, flip. This is an incredible advantage when it comes to new media. It gives us a real head start on being able to anticipate the effects of new forms on us and our world…
… [Per the title quote above] The point of the tetrad, the point of media studies at all, is to make media visible. To force us to pay attention to what’s happening all around us, sometimes only slightly beneath our awareness, sometimes buried deeply underneath. The true user experience is what we don’t notice but which shapes us all the same…
More (including how to “make” tetrads yourself): “Laws of (New) Media.”
* Marshall McLuhan, Understanding Media: The Extensions of Man, 1964
###
As we engage with the emergent, we might recall that it was on this date in 1993 that ABC and CBS simultaneously broadcast their own movies based on the Amy Fisher story, with ABC’s starring Drew Barrymore and CBS’s starring Alyssa Milano. NBC had already scooped the other networks, airing its own version (starring Noelle Parker) about six days prior.

“I think every age has a medium that talks to it more eloquently than the others. In the 19th century it was symphonic music and the novel. For various technical and artistic reasons, film became that eloquent medium for the 20th century.”*…
… and few filmmakers have been as fluent as the remarkable Walter Murch. In the context of a review of Murch’s recent book, Suddenly Something Clicked: The Languages of Film Editing and Sound Design, John Lahr offers an appreciation…
Walter Murch, the film editor and sound designer Francis Ford Coppola has described as ‘kind of like the film world’s one intellectual’, has what he terms standfleisch. He has spent most of his almost sixty years in the film industry standing his lanky frame in front of various editing consoles. ‘Why do surgeons, orchestra conductors and cooks all stand to do their jobs?’ he asks in Suddenly Something Clicked, a piñata of ideas and anecdotes about his life and work. It sheds light on his forensic craft, his distinctive way of thinking about editing and the making of many of the major films he’s worked on, including Apocalypse Now (1979), the Godfather trilogy (1972-90), The Conversation (1974), American Graffiti (1973) and the 1998 recut of Orson Welles’s Touch of Evil.
To Murch, who has won three Academy Awards and been nominated for six more, film editing is a sensual ‘full-body’ experience: ‘a kind of dance, a choreography of images and sounds in the flow of time, forged in movement, eventually crystallising into permanence’. This embrace is a kind of erotic surrender to the unique metabolism of each story and its performers, a way of ‘drenching yourself in the sensibility of the film to the point where you’re alive to the smallest details’. ‘To watch Murch at work,’ Michael Ondaatje writes in The Conversations (2002), ‘is to see him delve into almost invisible specifics, where he harnesses and moves the bones or arteries of a scene, relocating them so they will alter the look of the features above the skin.’ The Conversations, a book of interviews with Murch, grew out of his work on the film version of Ondaatje’s novel The English Patient. ‘Most of the work he does is going to affect us subliminally,’ Ondaatje writes. ‘There is no showing off here.’ In the filigree of image and sound there comes a moment when, Murch says, he disappears into the film: ‘The shots, the emotions, the story seem to take over. Sometimes – the best times – this process reaches the point where I can look at the scene and say, “I didn’t have anything to do with that – it just created itself.”’
How heavy is this editorial heavy-lifting? Murch, of course, has done the maths. In the tale of the tape, Apocalypse Now is the undisputed champ. A single frame of 35 mm film weighs ‘five-thousandths of an ounce’; a reel of film – eleven minutes of picture and sound – weighs eleven pounds, or a pound a minute. By that calculation, the 1,250,000 feet of film shot by Coppola weighed more than 14,000 pounds or, as Murch puts it, ‘seven tons of film that had to be broken down, boxed, catalogued, put in accessible racks, moved around from editor to editor’. The average ratio of footage shot to footage used in a feature film is 20:1; the ratio for Apocalypse Now was 95:1. Over four years, Murch and his team got the film down from 236 hours to 2 hours and 27 minutes. This is as much bushwhacking as editing, finding the film’s story as well as its grammar, a feat Murch also accomplished for Coppola in The Conversation, which he restructured and essentially rewrote by cutting a third of the scenes…
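(A quick back-of-the-envelope check, for the curious: taking Murch’s “pound a minute” at face value and assuming that 35 mm film at 24 frames per second runs at the standard 90 feet per minute, the figures quoted above hang together. The few lines of Python below are my own arithmetic, not anything from the book.)

# A rough check of the Apocalypse Now footage figures quoted above.
# Assumption (mine, not Murch's): 35 mm film at 24 frames per second runs at
# 90 feet per minute (16 frames per foot, so 24 * 60 / 16 = 90).
feet_shot = 1_250_000          # feet of film shot by Coppola
feet_per_minute = 90
pounds_per_minute = 1          # Murch's "a pound a minute" (picture plus sound)

minutes_shot = feet_shot / feet_per_minute
pounds = minutes_shot * pounds_per_minute
print(f"{minutes_shot:,.0f} minutes of footage (~{minutes_shot / 60:,.0f} hours)")
print(f"~{pounds:,.0f} pounds, roughly {pounds / 2000:.1f} short tons")

# Shooting ratio: 236 hours of footage against a 2-hour-27-minute final cut.
ratio = (236 * 60) / (2 * 60 + 27)
print(f"shoot-to-final ratio: about {ratio:.0f}:1")

(The small differences from the 236 hours, seven tons, and 95:1 cited above are just the slack in the “pound a minute” rule of thumb.)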
… If Murch is full of wonder at film’s storytelling possibilities, the inventors of the moving picture were not. ‘The cinema is an invention without a future,’ Louis Lumière declared. The cinematograph, which he invented with his brother, Auguste, was a camera that recorded, developed and projected film onto a screen (one of the first being a bedsheet in a Russian brothel). Thomas Edison, though more interested in sound than image, developed the Kinetograph (an early motion-picture camera) and the Kinetoscope, which projected images that could be seen through peepholes. The breakthrough, which turned a 19th-century novelty into the 20th century’s only new art form, was the arrival of montage in 1901. The transition from one shot to another transformed motion pictures from a literal medium into a psychological and poetic one. Movies could now jump back and forth in time and space, ‘the cinematic equivalent to the discovery of flight’, as Murch sees it. Out of its illusion of naturalistic flow – 24 frames a projected second – a new grammar of seeing and of storytelling evolved: close-ups, dissolves, long shots, fade-outs.
‘“Filmic” juxtapositions are taking place in the real world not only when we dream but also when we are awake,’ Murch wrote in his book from 1992, In the Blink of an Eye. This explains why audiences find edited film a surprisingly familiar experience. Every blink is a thought. Every thought is a cut. In support of this belief, Murch quotes John Huston: ‘Look at that lamp across the room. Now look back at me. Look back at that lamp. Now look back at me again. Do you see what you did? You blinked. Those are cuts. Your mind cut the scene. First you behold the lamp. Cut. Then you behold me.’ In cinema, Murch says, ‘at the moment you decide to cut, what you are saying is, in effect, “I am going to bring this idea to an end and start something new.”’…
… Murch jostles between metaphysics and neurology in his discussion of film editing, but biology is his link to theorising about sound design. Hearing develops four and a half months after conception. ‘We luxuriate in a continuous bath of sounds: the song of our mother’s voice, the swash of her breathing, the piping of her intestines, the timpani of her heart,’ he writes. ‘The almost industrial intensity of this womb sound’ is about 75 decibels, ‘equivalent to … the cabin of a cruising passenger jet’. After birth, however, sound is gradually demoted. ‘Whatever virtues sound brings to film are largely perceived and appreciated by the audience in visual terms. The better the sound, the better the image.’ This fusing of sound and image is a sleight of mind in which the brain projects dimensionality onto the screen and makes it seem as if it had come from the image in the first place. ‘We do not see and hear a film, we hear/see/hear/see it.’
By his own admission, the phenomenal success of The Godfather triggered a revival of the metaphorical use of layered sound. Murch’s masterstroke of sound design was the addition – not indicated in the original script – of a rising metallic screech, as if from an overhead train, as Michael Corleone prepares to assassinate Sollozzo and Captain McCluskey. ‘The rumbling and piercing metallic scream,’ he writes, ‘is not linked directly to anything seen on screen, and so the audience is made to wonder at least momentarily, if perhaps only subconsciously, “What is this?”’ Because it is detached from the image, the scream becomes a clue to Michael’s state of mind; it comes and goes, then grows louder and louder until he finally pulls out his gun. After he shoots, the sound stops abruptly.
‘Even for the most well-prepared of directors, there are limits to the imagination and memory,’ Murch writes. ‘It is the editor’s job to propose alternative scenarios as bait.’ In Apocalypse Now, the sampan massacre and, more important, the restoration of Captain Willard’s narration to the final script are down to Murch. ‘Willard is an observer – he is our eyes and ears in this diabolical landscape – and for most of the journey, until he gets to the Kurtz compound, he is a mostly silent passenger,’ Murch explains. ‘The audience judges character by comparing words spoken with actions taken, but if there are few words and fewer actions, the character has to emerge from somewhere else: out of an interior, quasi-novelistic voice.’ Following this editorial impulse, Murch dug out Willard’s voiceover from the original screenplay and recorded it himself, ‘lacing it selectively over the first half-hour of film’. His pitch worked. Willard’s voiceover was reinstated (as rewritten by Michael Herr), a crucial adjustment that spoke to the accuracy of Coppola’s dictum that a film director is the ‘ringmaster of a circus that’s inventing itself’.
Suddenly Something Clicked was conceived by Murch as a ‘three-braided rope – theory, practice and history’, a sort of intellectual high-wire act of technical expertise and personal anecdote. Like Murch himself, the book is unique. It’s designed for the reader to play with. Want to read Maxim Gorky’s reaction to seeing his first motion picture? Or see Orson Welles’s lost 58-page memo to the Universal Studios executives who took control of his production of Touch of Evil? Or hear the six pre-mixes and the final mix of the helicopters landing to ‘Ride of the Valkyries’ in Apocalypse Now? Or watch an animated restructuring of the scenes in The Conversation? QR codes beside the text provide detours into these subjects and more. Similarly, there are chyrons of adages from other filmmakers and artists – ‘fortunes’, Murch calls them – at the bottom of every even-numbered page, intended as a kind of dialectical chorus to counterpoint or contradict his opinions. His high-spirited advice to film editors holds true for his readers: ‘Good luck! Make discoveries!’…
Eminently worth reading in full: “Every Blink,” from @lrb.co.uk.
As his book(s) on film and editing would suggest, Murch is generous in sharing his insights. That’s true too at a more personal level, as he’s made time to advise and mentor younger, less-experienced filmmakers (as your correspondent can attest).
Apropos Coppola’s characterization of him, Murch is a man of wide interests– to many of which, as reported in “Walter just knows stuff” (source of the image above) and “Transits, Translations, and Secret Patterns: When Lawrence Weschler Met Walter Murch,” he’s made important contributions. Oh, and he’s also a literary translator.
* Walter Murch
###
As we juxtapose, we might spare a thought for an earlier cinematic pioneer, Hal Roach; he died on this date in 1992. A film and television producer, director, and screenwriter, and founder of the eponymous Hal Roach Studios, he was active in the industry from the 1910s to the 1990s. He is best known for producing a number of early media-franchise successes, including the Laurel and Hardy franchise, Harold Lloyd’s early films, the films of entertainer Charley Chase, and the Our Gang (a.k.a. “The Little Rascals”) short-film comedy series.