I was on a crunch to finish the latest draft of the novel, but now it’s in, and I’m slowly catching up on everything else. Saturday we harvested the remaining squash and bell peppers from our garden, and earlier in the week we planted garlic for next year. Perhaps the one diversion, aside from pleasure reading and a planned getaway to the Smokies, is that I’ve been shooting film and developing it at home.
The process is much less complicated than I’d expected, thanks to the encouragement of a friend’s twice-monthly artists’ show-and-tell, and various YouTube tutorials. In my mind I’d conflated developing (the fixing of shot images on the roll of film) with the more-romantic process of printing (think: darkrooms, red lamps, tongs and chemical trays, slick enlargements hung up to dry).
You still need chemicals for the former, but no dedicated space. In the dark, you load the film into a small tank that lets liquids in but keeps light out; the rest of the process happens out in broad daylight. You also need a film scanner to digitize the negatives, but you can get a decent one, as I did, for just north of a hundred bucks.
I can hardly express the joy and surprise I felt when the first images from my first roll (one that’d been sitting unfinished in an old SLR for two years) appeared in full color on the screen. Listen to film photographers talk about why they prefer film over digital, and you’ll often hear them cite the slowness of its analog process – the way they have to more carefully plan and visualize their shot before releasing the shutter.
And it’s true! Greater foresight is needed, greater familiarity with light and lens and film stock – especially in the absence of other foresight-assisting tools like a working light meter.
But slowness, as I’m learning, is also baked into what happens after the shutter is released. Once I’d developed a couple of rolls on my own, the next photo I shot on my phone startled me; I’d taken for granted that here, effectively, was a photograph developed and printed on the spot. A process that in my home setup took hours (including time for drying the film and scanning it) was flattened into milliseconds.
That’s not to say you can’t be pleasantly surprised when reviewing a digital image, but the force of the surprise is an order of magnitude less. The surprise cuts both ways: photos that turn out better than expected, and those that turn out worse. Imagine my disappointment on seeing the image below and discovering that, after touring a bourbon distillery, my hand was not as steady as I’d thought:
Surprise is contingent on forgetting, and with digital, you are not afforded the same chance to forget. Sometimes this is a tradeoff you want to make, for the sake of getting the image right. Other times it might not be.
Last week, Apple announced, along with a new high-end iPhone, an update to its suite of computational photography algorithms. These algorithms analyze what you’re photographing and blend multiple shots on a pixel-by-pixel basis to better represent light and color – less an intelligent camera than an intelligent film. You don’t see it happening, of course. You just see, near-instantaneously, the resulting image. Computational photography subsumes yet another part of the process: the editing photographers once did to compensate for limitations in film stocks and sensors, and to better express the scene and subject as they’d envisioned – or remembered – it.
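To make the idea concrete (and this is only a toy illustration, not Apple’s actual pipeline, which involves alignment, tone mapping, and much else): the simplest form of multi-frame blending is a pixel-wise average of a burst of exposures, which suppresses sensor noise roughly in proportion to the square root of the number of frames.

```python
import numpy as np

# A naive sketch of pixel-by-pixel multi-frame blending.
# Assumptions: the frames are already perfectly aligned, and the
# "scene", noise level, and frame count are invented for illustration.
rng = np.random.default_rng(0)

scene = np.full((64, 64), 128.0)  # the "true" scene brightness
# Eight noisy exposures of the same scene (Gaussian sensor noise)
frames = [scene + rng.normal(0, 20, scene.shape) for _ in range(8)]

# Pixel-wise average of the burst
blended = np.mean(frames, axis=0)

# The blended frame sits much closer to the true scene than any
# single noisy exposure does, on average.
single_err = np.abs(frames[0] - scene).mean()
blend_err = np.abs(blended - scene).mean()
```

Real computational photography goes far beyond this averaging trick, of course – weighting frames by sharpness, merging different exposure lengths, segmenting faces and skies – but the core move is the same: many quick captures, fused invisibly into one.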
As is so often the case with technology, photographic or otherwise: to those who possess a knowledge of what came before, the new technology is a marvel.
And to those without that knowledge, an obfuscation.