In the end I didn’t bother to assemble a video clip from my slit scan experiment. Many of the frames were over-exposed, which I hadn’t spotted during the shoot because I’d been concentrating on trying to move the camera as smoothly as possible. I must have slowed down for the later frames.
Anyway, here’s the first frame. The remainder should have been similar but with the image moved slightly under the slit each time.
I wouldn’t say that the test was a complete failure, as it gave me lots to think about. It didn’t, however, produce anything like what I’d hoped it would.
I should be clear about how I did this. I wasn’t using the approach where different parts of an image are taken from different frames and are therefore separated by time, leading to peculiar warping effects (which seems to be the basis for most examples available on the web, especially those created through After Effects, Quartz Composer or Processing).
My technique was based entirely on an article by Martin Kelly, who used to create slit scans professionally. He describes it as “an extremely simplified form of the highly complex sequences needed for 2001: A Space Odyssey”. The diagrams included in his article show the diagonal smearing of light from the centre of the screen to the edge.
Give or take the adjustments required to line my images up properly, that’s what mine look like – diagonal smears. They bear no relation to the varied textures visible in 2001. It isn’t just a matter of the simple disturbance described in the short documentary among the DVD extras, because the smears move smoothly outwards from one frame to the next, so there is continuity.
I’ve already achieved the effect of my analogue approach using Processing, and it would be simple to add random imperfections to simulate the variations in brightness resulting from the low tech, hand-made nature. I can’t see how to leap from this to the Doug Trumbull look though, so I’ve ordered a back issue of Cinefex 85 which has a “comprehensive retrospective” of the film. The magazine is coming from America and will take several weeks to arrive, so I’ll turn to other things in the meantime.
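For what it’s worth, those random imperfections could be faked along these lines. This is a rough Python sketch rather than my actual Processing code, and the function name and gain figures are purely illustrative: a slowly drifting overall gain (the tiring hands) plus a little per-row flicker (the horizontal bands).

```python
import random

def add_exposure_drift(frame, max_gain=0.3, seed=None):
    """Simulate uneven hand-cranked exposure on one frame.

    `frame` is a list of rows of brightness values in 0.0-1.0.
    A slowly drifting gain runs down the frame, plus a small
    per-row flicker, mimicking the bright/dark horizontal bands
    of a hand-moved camera.  Figures are illustrative only.
    """
    rng = random.Random(seed)
    gain = 1.0
    out = []
    for row in frame:
        gain += rng.uniform(-0.05, 0.05)                  # slow drift
        gain = max(1.0 - max_gain, min(1.0 + max_gain, gain))
        flicker = rng.uniform(-0.03, 0.03)                # per-row band
        out.append([min(1.0, max(0.0, v * (gain + flicker))) for v in row])
    return out
```

Passing a fixed `seed` makes the “imperfections” repeatable, which matters if you want the same noise across test renders.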
I still haven’t had a chance to compile the brief video clip of my slit scan experiment on Wednesday, but in the meantime, here’s a glimpse of my low-tech rig. Note the extensive use of gaffer tape to hold the plastic box against the enlarger head. Inside the box there are several pieces of thick card to hold the camera roughly in place, and in the base of the box there is a hole cut with a craft knife.
The enlarger head is moved by two wheels at the base of the vertical bracket. It’s not designed for smooth motion, and the wheels are awkward to reach, especially with the light box in the way.
If I use this technique again (and I’m far from certain that I will – to be discussed in another post), I’ll probably experiment with a camera tripod above a smaller light box.
Today I finally got round to experimenting with an analogue slit-scan setup. I started toying with this idea just over a year ago, tinkered with it in Processing, and have been actively preparing for this analogue version for the last few days.
I printed a large abstract image found on the web onto an A4 size transparent sheet, placed it on an A1 size light table, then covered it with a large sheet of dark paper into which I’d cut a narrow slit, just narrower than the width of the image.
I placed all of these under an old photographic enlarger that is securely bolted to the wall in the darkroom at work. I cut a hole in the base of a plastic box for the camera lens to poke through, then used gaffer tape to hold the box against the lens mounting on the enlarger.
Testing the rig proved rather laborious. When the enlarger head was high, I had to stand on a chair to review the test image on the camera’s LCD screen, so a live feed to a laptop would have been useful. In the end, I had the aperture set at f16, started the movement at 50cm above the base of the enlarger and stopped when the enlarger head couldn’t go any lower. Even then, I had to raise the light table on boxes so that the enlarger head finished close enough to the image.
My Heath Robinson rig did what it had to do, but it was far from perfect. The widest part of the lens was too wide to fit through the hole, so I had to take the lens off the camera body when I inserted the camera into the box, then reattach the lens through the hole. This was made even more awkward by having to feed the shutter release cable in between the strips of gaffer tape. As a result, I had to switch off the Auto Power Off setting on the camera, as it was too awkward to keep re-waking the camera before each test shot. Furthermore, there was no way of fixing the camera in place so that it would slot into exactly the same orientation in the hole in the bottom of the box.
Still, these were merely nuisances rather than serious flaws. I could, if I were going to be using this kit often enough to make it worthwhile, arrange things better and in such a way that the various parts could be locked down to avoid undesired movement.
Even so, there are still too many variables with this approach. One or two might have given it an acceptable hand-made appearance rather than a sterile digital look, but even the few frames I captured differed too much.
As you’d expect, there was no motor to raise or lower the enlarger head, so I had to do it manually. Not only was it difficult to maintain a constant speed of camera movement during each photograph (which led to horizontal bands of brighter and darker patches), but it proved impossible to maintain constant overall exposure for each frame. My hands soon got tired and I slowed down, so the later frames were exposed for longer and were therefore brighter.
There is a more fundamental problem with this whole approach, however. I’m not convinced that this is really how the original slit scans were created, but I’ll leave discussion of that to another blog post. In the meantime, I’ll go away and compile a brief clip of my first attempt.
We went to Biddulph Grange last week, and I took the opportunity to indulge in some (oh all right then, a lot of) photography using my 300mm lens. It’s a long time since I’ve taken more than just a few quick shots, and I found that I was rather rusty, to the extent that I’d forgotten just how shallow a depth of field that lens has.
In many of the shots, I was using the perspective-flattening characteristics of the lens to concentrate on patterns among the strong geometric shapes of square-cut hedges or alternating light and dark foliage, all emphasised by the contrasts caused by strong, bright sunlight casting deep shadows. On reviewing the results back at home, however, I was disappointed to find that so little of each shot was in focus, and the effect wasn’t as strong as I’d hoped.
Still, that same characteristic worked reasonably well in the shots of flowers.
It may have been true at some stage that the camera never lies, but that time has definitely gone.
Yesterday’s post about kaleidoscopes and light drawings reminded me of these mosaic images of body parts and the Flickr group Camera Toss, where people take photographs with the shutter held open while throwing their cameras around…
Things are starting to look up. I’ve arranged a few activities, including a trip to Edinburgh, and the theatre visit and painting course I booked a while ago are both imminent. I’ve also started thinking once more about my slitscan sketch in Processing.
I watched a brief explanation on DVD of how Doug Trumbull created the slitscan sequences in ‘2001: A Space Odyssey’. It turns out that he achieved the mottled effect not by filming the slit itself but by filming its reflection on a roughly textured mirrored cylinder. I realise this is pedantic, but since random noise was introduced into the sequence, it’s impossible for anyone, whatever their claims, to decode the original images used in the slitscan sequences. These re-creations still have the noise in them.
It would be possible to introduce an equivalent noise to my Processing slitscan sketch, but I’m still keen to try my hand at an analogue version. That will have unavoidably irregular movement of the camera on the vertical axis anyway, which may be sufficient distortion.
It’s nearly three years after the event, but I only recently came across one of the best time-lapse films I’ve ever seen: Noah takes a photo of himself every day for six years.
The impact is partly due to Noah’s perseverance, partly to the manipulative soundtrack, partly to the unchanging facial expression and partly to the identical positioning of the eyes in each shot, but the main impact, and what makes it almost unbearably poignant, is the aging that occurs over what is perhaps a quarter of Noah’s life so far.
He’s still taking photographs of himself: have a look at his website.
I can’t remember how it started, but my attention was caught recently by the idea of slitscans. The term seems to be used indiscriminately for different but related techniques, so I’ve tried to categorise them for my own understanding.
Firstly, there’s the creation of a sequence of images from a backlit static original. That’s how the stargate sequence for ‘2001: A Space Odyssey’ was created, as well as the original Dr Who title sequence. It’s a laborious approach, where, for each frame of the sequence, the camera shutter is held open while the camera is lowered towards the static image. Only a thin line of the original is visible through a slit (hence the name), and the light coming through it falls on different parts of the film as the camera’s position alters. For subsequent frames, the camera is raised again, the film wound on, the original image moved slightly so the next part of it is visible through the slit, and the process is repeated.
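That process can be simulated digitally. Here’s a rough Python sketch of one such long exposure (everything here – the function name, the magnification figures, the sweep direction – is my own illustration, not the original rig): as the “camera” approaches, the slit’s image is magnified and sweeps down the film, accumulating light as it goes, which is what produces the characteristic smear from the centre outwards.

```python
def expose_frame(slit, height, steps=64):
    """Simulate one long-exposure slit-scan frame.

    `slit` is a list of brightness values (0.0-1.0) read through the
    slit for this frame.  While the shutter is open the camera moves
    towards the backlit artwork, so the slit's image grows and sweeps
    down the film, accumulating light as it goes.
    """
    width = len(slit)
    film = [[0.0] * width for _ in range(height)]
    for step in range(steps):
        t = step / (steps - 1)             # 0.0 (far) .. 1.0 (near)
        mag = 1.0 + 2.0 * t                # magnification grows as we approach
        row = int(t * (height - 1))        # slit image sweeps down the frame
        for x in range(width):
            # sample the slit at reduced scale to mimic magnification,
            # so features drift outwards from the centre as mag grows
            src = int((x - width / 2) / mag + width / 2)
            if 0 <= src < width:
                film[row][x] += slit[src] / steps
    return film
```

For the next frame you would move the source image slightly under the slit, read a new `slit` line, and expose again – the digital equivalent of winding the film on.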
Intriguingly, someone has reverse-engineered the stargate sequence to produce some of the static images that must have been used to create the effects.
The next version of the technique creates a single image from a sequence of images. A common use is to capture a series of timelapse images of a scene then take adjacent slices from each one and combine them. The result is an image of a scene where different parts of it represent different times. The teeming void has examples of a street scene and the sky.
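In code, this version of the technique is almost a one-liner: column x of the composite comes from column x of frame x, so time runs left to right across the final image. A minimal Python sketch (treating a frame as a list of rows of pixel values; the function name is just illustrative):

```python
def slit_composite(frames):
    """Build one still from a time-lapse sequence.

    Column x of the result is taken from column x of frame x, so
    different parts of the final image represent different times.
    Assumes all frames share the same dimensions and that there are
    at least as many frames as the image is wide.
    """
    height = len(frames[0])
    width = len(frames[0][0])
    return [[frames[x][y][x] for x in range(width)] for y in range(height)]
```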
A variation on this approach captures a single image from a changing scene by using a box with a moving slit in it in front of the camera. Alternatively, though it’s more restricted, you could move things while scanning them.
Finally there is the creation of a sequence of images from a sequence of images. This seems to be particularly popular because of the weird effects you get from simple movement. It’s a development of the previous technique, where each frame of the output sequence consists of slices of different frames in the starting sequence. You can watch a test video to compare the input and output frames and see what’s happening more clearly. Some video editing programs provide filters to achieve this, and people have supplied code for use with Processing and Quartz Composer. It can be impressive, but the novelty value of this approach wears off very quickly.
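The sequence-from-sequence idea reduces to a simple time offset per column. A minimal Python sketch (illustrative names only): column x of output frame f is taken from input frame f + x, so each output frame mixes moments in time, which is what produces the warping on anything that moves.

```python
def time_warp(frames):
    """Time-displacement slit scan over a whole sequence.

    In output frame f, column x is taken from input frame f + x, so
    different parts of each output frame show different moments.
    Assumes all frames share the same dimensions; the output is
    shorter than the input by (width - 1) frames.
    """
    height = len(frames[0])
    width = len(frames[0][0])
    n = len(frames)
    out = []
    for f in range(n - width + 1):
        out.append([[frames[f + x][y][x] for x in range(width)]
                    for y in range(height)])
    return out
```

A static background comes through unchanged (every frame’s column x is identical), which is why only moving objects warp.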
I’ll develop this topic further in some way, but in the meantime, if you’re interested in delving deeper into the subject, there’s an extensive collection of examples assembled by Golan Levin.