It’s not quite slit scan, but while photographing the village duck pond on my way home from my most recent night photography expedition, I tried zooming in and out while holding the shutter open:
Last night I read more of ‘Brian Eno: Visual Music’ by Christopher Scoates and came across Eno’s use of televisions as light boxes and his looping recordings of music with Daniel Lanois. There was also a description, photograph and diagram of ‘Shutter Interface’, the 1975 work by Paul Sharits, which involved four projectors each showing a loop of single-colour frames that overlapped on the wall, creating a mix of hues. Each loop had one black frame to introduce a flicker that interrupted at different moments in the four cycles.
The films are all out of phase/sync and therefore a multitude of variational states of interactions between them is set in (potentially perpetual) motion. For Sharits, the fades and dissolves were “‘active’ punctuation for the ‘sentences’ being visually enunciated”, and in their variable syntax they recall a Chomskyan notion of grammar.
The reference to Chomsky may seem high-falutin’ but it refers back to an earlier passage which discussed the linguist’s theory of
“…linguistic competence in which he argued that language has an infinite set of sentence combinations, which became known as ‘generative grammar’.”
It occurred to me that creating an equivalent work in Processing would be very simple, so I immediately started writing one, though my initial thought of using an OOP approach may have been slightly over the top; that depends on how far I might want to take this. In these days of Processing, Arduino, LEDs and ubiquitous computers, it seems odd to think of Eno describing television as “..the most controllable light source that had ever been invented…”, but the availability of simple alternatives doesn’t diminish his work.
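My Processing sketch isn’t shown here, but the core of the Sharits idea is simple enough to note down. Here’s a rough stand-in in Python (the loop lengths and colours are invented for the example): four loops of different, co-prime lengths, each ending in one black flicker frame, mixed where the projections overlap.

```python
# Four colour loops of different lengths, each with one black frame.
# Co-prime lengths mean the flickers drift against each other and the
# combined pattern only repeats every 7 * 9 * 11 * 13 = 9009 steps.
LOOPS = [
    {"length": 7,  "colour": (255, 0, 0)},    # red loop
    {"length": 9,  "colour": (0, 255, 0)},    # green loop
    {"length": 11, "colour": (0, 0, 255)},    # blue loop
    {"length": 13, "colour": (255, 255, 0)},  # yellow loop
]

def frame_colour(loop, t):
    """Colour shown by one projector at time-step t.
    The final frame of each loop is black, producing the flicker."""
    if t % loop["length"] == loop["length"] - 1:
        return (0, 0, 0)
    return loop["colour"]

def mixed_colour(t):
    """Average of the four overlapping projections at time-step t."""
    totals = [0, 0, 0]
    for loop in LOOPS:
        c = frame_colour(loop, t)
        for i in range(3):
            totals[i] += c[i]
    return tuple(v // len(LOOPS) for v in totals)
```

Drawing `mixed_colour(frameCount)` as a full-screen rectangle each frame gives the overlap region; the four single-loop colours give the outer regions.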
Furthermore, this nocturnal burst of programming spurred me on to resume work on my abandoned path-following Saffron Mandala sketch and my as-yet unstarted slitscan sketch. From past experience I’m wary of promising imminent posts on these sketches, because there are other things to deal with, such as work, family, food and house, and I need to catch up on sleep too. But these are now my projects to focus on.
The copy of Cinefex (Issue 85, April 2001) which I ordered recently from America has arrived. I bought it in the hope that its long article on ‘2001: A Space Odyssey’ would contain a thorough description of how Douglas Trumbull created the Star Gate sequences, since the limited amount I could find on the Internet wasn’t detailed enough. And now I know why.
The fascinating article discusses many aspects of the filming and describes some of the slitscan techniques at length (pp110–113), but it’s hard to grasp exactly how it was done. I suspect that I’ll have to watch the film and re-read the article repeatedly before I can work out how to do something similar. Still, it prompted me to look again at the pioneering abstract animation of John Whitney, such as Catalog (1961) and Arabesque (1975), and following those links reintroduced me to Karl Sims, whose low-tech three-pendulum rotary harmonograph looks like a fruitful source for conversion into Processing sketches.
Perhaps the appeal of these pioneers is that the special effects in films like ‘Inception‘ are beyond the scale of a small group of dedicated people working on a film like ‘2001’, so they feel inhuman, a degree of perfection beyond the ‘uncanny valley‘ of robotics.
In the end I didn’t bother to assemble a video clip from my slit scan experiment. Many of the frames were over-exposed, which I hadn’t spotted during the shoot because I’d been concentrating on trying to move the camera as smoothly as possible. I must have slowed down for the later frames.
Anyway, here’s the first frame. The remainder should have been similar but with the image moved slightly under the slit each time.
I wouldn’t say that the test was a complete failure, as it gave me lots to think about. It didn’t, however, produce anything like what I had hoped for.
I should be clear about how I did this. I wasn’t using the approach where different parts of an image are taken from different frames and are therefore separated by time, leading to peculiar warping effects (which seems to be the basis for most examples available on the web, especially those created through After Effects, Quartz Composer or Processing).
My technique was based entirely on an article by Martin Kelly who used to create slit scans professionally. He describes it as “an extremely simplified form of the highly complex sequences needed for 2001: A Space Odyssey”. The diagrams included in his article show the diagonal smearing of light from the centre of the screen to the edge.
Give or take the adjustments required to line my images up properly, that’s what mine looks like – diagonal smears. They bear no relation to the varied textures visible in ‘2001’. It isn’t just a matter of the simple disturbance described in the short documentary among the DVD extras, because the textures move smoothly outwards from one frame to the next, so there is continuity.
I’ve already achieved the effect of my analogue approach using Processing, and it would be simple to add random imperfections to simulate the variations in brightness resulting from the rig’s low-tech, hand-made nature. I can’t see how to leap from this to the Doug Trumbull look, though, so I’ve ordered a back issue of Cinefex 85, which has a “comprehensive retrospective” of the film. The magazine is coming from America and will take several weeks to arrive, so I’ll turn to other things in the meantime.
I still haven’t had a chance to compile the brief video clip of my slit scan experiment on Wednesday, but in the meantime, here’s a glimpse of my low-tech rig. Note the extensive use of gaffer tape to hold the plastic box against the enlarger head. Inside the box there are several pieces of thick card to hold the camera roughly in place, and in the base of the box there is a hole cut with a craft knife.
The enlarger head is moved by two wheels at the base of the vertical bracket. It’s not designed for smooth motion, and the wheels are awkward to reach, especially with the light box in the way.
If I use this technique again (and I’m far from certain that I will – to be discussed in another post), I’ll probably experiment with a camera tripod above a smaller light box.
Today I finally got round to experimenting with an analogue slit-scan setup. I started toying with this idea just over a year ago, tinkered with it in Processing, and have been actively preparing for this analogue version over the last few days.
I printed a large abstract image found on the web onto an A4 size transparent sheet, placed it on an A1 size light table, then covered it with a large sheet of dark paper into which I’d cut a narrow slit, just narrower than the width of the image.
I placed all of these under an old photographic enlarger that is securely bolted to the wall in the darkroom at work. I cut a hole in the base of a plastic box for the camera lens to poke through, then used gaffer tape to hold the box against the lens mounting on the enlarger.
Testing the rig proved rather laborious. When the enlarger head was high, I had to stand on a chair to review the test image on the camera’s LCD screen, so a live feed to a laptop would have been useful. In the end, I had the aperture set at f16, started the movement at 50cm above the base of the enlarger and stopped when the enlarger head couldn’t go any lower. Even then, I had to raise the light table on boxes so that the enlarger head finished close enough to the image.
My Heath Robinson rig did what it had to do, but it was far from perfect. The widest part of the lens was too wide to fit through the hole, so I had to take the lens off the camera body when I inserted the camera into the box, then reattach the lens through the hole. This was made even more awkward by having to feed the shutter release cable in between the strips of gaffer tape. As a result, I had to switch off the Auto Power Off setting on the camera, as it was too awkward to keep re-waking the camera before each test shot. Furthermore, there was no way of fixing the camera in place so that it would slot into exactly the same orientation in the hole in the bottom of the box.
Still, these were merely nuisances rather than serious flaws. I could, if I were going to be using this kit often enough to make it worthwhile, arrange things better and in such a way that the various parts could be locked down to avoid undesired movement.
Even so, there are still too many variables with this approach. One or two might have given it an acceptable hand-made appearance rather than a sterile digital look, but even the few frames I captured differed too much.
As you’d expect, there was no motor to raise or lower the enlarger head, so I had to do it manually. Not only was it difficult to maintain a constant speed of camera movement during each photograph (which led to horizontal bands of brighter and darker patches), but it proved impossible to maintain constant overall exposure for each frame. My hands soon got tired and I slowed down, so the later frames were exposed for longer and were therefore brighter.
There is a more fundamental problem with this whole approach, however. I’m not convinced that this is really how the original slit scans were created, but I’ll leave discussion of that to another blog post. In the meantime, I’ll go away and compile a brief clip of my first attempt.
Things are starting to look up. I’ve arranged a few activities, including a trip to Edinburgh, and the theatre visit and painting course I booked a while ago are both imminent. I’ve also started thinking once more about my slitscan sketch in Processing.
I watched a brief explanation on DVD of how Doug Trumbull created the slitscan sequences in ‘2001: A Space Odyssey’. It turns out that he achieved the mottled effect not by filming the slit itself but by filming its reflection on a roughly textured mirrored cylinder. This may be pedantic, but since random noise was introduced into the sequence, it’s impossible for anyone, despite their claims, to decode the original images used in the slitscan sequences: the re-creations still have the noise in them.
It would be possible to introduce an equivalent noise to my Processing slitscan sketch, but I’m still keen to try my hand at an analogue version. That will have unavoidably irregular movement of the camera on the vertical axis anyway, which may be sufficient distortion.
I’ve been developing my Processing slitscan filter to create a sequence of images from a single original source.
Although manipulating individual pixels led to some interesting effects, I couldn’t work out how to achieve what I was really after. I switched to using get() and set(), thus leaving the hard work of calculation to Processing, and we’re both happier as a result. The only drawback is that images drawn with set() can’t be tinted, so, to suggest distance, I added a narrow black rectangle over each row of pixels, filled with decreasing transparency towards the centre.
I’m still not there yet, but the following clip is much closer:
Here’s the source image:
Following on from my previous post on slitscanning, I’ve been working on a Processing sketch to imitate the original slitscan technique, i.e. create a sequence of images from a single static original. I’ve never used images or video in Processing before, so it’s all new to me, and I’ve a way to go yet, but I’m getting there. So far, my sketch creates a new frame for each row of pixels in the original image, and adjusts each line in it with a flickering offset.
Here’s the output from the current version of the sketch:
…and here’s the original, rather cheesy image I used as a test source (taken from a website offering free desktop wallpapers):
I can’t remember how it started, but my attention was caught recently by the idea of slitscans. The term seems to be used indiscriminately for different but related techniques, so I’ve tried to categorise them for my own understanding.
Firstly, there’s the creation of a sequence of images from a backlit static original. That’s how the stargate sequence for ‘2001: A Space Odyssey’ was created, as well as the original Dr Who title sequence. It’s a laborious approach, where, for each frame of the sequence, the camera shutter is held open while the camera is lowered towards the static image. Only a thin line of the original is visible through a slit (hence the name), and the light coming through it falls on different parts of the film as the camera’s position alters. For subsequent frames, the camera is raised again, the film wound on, the original image moved slightly so the next part of it is visible through the slit, and the process is repeated.
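The accumulation of light during one exposure can be modelled crudely in code: the single line visible through the slit lands on successive rows of the film as the camera moves, at a slowly changing magnification, which is what produces the outward smear. A hedged Python sketch (the magnification curve is invented, and real optics would also change brightness with distance):

```python
def expose_frame(slit_line, steps):
    """Simulate one open-shutter exposure: the same slit line is
    deposited onto successive rows of the film while the camera moves,
    with magnification increasing as the camera approaches."""
    width = len(slit_line)
    film = [[0] * width for _ in range(steps)]
    for step in range(steps):
        scale = 1.0 + step / steps  # hypothetical magnification curve
        for x in range(width):
            # Map each film pixel back to a (scaled) slit position.
            src = int((x - width / 2) / scale + width / 2)
            if 0 <= src < width:
                film[step][x] = slit_line[src]
    return film
```

For the next frame of the sequence, the original is nudged along under the slit and the whole exposure repeated.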
Intriguingly, someone has reverse-engineered the stargate sequence to produce some of the static images that must have been used to create the effects.
The next version of the technique creates a single image from a sequence of images. A common use is to capture a series of timelapse images of a scene then take adjacent slices from each one and combine them. The result is an image of a scene where different parts of it represent different times. The teeming void has examples of a street scene and the sky.
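In code terms the combination is simple: column i of the output is taken from frame i of the timelapse. A minimal Python sketch (assuming, for simplicity, exactly one frame per column of the image):

```python
def timelapse_strip(frames):
    """Combine a timelapse into one image: column x of the output
    comes from frame x of the sequence.  frames[t][y][x] is the pixel
    at (x, y) in frame t; requires as many frames as the image is wide."""
    height, width = len(frames[0]), len(frames[0][0])
    assert len(frames) == width
    return [[frames[x][y][x] for x in range(width)] for y in range(height)]
```

With fewer frames than columns you would take a several-pixel-wide slice from each frame instead.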
A variation on this approach captures a single image from a changing scene by using a box with a moving slit in it in front of the camera. Alternatively, though it’s more restricted, you could move things while scanning them.
Finally, there is the creation of a sequence of images from a sequence of images. This seems to be particularly popular because of the weird effects you get from simple movement. It’s a development of the previous technique, where each frame of the output sequence consists of slices of different frames in the starting sequence. You can watch a test video to compare the input and output frames and see what’s happening more clearly. Some video editing programs provide filters to achieve this, and people have supplied code for use with Processing and Quartz Composer. It can be impressive, but the novelty value of this approach wears off very quickly.
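The core of that filter is tiny: row y of output frame t is taken from input frame t + y, so each row of the picture lags the one above it by one frame, which is what produces the rubbery warping of anything that moves. A minimal Python sketch:

```python
def slitscan_video(frames):
    """Slit-scan a video: row y of output frame t comes from input
    frame (t + y), wrapping around at the end of the sequence.
    frames[t][y] is row y of input frame t."""
    n, height = len(frames), len(frames[0])
    return [[frames[(t + y) % n][y] for y in range(height)]
            for t in range(n)]
```

The Processing and Quartz Composer versions found on the web are essentially this loop applied to live or recorded video.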
I’ll develop this topic further in some way, but in the meantime, if you’re interested in delving deeper into the subject, there’s an extensive collection of examples assembled by Golan Levin.