Isn't it marvelous and surprising that complex things often arise from simple ones? One of the characteristics of chaotic systems is precisely that: obeying a limited set of rules, they can display a large variety of behaviors. The complexity of the behavior, though, does not lie in the complexity of the rules; rather, it emerges (does anyone know from where?).
Videofeedback's rules are simple: just connect a camera to a television, so that the television displays what the camera is recording, and point the camera at the television. A closed loop is thus created in which the image on the television screen is captured through the lens of the camera, which sends it back to the television, where it is recorded again by the camera, and so on. Complex patterns can emerge from these few simple rules, such as the two seen below.
The images and video below are from a videofeedback session with a digital camera and television. Thanks to Berny for the material and help!
There is a little more to it than the situation described above. Both the camera and the television normally have a set of control parameters that can be adjusted to obtain certain effects. A list of the most common ones follows.
By adjusting these parameters it is possible to control the form and evolution of the arising light patterns.
Now that we have a basic idea of how to produce videofeedback, we will try to get a glimpse of its workings by examining some of its aspects.
An iterated system needs an initial starting value in order to produce results. An initial image has to be fed to the system; it will be processed and give the system the initial stimulus to go on evolving into a complex pattern. Unless there is a starting stimulus, the videofeedback loop will remain empty: no signal will be iterated from television to camera. Just switching the light on and off, or interposing a small light between the devices, will suffice to start the process. In our software this has been implemented as a grid of small squares with either random colors or random grays.
Worth mentioning is the fact that, for certain values of the control parameters, the long-term pattern that arises after many iterations is independent of the initial seed. That is, the dynamics of the system is determined by the control parameters, regardless of which initial image we choose.
Let us assume that the camera and television are aligned along the longitudinal axis of the camera, that is, this axis is perpendicular to the television screen and passes through its center, and also that the scaling is 1:1, that is, no zooming is performed. In such an arrangement the camera can be turned around that axis by a certain angle, which means the iterated image is rotated by the same angle in each iteration. For angles such that 360/angle equals a rational number, patterns with definite symmetry arise. For instance, if angle=45 then after 360/45=8 iterations the information of a given pixel will have travelled around the pattern and arrived back at the point from which it departed, so a figure with eightfold symmetry is expected. The closer the quotient 360/angle is to an irrational number, the less symmetrical and more dynamical the observed patterns are expected to be.
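The eightfold argument can be checked with a plain 2D rotation, independent of any video hardware. The sketch below (our own helper, not the program's code) rotates a point by 45 degrees eight times and lands back on the starting position, which is exactly why a pixel's information closes a loop after 360/45 iterations.

```cpp
#include <cmath>

// A point in the plane, with the origin at the screen center.
struct Vec2 { double x, y; };

// Rotate p about the origin by the given angle in degrees
// (counterclockwise), using the standard 2D rotation matrix.
Vec2 rotate(Vec2 p, double degrees) {
    const double pi = 3.14159265358979323846;
    double a = degrees * pi / 180.0;
    return { p.x * std::cos(a) - p.y * std::sin(a),
             p.x * std::sin(a) + p.y * std::cos(a) };
}
```

Applying `rotate` with 45 degrees eight times to the point (1, 0) returns it to (1, 0) up to floating-point rounding; with an angle whose quotient 360/angle is irrational, no number of iterations ever closes the loop exactly.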
The effect of scaling is to give the pattern a tendency to expand or contract. As with rotation, this effect is cumulative. Let us suppose that the angle is 0. When the scaling is, for example, 2:1, we are zooming in and the camera captures only a portion of the television screen. In each iteration the image gets magnified, and the light patterns tend to grow and expand. On the other hand, when the scaling is 1:2, we are zooming out and the camera captures parts of the television outside the screen. In each iteration the image gets reduced, and the light patterns tend to shrink and contract.
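To make the cumulative effect concrete: with a per-iteration scale factor s, a feature of initial size d measures d·sⁿ after n iterations. A one-line illustration (the function name is ours, not the program's):

```cpp
#include <cmath>

// Size of a feature after n feedback iterations with per-iteration
// scale factor s: the scaling compounds geometrically.
double sizeAfter(double initial, double s, int n) {
    return initial * std::pow(s, n);
}
```

With s = 2 (zoom in, 2:1) a unit-sized feature grows to 8 after three iterations; with s = 0.5 (zoom out, 1:2) a feature of size 8 shrinks back to 1 in the same three iterations.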
A mathematical model describing videofeedback was proposed by Crutchfield in (1). The process of successive feedbacks of images is expressed there as an iterated functional equation.
Instead of coding this mathematical algorithm in a programming language (discretizing the domain into cells corresponding to the pixels, numerically integrating the spatial averages, building a rotation matrix from sines and cosines), one can benefit from the existing application programming interfaces for graphics and from video cards with fast GPUs, and let the hardware do the computation for us.
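What the hardware actually computes each frame can be sketched on the CPU. The function below is a simplified, nearest-neighbor version of one feedback iteration under our assumptions (grayscale image, rotation and scaling about the screen center): each output pixel samples the previous frame at an inversely rotated and scaled position, which is precisely the kind of per-pixel texture lookup a GPU fragment shader performs with filtering for free. Names and details are illustrative, not the program's code.

```cpp
#include <cmath>
#include <vector>

// One grayscale frame, row-major, size w * h.
using Frame = std::vector<float>;

// One feedback iteration: rotate by angleDeg and scale by `scale`
// about the screen center, sampling the previous frame.
Frame iterate(const Frame& prev, int w, int h, double angleDeg, double scale) {
    const double pi = 3.14159265358979323846;
    double a = angleDeg * pi / 180.0;
    double ca = std::cos(a), sa = std::sin(a);
    double cx = 0.5 * (w - 1), cy = 0.5 * (h - 1);
    Frame next(prev.size(), 0.0f);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // Inverse transform: where in the previous frame does this
            // output pixel look? (Inverse mapping avoids holes.)
            double dx = (x - cx) / scale, dy = (y - cy) / scale;
            double sx = cx + ca * dx + sa * dy;
            double sy = cy - sa * dx + ca * dy;
            int ix = (int)std::lround(sx), iy = (int)std::lround(sy);
            if (ix >= 0 && ix < w && iy >= 0 && iy < h)
                next[(size_t)y * w + x] = prev[(size_t)iy * w + ix];
            // Pixels mapping outside the frame stay dark, like the
            // camera seeing past the edge of the television screen.
        }
    }
    return next;
}
```

With angle 0 and scale 1 the frame is reproduced unchanged, and a 90-degree rotation returns a frame to itself after four iterations, matching the symmetry discussion above. On the GPU the same inverse mapping is a few lines of shader code, with the texture unit doing the sampling.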
Those interested in the details of one possible way of simulating videofeedback on the computer (using OpenGL's GLSL) can find more information about these technical aspects on this page.
The program has been written with C++ as the base language, OpenGL and GLSL for graphics, AntTweakBar as the graphical user interface, and DevIL for taking screenshots in order to make videos. See the links below for further information.
The user can select among a number of options, and there are controls associated with the relevant parameters of the system. The screen is divided into two viewports: the left one is the videofeedback area, where the patterns are created and their evolution takes place; the right one is used to display a small number of features associated with the system. There one can plot the trajectories in color space (RGB) of all the points that form the iterated pattern, or of just one selected pixel.
Source code: morphogen.tar.gz