Wednesday, January 23, 2008

What is singularity programming?



Singularity programming is a radical form of design (not just coding) that takes its inspiration from the mathematical concepts of manifolds and singularities.

The question is asked

In principle, the question is this: what would a program look like that responds to an information system whose steady state has undergone the equivalent of a phase transition in physics?

Singularity basics

It is tempting to plough ahead without an understanding of singularity basics, but that would leave the reader with little benefit from this exploration. It is therefore worth touching on a few core concepts that have deep application but require a bit of mental abstraction.

The term singularity is familiar in the context of manifolds in differential geometry, but it is used to describe several different (albeit related) topics. In particular I am using the term singularity in the classic Riemannian sense and its more famous extension in Einstein's General Theory of Relativity.

Riemann's is also the version referenced by Manuel DeLanda when he expands the notion of manifolds and singularities to describe physical processes. He posits that the intrinsic structure of a manifold can describe the evolution of such processes over time.

We are interested in the singularities that are topological points and thereby define a steady state. They have an influence on the behaviour of trajectories, and therefore on the physical system itself. A singularity, in this sense, often acts as an attractor within the manifold. Any trajectory, as long as its origin lies within the basin of attraction, will have as its end point this attractor singularity.
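
As a minimal illustration of an attractor and its basin of attraction (my own sketch in Java, not drawn from DeLanda), consider a one-dimensional system whose state is repeatedly updated by a simple rule: every starting point ends up at the same fixed point.

    // Sketch: a one-dimensional system whose state is updated by x -> x/2 + 1.
    // Its fixed point ("attractor") is x = 2; every starting value converges
    // there, so the whole real line forms its basin of attraction.
    public class Attractor {
        static double step(double x) {
            return x / 2.0 + 1.0;   // the update rule that drives each trajectory
        }

        public static void main(String[] args) {
            for (double start : new double[] { -50.0, 0.0, 17.3 }) {
                double x = start;
                for (int i = 0; i < 60; i++) x = step(x);
                System.out.printf("start %.1f -> settles at %.4f%n", start, x);
            }
        }
    }

Whatever the origin of the trajectory, its end point is the same singularity.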

Thus we could also have spoken of attractor programming or steady state programming rather than singularity programming, were it not for the notion of a phase transition associated with the symmetry-breaking bifurcation of one singularity to another. (A symmetry-breaking bifurcation, in short, implies that the system has changed state and its new stable state is represented by a different singularity.)
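
A standard toy model of such a symmetry-breaking bifurcation (again my own sketch, not taken from the sources above) is the system x' = r·x - x³: for r below zero the only attractor is x = 0, and as r crosses zero that state loses stability and the system settles instead at one of two new attractors.

    // Sketch: a pitchfork bifurcation, x' = r*x - x^3, simulated with Euler steps.
    // For r < 0 the only attractor is x = 0; for r > 0 that state loses stability
    // and the system settles at +sqrt(r) or -sqrt(r) -- a different singularity.
    public class Bifurcation {
        static double settle(double r, double x) {
            double dt = 0.01;
            for (int i = 0; i < 100000; i++) {
                x += dt * (r * x - x * x * x);   // one Euler step along the trajectory
            }
            return x;
        }

        public static void main(String[] args) {
            System.out.printf("r = -1.0: state settles at %.4f%n", settle(-1.0, 0.3));
            System.out.printf("r = +1.0: state settles at %.4f%n", settle(+1.0, 0.3));
        }
    }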

To use a simple example we may think of water. When it is a liquid its state can be described by a certain singularity in a manifold. It may lose or gain temperature – whether through kinetic or heat energy – but essentially it remains water. However, when this kind of energy is consistently applied, the water may become a gas. At that point it undergoes a phase transition and stabilises around a new state (gas). These two states would be represented by two different singularities within the manifold.

Meanwhile back at the digital manifold

I want to use these terms as metaphors in a digital space: the space defined by the system calls, applications and user space of an operating system, and by the memory and storage systems of a computer.

For starters let's imagine a smoothly running system – business as usual – evolving through two singularities in the manifold. First phase: total inertia. Bootup? First phase transition. Loaded Windows? One stable state is reached. Or was it Linux instead? A different stable state. Perturbed by applications? Hmmm ... but if you close them again, the system returns to the typical stable state of Windows, or Linux, and so remains around the same singularity.

This gets us going in the right direction, but for the purposes of typical programming the example is a bit too broad. Most of us who develop aren't system hackers – we write user space software.

Nevertheless, we already have some correlating ideas. Programs, or certain types of data, perturb the system and push the system around the basin of attraction of a particular singularity. It generally continues to stabilise around that singularity, but occasionally a large memory leak or a kernel panic can lead to a phase transition in the system. And let's be honest, in most systems this phase transition is rather fatal to the user. The infamous Blue Screen of Death is a memorable case in point.

This hints at the paradigm I am suggesting: a form of programming that caters for such a new state. But ... what exactly is singularity programming then?

It is not error handling

We may begin with something it is not - namely traditional error handling. When assigning a value to a variable in a C++ or Java program I, as a programmer, might notice that the value could cause an anomaly such as a division by zero. To handle this exception - which is a kind of error - I write an error handler. In effect, we are using a logical form of redirection that continues in the same domain - it originates and remains within the basin of attraction, in other words.

Any well-written piece of software should trigger an error handler in such a situation. The error handler diverts the flow from disaster and the program continues its execution. It's the equivalent of the program saying: "Oh by the way, this is the problem that just arose, but you don't need to take it too seriously, just let me get on with things ...".
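
A minimal Java sketch of the kind of handler described above (the values, names and messages are my own, purely for illustration):

    // Conventional error handling: the division by zero is caught and the
    // program carries on within its normal flow.
    public class SafeDivide {
        static int divide(int numerator, int denominator) {
            try {
                return numerator / denominator;
            } catch (ArithmeticException e) {
                // "Oh by the way, this is the problem that just arose ..."
                System.err.println("Division by zero intercepted: " + e.getMessage());
                return 0;   // fall back to a harmless value and keep going
            }
        }

        public static void main(String[] args) {
            System.out.println(divide(10, 2));   // prints 5
            System.out.println(divide(10, 0));   // handled: prints 0
        }
    }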

However the state of the system is not radically changed by this logical redirection, and hence we cannot speak of error handling as singularity programming. In fact we might say that the goal of error handling is to keep the system in its present state, which is to say the program does not want the system to change its state and reach a new singularity.

I give up, are you going to tell me what singularity programming is?

Let's look at our example of a stable system again, and imagine that it is a firewall. A simple firewall may accept internet (untrusted) data at secure ports, inspect the packets, and pass the packets to a local network via another port. Packet load may vary, but the firewall can normally continue these operations with no disruption and little noticeable change in resource usage for months on end. Often the system wouldn't even need a reboot. It's a simple system that remains relatively stable during its lifetime.
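
To make the picture concrete, here is a deliberately simplified sketch of that steady-state loop (the Packet class and all method names are invented for illustration, not a real networking API):

    // Simplified sketch of a firewall's steady state: read a packet from the
    // untrusted side, inspect it, and either forward it to the local network
    // or drop it. The stubs stand in for real network I/O.
    public class SimpleFirewall {
        public void run() {
            while (true) {
                Packet p = readFromUntrustedPort();
                if (inspect(p)) {
                    forwardToLocalNetwork(p);   // trusted traffic passes through
                } else {
                    drop(p);                    // everything else is discarded
                }
            }
        }

        private Packet readFromUntrustedPort()       { return new Packet(); }
        private boolean inspect(Packet p)            { return true; }
        private void forwardToLocalNetwork(Packet p) { }
        private void drop(Packet p)                  { }

        static class Packet { }
    }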

Let us briefly take one step back in order to complete the analogy. The firewall system reached its present state after the connection, installation, configuration and implementation of hardware, network, operating system, and crucial operations software. We may have attempted different tactics during any of these processes, but eventually we would have a stable, running system whose state is represented by an attractor singularity in our imaginary firewall manifold.

Now imagine that the system is perturbed by unusual volumes and types of data, for instance during a Denial of Service attack. In simple terms, the system becomes overloaded, using all of its resources to cope - or at least those devoted to its typical functions.

To make things worse, certain types of attack can deliberately alter the configuration of the firewall to allow more access, then disable some processes, and ultimately allow a flow of untrusted data to pass through. Under these circumstances normal data will be processed and inspected very slowly, or not at all.

It might well be impossible for normal operations to resume even when the attack ends. In such a case the system administrator would have to intervene, reconfiguring or reinstalling as the case may be.

In summary, a Denial of Service attack could push the system into a new state whereby, even if the attack halts, the state gravitates to its new singularity (no doubt a faulty one, in the eyes of the system owner).

If we tried error handling, it would involve shutting down port access when certain parameters have been exceeded, alerting operators about the excessive activity, and activating processes that can protect sensitive areas of the system. Error handling may therefore save the current state of the system, and allow normal operations to proceed.
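
Sketched in the same invented vocabulary as before, that conventional response might look something like this (the threshold and helper names are assumptions for the sake of the sketch):

    // The error-handling response: when traffic exceeds a threshold, close the
    // external ports, alert the operators and protect the sensitive areas --
    // all in the service of keeping the system in its present state.
    public class DefensiveFirewall {
        private static final int MAX_PACKETS_PER_SECOND = 10000;   // assumed limit

        void onTrafficSample(int packetsPerSecond) {
            if (packetsPerSecond > MAX_PACKETS_PER_SECOND) {
                closeExternalPorts();               // shut the door
                alertOperators(packetsPerSecond);   // tell someone
                activateProtectionProcesses();      // guard the sensitive areas
            }
        }

        private void closeExternalPorts() { }
        private void alertOperators(int rate) { }
        private void activateProtectionProcesses() { }
    }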

Singularity programming, on the other hand, would allow the system to be flooded and attempt to operate under the new state. Thus it is not a system of error-prevention, but instead encourages unusual states as a necessary evolution of the system.

When the system has unusual numbers of data packets pouring in and error handling is not enabled, we could imagine a new form of program being triggered. The Singularity Program could decide to open more ports and activate processes that are hungry for this abundant data. A process that projects the incoming data onto a screen, for instance.
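
As a purely speculative sketch of such a Singularity Program (all names and the threshold are again invented; this is a possibility, not a prescription):

    // Speculative sketch of a "Singularity Program": rather than defending the
    // old steady state, the system accepts that a phase transition has occurred
    // and reorganises itself around the flood of data.
    public class SingularityProgram {
        private boolean newStateReached = false;

        void onTrafficSample(int packetsPerSecond) {
            if (!newStateReached && packetsPerSecond > 10000) {
                newStateReached = true;                      // accept the new singularity
                openAdditionalPorts();                       // invite more of the abundant data
                startDataHungryProcess("screen-projector");  // e.g. project the data onto a screen
            }
        }

        private void openAdditionalPorts() { }
        private void startDataHungryProcess(String name) { }
    }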

Although I am not advocating any particular use for singularity programming at this stage - I want to present the theory of its possibility - we might reflect, momentarily, on an analogous situation in an economy. When an abundance of goods or services arrives in a market the price might go down, but instead of rejecting the goods a portion of the market might transform them, since they are so readily available, into other, more valuable goods.
