Saturday, January 26, 2008

Singularity programming design



Scope

Having touched on the possibility of singularity programming as an alternative programming model, both in terms of its theory and, very briefly, its application, we now focus on the kind of design it would embody.

Platform games

Facebook took the world by storm in 2007 and was voted favourite technology by readers of many leading technology publications. Was it just a gimmick? A fad? Marc Andreessen, respected software engineer, entrepreneur, and co-founder of Ning, lauded Facebook's adoption of an API model. "My personal opinion is that the new Facebook Platform is a dramatic leap forward for the Internet industry", he stated unequivocally on his blog.

In his analysis of Facebook's success, Andreessen cites the advantages of a platform over an application, contrasting open platforms with the "walled gardens" of closed solutions that have been knocked off the playing field by the openness of the web.

It should almost go without saying, but apparently it takes someone with Andreessen's clout and standing to put two and two together: solutions that have been fully crystallised by their developers fare less well than those that can be reprogrammed. Platforms, in other words, are flexible to users' needs and input.

This distinction is not dissimilar to the evolution of unusual states we discussed before. An "event handling" program deals with unusual data or events in a way that is mostly hardcoded. As a result, perturbations that vary too much from the anticipated will either be rejected outright or push the system into an unusable state, with no differentiation of function possible in that new state.
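A minimal sketch in Java of such a hardcoded handler (the event names and class are my own, purely for illustration):

    // A hardcoded "event handling" program: only anticipated events are
    // dealt with; anything else is rejected outright.
    public class HardcodedHandler {
        public void handle(String event) {
            if ("LOGIN".equals(event)) {
                System.out.println("handling login");
            } else if ("LOGOUT".equals(event)) {
                System.out.println("handling logout");
            } else {
                // An unanticipated perturbation: no differentiation of
                // function is possible here, so it is simply refused.
                throw new IllegalArgumentException("unknown event: " + event);
            }
        }
    }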

But what if the "unusual data" is not a rogue Denial of Service attack, but instead represents users' varying needs?

Design class

In cellular activity prior to individuation and the formation of tissues and organs, cells are considered pluripotent. In other words, they have the ability to become any one of several cell types. In fully individuated humans there are 254 different types of cells. Jellyfish have about three.

In his experiments the researcher and theorist Stuart Kauffman found that in the process of induction, i.e. when cell collectives suppress or enhance cellular differentiation in other collectives via signals, there are "recurrent patterns of gene activity within these networks, patterns which exhibit the kind of homeostatic stability associated with attractors" (DeLanda, p. 65). Those attractors, he concluded, represent consistent cell types.

Object orientation is only one of several models available to the computer scientist, but it is particularly suited to our theory. Objects are like cells in the sense that they hide certain kinds of information (much as a cell contains the cytoplasm and a nucleus that in turn contains the chromosomes and DNA), information that is nevertheless vitally important to their eventual, activated functionality.
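To make the parallel concrete, a minimal Java sketch (the class and its field are my own invention): the hidden "genetic" information is private, yet it determines the object's activated behaviour.

    // Like a cell, the object hides information (its "DNA") behind a
    // private field; that hidden information nevertheless decides what
    // the object becomes.
    public class CellLikeObject {
        private final String dna; // encapsulated, as in a nucleus

        public CellLikeObject(String dna) {
            this.dna = dna;
        }

        // The hidden information is only visible through expressed behaviour.
        public String express() {
            return dna.startsWith("NEURO") ? "neuron" : "generic cell";
        }
    }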

If we extend our analogy of cells at the level of pluripotent collectives to the layer of possibility in a pre-formed system (please note: I am using the term pre-formed in the sense of "unformed but will eventually be formed", not in the sense of "already formed prior to usage"), we have a corresponding notion: just as signals determine the type of cell available to the collective for tissue building, certain kinds of input could determine the type of object available for component building.

It is a bit like the problem of cross-cutting concerns that has annoyed developers for years. Some kinds of functionality belong to no single core function (logging is the classic example) but nevertheless require implementation across the majority of classes (objects), which are themselves meant to solve the problem of separating concerns. Input data that try to find a matching pattern in a pre-formed system likewise cut across all classes (pre-formed objects).
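For readers who have not met the problem, a small Java sketch of logging as a cross-cutting concern (the two classes are hypothetical): the same logging calls recur in classes whose core concerns are entirely separate.

    import java.util.logging.Logger;

    // Separation of concerns gives us one class per core function, yet
    // the logging concern cuts across both of them all the same.
    class Billing {
        private static final Logger LOG = Logger.getLogger("Billing");
        void charge(double amount) {
            LOG.info("charging " + amount); // not billing logic, but needed here
        }
    }

    class Shipping {
        private static final Logger LOG = Logger.getLogger("Shipping");
        void dispatch(String order) {
            LOG.info("dispatching " + order); // the same concern, repeated
        }
    }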

But cross-cutting concerns are already well-defined problems, whereas the problems to be dealt with at the pre-formed object level are not yet well defined. In our example an undefined perturbation to the system is precisely the cause of variation and diversity at the next level: the component level. Until the data has found a way to fit itself into the available classes (not necessarily in a complete way), the differentiated class cannot emerge. Likewise, a new class may emerge that instantiates clusters of objects and a component, only for that component ultimately to be rejected. The process is therefore expected to be an evolutionary one.

By focusing only on differentiation - i.e. the evolution of a system from its classes, designed by a separation of concerns, to differentiated (instantiated) objects, to components, and through to a fully realised and differentiated product - we neglect the availability of unrealised objects that have the ability to change state according to unpremeditated signals (data or events).

Faceless

If Facebook is truly a leap forward for the internet, it is immensely exciting to speculate on what it could become if not just external programmers but also users had the ability to contribute radically to the platform. It is in part the satisfaction of users' diverse needs to play and interact with objects and people in the environment that drives the thriving communities of Second Life and World of Warcraft.

In the singularity programming environment this level of interaction is envisaged as part of an evolving dialogue initiated by signals to a pre-formed layer of digital object possibility where classes enhance and suppress information to form new types of objects. These objects then cluster together to structure novel components and building blocks to respond to the information contained in the user's signal.
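One very rough way to sketch such a pre-formed layer in Java (every name here is hypothetical, a sketch of the idea rather than an implementation): the incoming signal decides which type of object differentiates out of the pool of possibilities.

    // A "pluripotent" layer of digital possibility: the signal received
    // enhances one potential object type and suppresses the others.
    interface DigitalObject {
        void respond();
    }

    class PreFormedLayer {
        static DigitalObject differentiate(String signal) {
            if ("visualise".equals(signal)) {
                return new DigitalObject() {
                    public void respond() { System.out.println("projecting data"); }
                };
            }
            // The default possibility, when the signal matches nothing known.
            return new DigitalObject() {
                public void respond() { System.out.println("storing data"); }
            };
        }
    }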

Wednesday, January 23, 2008

What is singularity programming?



Singularity programming is a radical form of design (not just coding) that takes its inspiration from the mathematical concepts of manifolds and singularities.

The question is asked

In essence the question is this: what would a program look like that responded to an information system whose steady state has undergone the equivalent of a phase transition in physics?

Singularity basics

It is tempting to plough ahead without an understanding of singularity basics, but that would leave the reader with little benefit from this exploration. It is therefore worth touching upon a few core concepts that have deep application but require a bit of mental abstraction.

The term singularity is familiar in the context of manifolds in differential geometry, but it is used to describe several different (albeit related) topics. In particular I am using the term singularity in the classic Riemannian sense and its more famous extension in Einstein's General Theory of Relativity.

Riemann's is also the version referenced by Manuel DeLanda when he expands the notion of manifolds and singularities to describe physical processes. He posits that the intrinsic structure of a manifold can describe the evolution of such processes over time.

We are interested in the singularities that are topological points and thereby define a steady state. They have an influence on the behaviour of trajectories, and therefore on the physical system itself. A singularity, in this sense, often acts as an attractor within the manifold. Any trajectory, as long as its origin lies within the basin of attraction, will have as its end point this attractor singularity.

Thus we could also have spoken of attractor programming or steady state programming rather than singularity programming, were it not for the notion of a phase transition associated with the symmetry-breaking bifurcation of one singularity to another. (A symmetry-breaking bifurcation, in short, implies that the system has changed state and its new stable state is represented by a different singularity.)

To use a simple example we may think of water. When it is a liquid its state can be described by a certain singularity in a manifold. It may lose temperature, or gain temperature – whether through kinetic or heat energy – but essentially it remains water. However, when this type of energy is consistently applied to the water it may become a gas. At this point it undergoes a phase transition, and stabilises around a new state (gas). Both these states would be represented by two different singularities within the manifold.
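As a toy numeric counterpart (my own illustration, not DeLanda's): a double-well system has two attractors, and a trajectory settles on whichever one's basin it starts in; a large enough perturbation carries it across the boundary into the other basin - the equivalent of a change of state.

    // Toy double-well system: dx/dt = x - x^3 has attractors at -1 and +1,
    // with the basin boundary at 0. Trajectories settle on the attractor
    // of whichever basin they begin in.
    public class DoubleWell {
        static double settle(double x) {
            for (int i = 0; i < 1000; i++) {
                x += 0.01 * (x - x * x * x); // a small Euler step along the flow
            }
            return x;
        }

        public static void main(String[] args) {
            System.out.println(settle(0.3));       // converges to ~ +1.0
            System.out.println(settle(0.3 - 0.8)); // pushed across 0: ~ -1.0
        }
    }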

Meanwhile back at the digital manifold

I want to use these terms as metaphors in a digital space: the space defined by system calls, applications and user space in the operating system, and by the memory and storage systems of a computer.

For starters let's imagine a smoothly running system – business as usual – evolving through two singularities in the manifold. First phase: total inertia. Bootup? First phase transition. Loaded Windows? One stable state is reached. Or was it Linux instead? A different stable state. Perturbed by applications? Hmmm ... but if you close them again, the system returns to the typical stable state of Windows, or Linux, and so remains around the same singularity.

This gets us going in the right direction, but for the purposes of typical programming the example is a bit too broad. Most of us who develop aren't system hackers – we write user space software.

Nevertheless, we already have some correlating ideas. Programs, or certain types of data, perturb the system and push the system around the basin of attraction of a particular singularity. It generally continues to stabilise around that singularity, but occasionally a large memory leak or a kernel panic can lead to a phase transition in the system. And let's be honest, in most systems this phase transition is rather fatal to the user. The infamous Blue Screen of Death is a memorable case in point.

This hints at the paradigm I am suggesting: a form of programming that caters for such a new state. But ... what exactly is singularity programming then?

It is not error handling

To begin with, we may start with something it is not - namely traditional error handling. When assigning a value to a variable in a C++ or Java program, for example, I as a programmer might notice that the value could cause an anomaly through division by zero. To handle this exception - which is a kind of error - I write an error handler. In effect we are using a logical form of redirection that continues in the same domain - it originates and remains within the basin of attraction, in other words.

Any well-written piece of software should trigger an error handler in such a situation. The error handler diverts the flow from disaster and the program continues its execution. It's the equivalent of the program saying: "Oh by the way, this is the problem that just arose, but you don't need to take it too seriously, just let me get on with things ...".
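In Java, for instance, the whole apparatus can be as small as a try/catch around the offending division (a sketch of my own):

    public class SafeDivision {
        public static int divide(int numerator, int denominator) {
            try {
                return numerator / denominator;
            } catch (ArithmeticException e) {
                // The error handler: divert the flow from disaster and let
                // the program get on with things, within the same basin.
                System.err.println("oh by the way: " + e.getMessage());
                return 0; // a benign default so execution can continue
            }
        }
    }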

However, the state of the system is not radically changed by this logical redirection, and hence we cannot speak of error handling as singularity programming. In fact we might say that the goal of error handling is to keep the system in its present state: the program does not want the system to change state and reach a new singularity.

I give up, are you going to tell me what singularity programming is?

Let's look at our example of a stable system again, and imagine that it is a firewall. A simple firewall may accept internet (untrusted) data at secure ports, inspect the packets, and pass the packets to a local network via another port. Packet load may vary, but the firewall can normally continue these operations with no disruption and little noticeable change in resource usage for months on end. Often the system wouldn't even need a reboot. It's a simple system that remains relatively stable during its lifetime.

Let us briefly take a step back to complete the analogy. The firewall system reached its present state after the connection, installation, configuration and implementation of hardware, network, operating system, and crucial operations software. We may have attempted different tactics during any of these processes, but eventually we would have a stable, running system whose state is represented by an attractor singularity in our imaginary firewall manifold.

Now imagine that the system is perturbed by unusual volumes and types of data, for instance during a Denial of Service attack. In simple terms, the system becomes overloaded, using all of its resources - or at least those devoted to its typical functions - to cope.

To make things worse, certain types of attack can deliberately alter the configuration of the firewall to allow more access, then disable some processes, and ultimately allow a flow of untrusted data to pass through. Under these circumstances normal data will be processed and inspected very slowly, or not at all.

It might well be impossible for normal operations to resume even when the attack ends. In such a case the system administrator would have to intervene, reconfiguring or reinstalling as the case may be.

In summary, a Denial of Service attack could push the system into a new state whereby, even if the attack halts, the state gravitates to its new singularity (no doubt a faulty one, in the eyes of the system owner).

If we tried error handling, it would involve shutting down port access when certain parameters have been exceeded, alerting operators about the excessive activity, and activating processes that can protect sensitive areas of the system. Error handling may therefore save the current state of the system, and allow normal operations to proceed.
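In code, that error-handling stance might look something like this (a Java sketch; the threshold and every method name are hypothetical):

    // Error handling tries to keep the firewall at its present singularity:
    // exceed the threshold and we close ports, alert, and protect state.
    class FirewallErrorHandler {
        private static final int MAX_PACKETS_PER_SECOND = 10000;

        void onTraffic(int packetsPerSecond) {
            if (packetsPerSecond > MAX_PACKETS_PER_SECOND) {
                closePorts();                // shut down port access
                alertOperators();            // report the excessive activity
                protectSensitiveProcesses(); // guard sensitive areas
            }
        }

        private void closePorts() { /* ... */ }
        private void alertOperators() { /* ... */ }
        private void protectSensitiveProcesses() { /* ... */ }
    }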

Singularity programming, on the other hand, would allow the system to be flooded and attempt to operate under the new state. Thus it is not a system of error-prevention, but instead encourages unusual states as a necessary evolution of the system.

When the system has unusual numbers of data packets pouring in and error handling is not enabled, we could imagine a new form of program being triggered. The Singularity Program could decide to open more ports and activate processes that are hungry for this abundant data. A process that projects the data onto a screen, for instance.
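Sketched in the same hypothetical Java terms, the singularity program inverts the response: the flood is treated as a signal, and behaviour differentiates around the new state.

    // A singularity program accepts the new state instead of defending
    // the old one, activating processes hungry for the abundant data.
    class SingularityProgram {
        void onFlood(int packetsPerSecond) {
            openMorePorts();          // welcome the perturbation
            startDataHungryProcess(); // e.g. project the incoming data on screen
        }

        private void openMorePorts() { /* ... */ }
        private void startDataHungryProcess() { /* ... */ }
    }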

Although I am not advocating any particular use for singularity programming at this stage - I want to present the theory of its possibility - we might reflect, momentarily, on an analogous situation in an economy. When an abundance of goods or services arrives in a market the price might go down, but instead of rejecting the goods a portion of the market might transform them, since they are so readily available, into other, more valuable goods.