**Charles Peirce observed, probably in the 1890s, that mathematics was "the" science of hypothesis. We learnt of this insight many years later, when it appeared in an essay in *The World of Mathematics*, published in 1956 by Simon and Schuster. Peirce's insight applies, incidentally, to both so-called "pure" mathematics and so-called "applied" mathematics. (It should be pointed out, though, that thousands of years elapsed during which a kind of proto-mathematics using tally-bundling was widely practised, long before any maths valorised by the honourable epithet 'pure' was around. This tally-bundling was done for practical, and some ceremonial, reasons, but there was no body of autonomous wisdom being "applied" to the real world. So in the early infancy of the subject, mathematics was entirely instrumental.)**

It was only later, after Euclid, Archimedes and others had put together a classic body of geometrical and arithmetical reasoning, that a "pure mathematics" could be said to have arrived. So there is nothing inherently "applied" about the instrumental use of mathematics. Bits of mathematics are used instrumentally in all kinds of contexts, but the significant instrumental projects are those in which it is used to explore the implications of practical hypotheses, such as building projects, technological ideas, possible commercial deals and military campaigns.

So Peirce’s description can be used in relation to applicable maths without any prior notion that this is somehow subsidiary to pure maths.

In the case of pure maths, we know from various superstars, such as Paul Erdős, that they are mainly motivated by tantalising unsolved questions, which are really hypotheses about how certain imagined configurations will turn out. Michael Atiyah made the point in his Presidential Address to the Mathematical Association in 1965 that pure mathematicians use the concept of 'mathematical modelling' all the time, mostly when they are looking for representations of the particular non-instrumental mathematical structure they are exploring.

We know that applicable mathematics took a big step forward in the 17th century when Descartes showed how it could be systematically utilised to describe change. This involved the use of spatial coordinates and also a time coordinate, *t* seconds, which could be *imagined* as gradually increasing. In this way applicable maths could offer a priceless mathematical way of picturing and studying physical change.
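A minimal worked instance (my illustration, not part of the original text) shows the Cartesian device at work. A stone thrown horizontally with speed *v* from height *h* is pictured by the coordinate pair

```latex
x(t) = v\,t, \qquad y(t) = h - \tfrac{1}{2} g t^{2}, \qquad t \ge 0 .
```

Imagining *t* seconds ticking upward from 0 turns this static pair of formulas into a moving picture of the fall: precisely the imaginative step the paragraph describes.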

But there are two aspects of the giant Cartesian step in mathematics which are rarely subjected to critical scrutiny. The first is that it only produces a "moving representation" which simulates a physical process *by the application of human imagination*. The second is that it is only capable of providing such a "moving representation" when the change in question is *fully determined* and predictable. Probability can be introduced, but the effect is to blur the representation: instead of getting a definitive outcome, one gets a range of possible outcomes, each with a probability parameter attached. It is as if the results were printed in grey ink.
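The contrast can be made concrete with a toy simulation (a hypothetical sketch of my own, not drawn from the text): a deterministic Cartesian law returns one definite outcome, while a probabilistic law returns only a spread of outcomes, each with an estimated probability attached.

```python
import random
from collections import Counter

def deterministic_position(t, v=3.0):
    """A Cartesian-style law: position is fully determined by the time t."""
    return v * t

def random_walk_position(steps, rng):
    """A probabilistic law: each step moves +1 or -1 with equal probability."""
    return sum(rng.choice((-1, 1)) for _ in range(steps))

# The deterministic law always returns the same, single outcome.
print(deterministic_position(4.0))  # prints 12.0

# The probabilistic law yields only a *spread* of outcomes, each with an
# estimated probability attached: the picture "printed in grey ink".
rng = random.Random(42)
counts = Counter(random_walk_position(10, rng) for _ in range(10_000))
for position in sorted(counts):
    print(f"position {position:+d}: probability ~{counts[position] / 10_000:.3f}")
```

Running the second law repeatedly never yields a single answer, only frequencies; this is the blurring of the representation described above.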

There has been a tendency for many commentators to downplay these limitations: as when Laplace opined that everything which happens in the physical world "must be fully determined anyway"; or as when mathematical physicists have dangled the possibility of finding a "theory of everything" which would, unfortunately, not include the imagination needed to bring it to life as a changing representation. The latter is just another, recent example of the Laplacian fallacy.

Such theorists have not been observant enough to notice that all kinds of unpredictable things are happening around them all the time. Bishop Butler famously said that "probability is the very guide of life", meaning that we often have to place our faith in the most probable outcomes of situations. No one seriously thinks that we can predict the destination of every raindrop, or of every covid coronavirus vector for that matter.

So unpredictability infuses every aspect of existence, and the scope for successful mathematical modelling is inevitably conditioned by this fact of life.

The upshot of such thoughts is that a second 100% abstract language is evidently needed, one which can simulate the outcomes of situations in which unpredictability is centrally involved.

The full effect of this conclusion —the potential place for Actimatics— is dynamite from a philosophical point of view. It means that the main rock on which Western philosophy has relied for more than two thousand years —the idea that all philosophy is, and must be, in the end a footnote to Plato— is categorically *wrong*. This Platonic conviction arose because classical mathematics was a superbly elegant form of clarified knowledge, quite unlike anything else in ancient times. But that was more than 2,000 years ago, and things have changed a lot since then. In Plato's day life was brutish and short. Against this backcloth, the certainty and timelessness of mathematics looked supreme.

The February blog showed mathematics in a much more positive light than is usual today, but the paradox remains that, in spite of this commonsense rehabilitation, mathematics is no longer the unique, unchallenged peak of human knowledge. This is a culture shock to the *n*th degree. The source of the appeal of platonism has always been that no other kind of human knowledge comes near mathematics as a gloriously abstract, pure representation of real things.

But now there is a new abstract discipline, based on random-jumping sequences, which will most likely be much better fitted to describing physical reality.

It is only the gap between random-jumping sequences and quarks which stands between the new development and a complete re-organisation of modern science. (There are already multiple theories in place which offer rational explanations of phenomena from neuro-electric behaviour in the human brain down to sub-atomic physics. If quarks can be represented by actimatic objects, all this suddenly becomes part of an actimatic make-over of modern science.)

But the idea's main appeal is that it presents, at last, a credible account of why *a universe exists at all*. It exists because it is self-defining. The human brains which arise at the top of the evolutionary tree offer the basis for creating the epistemological definitions on which it all depends. (It would be unthinkable for these brains not to will the definitions which make themselves possible.)

The organic plant-and-animal bioworld can be explained as a by-product of this existential willpower. So can the non-organic cosmos. It is just as "real" as we think it is, because it, too, is a by-product of the definitions which are necessary for our consciousness. (Its "reality" is 100% relative to our "reality".)

So here we have a supernova of new intellectual light suddenly illuminating the bottomlessly dark, puzzling world of physical reality. It is a simplification of immense proportions, and we know that quite small simplifications —like William Harvey's discovery of the circulation of the blood— can be a source of intellectual joy. Thousands of physical assumptions previously regarded by physicists as brute, unexplainable "facts of life" —like the three-dimensionality of the universe and the ceiling on the velocity of light— suddenly acquire explanations. When the new ideas have become widely known, they are likely to exert an extraordinary strength of intrinsic appeal —one which can settle the restless modern mind and which can bring us all together as a single human family.

**CHRISTOPHER ORMELL, 1st March 2022**