This grand concept of a universal knowledge calculus made my computational senses tingle. Is this not what GOFAI aims at? Particularly in efforts like the Cyc project, the apotheosis of the dream of "knowledge representation". However, the logical formalisms of KR seem pretty crude compared to what Hesse is describing. Cyc was based on a notion of representation as formalized statements or graphs, while the Glass Bead Game is an entry into a higher, more spiritual plane: a practice, not a body of knowledge.
Someone in the course asked me a followup question about GOFAI. It's a bit hard to explain; it's such a relic of an earlier era. Back in the 1960s it was thought that symbolic reasoning and formal logic were a good model for human intelligence. The mind contains representations of the world (facts, aka propositions) and rules or algorithms for reasoning about them. This is a pretty standard, if wrong, model of thought (it's the basis of analytic philosophy), and the GOFAI thrust was to take it literally and build machines that worked that way. While these machines might have some learning capabilities, most of their knowledge was meant to be hand-crafted by programmers or "knowledge engineers".
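To make that concrete, here's a toy sketch of my own, not the architecture of any actual historical system: hand-crafted facts, hand-crafted rules, and a forward-chaining loop that derives new propositions from old ones.

```python
# Minimal GOFAI-flavored sketch (illustrative only). Facts are
# (predicate, entity) pairs; a rule says "anything that is A is also B".

facts = {("penguin", "opus"), ("bird", "tweety")}

rules = [
    ("penguin", "bird"),    # penguins are birds
    ("bird", "has-wings"),  # birds have wings
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, entity in list(derived):
                if pred == premise and (conclusion, entity) not in derived:
                    derived.add((conclusion, entity))
                    changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# [('bird', 'opus'), ('bird', 'tweety'), ('has-wings', 'opus'),
#  ('has-wings', 'tweety'), ('penguin', 'opus')]
```

Note that everything here is explicit and discrete: the system only "knows" what a human wrote into it or what follows mechanically from that, which is both the appeal and the limitation.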
It's not original with them, of course; it was a founding principle of GOFAI, incarnated in Newell and Simon's notion of a General Problem Solver and the entire subfield of planning.
A grand project to build a formal representation of all human knowledge! Not Wikipedia, but knowledge in a formalized frame system, covering everything from common-sense facts about objects and money to botany, human relationships, and the economic activity of Indonesia. This was sort of the moonshot project of GOFAI, led by Douglas Lenat, first at MCC (a big semi-governmental research lab in Austin) and later at its own company, Cycorp, which is still going.
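Cyc's actual representation language is CycL, which I won't attempt to reproduce here; but purely as an illustration of what "knowledge in a formalized frame system" means, here is a miniature frame system of my own in Python: frames with named slots that inherit values from parent frames.

```python
# A miniature frame system (my own sketch, not Cyc's machinery):
# each frame has named slots and inherits missing slots from its parent.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        """Look up a slot locally, falling back to ancestors."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent:
            return self.parent.get(slot)
        return None

physical_object = Frame("PhysicalObject", tangible=True)
money = Frame("Money", parent=physical_object, medium_of="exchange")
dollar_bill = Frame("DollarBill", parent=money, denomination=1)

print(dollar_bill.get("denomination"))  # 1, a local slot
print(dollar_bill.get("medium_of"))     # "exchange", inherited from Money
print(dollar_bill.get("tangible"))      # True, inherited from PhysicalObject
```

Multiply this pattern by millions of hand-entered assertions and you have some sense of the scale of the Cyc undertaking.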
Good Old Fashioned AI, the kind that actually tried to model human thinking with carefully designed symbolic computational procedures. Quite different in methods and purpose from the AI of today, which relies on machine learning from massive datasets. There is a deep cultural divide between the two camps, one that dates back much further than the recent triumphs of the ML approach; it goes back at least to Minsky and Papert's evisceration of the neural/ML models of an earlier day in Perceptrons.
Whereas GOFAI is fundamentally discrete or digital (a proposition is either true or false), LLMs and other deep learning machines work via continuous mathematics: everything is a numerical probability, and the space being explored is smooth (continuous).
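The contrast fits in a few lines of toy code (my own sketch, with made-up weights standing in for a trained model): the GOFAI-style answer is a hard boolean, while the neural-style answer is a graded score that varies smoothly with its inputs.

```python
import math

# Discrete, GOFAI-style: a proposition is simply in or out of the KB.
knowledge_base = {("bird", "tweety")}

def holds(prop):
    return prop in knowledge_base  # True or False, nothing in between

# Continuous, neural-style: smooth arithmetic squashed to a probability.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def plausibility(features, weights):
    return sigmoid(sum(f * w for f, w in zip(features, weights)))

print(holds(("bird", "tweety")))              # True
print(plausibility([1.0, 0.3], [2.0, -0.5]))  # ~0.86, a point on a smooth curve
```

Nudge a weight in the second system and the answer shifts by a little; there is no analogous "small change" to a boolean, which is precisely why the two styles lend themselves to such different methods (hand-crafting versus gradient-based learning).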