Universal Translation Machine
The earliest computer translations were comically bad because they tried to translate texts word by word, at first, it seemed, without even a grammar check. But even with grammar added they would have remained comically bad, because languages are idiomatic: as we speak, listen, read and write, we don't check the dictionary definition of every word we use. Machine translation became good when it began to "cheat": rather than producing translations from scratch by matching words of one language with words of another, translation systems worked from a database of already existing translations, checking how specific words, phrases and sentences of, say, English had already been translated into, say, Italian, and drawing on the database that way. For completely novel sentences made of completely novel phrasing (not that common) approximations are possible, while at the furthest reaches of innovation or idiomaticity even this kind of translation still can't be done very well. A similar trajectory characterized the development of AI in the form of LLMs: from early attempts to simulate human speech by producing vast logic trees that might lead from one sentence to another, efforts moved to next-word prediction based, again, on which words have actually followed which words in all of the texts included in the database. In both cases the success of the later approach is obvious, but in both cases it's also possible to think that certain very interesting problems have been skirted rather than solved.
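The "cheating" move can be illustrated in miniature. The sketch below (my own toy example, nothing like a production LLM, which predicts over far richer contexts) counts which words have actually followed which words in a small corpus and predicts the most frequent continuation:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which word follows which word across a corpus of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the continuation most frequently seen in the corpus, if any."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Hypothetical corpus standing in for "all of the texts in the database"
corpus = [
    "the fire is at the center",
    "the fire draws the group",
    "all eyes turn to the fire",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # → "fire"
```

Nothing here "understands" anything; the prediction simply draws on what has already been said, which is exactly the skirting-rather-than-solving worry.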
A similar oscillation has accompanied my own attempts to think about universal translation, since I wanted to produce specifically pedagogical translations, most importantly within the same language, that would more closely approximate what I would now call center study idioms. For this I wanted to use some articulation of Anna Wierzbicka's Natural Semantic Primes and the originary grammar I had been extracting from Eric Gans's The Origin of Language: the ostensive-imperative-interrogative-declarative sequence (with ostensive-imperative-ostensive loops included). The method would be to break down any text or utterance into an explicit articulation of ostensive, imperative, interrogative and declarative, expressed in the primes. Wierzbicka's own explications, in which she takes a particular word or phrase and breaks it down into what is essentially a scenic articulation in which "someone" can "say" that they "see" something, or "want" something, but in one "way" and not another "way," and only "after" something else "happens," and so on, served as a model for thinking about this. Initial approaches seemed to require the creation of what would essentially be an operationalizable proto-language providing in advance all necessary phrases and sentences, but preparing such a database would require massive labor before any machine translation could even begin. This line of thinking, that is, ran aground on the same shoals as those early attempts at machine translation and language AIs. Another approach, following the same trajectory as those fields, might be to start with fully developed modes of language, designated as specific articulations of originary speech forms, with samples translated into primes, and then get the algorithms churning. The breakdown into primitives would still be necessary, but only as needed, say, to advance a particular side conversation, not as a pre-existing comprehensive base.
It's helpful to think about what and whom such a machine would be for, and it's possible to be more precise about this now. Such a machine would be part of the officer class pedagogical company (let's say: The New Officer Class Academy) discussed a couple of posts back, which would ravel up Fitness, Nomos/Class Action and Thirdness (with its strictly controlled prediction markets on judgments, which now, thanks to a suggestion from Eric Jacobus, I can see as a way of tokenizing deferral). Even more, building, maintaining, updating and deploying the machine might very well be the central, maybe in a sense the sole, activity of the pedagogical company. This technology, like all technology, has a specific goal: to bring ostensives, imperatives, interrogatives and declaratives into closer correspondence with each other. This entails bringing more of language into a performative condition and making performative language increasingly felicitous. Healing the cut of the primacy of the declarative, we might say: putting the overcoming of metaphysics on firm footing. There's a place here for a kind of extreme declarativizing. There's an Oulipo procedure that calls for replacing every word in a text with its dictionary definition (and, of course, we could then replace all the words thereby produced with their dictionary definitions, and so on), and a thoroughgoing prime translation of a text would look something like that. This is one of the ways I have thought of originary satire: taking things belonging to the ostensive and imperative realm and rendering them so completely declarative as to expose the ridiculousness of trying to "justify" every ostensive and imperative. But it could also work the other way, exposing the ridiculousness of certain ostensive gestures and imperatives. Either way, the ostensive-imperative-ostensive realm is being put to the test by being brought into the light of day of the declarative.
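The Oulipo procedure is mechanical enough to sketch directly. With a toy dictionary (a real run would draw on an actual dictionary; these three entries are invented for illustration), each round replaces every defined word with its definition, and rounds can be iterated:

```python
# Toy dictionary standing in for a real one
DEFS = {
    "fire": "combustion producing light and heat",
    "light": "radiation visible to the eye",
    "heat": "energy felt as warmth",
}

def declarativize(text, rounds=1):
    """Replace each word with its dictionary definition, `rounds` times.
    Words with no entry are left in place."""
    for _ in range(rounds):
        text = " ".join(DEFS.get(w, w) for w in text.split())
    return text

print(declarativize("fire", rounds=1))
# → "combustion producing light and heat"
print(declarativize("fire", rounds=2))
# → "combustion producing radiation visible to the eye and energy felt as warmth"
```

Even two rounds show how quickly the procedure balloons an ostensive ("fire!") into an unwieldy declarative, which is exactly the satirical effect described above.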
But such extreme declarativizing would itself be performative, because no one, not even the most “academic” of intellectuals, could ever really speak like that, so it’s a way of exposing the limits of the declarative rather than just stating propositions.
We could start, then, with the idioms of center study and a list of performative discourse forms and then put together a database of high-quality texts, narrowing a range of world traditions down to those we want to leverage in training contemporary utterances, prompting the model to translate contemporary texts of all kinds into a more centered discourse. We create the weights as we go by privileging those idioms that turn discourse more toward the center, creating, say, hypothesis/thought-experiment/prayer/promise hybrids from which we can derive modes of tokenization and protocols for blockchaining data. It is here, in the selection and refining process, that originary grammar and explications in terms of primes are employed. We search for closer ostensive-imperative-interrogative-declarative fits, using reduction to the primes to further clarify the scene each utterance presupposes: what is the relation between someone thinking, someone doing something, something happening, someone thinking they see someone doing something when something is happening, and saying this is good, and so on. We would be preserving and reactivating a whole range of traditions, and not only those of high culture but everything that reveals something of humanity, because none of it should be lost, even those parts we wouldn't want to revivify. This selection process is really a refining process, and small changes in the prompts would yield significant dividends at the margins where entire forks of civilization are at stake.
So, AI in the broadest sense is translation machinery: from input to output is translation; next-word prediction is translation, because each new word reconfigures the discourse into a different one; imitation of human language is translation, from possible to actual. All of the arguments about AI ethics, AI alignment, AI safety, and so on are about translating the mass of data we have now collected, tagged and sorted in ways that are "better" rather than "worse," with no one being able to say exactly what's better or worse without resorting to liberal bromides. We'll be able to say, and in a way that takes in the entire supply chain of data organization and processing: better is more performative, better is more useful in settling cases, better is placing winning bets on pedagogical futures, better is widening, deepening and expanding the nomos by making more namespaces. And this "better" is determined within the pedagogical institutions serving as a pipeline for the leading companies in data security. Everyone will eventually be working on the universal translation machine, because everyone has worthwhile data to offer, while at the same time the need to gather forces against antinomic vendettas means filters are created that allow only a few in for now: Nomos/Class Action will be part of the machinery for this very reason. Zack Baker's tokenization of kingship argument provides an approach: those producing and handling data in more useful ways receive tokens providing access to the subscription system the company builds with other companies, with the tokens serving as currency. The translation machinery should be able to instantiate Alexander Good's goal of using AI to predict the economy well enough to invest successfully enough to, ultimately, govern. But such predictions must encompass what encompasses the economy: the broader array of kinship relations, pedagogical practices and succession rituals.
How far into the future can we see family trees branching out, or increasing precision in showing others how to do things one way rather than another, or the likely trajectories of candidates for the governing class a couple of generations down the road? Whatever "picture" we could get of all this now would guide us in our current practices, which is to say it would become input that makes something like that future more likely but also, precisely because of the increasingly valuable flows of information entering the machinery, more likely to look different than what we can imagine now. Such predictions, then, are more a question of fitness and readiness in the present, while also being an abundant source of idioms.
Finance provides the best idioms for weighing pedagogical futures, with "futures" itself, of course, a derivative of financial terminology. Finance is completely future oriented, organized around exchanges to take place some time in the future, with the contract to engage in such an exchange then itself becoming an asset that can be bought or sold, and then used as collateral or included with a batch of other assets that serves as a hedge for some other investment. Of course, anyone with financial knowledge knows all this, and knows where I'm getting things wrong or putting them too simplistically; I will try to keep learning and improving. Money, according to Samuel Chambers, is credit, and nothing but credit, which I embrace since it's entirely consistent with the concept of originary debt and makes it clear how the entire world runs on exchanges of confidence that change constantly. Finance is money exchanged for money, which, if Money Has No Value, as the title of Chambers's book has it, shouldn't be possible; but (and you'll have to read Chambers's book to see what I'm doing to his argument here) it seems to me that money exchanges are essentially exchanges between those who want the money at one time and those who want the money at another time. What's being exchanged, then, is ready access to money, or liquidity, on the one hand, and the accumulation of reserves of it, on the other. Someone is obliged to pay me $100 ten years from now, i.e., I am their creditor, and so I can sell you that credit for $60 right now, because I can't wait for the money, or can't take the chance that the debt will not be repaid. You, on the other hand, can wait and can absorb that risk, and what is involved here is simply a money-as-power equation, because you have a far larger spread than I do.
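The arithmetic behind that $100-for-$60 exchange can be made explicit. Paying $60 now for $100 in ten years implies a discount rate, the annual return the buyer earns for waiting and absorbing the risk. A few lines (my own illustration, not anything from Chambers) solve for it:

```python
def implied_annual_rate(face_value, price_now, years):
    """Solve price_now = face_value / (1 + r)**years for r."""
    return (face_value / price_now) ** (1 / years) - 1

# The creditor sells a $100 claim due in 10 years for $60 today
r = implied_annual_rate(100, 60, 10)
print(f"{r:.4f}")  # → 0.0524, i.e. about 5.24% per year
```

The "spread" in the passage above is visible here: whoever can wait pockets roughly 5.24% a year simply for having reserves, which is the money-as-power point in miniature.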
This means that the critical financial devices of derivatives and arbitrage are solely measures of power: not everyone can be in a position to scour the markets looking for assets priced differently in different markets, scoop them up at the lower price and sell them instantaneously at the higher one. Chambers makes a very powerful case that there has never been a non-financialized capitalism and never could be, and that finance shapes the underlying commodities speculated on in constitutive ways: it's not a distortion of some real economy. All this is completely consistent with Colin Drumm's dissertation (which Chambers cites, very favorably) and Bichler and Nitzan's Capital as Power (which he doesn't).
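The arbitrage scan itself is trivial to state in code; what is not trivial, and this is the power point, is being positioned to see both markets at once and to act instantaneously. A minimal sketch, with invented quotes:

```python
def find_arbitrage(quotes_a, quotes_b, min_spread=0.0):
    """Compare two markets' price lists (asset -> price) and return
    (asset, buy_price, sell_price, market_to_buy_in) for each gap
    wider than min_spread."""
    opportunities = []
    for asset in quotes_a.keys() & quotes_b.keys():
        pa, pb = quotes_a[asset], quotes_b[asset]
        if abs(pa - pb) > min_spread:
            buy_in = "A" if pa < pb else "B"
            opportunities.append((asset, min(pa, pb), max(pa, pb), buy_in))
    return opportunities

# Hypothetical quotes for the same assets in two markets
market_a = {"X": 99.0, "Y": 50.0}
market_b = {"X": 101.0, "Y": 50.0}
print(find_arbitrage(market_a, market_b))
# → [('X', 99.0, 101.0, 'A')]  (buy X in market A, sell in B)
```

The loop costs nothing; the access, speed and capital it presupposes are the measure of power.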
Finance’s futurism serves as a kind of funhouse mirror, but one accurate down to the last detail in its own way, of the transference of originary debt to succession rituals I like to imagine. For Chambers, bitcoin cannot be money because it cannot sustain a creditor-debtor relation (indeed, it is designed precisely not to have such a relation); he also addresses the question of whether it is something else, potentially post-capitalist (which Chambers, who seems clearly critical of capitalism, would probably take an interest in), and concludes, not really. All this will have to be considered, and there are all kinds of implications of blockchaining other than the creation of new currency, but Chambers's argument helps me to reconcile my own sense that bitcoin is ultimately an attempt to cancel all debt to the center, and therefore will always be limited. We should want the world to be predictable in certain respects and unpredictable in others: we want to build forms of exchange that will be recognizable as precedents to forms of exchange to take place in the distant future, without us, now, being able to tell what those future forms of exchange will look like. We would want the things historians can recognize in the past, such as a small, backward province having what were then unrecognizable capacities eventually to become an empire, to be visible to us in the present, regarding the future. Then we could issue credits to some marginal sector or technology, seeding it for skunkworks and channeling promising students in that direction; but that is only possible insofar as the value of that sector or technology is not discounted against expected future earnings in such a way that, for example, there are incentives to invest, hype, sell high and then let it crash. The best way to counter capitalism, it seems to me, is to be able to predict the value of a particular investment decades or more into the future, so that even the most narrowly profit-oriented will have the incentive to stick it out.
Then, considerable social energies can be directed toward reinforcing that prediction, thereby improving its odds, and toward building in other directions that depend upon the long-term viability of that project. And this would to some extent depend upon, and to some extent help create, new kinship networks that would make all classes want to provide for future generations.
This may seem like a digression, but this is the kind of thing the universal translation machine should be for: not only, and not even primarily, to predict investments, but to predict, given various hypothetical conditions depending upon the state of the juridical and the disciplinary, the prevalence and distribution of specific dispositions, which can themselves only be located upon pedagogical scenes. This is where the work of approximating discourse to performativity and the donation of resentment to the center leads: this is the path to the conversion of assets into data. I am writing this portion of this post the day Trump announced that, if he wins, he will establish an online American university, free to all, free of wokeness, and dedicated to making available the highest forms of knowledge. Maybe that will be a place to start building; more likely, it will serve as a model for forms of higher education rivaling and poaching from the current university system. What will count as the highest or, for that matter, genuine knowledge will clearly be up for grabs, and center study should be ready to enter the fray, with our universal translation machine, predicting pedagogical futures. Once the prediction of future value extends beyond decades and looks toward centuries, we're not talking about capitalism or money any more; it will just be a question of the officer class and a few generals taking the leap from control over assets to positions within pedagogical platforms that provide a spread in the form of supply chains that have also moved off the capitalist line and operate ritualistically, with the ritual being the tributary one of succession, with the apex operators of each platform exchanging goods, services and materials with each other for publicly staged succession rituals, which ultimately become, simply, the nomic order.