On Credit and the Idiom
Here’s a way to prepare for the explorations of this post: think about “argumentation,” what we mean by it, what counts as a “good” or even “genuine” argument, and then “counter-argument,” how to teach and assess one’s argument, and so on. The go-to disciplines for addressing these issues would be philosophy and rhetoric, along with the legendary contest between them, from which all the other disciplines in the human sciences flow (I have not credited rhetoric in the past, so I’m correcting that now, considering that rhetoric essentially co-emerged with philosophy). But rather than the whole array of logical models and rhetorical techniques we might appropriate from those disciplines, what if we just say that “argument” is knowing how to use a particular cluster of words and phrases: because, nevertheless, therefore, for example, imply, and dozens or even hundreds of others—a few of them primes and others products of the metalanguage of literacy. You learn how to use all these words in all the varying and ever-changing ways they can be used, by reading and writing and being read, by working with texts and producing them. There’s quite a bit of technics here, but not of the kind you’d learn from rhetoric, with its hundreds of devices, thoroughly labeled and defined, but also standing outside of ordinary language to create a language with narrow designs on the other. With the techno-lingual or originary semiotic approach I’m proposing, it’s a question of operating on vast linguistic infrastructures to bring new scenes into view and into play, and in a way directly reliant upon their intricate histories.
That is endless language learning, which I have for a long while placed at the center of human existence: the question of meaning is a literal one, concerned with our declaratives cashing out in exchangeable ostensives. We never really know our own language because language is a home of idioms that are perpetually invented and re-invented. Reading a new text means learning the language that text is constituting; entering a conversation means learning the language that has emerged amongst the speakers. But language is gesture and motion, and it is sedimented in our artifacts, so aligning ourselves with scenes and stacks of scenes is also language learning. Learning is contributing to the constitution of scenes, which means manipulating props and scenery, and becoming a prop and part of the scenery oneself. Entering into scenes also means marking and re-marking traces of the elements of those scenes, all derived from a single scene, but not in any way one could comprehend on a particular scene. We can model this working with traces on the work of the philologist, confronted by a series of variants of a particular text, or set of texts, and having to determine the provenance of each—originally, of course, to identify the genuinely sacred text that could be traced back to revelation through its generations of loyal caretakers. Now, though, we want to know how the array of scenes constituting us now is the result of a reconfiguration of earlier arrays of scenes, and we want to know this for the arrangement of orderly succession and the settlement of cases. Just as the metaphysical concepts of transcendence, logos, truth, and so on can be reduced to certain ways of using conjunctions and pronouns, historical inquiries can be reduced to identifying viable chains of hand-offs of power. 
In both cases this is a huge letdown only if one can imagine acting only on a small set of inherited Big Scenes, free of the Stack; otherwise, creating meaning will have to suffice: if you want to continue to use words like “noble,” for example (and why not, it is still part of the language), then you need to install the grammatical stacks and idioms through which “noble” will pave a path toward an ostensive, to an act ramifying across the stack of scenes that enough of the noble themselves will point to so as to perpetuate the sign.
I’m working through the implications for center study of Samuel Chambers’s analysis showing, convincingly and, for me, “pleasingly,” that all money is credit—any token refers to a creditor and debtor. What those tokens entitle you to will, then, depend on the quality of your debtor or, more precisely, on the assessment, by those who would have to grant you the entitlement, of the quality of that debtor. Having the US Treasury as your debtor when you hold dollars is having a very solid debtor, until it’s not. The state goes into debt so that it can be indebted to you so that you will in turn draw upon that credit to encourage others to provide you with things, for which others have to provide them with things, and so a chain of credit ultimately winds its way back to the state—which therefore must maintain a monopoly on that line of credit, at least in part by outlawing, by force, either the use of other lines of credit or the refusal to take tokens issued from this one. But we must then factor in the exchanges of money, or tokens, which I hypothesized in my last post means paying for use of the money at a particular time: you pay more for the use of money if you need it now than if you can wait until later. But even those who can generally wait until later need it right now, because enough of the people who generally need it now were not able to pay for their use of it. Chambers assumes that the production (of commodities) system is still essential to the money system, perhaps in part due to a residual Marxism, but I think he’s right. The financial system can greatly “distort” the production system by treating ownership of the assets comprising that system as forms of money, like collateral, so that whoever can discount the value of those assets against expected future earnings can make the commodities cost something other than they “ordinarily” would.
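The time-pricing of money described above (paying more for money needed now than for money one can wait for) and the discounting of assets against expected future earnings both rest on the textbook present-value idea: a sum promised later is worth less as money today. A minimal sketch of that idea; the function name and the numbers are hypothetical, chosen only for illustration:

```python
def present_value(future_value, rate, periods):
    """Discount a sum promised 'periods' from now back to today,
    at a per-period discount rate (the price of needing money now)."""
    return future_value / (1 + rate) ** periods

# Hypothetical case: an asset expected to yield 1000 in 5 years,
# discounted at 4% per year, counts as money today at roughly 822.
pv = present_value(1000.0, 0.04, 5)
```

The higher the rate, the steeper the penalty for needing the money now rather than later, which is the "paying for use of money at a particular time" in the passage above.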
But Chambers is pretty clear that there is no “ordinarily,” and there never was—money and commodity were always tied up together as long as we’ve had capitalism. But capitalism can continue indefinitely because as long as people need commodities, productive assets can always be revalued and become a source of monetization once again.
But the ultimate source of credit is succession, and we could say that governance involves indebtedness to the governed, with the government backing the donations to the center commanded of the governed. If succession is insecure, credit is insecure: this is the grain of validity in the governing class’s fear of Trump, since succession under the current regime means keeping the bureaucratic displacement of the juridical sufficiently robust and directional as to wage war or impose debt enforcement on the constituency of the potential Big Men, wherever they may be, while preventing any Big Man from rising to the top. There’s an arbitrariness to it, since no action can ever be taken that won’t create fertile ground for the emergence of the chief or the sacred or divine king. And a constituency for such an emergence seems to be taking root in the one area of American, and maybe even global, life where something like monarchies can exist—the tech world. And something like monarchies can exist in the tech world because that world has been a reliable weapons supplier to the bureaucracy against the Big Men, but that no longer seems to be the case. It’s no coincidence that the tech world is one of the only places that seems conducive to learning, and is therefore the most likely site for the originary debt to be taken up again, in the form of reciprocal indebtedness of governor and governed. And this in turn means that the most basic form of learning, the kind that serves as the infrastructure for scenic design and idiom fluency, is when and how to enforce and forgive the originary debt. Our most fundamental stance toward each other is the question: what do you owe to the center, and what must the center provide you with to make that donation? But what you owe and what you ought now receive also entails a forgiveness of outstanding and even longstanding debts, and there will always be such.
And this involves configuring yourself as data and initiating data exchange with the center. This must always already be technical knowledge, of the kind needed to configure yourself as data selected, curated and preserved against a broader field of data, which is also to say, everything. Money is a particular kind of data, an especially important kind under capitalism, and the only way to transcend capitalism is to create investments that only pay off long after we are all dead—for that matter, that never pay off and are indefinitely deferred, while perhaps allowing a living on the interest and subsidiary investments whose yield is to be funneled back into the deferred one. And that long term investment would have to be in “human capital,” because even if, say, you wanted to invest in growing a new forest whose trees will only be usable for lumber in 150 years, you must also want the human “gardeners” (biologists, chemists, etc.) needed to ensure the growth of that forest. What you’re really betting on, then, is that there will be enough people to tend effectively to the forest. You’re taking on pedagogical futures which can only be valued in your current learning: the more thresholds of idiomatic penetration with the stack you pass, the greater the worth of your pedagogical futures. We pay off our originary debt by producing ever more refined and precise measurements of fitness, where we are continually learning to do things we couldn’t have imagined as something one might do until after we’ve done some other thing.
Originary debt is paid down (and you must pay it down if you want your line of credit kept open) by identifying the beginning of some event as the middle of another event, the middle of an event as the end of some other event, and so on—each such identification is a unit, if not necessarily an equal unit, of learning. Let me back up a little here. I am retrieving an old analysis of mine regarding scenic temporality, in which I borrow heavily from Charles Sanders Peirce’s analysis of boundaries and transitions. If we want to see exactly where the boundary between, say, a red and blue area is, we should examine, at the boundary, how many red and blue “particles” we find mixed together. Where it’s 50/50, that’s where the boundary is. Similarly, how do we determine when a particular event has ended? This became especially important to me in thinking about the closure of the originary event—when, exactly, could we say it has been concluded? An event could only be determined to have been concluded from outside of the event, which is to say afterward, by someone unimplicated in the event—but then the completion of the event depends upon that person deeming it so, and who in turn deems that person to be beyond the “boundary” of the event? We seem to have an infinite regress of events validating or confirming previous events. Peirce proposes thinking about it as follows: consider the beginning of any event to be the middle of another event and the end of yet another event—in that case, you’re always in three events, or on three scenes, simultaneously, and from within each of these three scenes the other two can be deemed open, ongoing or closed. Obviously, we could multiply this model without limit, but three is the minimum, and beginning, middle and end represent the basic narrative structure, so any further analysis would just follow from this basic analysis anyway.
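Peirce’s 50/50 criterion can be given a toy computational form. This is only an illustrative sketch, not Peirce’s own formalism: sample the fraction of “red” particles at points along a line, and locate the boundary where that fraction crosses one half, interpolating between sample points. All names and sample values here are hypothetical:

```python
def find_boundary(red_fraction):
    """Return the (possibly fractional) position where the red
    fraction first crosses 50/50, interpolating between samples.
    Returns None if the mixture never crosses the halfway mark."""
    for i in range(len(red_fraction) - 1):
        a, b = red_fraction[i], red_fraction[i + 1]
        # A sign change (or exact hit) around 0.5 marks the crossing.
        if (a - 0.5) * (b - 0.5) <= 0:
            if a == b:
                return i
            return i + (0.5 - a) / (b - a)
    return None

# Toy samples: mostly red on the left, mostly blue on the right.
samples = [0.95, 0.9, 0.7, 0.5, 0.3, 0.1]
boundary = find_boundary(samples)  # crosses exactly at position 3
```

The point of the sketch is that the boundary is not a given datum but something computed from within the mixture, which is what makes the analogous question about the closure of an event non-trivial.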
So, presence involves establishing simultaneity of these three points in these three events (I also had in mind Einstein’s famous thought experiment regarding light coming from different directions and being registered by an instrument as a measure of simultaneity).
Instead, then, of this analysis serving the purpose of entering some Bergsonian discussion of time and continuity (which it could still do, of course), it now seems to me to have more use as applied to learning: you have learned something when you have coordinated or interoperated these scenes. I would apply this to learning how to work with a tool as much as to learning abstract concepts or learning a language (setting aside that learning to work with a tool is also a kind of language learning). Rather than just being on one scene, which is, to speak phenomenologically, a kind of immediate form of being, the way things appear to a “natural disposition,” what you’re initiating now on some scene with some prop or piece of scenery is also in the middle of discovering some use and at the end of a certain perfection of the scenery, prop or, even, let’s say “app.” We could speak in terms of the conversion of a scene which is the same as itself into a scene upon which we can continually say “this is the same.” Likes converge into sames while new kinds of likes are always being generated and discovered. Everything is like everything else in some way and we now have the computational power to keep generating more and more ways things can be like each other, while this computational power also enables us to decide the likelihood that any number of those things can be said to be the same on a scene. So, that’s learning: operationalizing new ways of saying “this is the same,” sampled out of a scene that is the same as itself.
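The claim that computation can multiply likenesses while also deciding sameness can be made concrete with a toy sketch. Cosine similarity stands in here for any one of the indefinitely many possible likeness measures, and the threshold is a hypothetical, scene-specific bar for saying “this is the same”; the feature vectors are invented for illustration:

```python
import math

def cosine_similarity(u, v):
    """One of many possible likeness measures between feature vectors:
    1.0 means pointing the same way, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def same_on_scene(u, v, threshold=0.99):
    """Declare two things 'the same' when their likeness clears
    a scene-specific bar; below it they remain merely alike."""
    return cosine_similarity(u, v) >= threshold

# Two hypothetical things that are alike in proportion but not identical.
alike = cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.1])
```

Different measures and different thresholds generate different likenesses and different sames, which is the sense in which computation keeps producing new ways things can be like each other.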
This way of thinking about learning is fit to complete and continue our analysis of originary debt as being accounted for and paid off (“cleared”) in the currency of “learncoin,” which now becomes creating scenes within scenes—scenes within scenes that are both discovered and invented while, in some sense, having been always already there—the fundamental originary paradox. And we will always “relapse” in “feeling” ourselves to be on a single scene, and so debt enforcement involves interrupting that scene (interposing the temporality of one of the other scenes), for oneself and others, while forgiveness involves allowing for it as a constant that lets plural scenicity spread out “elsewhere.” And this can only take place on a scene within the stack of scenes, which will come increasingly to involve writing and revising protocols and preparing oneself as a data sample that is like all other samples in ever more anomalous and incongruous ways while being the same under closely controlled ostensive conditions. Wealth, then, comes to be measured in learncoin, which is the approximation of succession practices to pedagogical practices, that is, practices that do nothing but create tripled scenes out of mono-scenes. All value will reside in the promise of new idioms that are transparent in proportion to one’s presence on the scene of saying “this is the same,” with each such scene becoming the same as itself only long enough to become a new scene of saying this is the same in perpetuity.
So, each time your saying “this is the same” is iterated by others upon a scene that started off the same as itself, you have mined a learncoin. How much is that learncoin worth? Its value is as liable to fluctuations as any other coin, depending upon how much credit it provides, how much future that credit has, and how liquid that credit supply is. And that will all depend on proximity to singularized succession in perpetuity. Ensuring greater approximation to succession entails bringing the completion of a learncoinage into alignment with the middle of another mining process and the initiation of a new one. If we think about singularized succession in perpetuity as a kind of fluctuation negatively mirroring the fluctuations of money/credit, then acting sovereignly means seeing to the alignment of these scenes, by continually updating your successor in such a way as to signal new, more or less likely, alignments of scenes. And any kind of acting means acting sovereignly, or in as close approximation to some sovereign action as possible. Only bringing into view and into practice an even more inter-articulated and complex array of futures can displace the futures generated by money as credit.