Scenic/Event Intelligence
The event/scene duality, then, runs parallel to the information processes that sift noise out of communication systems. Posing the problem this way (which must be done just to ensure GA’s competitive superiority to post-cybernetic theoretical systems) allows us to notice that constructing a scene has a filtering function, insofar as it sets up an inside and an outside. The symmetry established on the originary scene filters out any extraneous mimesis—that is, any mimetic action not tending toward the appropriation of the central object. Everyone knows exactly what the others are doing. Anyone coming late to the scene would either fit right in or render himself irrelevant, his movements essentially random. But this perfect information becomes a surplus of information, increasing entropy exponentially: everyone knows everyone else wants the object, but no one knows who is going to get it, or what the others, and oneself, will do if the drive to appropriate it continues. Everyone seems capable of anything, and so no information can be derived from the scene. The aborted gesture of appropriation, then, restores that fleeting condition of perfect information in a more sustainable form. From here on, we can always know that we might at least be converging through signs upon the same object, in inverse proportion to the degree to which we are converging upon it with the aim of possessing it at the expense of the other. The information is clearest, then, when we are most certain that appropriative gestures have been suspended—but this information becomes intelligence insofar as we can include in our representations the entire spectrum of possible advances upon the object. The richest conversation, then, would be one in which both participants are completely aware—increasingly aware, as the conversation proceeds—of all that they might do to each other, while also being increasingly certain that they won’t do any of it.
It’s in this running up and down the scale of possible but suspended actions and counter-actions that we can see the event “profiled” against the scene.
If we ask how we determine the boundaries of a scene and the end of an event, the only answer can be: upon another scene, in another event. A scene is composed, or an existing scene adapted, so as to concentrate focus on the center—it’s easy to think of examples, such as ritual scenes, but also theaters and lecture halls. But, then, a beggar locked out of a house of worship might weaken the boundaries separating the inside of the scene from its outside; the same with a protest that breaks into a lecture hall from the outside, or the opening-night festivities outside the theater. With highly formalized events such as plays, lectures and rituals we may feel certain we know when they end—indeed, we might say that much of civilization and community depends upon everyone knowing the relation between inside and outside, and between beginnings and ends. But there’s no reason to accept the formal closure as the end of an event, to the exclusion of its various reverberations and resonances. And with less formalized scenes and events—conversations, encounters, strolls, exchanges of emails, romantic affairs, lifetimes—which, tellingly, we often try to model on more carefully enclosed events like works of art—the possibilities are endless. But we can only make sense of some “relationship” by imagining an end to it, even if that end can also be imagined as a new beginning.
To close one event, then, is to open another—the event of closing the previous event. To identify the boundaries of the scene is to construct another scene within or surrounding that scene. I long ago took from Peirce a model for thinking about this. Imagine we want to determine the exact line where one entity ends and the adjacent one begins. We can represent the two entities as different colors, say red and blue. Within each entity, we have all blue and all red, respectively; at the boundary, we have an equal number of blue and red “particles” on each side—and, of course, we can always develop sensory mechanisms enabling us to detect smaller “particles,” in which case the boundary might shift insofar as we can now see a bit more blue on the red side than before. Whatever we take to distinguish a same from an other can be thought of in this way, and the more complex the entities, the more kinds of “particles” we might be interested in. Similarly with events: the beginning of an event is the middle of another event and the end of yet another—so, when we identify the end, middle or beginning of some event, we do so in the middle, beginning or end of another event.
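The Peircean boundary model can be made concrete with a toy sketch, entirely my own illustration rather than anything from Peirce: a strip of fine-grained “particles” is classified at two resolutions, and refining the resolution shifts where the red/blue boundary appears to lie.

```python
# Toy sketch of the boundary model (my invention, not Peirce's notation):
# a 1-D strip of "particles" is mostly red ('R') on the left and mostly
# blue ('B') on the right. At a coarse resolution we label each cell by
# majority color; a finer resolution can relocate the apparent boundary.

def classify(strip, cell_size):
    """Label each cell 'R' or 'B' by the majority color of its particles."""
    labels = []
    for i in range(0, len(strip), cell_size):
        cell = strip[i:i + cell_size]
        labels.append('R' if cell.count('R') >= cell.count('B') else 'B')
    return labels

def boundary_index(labels):
    """Index of the first 'B' cell: where red ends and blue begins."""
    for i, label in enumerate(labels):
        if label == 'B':
            return i
    return len(labels)

# A strip with a blurred transition zone: some blue bleeds into the red side.
strip = list('RRRRRRRRRB' 'RBRRBBRBBB' 'BBBBBBBBBB')

coarse = classify(strip, 10)   # three big cells
fine = classify(strip, 2)      # fifteen small cells

print(coarse, boundary_index(coarse) * 10)  # boundary in particle units
print(fine, boundary_index(fine) * 2)       # finer look: boundary shifts
```

At the coarse resolution the boundary falls at particle 10; at the finer resolution it falls at particle 14, because some cells that looked blue from afar turn out to be majority red up close, which is exactly the shift the model describes.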
I’m borrowing from a “philosopher” here, albeit a rogue one—the only one, to my knowledge, to take the “community of inquirers,” rather than the individual mind or consciousness, as the “epistemological unit.” And, of course, a semiotically based thinker, whose icon/index/symbol triad maps helpfully, if inexactly, onto the ostensive/imperative/declarative one we draw from the originary hypothesis. But we end up back in metaphysics if we try to “define” scenes and events outside of their evidencing themselves in linguistic acts and artifacts. Data ultimately has to resolve itself into ostensives, imperatives, interrogatives and declaratives. More precisely, into the “proper” relations between these speech forms. Any imperative is issued from an ostensive and extends itself into an interrogative, and any declarative will “cover” the interrogative by supplying a new ostensive that will enable the fulfillment of the imperative and thereby keep us upon the scene of the ostensive that began the sequence. All the ethics and morality we would ever need must be here. “Bad” and “evil” lie in imperatives that obscure the ostensive they issue from, in interrogatives that are actually imperatives, in declaratives that don’t situate you on a field of ostensives, etc. But, to anticipate the deconstruction of these boundaries, such “deviationist” speech acts might be performed in protected or enclosed spaces so as to provide examples of what we must learn to look out for. But those protected or enclosed spaces can open up into the “general economy,” and “make a scene” out of a scene trying to enforce scenelessness. So, the most important boundary is that between obscuring and participating in the design of a speech act sequence.
You can say that there’s no greater indictment of an authority figure than commanding people to do what cannot be done, but one can imagine a productive and necessary pedagogical scene that does just that, precisely in order to reveal the boundaries of the possible in a way that is more credible than had they been presented declaratively—in which case, we could say that we’d have to trace the imperative through to the declaratives it issues in. But that would also mean we might always be able to save a misfired imperative by having it land where it should have been aimed in the first place—and then, perhaps, it’s a question of what the commander has to say when he sees the results. We have to construct and replay the entire sequence within another sequence, with the aim of (obeying the imperative of) exhaustively fitting all the speech acts into the sequence—at this point, we would no longer need to speak about “morality” and “ethics.”
I’m obeying the imperative to remap the human on the model of the originary scene, an imperative drawn out of the originary hypothesis’s “revelation” to me. Obeying this imperative entails treating everyone else as if they’re doing the same thing, which amounts to assuming that we’re all still within the originary event, ensuring it remains open by acknowledging provisional closures (as soon as the ritual comes to an end, we treat that ritual as the middle of something). But this also means doing so, to the best of my ability, through all the idioms spoken by everyone, idioms that become more differentiated the more one tries to learn them. It’s easy to get lost in another’s idiom, but at some point some boundary will emerge distinguishing between idioms, and then you might find you’re making the idioms speak about themselves. It’s good to try and sustain that indefinitely, speaking in and about an idiom, thereby creating new idioms. An idiom is a particular set of ostensive-imperative-interrogative-declarative sequences, so learning an idiom is learning when, for example, someone asking a question or making a statement is really requesting that you do something.
Linguistic data, then, most fundamentally, is an idiom found oscillating between the two sides of a boundary, and any idiom can be found there in the right scene/event articulation, so we can create practices of maximizing the production of linguistic data that would call for (issue an imperative to establish) disciplinary spaces to form around them. These practices would have us translating infrastructures so as to display that idiom on that boundary, surrounded by a field of idioms held constant within boundaries for the moment. These are the same samples I’ve been speaking of. It’s a question of developing the habit of asking what (implicit) question a declarative is answering, what imperative would have extended and converted itself into that question, and what shared object issued that imperative to sustain itself. And then asking what other ostensives, imperatives, interrogatives and declaratives are like these, and how? I’m very drawn to Paul North’s proposed world of likenesses here: everything is like everything else in some respects, and everything is constituted not by its own essence or becoming, but by the networks of similitudes continually constituting it. For me, when we single out the same out of a world of likenesses and enlikenings which are deferred (kept at various degrees of “like,” prevented from advancing on to “same”), we can say that we know something—contingent upon others joining and sustaining that scene or idiom of knowing.
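The habit of tracing a declarative back to its implicit question, the question back to an imperative, and the imperative back to a shared ostensive can be sketched as a minimal data structure. This is my own illustration; the sample utterances are invented placeholders, not examples from the text.

```python
from dataclasses import dataclass

# A single ostensive-imperative-interrogative-declarative sequence.
# The example sentences below are invented stand-ins.

@dataclass
class OIIDSequence:
    ostensive: str      # the shared object pointed to
    imperative: str     # the request the ostensive sustains
    interrogative: str  # the imperative extended into a question
    declarative: str    # the "cover" supplying a new ostensive

    def trace_back(self, from_form="declarative"):
        """Walk a speech form back toward the ostensive it rests on."""
        chain = ["ostensive", "imperative", "interrogative", "declarative"]
        i = chain.index(from_form)
        return [(form, getattr(self, form)) for form in reversed(chain[:i + 1])]

seq = OIIDSequence(
    ostensive="Look—the data.",
    imperative="Explain the data.",
    interrogative="What does the data show?",
    declarative="The data shows a pattern.",
)
for form, utterance in seq.trace_back():
    print(f"{form:>13}: {utterance}")
```

Reading the trace from bottom to top reproduces the essay’s sequence; reading it top to bottom is the interpretive habit itself, asking of each declarative what question, imperative and shared object stand behind it.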
Here’s an idea for a think tank. Create sets of ostensive-imperative-interrogative-declarative (let’s go to the acronym: OIID) sequences out of Anna Wierzbicka’s natural semantic primes. We’d need a lot, so there’d be plenty of work for plenty of researchers. Then, choose a range of training texts—again, we’d need a lot here. Then, translate the texts into the OIID sequences. (Tons of work here too.) This could then be computerized, like any translation or language generation program. (Lots of work for tech guys.) Then, the OIID translations are translated back into English. Here’s the trick: this second translation doesn’t just replicate the original text insofar as we compose the OIID sequences to function as filters that would produce revelatory differences from the original. The original texts will usually have defective OIID sequences, which we can eliminate from the prime OIID sequences. So, the OIID sequences into which the original text is translated would seek out different English chunks to be retranslated into. The difference between the original and retranslated text exposes the “ideological” implications of the original. We can expect the retranslated text to have both more perfected OIID sequences and more incoherent ones, and algorithms can be continually refined to achieve different effects. These effects will be visible to the trained eye—this is also a pedagogical program, aimed at training a new datafied officer class. Now, I do insist that this can be done (while also mentioning that I’m completely unafraid of anyone stealing the idea because, well, anyone who really gets and is taken by this idea would want to work on it with me anyway) but the purpose of such a program is also to make it possible to start acting and thinking as if it’s in process and we’re already working on a rudimentary form of this translation program, attuning ourselves to exposing and noting the kinds of similarities and differences it is designed to reveal. 
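In the spirit of acting as if a rudimentary form of the program were already in process, here is a deliberately crude toy sketch of the round-trip: tag utterances with speech forms, drop “defective” sequences (here, a declarative that answers no question), regenerate the text, and diff it against the original. The tagging rules, the notion of defectiveness, and the sample sentences are all my own invented stand-ins, not the actual program proposed above.

```python
import difflib

# Toy round-trip pipeline (my invention; real OIID analysis would require
# Wierzbicka's semantic primes and far subtler classification):
# 1. tag each utterance with a speech form by surface features alone,
# 2. filter out "defective" sequences,
# 3. retranslate, 4. diff against the original to expose what was dropped.

def tag(utterance):
    """Crudely tag an utterance as a speech form by surface cues."""
    u = utterance.strip()
    if u.endswith("?"):
        return "interrogative"
    if u.endswith("!"):
        return "imperative"
    if u.lower().startswith(("here", "there", "look")):
        return "ostensive"
    return "declarative"

def to_oiid(utterances):
    """'Translate' a text into a tagged sequence of speech forms."""
    return [(tag(u), u) for u in utterances]

def filter_defective(sequence):
    """Keep a declarative only if some interrogative precedes it."""
    kept, question_seen = [], False
    for form, u in sequence:
        if form == "interrogative":
            question_seen = True
        if form == "declarative" and not question_seen:
            continue  # defective: a declarative covering no question
        kept.append((form, u))
    return kept

def retranslate(sequence):
    return " ".join(u for _, u in sequence)

original = [
    "Look!",                   # imperative
    "There is the object.",    # ostensive
    "Everyone wants it.",      # declarative answering no question: dropped
    "Who will take it?",       # interrogative
    "No one will, for now.",   # declarative covering the question: kept
]

round_trip = retranslate(filter_defective(to_oiid(original)))
for line in difflib.ndiff([" ".join(original)], [round_trip]):
    print(line)
```

The diff of original against retranslation is the point of the exercise: the chunk the filter silently removed is exactly the kind of “revelatory difference” the program is meant to make visible to a trained eye.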
In this way, scenic design practices work directly on the idiomatic field, events are built directly into scenes, and all of language is turned into data around which an unlimited number of disciplinary scenes can be constituted.