The False Ruins of the Future
Conversations pertinent to large-scale change – whether positive, negative or unclear – are fraught with discursive problems, chief among them the prevailing eschatological modalities, which drive up anxiety and fear and abolish any semblance of reason, thereby making effective, human-positive1 communication, and thus adaptation, difficult or impossible, for the future – in an eschatological schema – is conceptually fixed. Invariable. If one truly believes that, through divine providence or scientific validation, the end of the world is swiftly approaching, then there is no reason to engage in skeptical discourse. One knows the end is nigh. What is there to debate? Who can jaw with the reaper? What hubris! This anti-discursive attitude is intensified by numerous cognitive propensities, most of which are not made explicit in classical folk-psychological schemas. That is to say, we do not think the way we think we think. Our “map of the world” is accurate to the degree necessary for survival and propagation, but it does not disclose itself to us; hence one cannot think a thought, rather, the thought renders itself apparent to the self, as if of its own accord. When one recoils in fear from a snake, it is not because that individual “thought through” the risks posed by the presence of the reptile, but because some portion of the biological system “raised the alarm.”
Further, when such eschatological beliefs become societal norms, so too do their attendant anti-discursive or anti-dialogical behaviors. This is a problem, as a certain degree of discursive openness is required for coherent and efficient societal or civilizational operation.
The best contemporary example of the potency of eschatological thinking can be found in climate catastrophism2, a modern end-times political theology which holds that unless immediate, collective, drastic action is taken by nearly the whole of the world – typically in the form of total de-industrialization – the earth’s slight warming trend will intensify, causing widespread ecological disaster and the end of humanity or, at least, the end of civilization as such. The acolytes of this disasterism arrive at these conclusions – generally – through the regurgitation of a narrative propagated via various media outlets, both mainstream and independent, credentialed and rogue. In this modern iteration of the end of all things, it is not divine providence which closes the book of history, but rather man’s own hubris and destructive drives (typically conceptualized as masculine); the ceaseless accumulation of natural resources and so on. Whilst there is clearly a mild warming trend, it is not at all obvious that this will lead to invariable disaster, certainly nothing so extreme as human extinction. What makes the climate catastrophist’s eschatology even more unlikely is that its proponents often give timetables for when global warming is to begin causing the collapse of civilization, yet these predictions are consistently proven wrong (consider the predictions made by Al Gore in An Inconvenient Truth). It is pertinent to mention that similar claims emerged in the 1970s, only at that time the fear was of global cooling, not global warming. In 1963, the American climatologist J. Murray Mitchell showed that a global cooling trend had been occurring since 19403. Then, in 1968, the ecologist Paul Ehrlich, in his book The Population Bomb, decried the western world’s use of the atmosphere as “a garbage dump.” In 1970, the Washington Post published an article colorfully titled Colder Winters Held Dawn Of New Ice Age: Scientists See New Ice Age In The Future.
Concern over global cooling reached an apex in 1972 and 1973, when Asia and parts of North America witnessed unusually severe winters. In 1972, the famed geologist and founder of paleoceanography, Cesare Emiliani, wrote, “Man’s activity may either precipitate this new ice age or lead to substantial or even total melting of the ice caps…”4. One could continue piling up example after example of such dire predictions; as is obvious, they were false. It is pertinent to mention that many academics and scientists of the time contested this view and argued that global warming, not global cooling, was the true future threat; but certainly, the most apocalyptic declarations of the period came from the global cooling camp.
The lesson to draw here is that humans are relatively (one might even go so far as to say intrinsically) bad at predicting the future with any degree of accuracy; as a general rule, the further out into the future we attempt to wade, the less accurate such predictions become. Despite the faulty models and erroneous predictions of the global cooling scare of the 1970s, many activists, scientists and politicians are now convinced that man-made global warming is consuming the world and that extinction – either of humans or of various non-human species – is inevitable. The fear strikes so deep that a whole new name for the present epoch has been devised, the “Anthropocene,” a word which is often spied in the pages of academic journals trumpeting, in the most convoluted and speculative of ways, all manner of anti- and post-human futures5.
Of course, this does not mean that climate variation should not be taken into consideration; it absolutely should. However, even if the projected temperature increase of 4 °C and sea level rise of 0.7 m ends up being correct (and it might not), that is not a change to which humans, as a species, could not successfully adapt; we have, after all, already survived an ice age6. There can be no valid or sound plotting-out of any future trajectory for our species if, from the outset, one assumes that everything is for naught, nor if one assumes everything will always work out in humanity’s favor, generally, or in some portion of humanity’s favor. Persistent and totalizing pessimism is every bit as irrational as persistent and totalizing optimism, and yet the former tends to win out in popular discourse. This may strike one as odd, given that Man is now at the height of his powers. His technology has never been greater, his mind has never been sharper (chiefly due to machine-computational intensification and improvements in record keeping), his planetary reach has never been broader. Already, other planets are within his reach! Mars lies ripe for terraforming! Yet, despite these facts and promising horizons, a spectral precipice looms on the dim horizon, one which goes under a variety of names and descriptors: the end times, climate hell, nuclear winter, A.I. catastrophe, gray goo, the horror of instrumental convergence7, alien invasions, and so on. There are also a variety of positive end-of-history scenarios: the various afterlives of the theists, the unified earth of Star Trek, wherein all scarcity has been banished through the utilization of matter-wrangling machines, Francis Fukuyama’s all-encompassing global liberal democracy (which itself bears similarity to the totalitarian republic of the Jedi in the Star Wars prequels) and so on.
However, a cursory glance at the news reveals instantly that the negative scenarios are far, far more popular than the positive ones (even when utterly fictitious). Thus, the obvious question: why?
In the words of Charles Tatum8, “Bad news sells best, cause good news is no news.” A variation of this sentiment which has also proved popular is, “If it bleeds, it leads.” Meaning, of course, that shocking or horrific events are far more widely covered by contemporary media industries than positive stories because of the powerful emotions they instinctively and immediately elicit. When a journalist is choosing between covering a firefighter rescuing a kitten from a tree and a meth-crazed ax murderer, he or she is going to go with the latter at a far higher rate. When any given person looks to the news, they will gravitate to stories of violence, murder, rape and other highly negative events at a far higher rate than to stories about kittens being rescued from trees, seals being rescued from clubbing operations or an amputee being gifted a robotic limb.
There is an evolutionary explanation for this propensity towards the macabre. Every day the human brain receives a tremendous amount of information, far too much to properly and thoroughly process. Thus, filters are required. Enter the amygdala or corpus amygdaloideum, the “almond tonsil,” the brain’s danger detector. The amygdala sits within the temporal lobe, is principally responsible for mediating memory, decision-making and emotional responses such as fear, anxiety and aggression, and is larger in males (both adolescent and adult) than in females. The function of the amygdala is extremely important for survival; some 150,000 years ago it could mean the difference between life and death. For example, if you were a hunter-gatherer alone in a forest and you heard a rustling, it would generally be nothing, but every once in a while it would be a real source of danger (a venomous snake, a wolf, a bear, a warrior from a rival tribe, etc.); thus, for the purpose of survival optimization, it is preferable for the hunter-gatherer to react as if the rustling were a threat every time. If it is nothing, then nothing is lost but a trivial amount of energy; if it is something, then one’s life is saved. This, in essence, is what is typically referred to as negativity bias: the propensity to pay more attention to things which entail danger, which are threatening. A simple example would be a picture of a number of relatively normal-looking people with one individual in the background making a threatening gesture; one’s attention would instantly gravitate towards the threatening man. It is for this relatively simple reason that bad news sells best.
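The survival logic of the rustling-in-the-forest scenario can be made explicit as a toy expected-cost calculation. All of the numbers below are invented for illustration; the only point is the asymmetry between the two costs.

```python
# Toy expected-cost model of the "rustling in the forest" scenario.
# All numbers are illustrative assumptions, not empirical estimates.

P_THREAT = 0.01                 # probability a given rustle is a real threat
COST_FALSE_ALARM = 1.0          # trivial energy cost of fleeing needlessly
COST_MISSED_THREAT = 10_000.0   # catastrophic cost of ignoring a real one

def expected_cost(always_react: bool) -> float:
    """Expected cost per rustle under a fixed reaction policy."""
    if always_react:
        # Pay the small false-alarm cost on every harmless rustle.
        return (1 - P_THREAT) * COST_FALSE_ALARM
    # Pay the catastrophic cost whenever the rustle turns out to be real.
    return P_THREAT * COST_MISSED_THREAT

print(expected_cost(always_react=True))   # 0.99
print(expected_cost(always_react=False))  # 100.0
```

Even with threats a hundred times rarer than false alarms, always reacting is two orders of magnitude cheaper in expectation, which is precisely why selection favors the jumpy hunter-gatherer.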
Understanding both this and psychometric analysis allows one, if sufficiently amorally disposed, to sway large portions of the population by forming a profile, determining psychological proclivities and then exploiting them through negativity-bias-infused narratives. An example of this technique in action can be seen in the political campaigns of the US Democrats and Republicans, wherein, almost invariably, an ideological particularity (aversion to “the other,” for example) or policy (such as gun rights) will be singled out and presented in a wholly negative light, often with the insinuation, if not outright declaration, that cataclysm is the logical conclusion of the implementation or continuation of said policy or ideology, regardless of whether or not this is actually the case. The methodology of Reefer Madness applied to the modern age.
There has been much work done on the psychology of negativity, given the attendant lucrative and political potentialities. To properly understand the immense potential of cultivated negativity it is pertinent to turn to the marketing methodology known as psychographics9. In distinction to demographics, which focuses on age, class, race, location, fertility rates, etc., psychographics focuses upon the psychological characteristics of a demographic, such as personality attributes, bias intensity and community attachment. Thus if, for example, one’s collected demographic information for a hypothetical anti-aging crème customer is:
Married, with children.
Household income 90K +
One’s psychographic information on this hypothetical client might read:
Frequent user of Instagram and LinkedIn.
Highly preoccupied with personal appearance.
Wants improvement in appearance without major lifestyle changes.
Chooses expensive high-quality products over inexpensive middling-to-low-quality products.
Looking for a career-oriented man.
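The distinction between the two kinds of profile is easy to see when each is written out as a record. The sketch below is a minimal illustration of the anti-aging crème example above; every field name, value and targeting rule is invented for the purpose of the illustration.

```python
# Hypothetical demographic vs. psychographic records for the anti-aging
# crème example; all fields and values are illustrative assumptions.

demographic = {
    "marital_status": "married",
    "children": True,
    "household_income_usd": 90_000,   # lower bound of the bracket
}

psychographic = {
    "platforms": ["Instagram", "LinkedIn"],
    "appearance_preoccupation": "high",
    "wants_low_effort_improvement": True,
    "price_sensitivity": "low",       # prefers expensive, high-quality products
}

def matches_segment(demo: dict, psycho: dict) -> bool:
    """Crude targeting rule: affluent, appearance-focused, price-insensitive."""
    return (
        demo["household_income_usd"] >= 90_000
        and psycho["appearance_preoccupation"] == "high"
        and psycho["price_sensitivity"] == "low"
    )

print(matches_segment(demographic, psychographic))  # True
```

The demographic record says who the customer is; the psychographic record says how she is likely to respond, and it is the latter that a negativity-infused campaign exploits.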
During the 2016 US elections the issue of psychographics received a great deal of attention when a rumor began that the data acquisition and marketing firm Cambridge Analytica (CA) had swayed the election in Donald Trump’s favor through mass emotional manipulation. The company’s collection of information was extensive (though Ted Cruz’s team were unimpressed with its technical proficiency), but one little firm certainly couldn’t have swayed the entire election by itself; such an explanation is so reductionist as to be absurd. However, the company certainly did have an impact (in both Cruz’s and Trump’s campaigns, as well as numerous others), specifically through the collection of massive amounts of publicly available information, principally from the social media giant Facebook. CA’s primary methodology relied heavily on machine learning. They would begin with a thorough survey, often presented as a psych-test, which was then sent out on Facebook, where it would be filled in by hundreds of thousands of different users. The answers to these surveys were then analyzed by CA and combined with preexisting data to form personality models, which were then applied at the appropriate scale for a given target population.
“Each one of these data points on its own is not that revealing, but the sum of them begins to paint a fairly comprehensive picture. Today in the United States we have somewhere close to four or five thousand data points on every individual … So we model the personality of every adult across the United States, some 230 million people.” — Alexander Nix, chief executive of Cambridge Analytica, October 2016.
To understand CA’s psychographic strategy, it is helpful to turn to its inspiration, the work of the data analyst Michal Kosinski. Kosinski’s preferred methodology was to create online questionnaires, present them as quizzes, disseminate them (principally via Facebook) and then plug the collected information (such as “likes” and “shares”) into the Big Five psychological model to determine personality. This enabled Kosinski to make a number of deductions based on behaviors like consumer-brand affiliation; for example, he found that users who “liked” the cosmetics brand MAC were likely to be homosexual, whereas a strong indicator of heterosexuality was “liking” pages related to the rap group Wu-Tang Clan. Further, following the pop musician Lady Gaga correlated highly with extroversion, whereas “liking” philosophy pages correlated highly with introversion. The correlation for each data point was slight and often too weak to make reliable predictions on its own; however, the more data points Kosinski aggregated, the more reliable the predictions became. At sufficiently large data thresholds, Kosinski was able to tell more about a given user than their closest friends could, purely on the basis of their digital footprints.
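The aggregation effect described above can be demonstrated on purely synthetic data: each simulated “like” is shifted only slightly by a hidden binary trait, yet pooling hundreds of such weak signals yields an increasingly reliable prediction. This is a minimal sketch of the statistical principle, not of Kosinski’s actual pipeline (which used dimensionality reduction and regression models).

```python
# Sketch of the "many weak signals" effect: each synthetic like is only
# weakly correlated with a hidden binary trait, but aggregating hundreds
# of them yields a reliable prediction. All data here is simulated.

import numpy as np

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 500
trait = rng.integers(0, 2, n_users)        # hidden binary trait per user

# Each like's base probability is shifted by only 3% when the trait is
# present: individually, a near-useless signal.
base = rng.uniform(0.2, 0.4, n_likes)
likes = rng.random((n_users, n_likes)) < (base + 0.03 * trait[:, None])

def predict(likes_subset: np.ndarray) -> np.ndarray:
    """Naive aggregate: users above the median like-count -> trait 1."""
    score = likes_subset.sum(axis=1)
    return (score > np.median(score)).astype(int)

for k in (5, 50, 500):
    acc = (predict(likes[:, :k]) == trait).mean()
    print(f"{k:3d} likes aggregated -> accuracy {acc:.2f}")
```

With 5 likes the classifier is barely better than a coin flip; with 500 it becomes clearly informative, which is the whole logic behind Nix’s “four or five thousand data points on every individual.”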
In 2012, Kosinski showed that with only 68 Facebook “likes” it was possible to determine a user’s skin color, sexual orientation and political and religious affiliation with a high degree of probability. In 2013, Kosinski published a paper titled Private Traits And Attributes Are Predictable From Digital Records Of Human Behavior, wherein he further documented his refined psychological assessment model, utilizing the standard Big Five10 personality attribution schema. Kosinski’s refined model was able to discriminate between homosexual and heterosexual men in 88% of all cases, between Caucasian and African American in 95% of all cases and between Democrat and Republican in 85% of all cases, all based on 1 to 700 “likes,” with a median of 68 “likes” per individual. The relevance of such studies to eschatological thinking can be found in the conclusions section of Kosinski’s paper, wherein he writes:
“Predicting users’ individual attributes and preferences can be used to improve numerous products and services. For instance, digital systems and devices (such as online stores or cars) could be designed to adjust their behavior to best fit each user’s inferred profile (30). Also, the relevance of marketing and product recommendations could be improved by adding psychological dimensions to current user models. For example, online insurance advertisements might emphasize security when facing emotionally unstable (neurotic) users but stress potential threats when dealing with emotionally stable ones.”
Here we can see the tremendous potential of merging data-mining, marketing and psychoanalytics: by preying upon personality vulnerabilities, one can manipulate whole populations of the digital ecosystem with an (increasingly) high degree of accuracy, and the most interesting part of it all is that very few people will ever realize they have been targeted and manipulated at all.
The major takeaway from Kosinski’s studies is the simple fact that most people are not nearly as unique and unpredictable as they think they are in relation to their fellows. It is, of course, a somewhat bitter pill to swallow; no one likes being told that they are not special, that they are highly predictable, that they are not as unique as they thought; nor do most people understand the amount of data they are sharing (how many people, after all, read every single user agreement?). There is thus a tremendous psychological barrier to taking psychographics seriously (and thus to guarding against its utilization), even as it is deployed against various populations (principally through marketing or political campaigns).
“Dialogic” means of or relating to dialogue. Let us establish a simple schema, which we shall call the dialogic model.
- To be dialogical is to be disposed to dialogue, to discourse.
- To be non-dialogical is to be in a state outside of dialogue.
- To be anti-dialogical is to be opposed to discourse or, to enter into discourse only to subvert or disrupt it.
A dialogical individual is one who is conducive to conversation with his fellows. A non-dialogical individual is one who removes himself or herself from conversation or the prospect of conversation (such as an extreme introvert). An anti-dialogical individual is one who actively works to disrupt discourse (such as a political protester who attempts to shout down peaceful, conversant opposition). Clearly, from the standpoint of those who are in favor of a thriving civilization, a decidedly pro-dialogical approach is most generally beneficial, whereas the non-dialogical approach is generally neutral and the anti-dialogical approach is decidedly negative (because discourse is necessary in a civilized society; barring conversation, the only other options are systems-flight, total submission or violence). Thus, the goal should clearly be to build towards optimally dialogical societies.
This framework may appear trivial and obvious, but making these modalities explicit also makes the actions attendant to them explicit, which predisposes inquiring individuals to adhere to the framework, provided they broadly agree with the tenets which underpin it. The reason a dialogical framework is pertinent to the aforementioned topics – eschatological thinking, the predisposition to negativity and psychometrics – is that it can be a useful tool for navigating and disentangling the irrational discursive artifacts produced by fixed-future thinking. On its own, however, it is of little use, given that most political discussions are driven not by reasoned dialogue but by raw emotion. Hence, a new approach is required; we term the proffered alternative to classical dialogics technodialogics.
Technodialogics (or technodialogy) differs from classical dialogics in a number of consequential regards. First and foremost, its users acknowledge the increasing centrality of technological mediation in human-to-human interactions and attempt to re-modulate every technology and space wherein it operates towards proadaptive and contraentropic narrativity. It is thus opposed to all eschatology, to any discourse which forecloses the future of the human and to anything which freezes the present, chilling discourse through factual distortion and emotional manipulation via the belief in a fixed end-point of social development (whether positive or negative). That is to say, technodialogics is fundamentally exotropic and is thus concerned with fostering discussions of continual and incremental improvement which are evaluated via formal, logical, objective (falsifiable) or probabilistic (i.e. Bayesian) metrics.
Whilst it is obvious that human-to-human communication is increasingly modulated by the web and the technological apparatuses which are born out of and plug into it, this is rarely made explicit, and just as with the dialogic model, the more explicit a conceptual structure is made, the easier it is to utilize, especially for those who, in part, acknowledge its worth and who might, in part, already be operating in accordance with it. The strength of making explicit concepts which were hitherto implicit is to be found in the increase of personal articulation and thus of potential intermediation. Further, the precise degree of cybernetic integration witnessed by all industrial western-styled societies is rarely taken into consideration in questions of interpersonal communication. The usage of social media platforms, message boards, streaming services, etc., is, of course, recognized as popular, but the particular character of the sites themselves is rarely sketched out in the way that the character of Pittsburgh or New York City is. Cyberspaces like Twitter, Facebook, Gab, Tumblr or Mastodon are much newer than earthspaces such as New York City or Pittsburgh and have thus been given less time for historical accumulation and hence description, but the former have a peculiarity of character all the same (given that they are continuously converging upon each other, cloud to earth, earth to cloud). Technodialogics accounts for and maps these cyberspaces without making any significant distinction between them and real-world municipalities, for, after all, they have their own unique populace, their own forms and customs, their own culture, their own religions, their own linguistic peculiarities and economies and their own collective fears and aspirations.
It is this latter quality to which technodialogicians must pay particular attention, for all sociopolitical decisions, whether in earthspace or cyberspace, are determined by collective fear or hope: fear of geographic displacement, economic turmoil, war, the loss of a particular collective identity, etc., or hope for a better, more prosperous future, hope of life-saving medical advancements in science and technology, hope for a more stable political ordering and thus a better life for a population’s children, etc.
In addition to the character of spaces, technodialogics incorporates psychometric data acquisition into the discursive process, utilizing personality models, not to sell a product, but to foster more sanguine discussions for more optimal and mutually beneficial cooperation whether between individuals, corporations, NGOs or governments. Through the utilization of the Big Five personality model in conjunction with the dialogic model and sufficient data points, it is possible to determine the relative pro, anti, or non-dialogicality of a particular individual and thus, provided even more data points, the pro, anti, or non-dialogicality of any given group or groups whose information is readily available. Combining these metrics with a specific study of network effects and the psychological effects of site utilization would then provide even further tools which could be used to determine what vector of conversation a particular individual or group is most amenable to and at what time, on what sites.
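A minimal sketch of the combination proposed above might look as follows. The trait weights, thresholds and example profiles are all invented for illustration; nothing here is a validated psychometric instrument, only a demonstration of how Big Five scores could be mapped onto the dialogic model’s three categories.

```python
# Hypothetical "dialogicality" classifier built on Big Five trait scores
# (each scored 0..1). Weights and thresholds are illustrative assumptions.

from typing import Dict

# Assumed weighting: openness and agreeableness push toward dialogue,
# neuroticism (threat-sensitivity) pushes away from it.
WEIGHTS = {
    "openness": 0.4,
    "conscientiousness": 0.1,
    "extraversion": 0.2,
    "agreeableness": 0.4,
    "neuroticism": -0.5,
}

def dialogicality(big_five: Dict[str, float]) -> str:
    """Map a Big Five profile onto the dialogic model's three modalities."""
    score = sum(WEIGHTS[t] * big_five[t] for t in WEIGHTS)
    if score > 0.4:
        return "dialogical"
    if score > 0.1:
        return "non-dialogical"
    return "anti-dialogical"

open_profile = dict(openness=0.9, conscientiousness=0.5, extraversion=0.7,
                    agreeableness=0.8, neuroticism=0.2)
hostile_profile = dict(openness=0.2, conscientiousness=0.4, extraversion=0.6,
                       agreeableness=0.1, neuroticism=0.9)

print(dialogicality(open_profile))     # dialogical
print(dialogicality(hostile_profile))  # anti-dialogical
```

Given enough data points per individual, such a score could in principle be aggregated over a group to estimate the dialogicality of a whole community, which is the scaling move the text describes.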
Technodialogics also takes account of the importance of planetary-scale computing and its efficacy in fostering cross- and inter-cultural discussion. Solidified and congealed political and religious currents are significant forces for in-group cohesion, but they also act as a bulwark against further discourse due to deep-seated out-group aversion, as the alien, by mere dint of its otherness, intrinsically threatens the bounds of normalcy, of “usness.” However, it is now easier than ever before, through global communication systems, to engage in inter-group discourse without engendering the significant erosion of cultural or territorial sovereignty which large numbers of foreign bodies can portend (as witnessed in the European migrant crisis). The question of autonomous sovereignty and identity is crucial to successful communication, regardless of whether one is operating at the individual or the collective level, and, of course, the larger one scales (that is, the larger the number of people to whom one is attempting to communicate), the more important this concept becomes. In summation, technodialogics is a syncretic process for measuring and improving dialogicality through the judicious incorporation of the best available options and the mediation and navigation of the best available spaces. In this way, technodialogics is a normative sýnnefocratic mindset that entails a prescriptive methodology for interpersonal, discursive mediation.
1By this we mean any action which increases human thriving or the potential for human thriving without unduly damaging the potential for any other humans – concomitant to the general project – to thrive.
2It is important to note that we are not here dismissing the importance of widespread temperature change, but are only taking issue with those views that hold that extinction is an invariable conclusion (which it obviously isn’t).
3Thomas C. Peterson et al. (2008) The Myth Of The 1970s Global Cooling Scientific Consensus. American Meteorological Society.
4Cesare Emiliani. (1972) Quaternary Hypsithermals. Quaternary Research. 2 (3): 270-273.
5It is important to draw a distinction between anti-humanism, post-humanism and transhumanism, so as not to conflate the whole of speculative post-human thought with transhumanism. Transhumanism is specifically concerned with improving the human condition through the utilization of technology and is thus a distinctive subset of posthumanism, but it is not wholly representative of it, as posthumanism includes numerous schools of thought.
6See: DNA Evidence Proves That Early Humans Survived The Last Ice Age.
7The most famous example of problems arising from instrumental convergence is the paperclip maximizer scenario postulated by the philosopher Nick Bostrom in 2003. Bostrom stated: “Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.” He went on to say that he did not believe that such a scenario per se was likely, but rather that he meant for the scenario to illustrate the intrinsic danger of super-intelligent machines.
8Charles “Chuck” Tatum is a fictional character and the central focus of the 1951 Billy Wilder film Ace In The Hole. Tatum was an intelligent, amoral sensationalist who would do anything to pen a lurid headline, even if that meant creating a disaster.
9Psychographics is also referred to as “psychometrics.”
10The Big Five or Five-Factor Model (FFM) of personality contains five personal attributes which are used for psychological assessments: openness, conscientiousness, extraversion, agreeableness, neuroticism. The assessment model is sometimes denoted by the acronyms OCEAN or CANOE.