Art+ificiality: Machine Creativity & Its Critics

 

§. In Sean D. Kelly's A philosopher argues that AI can't be an artist, the author declares at the outset:

“Creativity is, and always will be, a human endeavour.” (S. D. Kelly)

A bold claim, one which can hardly be rendered sensible without first defining ‘creativity,’ as the author well realizes, writing:

“Creativity is among the most mysterious and impressive achievements of human existence. But what is it?” (Kelly)

The author attempts to answer his own query in the following two paragraphs.

“Creativity is not just novelty. A toddler at the piano may hit a novel sequence of notes, but they’re not, in any meaningful sense, creative. Also, creativity is bounded by history: what counts as creative inspiration in one period or place might be disregarded as ridiculous, stupid, or crazy in another. A community has to accept ideas as good for them to count as creative.

 

As in Schoenberg’s case, or that of any number of other modern artists, that acceptance need not be universal. It might, indeed, not come for years—sometimes creativity is mistakenly dismissed for generations. But unless an innovation is eventually accepted by some community of practice, it makes little sense to speak of it as creative.” (Kelly)

§. Through Kelly, we have the definition-via-negation that 'creativity is not just novelty,' that it is not random, that it is a practice bounded by history, and that it must be communally accepted. This is an extremely vague definition of creativity, akin to describing transhumanism as "a non-random, sociohistorically bounded practice" which is also "not Nordicism, Aryanism or Scientology." While such a description is accurate (transhumanism is not constituted through or by those three ideologies), it does not tell one much about what transhumanism is, since it could describe any philosophical system which is not Nordicism, Aryanism or Scientology, just as Kelly's definition does not tell one much about what creativity is. If one takes the time to define one's terms, one swiftly realizes that, in contradistinction to the proclamation of the article, creativity is most decidedly not unique to humans (dolphins, monkeys and octopuses, for example, exhibit creative behaviors). One may rightly say that human creativity is unique to humans, but not creativity-as-such, and that is a crucial linguistic (and thus conceptual) distinction, especially since Kelly's central argument is that a machine cannot be an artist (he is not claiming that a machine cannot be creative per se); thus a non-negative description of creativity is necessary. To quote The Analects, "If language is not correct, then what is said is not what is meant; if what is said is not what is meant, then what must be done remains undone; if this remains undone, morals and art will deteriorate; if justice goes astray, people will stand about in helpless confusion. Hence there must be no arbitrariness in what is said. This matters above everything" (Arthur Waley, The Analects of Confucius, New York: Alfred A. Knopf, 2000, p. 161).

§. A more rigorous definition of ‘creativity’ may be gleaned from Allison B. Kaufman, Allen E. Butt, James C. Kaufman and Erin C. Colbert-White’s Towards A Neurobiology of Creativity in Nonhuman Animals, wherein they lay out a syncretic definition based upon the findings of 90 scientific research papers on human creativity.

Creativity in humans is defined in a variety of ways. The most prevalent definition (and the one used here) is that a creative act represents something that is different or new and also appropriate to the task at hand (Plucker, Beghetto, & Dow, 2004; Sternberg, 1999; Sternberg, Kaufman, & Pretz, 2002). […]

 

“Creativity is the interaction among aptitude, process, and environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context” (Plucker et al., 2004, p. 90). [Kaufman et al., 2011, Journal of Comparative Psychology, Vol. 125, No. 3, p.255]

§. This definition is both broadly applicable and congruent with Kelly's own injunction that creativity is not a mere product of a bundle of novelty-associated behaviors (novelty seeking and recognition). That is true; novelty is nevertheless fundamental to any creative process, human or otherwise. To put it more succinctly: creativity is a novel-incorporative, task-specific, multivariate neurological function. Thus, arguing a fortiori, creativity (broadly and generally speaking), like any other neurological function, can be replicated (or independently actualized in some as yet unknown way). Kelly rightly notes that (human) creativity is socially bounded; this is largely true. However, whether or not a creative act is accepted as such at a later time is irrelevant to the objective structures which allow such behaviors to arise. That is to say, it does not matter whether one is considered 'creative' in any particular way, but rather that one understands how the nervous system generates certain creative behaviors (though it would matter as pertains to considerations of 'artistry,' given that the material conditions necessary for artistry to arise require an audience and thus the minimum sociality to instantiate it). I want to make clear that my specific interest here lies not in laying out a case for artificial general intelligence (AGI) of sapient comparability (or some other kind), nor even in contesting Kelly's central claim that a machine intelligence could not become an artist, but rather in making the case that creativity-as-a-function can be generated without an agent. Creativity is a biomorphic sub-function of intelligence, and intelligence is a particular material configuration; thus, when a computer exceeds human capacity in mathematics, it is not self-aware (insofar as we can tell) of its actions (that it is doing math, or how), but it is doing math all the same. That is to say, it is functioning intelligently but not 'acting.' In the same vein, it should be possible for sufficiently complex systems to function creatively, regardless of whether such systems are aware of the fact. [The OpenWorm project is a compelling example of bio-functionality operating without either prior programming or cognizance.]

“Advances in artificial intelligence have led many to speculate that human beings will soon be replaced by machines in every domain, including that of creativity. Ray Kurzweil, a futurist, predicts that by 2029 we will have produced an AI that can pass for an average educated human being. Nick Bostrom, an Oxford philosopher, is more circumspect. He does not give a date but suggests that philosophers and mathematicians defer work on fundamental questions to ‘superintelligent’ successors, which he defines as having ‘intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.’

 

Both believe that once human-level intelligence is produced in machines, there will be a burst of progress—what Kurzweil calls the ‘singularity’ and Bostrom an ‘intelligence explosion’—in which machines will very quickly supersede us by massive measures in every domain. This will occur, they argue, because superhuman achievement is the same as ordinary human achievement except that all the relevant computations are performed much more quickly, in what Bostrom dubs ‘speed superintelligence.’

 

So what about the highest level of human achievement—creative innovation? Are our most creative artists and thinkers about to be massively surpassed by machines?

 

No.

 

Human creative achievement, because of the way it is socially embedded, will not succumb to advances in artificial intelligence. To say otherwise is to misunderstand both what human beings are and what our creativity amounts to.

 

This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves.” (Kelly)

§. For Kelly, then, the concern is not that machines will surpass human creative potential, but that we will think they have, having fetishized them into sacral objects, deified them through anthropomorphization and turned them into sites of worship. This is a salient concern; however, the way to obviate such an eventuality (if that is one's goal) is to understand not just the architecture of the machine but the architecture of creativity itself.

“Also, I am primarily talking about machine advances of the sort seen recently with the current deep-­learning paradigm, as well as its computational successors. Other paradigms have governed AI research in the past. These have already failed to realize their promise. Still other paradigms may come in the future, but if we speculate that some notional future AI whose features we cannot meaningfully describe will accomplish wondrous things, that is mythmaking, not reasoned argument about the possibilities of technology.

 

Creative achievement operates differently in different domains. I cannot offer a complete taxonomy of the different kinds of creativity here, so to make the point I will sketch an argument involving three quite different examples: music, games, and mathematics.

 

Can we imagine a machine of such superhuman creative ability that it brings about changes in what we understand music to be, as Schoenberg did?

 

That’s what I claim a machine cannot do. Let’s see why.

 

Computer music composition systems have existed for quite some time. In 1965, at the age of 17, Kurzweil himself, using a precursor of the pattern recognition systems that characterize deep-learning algorithms today, programmed a computer to compose recognizable music. Variants of this technique are used today. Deep-learning algorithms have been able to take as input a bunch of Bach chorales, for instance, and compose music so characteristic of Bach’s style that it fools even experts into thinking it is original. This is mimicry. It is what an artist does as an apprentice: copy and perfect the style of others instead of working in an authentic, original voice. It is not the kind of musical creativity that we associate with Bach, never mind with Schoenberg’s radical innovation.

 

So what do we say? Could there be a machine that, like Schoenberg, invents a whole new way of making music? Of course we can imagine, and even make, such a machine. Given an algorithm that modifies its own compositional rules, we could easily produce a machine that makes music as different from what we now consider good music as Schoenberg did then.

 

But this is where it gets complicated.

 

We count Schoenberg as a creative innovator not just because he managed to create a new way of composing music but because people could see in it a vision of what the world should be. Schoenberg’s vision involved the spare, clean, efficient minimalism of modernity. His innovation was not just to find a new algorithm for composing music; it was to find a way of thinking about what music is that allows it to speak to what is needed now.

 

Some might argue that I have raised the bar too high. Am I arguing, they will ask, that a machine needs some mystic, unmeasurable sense of what is socially necessary in order to count as creative? I am not—for two reasons.

 

First, remember that in proposing a new, mathematical technique for musical composition, Schoenberg changed our understanding of what music is. It is only creativity of this tradition-defying sort that requires some kind of social sensitivity. Had listeners not experienced his technique as capturing the anti-­traditionalism at the heart of the radical modernity emerging in early-­20th-century Vienna, they might not have heard it as something of aesthetic worth. The point here is that radical creativity is not an “accelerated” version of quotidian creativity. Schoenberg’s achievement is not a faster or better version of the type of creativity demonstrated by Oscar Straus or some other average composer: it’s fundamentally different in kind.” (Kelly)

§. Arnold Schoenberg (1874–1951) was an Austrian-American composer who became well known for his atonal musical stylings. Kelly positions Schoenberg as an exemplar of 'radical creativity' and notes that Schoenberg's achievement is not a faster or better version of the type of creativity demonstrated by the Viennese composer Oscar Straus (1870–1954) or 'some other average composer: it's fundamentally different in kind.' This is true. There are different kinds of creativity (it is obviously a multi-faceted behavioural domain); thus a general schema of the principal types of creativity is required. In humans, creative action may be "combinational, exploratory, or transformational" (Boden, 2004, chapters 3-4), where combinational creativity (the most easily recognized) involves an uncommon fusion of common ideas. Visual collages are a very common example of combinational creativity; verbal analogy, another. Both exploratory and transformational creativity, however, differ from combinational creativity in that they are conceptually bounded within some socially pre-defined space (whereas with combinational creativity the conceptual bounding theoretically extends to all possible knowledge domains and, though it almost always is, need not be extended to the interpersonal). Exploratory creativity involves utilizing preexisting strictures (conventions) to generate novel structures, such as a new sentence which, whilst novel, will have been constructed within a preexisting structure, namely the language in which it is generated. Transformational creativity, in contrast, involves the modulation or creation of new bounding structures which fundamentally change the possibilities of exploratory creativity (e.g., creating a new language and then constructing a new sentence in it, where the new language allows for concepts that were impossible within the constraints of the former language); a toy sketch of this exploratory/transformational distinction, rendered in code, follows the closing paragraph of this essay. Transformational creativity is the most culturally salient of the three, that is to say, the kind most likely to be discussed, precisely because the externalization of transformational creativity (in human societies) mandates the reshaping, decimation or obviation of some cultural convention (hence, 'transformational'). Schoenberg's acts of musical innovation (such as the creation of the twelve-tone technique) are examples of transformational creativity, whereas his twelve-tone compositions after concocting the new technique are examples of exploratory and combinational creativity (laying out a new set of sounds, exploring them, combining and recombining them). In this regard Kelly is correct: Schoenberg's musical development is indeed a different kind of creativity than that exhibited by 'some average composer,' as an average composer would not initiate a paradigm shift in the way music was done. That being said, this says nothing about whether a machine could enact such shifts itself. One of the central arguments which Kelly leverages against transformational machine creativity (the potential for an AI to be an artist) is that intelligent machines presently operate along the lines of computational formalism, writing,

“Second, my argument is not that the creator’s responsiveness to social necessity must be conscious for the work to meet the standards of genius. I am arguing instead that we must be able to interpret the work as responding that way. It would be a mistake to interpret a machine’s composition as part of such a vision of the world. The argument for this is simple.

Claims like Kurzweil’s that machines can reach human-level intelligence assume that to have a human mind is just to have a human brain that follows some set of computational algorithms—a view called computationalism. But though algorithms can have moral implications, they are not themselves moral agents. We can’t count the monkey at a typewriter who accidentally types out Othello as a great creative playwright. If there is greatness in the product, it is only an accident. We may be able to see a machine’s product as great, but if we know that the output is merely the result of some arbitrary act or algorithmic formalism, we cannot accept it as the expression of a vision for human good.

For this reason, it seems to me, nothing but another human being can properly be understood as a genuinely creative artist. Perhaps AI will someday proceed beyond its computationalist formalism, but that would require a leap that is unimaginable at the moment. We wouldn’t just be looking for new algorithms or procedures that simulate human activity; we would be looking for new materials that are the basis of being human.” (Kelly)

§. It is noteworthy that Kelly's perspective does not factor in the possibility that task-agnostic, self-modeling machines (see the work of Robert Kwiatkowski and Hod Lipson) could network such that they develop social capabilities. Such creative machine sociality would answer the question of social embeddedness which Kelly poses as a roadblock. Whilst such an arrangement might not appear to us as 'creativity' or 'artistry,' it would be pertinent to investigate how these hypothetical future machines perceive their own interactions. It may be that future self-imaging thinking machines will look upon our creative endeavours the same way Kelly views the present prospects of their own.
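As promised above, here is a minimal, purely illustrative sketch of the exploratory/transformational distinction. The grammar, vocabulary and rule modification are invented toy examples of my own, not drawn from Boden or Kelly; the point is only that 'exploring' a fixed rule set and 'transforming' the rule set itself are both operations a machine can perform without any awareness of doing so.

```python
import random

# A toy "conceptual space": a tiny grammar that generates sentences.
# (Invented for illustration; not a model taken from Boden or Kelly.)
grammar = {
    "S": [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N": [["composer"], ["machine"], ["melody"]],
    "V": [["imitates"], ["transforms"]],
}

def generate(symbol="S", rules=None):
    """Exploratory creativity: produce novel strings within fixed rules."""
    rules = rules if rules is not None else grammar
    if symbol not in rules:
        return [symbol]
    expansion = random.choice(rules[symbol])
    out = []
    for sym in expansion:
        out.extend(generate(sym, rules))
    return out

def transform(rules):
    """Transformational creativity (toy version): alter the rules themselves,
    opening up sentences the old grammar could never produce."""
    new_rules = {k: [list(e) for e in v] for k, v in rules.items()}
    new_rules["VP"].append(["V", "NP", "and", "VP"])  # recursion the old space lacked
    new_rules["N"].append(["tone-row"])               # a new terminal concept
    return new_rules

print(" ".join(generate()))                          # exploring the original space
print(" ".join(generate(rules=transform(grammar))))  # exploring a transformed space
```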


§. Sources

  1. Allison B. Kaufman et al. (2011) Towards a neurobiology of creativity in nonhuman animals. Journal of Comparative Psychology.
  2. Brenden M. Lake et al. (2016) Building machines that learn and think like people. arXiv preprint. [v3]
  3. Oshin Vartanian et al. (2013) Neuroscience of Creativity. The MIT Press.
  4. Peter Marbach & John N. Tsitsiklis. (2001) Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control.
  5. R. Kwiatkowski & H. Lipson. (2019) Task-agnostic self-modeling machines. Science Robotics, 4(26).
  6. Samer Sabri & Vishal Maini. (2017) Machine Learning For Humans.
  7. Sean Dorrance Kelly. (2019) A philosopher argues that AI can’t be an artist. MIT Technology Review.
  8. S. R. Constantin. (2017) Strong AI Isn’t Here Yet. Otium.
  9. Thomas Hornigold. (2018) The first novel written by AI is here—and it's as weird as you'd expect it to be. Singularity Hub.

Anti-Natalism As Environmentalism: Todd May & The Question Of Extinction

On Dec. 17, 2018, The New York Times published an article in their opinion column entitled, Would Human Extinction Be A Tragedy?: Our Species Possesses Inherent Worth But We Are Devastating The Earth & Causing Unimaginable Animal Suffering. The article (which sounds like a sociology piece off Academia.edu) was written by one Todd May, who has precisely the kind of background one would expect from the title of his piece (French, existential, poststructural, anarchist; one knows the type: all scarves, swank cafes, continental apoplexy and fake math).

In traversing the acrid crags of his article, a greater understanding can be gained of the burgeoning movement of earth worshippers so common to environmentalist and poststructuralist thought.

To the article itself (which is set with a forlorn picture of an abandoned lot along the highways of Haleyville, Alabama), May begins, "There are stirrings of discussion these days in philosophical circles about the prospect of human extinction. This should not be surprising, given the increasingly threatening predations of climate change. In reflecting on this question, I want to suggest an answer to a single question, one that hardly covers the whole philosophical territory but is an important aspect of it. Would human extinction be a tragedy?"

The term climate change, obligatory in this type of piece, is dreadfully nebulous. Of course, everyone knows what is really meant by the term (especially when paired with the propagandistic picture of the ruined highway-side lot): catastrophic, impending, human-driven climate change. Taken literally, however, it amounts to nothing. One should be more specific.

Climate change itself is too massive an issue to treat properly here, but it may be remarked that there is a strange indifference toward the effects of the sun upon our climate, and what often seems like a desire for man to be found, somehow, at fault for every storm, every drought and every bleached reef, as if a certain contingent were looking and hoping for some perceived misstep among the rank-and-file of their fellows.

To May's question, one should reply: "A tragedy to what?" The question, as May poses it, makes no sense. Tragedies are not things-unto-themselves. There is no substrate called tragedy, no essential fabric of existence separate from the sensorial and conceptual experiencer which fashions itself as tragedy. Tragedy is an experiential development, a response and a designation of a memory of that response. A human response. Elephants may fashion graves for their dead and dogs may howl when their masters are absent, so perhaps such creatures have a similar sense of the tragic, emerging in ways divergent from our own conceptions of and responses to bereavement. Yet it would not be tragedy per se, as the linguistic designator and the referent outside the observer are inseparable; that is to say, tragedy is unique to humans.

Dogs and elephants have little knowledge of human language. Some people say they "understand us," and they do, but they do not understand us as we understand ourselves; they do not interpret our language as we do. Our experience of meaning is hostage to ourselves and finds no purchase in the world beyond our own minds.

Abandoned highway lot cover image from May’s Would Human Extinction Be A Tragedy? — Very True Detective.

The dog comes running because it has familiarized itself with, or been familiarized to, a particular set of sounds, movements and other sensory associations. "I'm home" may, to the dog, translate as something more akin to "Will be fed soon," but of course, even attempting to craft a translation is misbegotten given that dogs do not think in English. Something like tragedy certainly manifests itself in the animal world beyond humankind, but to be like a thing is not to be it.

May continues, clarifying his position: "I'm not asking whether the experience of humans coming to an end would be a bad thing… I am also not asking whether human beings as a species deserve to die out. That is an important question, but would involve different considerations. Those questions, and others like them, need to be addressed if we are to come to a full moral assessment of the prospect of our demise. Yet what I am asking here is simply whether it would be a tragedy if the planet no longer contained human beings. And the answer I am going to give might seem puzzling at first. I want to suggest, at least tentatively, both that it would be a tragedy and that it might just be a good thing."

Yes, that is puzzling. That is top-notch puzzling.

May then goes on to expound upon various theatrical characters, such as Sophocles's Oedipus and Shakespeare's Lear, as examples of human tragedy, which he defines as "a wrong" … "whose elimination would likely require the elimination of the species." This is not the crux of his argument, so I shall not belabor a response; it is nothing short of psychotic.

He continues, “Human beings are destroying large parts of the inhabitable earth and causing unimaginable suffering to many of the animals that inhabit it. This is happening through at least three means. First, human contribution to climate change is devastating ecosystems, as the recent article on Yellowstone Park in The Times exemplifies. Second, increasing human population is encroaching on ecosystems that would otherwise be intact. Third, factory farming fosters the creation of millions upon millions of animals for whom it offers nothing but suffering and misery before slaughtering them in often barbaric ways. There is no reason to think that those practices are going to diminish any time soon. Quite the opposite.”

Firstly, as pertains to factory farming: certainly there are forms of it wherein judicious care is not taken to mitigate the suffering of the animals, and that should be remedied. Further, for our purposes, factory farming can prove disastrous given that it allows diseases to spread more easily between the animals, due to their close proximity to one another, and given the potential for profit (and thus efficiency) to intervene upon responsibility, which can impact things like the cleanliness of the facilities or the checking of the animals' health. This, however, does not hold true of all forms of factory farming; nevertheless, we should take into consideration, to the best of our abilities, the cognitive ambit of the organisms upon which we so intensely rely for our sustenance.

Secondly, "destroying large parts of the inhabitable earth" is extremely vague. What parts is he talking about? Habitats for what or whom? Does he mean nuclear wasteland, scorched earth, or merely environmental transformation (such as forest clearing for habitation)? Shiva is a twin-faced god: all creation mandates destruction. Human-centered environmental transformation is no exception; it will always require the displacement (regardless of duration) of other organisms and the modulation of the land itself, no different, save in scale, from the mountain pine beetle destroying trees in the process of building its colonies. The better at environmental modulation we humans become, and the more we learn (and remember) about the earth and its ecosystems, the better we can modulate with the least collateral damage to other species (should that be found desirable, and it will assuredly not always be desirable). I am perfectly willing to devastate as many ecosystems as necessary to acquire the space and resources for the polity of which I am a part. Here we witness from May an inversion of human-centered concern into concern for the land itself, devoid of any articulation of impact (with the sole exception of factory farming); the implication is that the only way to be truly moral is to displace concern from one's fellows and begin offshoring empathy and sympathy to moles, voles, chickens and bacteria. Speaking of bacteria: they are living beings, with their own intricate little ecosystems upon and within our bodies. Will May, who looks quite shiny and well-scrubbed in his public photos, give up washing so as not to unduly disturb the microverse, or shall he continue initiating a holocaust with every scrub?

How shall he answer for his cleanliness? Is it not microbial genocide?

He touches briskly upon this issue before falling, once more, into maudlin whinging: "To be sure, nature itself is hardly a Valhalla of peace and harmony. Animals kill other animals regularly, often in ways that we (although not they) would consider cruel. But there is no other creature in nature whose predatory behavior is remotely as deep or as widespread as the behavior we display toward what the philosopher Christine Korsgaard aptly calls 'our fellow creatures'…"

Why he should choose Valhalla, of all places, as an ideal of peace and harmony is beyond me; that being said, he is, of course, correct that animals, both rational and non-rational, often behave in exceptionally savage ways. For example, chimpanzees hunt red colobus monkeys, both young and old. When a chimp catches a colobus, it kills and eats it, often brain-first, rending open the skull and suckling at the protein-filled gray matter, with special attention later given to the liver and other internal organs, which are less well-shelled and thus more easily removed and consumed.

The South American botfly, Dermatobia hominis, deposits its eggs, either directly or through the use of captured mosquitoes, into the skin of mammals, including humans, where they hatch into larvae in the subcutaneous layer of the skin and feed on skin tissue for approximately eight weeks before emerging to pupate. Dermatobia hominis is, however, only one of several species of fly that potentially target humans. When a human is parasitized by fly larvae, the condition is referred to as myiasis; if aural myiasis occurs, there is a possibility that the larvae may reach the brain. If the myiasis occurs in the nasal cavity, fluid buildup around the face and fever will often follow, and the condition can be, if not properly and promptly treated, fatal.

In regard to Korsgaard's remark about fellow creatures, she and May can speak for themselves; the human-flesh-devouring maggots of the botfly and the brain-suckling chimpanzee are not my fellow creatures. There is little fellowship there to be had; they are either externalities or obstacles to human habitation. Given the chance, any one of them would devour Korsgaard and May as they would their other victims. It is precisely because we are possessed of far greater power, which can be applied far more savagely and intelligently than that of any other creature on earth, that we are not in a situation where we must constantly be on guard against what slithers and stalks the undergrowth.

For the flourishing of our species, there have been few attributes more beneficial than what May describes as our extraordinary "predatory behavior." Indeed, I should declare that we should be more predatory, not less.

May then says something quite extraordinary, “If this were all to the story there would be no tragedy. The elimination of the human species would be a good thing, full stop.” He then clarifies that this isn’t all to the story and that humans contribute unique things “to the planet” (whatever that means) such as literature and then comes to the real meat of his argument, preempting some of the criticisms which have been leveled against him in this very paper, writing,

“Now there might be those on the more jaded side who would argue that if we went extinct there would be no loss, because there would be no one for whom it would be a loss not to have access to those things. I think this objection misunderstands our relation to these practices. We appreciate and often participate in such practices because we believe they are good to be involved in, because we find them to be worthwhile. It is the goodness of the practices and the experiences that draw us. Therefore, it would be a loss to the world if those practices and experiences ceased to exist. One could press the objection here by saying that it would only be a loss from a human viewpoint, and that that viewpoint would no longer exist if we went extinct. This is true. But this entire set of reflections is taking place from a human viewpoint. We cannot ask the questions we are asking here without situating them within the human practice of philosophy. Even to ask the question of whether it would be a tragedy if humans were to disappear from the face of the planet requires a normative framework that is restricted to human beings.”

Firstly, I fail to see what is "jaded" about arguing that if humans went extinct there would be no loss, because there would be no one for whom it would be a loss. Secondly, I do not think this would be true; as previously stated, there would be some loss beyond the human species, namely loss (or its less sapient variation) in those intellectually capable animals with whom we reside, such as those commonly kept as pets (dogs, cats, pigs and so forth). But then we come to one of the strangest points made by the author, for he says it is "the goodness of the practices and the experiences" that "draw us," as if goodness exists separate not just from humanity but from anything but "the planet." It is a curiously anthropomorphic remark from so clearly misanthropic an individual, and one which, due to its spectral imposition, is forthrightly irrational. He could simply have made the argument from non-human animal intelligence as the experiential nexus of the loss, as I have, but instead he shifts the nexus of experience to "the planet," which is, of course, merely an exceptionally large space-rock.

May then turns his attention to "the other side," which he describes as those who think that human extinction would be a "tragedy" and "overall bad" (which I would regard as one and the same thing, as I don't know of any tragedies which are overall good), and asks the question: how many lives would one be willing to sacrifice to preserve Shakespeare's works? He says he would not sacrifice a single human life, and that is all fine and good, as I would not either, for the obvious reason that Shakespeare's works can be reforged but a human life cannot (yet). He then poses the question: "…how much suffering and death of nonhuman life would we be willing to countenance to save Shakespeare, our sciences and so forth?" The rest of the article is merely antinatalist tripe wherein May proclaims that preventing future humans from existing is probably the right thing to do, given that we would be preventing an unnecessary flow of suffering from being unleashed upon the world. So what, then, is the answer to his challenge?

The answer is clear.

As much suffering shall be endured as the organism is capable of enduring in order to survive and to thrive. If an individual does not wish to survive, then that individual is at liberty to remove themselves from the gene pool. It is as simple as that. It has always been as simple as that, and it will always be as simple as that. People are not going to stop having children because May told them to, which he well knows, and even if he were successful in convincing everyone to cease reproducing in some kind of Benatarian revolt, there would then be no organisms left capable of evaluating the benefits of our self-wrought extinction.


Sources

  1. I. C. Gilby & D. Wawrzyniak. Meat Eating By Wild Chimpanzees (Pan troglodytes schweinfurthii): Effects of Prey Age On Carcass Consumption Sequence.
  2. Todd May. Would Human Extinction Be A Tragedy? The New York Times.

THE SINGULARITY SURVIVAL GUIDE: Space Travel

If time travel fails, it may be time to start planning your escape. Think back to your childhood when you looked up at the night sky, focused in on a particularly far away dot, and wondered what it must be like to visit there. Now is your chance to find out.

Space travel is necessarily something of a team effort. Get a group of likeminded individuals from your species together, pool resources, get a spaceship, make travel plans. Your crew should, of course, include individuals with whom you could imagine engaging in activities conducive to procreation. Otherwise, evolutionarily speaking, what’s the point? Also, your crew should comprise a fair number of astronauts, astrophysicists, and other space-savvy professionals. [See the section above on making friends with billionaires.]

You may be thinking this sounds like an extreme sort of last-ditch effort. And it may very well be. But for all I know, you (the person reading this) are a naturally inclined space colonizer—and this is the chance you’ve been waiting for. So go ahead and escape Earth as quickly as possible. There’s a whole universe out there waiting for you—presumably including many destinations that are not only habitable, but which also aren’t ruled by ultra-intelligent, human-life-threatening robots.

THE SINGULARITY SURVIVAL GUIDE: Preface

I don’t know what’s been lost to us—six hundred thousand pages is a lot of goddamn room to pack away some gems. But the question now should not simply be: What have we lost? Instead, we should also consider: What can we learn from what’s happened? I think I might have an answer to that.

First, let’s assume a human being (like myself) can still dabble in the art of manufacturing wisdom, however approximately. I’m not the perfect candidate for this endeavor, perhaps, but I’m not the worst. As an academic affiliated with [ŗ͟҉̡͝e̢̛d̸̡̕͢͡a͘͏̷c̴̶t̵҉̸e͘͜͡ḑ̸̧́͝], I had the opportunity to peruse the complete text of the Singularity Survival Guide (before any of the unfortunate litigation came about, I should add). And I can assure you that, generally speaking, I could have thought of a great deal of the purported wisdom found within those exhausting pages. Take that for what it’s worth…

So, as a human, unaided by any digital enhancement, I’ll hazard an original thought: If humanity is ever taken down by robots, it will in part be due to our knee-jerk infatuation with anthropomorphism.

We can’t help ourselves in this. As children, what’s the first thing we do with a yellow crayon? Do we draw a shining yellow sun? No! We draw a shining yellow sun with a face and its tongue sticking out! It’s like we can’t stand inanimateness—not even in something as naturally wondrous as the goddamn sun!

In 2017, the humanoid robot Sophia became the first robot to receive citizenship from any country, and she also received an official title from the United Nations. Then, across the globe, serious talks of AI personhood began.

And now look what happened with the Singularity Survival Guide: We gave ownership rights to the program that created it. Next thing, you’ll expect the program to start dating, get married, go on a delightful honeymoon, settle down with kids and a mortgage, and participate in our political system with a healthy portion of its income going to federal taxes.

Here’s another bit of human wisdom for you: If there is no consciousness to these AI creatures, then they better not take us over. I don’t quite mind being taken over by a superior being at least so long as it experiences incalculably more pleasure than I’m capable of, and can also appreciate the extreme measures of pain I’m liable to feel when my personhood is overlooked… or obliterated.

– Professor Y.

Palo Alto, CA

THE SINGULARITY SURVIVAL GUIDE: Editor’s Note – Background to This Text

In Silicon Valley, working for a tech startup, some very clever researchers developed a program with the specific purpose of resolving the issue: How to survive when artificial intelligence surpasses human intelligence. The program, once engaged, proceeded to spit out a document of nearly six hundred thousand single-spaced pages of text, graphs, charts, pictograms, and hieroglyph-like symbols.

The researchers were ecstatic. One glance at the hefty document and they knew they’d be able to save themselves, if not all of humanity, by following these instructions.

But then things got complicated. Over the next few years, the document (which came to be known as "The Singularity Survival Guide" or simply "The Guide") was shielded from public view as ownership of the document became the subject of rather well-publicized litigation. Each of the researchers claimed individual ownership of the document, their employer claimed it was the company's property, and AI rights groups joined the quarrel to proclaim that the program itself was the true and exclusive owner. Certain government officials even took interest in the litigation, speculating whether some formal act of the state should force The Guide to be released posthaste as a matter of public safety.

During the course of the litigation, bits of the document were leaked to the press. Upon publication, each new fragment became the subject of academic scrutiny, political debate, and comedic parody on late-night television.

This went on for three years—all the while being followed closely in the media. After bouncing around the lower courts and being heard en banc by the Ninth Circuit, finally the case was sent up to the Supreme Court. Pundits were optimistic the lawsuit would resolve any day, allowing the acclaimed Survival Guide to finally see the light of day.

But then something entirely unexpected happened. The AI rights groups won the lawsuit. In a decision that split the Court five-to-four, the majority ruled that the program itself was the legal owner of the Guide. With that, the researchers and the company were ordered to destroy all extant copies—and remnants—of the Guide that remained in their possession.

*

At the time of this writing, it is still widely believed that The Survival Guide, in its original form, is the most authoritative document ever created on the subject of surviving the so-called singularity (i.e. the time when AI achieves general intelligence surpassing that of human intelligence many, many times over—to the point of becoming God-like). In fact, several leading philosophers, futurists, and computer scientists who claim to have secretly viewed the document are in complete agreement upon this point.

While we may never be able to have access to the complete Guide, fortunately, we do have the various excerpts that were leaked during the trial. Now, for the first time, all of these leaked excerpts are brought together in a single publication. This fact alone should make this book a valuable addition to any prudent person’s AI survival-kit. But this publication is also unique in that it includes expert commentary from a number of the leading philosophers, futurists, and computer scientists who have viewed the original document. For security purposes, we will not be listing the names of these commenters, but, this editor would like to assure all readers, their credentials are categorically beyond reproach in their respective fields of expertise.

Whether coming to this guide out of curiosity or through a dire sense of eschatological urgency, it is my hope that you will at some level internalize its wisdom—for I do believe that there are many valuable insights and helpful pointers found within. As we look ahead to the new era that is quickly encroaching upon us—the era of the singularity—keep in mind that your humanity is (for it has got to be!) a thing of intrinsic beauty and wonder. Don’t give up on it without a fight. Perhaps the coming of artificial superintelligence is a good thing, but perhaps not. In either case, do whatever you’ve got to do, just keep this guidebook close, and for the sake of humanity, survive.

*

If you’re reading this, that’s a good indication you’re not under immediate threat of annihilation. Otherwise I would assume you’d be flipping to some relevant section of this book with the last-ditch hope of finding some pragmatic wisdom (rather than bothering with this background information). But if you are under immediate threat, I’d recommend setting this book aside and taking a moment to focus on the good times you’ve had. You’ve had a good life, I hope. I know I have. It’s been a good run. Here I am writing a note to an esoteric guidebook while so many others in the world are dying of weird diseases and other issues that we’ve failed at solving—that, ironically, we need AI to solve for us.

Keep that in mind, by the way: there’s a decent chance that super AI will fail to set out annihilating humanity and will actually be the best thing that could have ever happened to our species and the world. It never hurts to be optimistic, I’d say. Maybe that’s not what you expected to hear from this book—but we haven’t actually gotten to the book yet, have we?

So, let's just jump into it. But first, one last note about the text. The chapters do not necessarily appear in the order in which they are found in the original tome, as we have no way of knowing the original order (obviously). But we have taken our best guess. We have also taken modest liberties with chapter titles. And there may be one or two instances of re-wording and/or supplementation built into the text. But all editorial decisions imposed upon the text come from a desire to uphold the spirit of the original document. The fact that we are missing well over five hundred and ninety thousand pages of text, graphs, charts, etc. should not be forgotten. For that matter, it could be that this document contains pure chaff, no wheat. But, well, it's still the best we've got.

In any case, good luck and best wishes, fellow human (if in fact you are still human, reading this)!

Notes On Intelligent Machine Design: Sapient Mimicry

The prospect of human-like machine intelligence seems to dazzle and thrill the public to no end. Consider the 2018 article from Scientific American titled A New Supercomputer Is The World's Fastest Brain-Mimicking Machine, which speaks about the issue of brain emulation at great length. The principal question, however, that many people are not asking in relation to the topic is: why start from the design premise that the [intelligent] machine should be as maximally similar to us [humans] as possible?

We already know, by and large, what the human system can do and what it cannot (just not precisely how; the brain, for instance, is not fully understood, which is why it cannot, as yet, be replicated). In the design of non-intelligent machines, the normative principle is to account for operations which humans cannot do, rather than for operations which they can. Yet when designing intelligent machines the desire is completely different, and the movement is towards maximal sapiency. There are some general reasons why one would want to emulate human brain function, such as in the design of a partial cortex-replacement module for brain-damaged patients, but in most fields of machine intelligence one is not going to require maximal similarity. Indeed, one would actually have to degrade certain present machine capabilities to make an intelligent machine maximally similar to ourselves, because an intelligent machine of average human intelligence (an IQ of 100) could do numerous things that humans cannot, and would do them much faster. Neurons, the nerve cells which process and transmit electrochemical signals, take roughly half a millisecond to pass a signal across a synapse and fire at most around 200 times per second. There are approximately 100 billion neurons in any given human brain, and each neuron connects to roughly 1,000 others. Thus the simple estimate: 100 billion x 200 x 1,000 = 20 million billion signals transmitted per second. Such a large number might strike one as indicative of great speed, but the transmission speed of a system alone means little if it is not compared to some other information-exchange system. The human brain, compared to copper electrical wire, is quite slow, and slower still when compared to fiber optics. Thus a true AI capable of doing everything a human mind can do would not just maintain memory much better, but also think much faster. Speed, however, should not be confused with processing power.
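As a quick back-of-the-envelope check on the arithmetic above, here is a minimal sketch. The constants are the rough, commonly cited order-of-magnitude figures used in this essay, and the axon and fiber speeds are my own added approximations for comparison, not precise measurements.

```python
# Back-of-the-envelope comparison of neural signalling vs. electronic signalling.
# All figures are rough order-of-magnitude estimates, as used in the essay above.

NEURONS = 100e9              # ~100 billion neurons in a human brain
MAX_FIRING_RATE_HZ = 200     # ~200 spikes per second per neuron (upper bound)
SYNAPSES_PER_NEURON = 1000   # ~1,000 outgoing connections per neuron

# Total synaptic events per second if every neuron fired at its maximum rate.
signals_per_second = NEURONS * MAX_FIRING_RATE_HZ * SYNAPSES_PER_NEURON
print(f"{signals_per_second:.1e} synaptic events/second")  # 2.0e+16, i.e. 20 million billion

# Signal propagation: fast myelinated axons carry spikes at very roughly 100 m/s,
# whereas signals in optical fiber travel at an appreciable fraction of light speed.
AXON_SPEED_M_S = 120      # assumed upper bound for a fast axon
FIBER_SPEED_M_S = 2e8     # ~2/3 the speed of light in glass
print(f"fiber is ~{FIBER_SPEED_M_S / AXON_SPEED_M_S:,.0f}x faster than a fast axon")
```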

Despite the fact that computers are much, much faster at transmitting data, the human brain is much, much more efficient in its arrangement and storage of information. For example, in 2013 a team of researchers at the RIKEN research institute in Japan used the K supercomputer to simulate human brain activity. Simulating one second of brain activity required 82,922 processors and the fourth-fastest computer in the world at that time, a testament to the organ's innate complexity. Yet we require only the roughly 15 centimeters and 3 pounds of mushy gray matter suspended within our skulls (women slightly less, as male brains are, on average, larger than female brains). Thus the obvious line of future design development should be to continue to emulate the compact efficiency of human (and other animal) brains whilst moving as far away from the emulation of human neurons as possible, given their sluggishness in comparison to computer wiring.

More interesting, at least to me, than either of these design trajectories are those areas of function which machines can perform that bear no direct or obvious human comparison. Much of this falls under the rubric of machine vision: infrared sensing, meta-image creation, and so on. All of these functions are unique to our creations and thus extend our own sensory arsenal. The problem might best be summed up by the question: why build a replica of a human hand when one could build a better hand? Even if one wanted to replace a missing human hand, mere replication is fine, but improving upon the prevailing design is better still. When one is designing a boat, the designer does not try to make the boat as maximally humanoid as possible, and the same holds true for virtually every mechanical device. Whilst this is obvious upon introspection, and is thus, in certain circles, implicit, it needs to be made explicit. The move from implicit design philosophy (preconditioning which trends towards particular eventualities) to explicit design philosophy (present conditioning towards a particular eventuality) is analogous to moving from the purely instinctual to the theoretical, from gut feeling to formal logic, and is for that reason so much the more efficacious.


Sources

  1. Andrian Kreye et al. (2018) The State of Artificial Intelligence.
  2. John C. Mosby. (2018) The Real Key To Protecting US National Security Interests In Space? Launch Capability. Modern War Institute.
  3. Mindy Weisberger. (2018) A New Supercomputer Is The World’s Fastest Brain-Mimicking Machine.
  4. Neurons & Circuits.

Fiction Circular 9/11/18

Send recommendations of independent fiction authors and collectives to logosliterature@yandex.com


FLASH FICTION (under 500 words)

The Dark Netizen continues his project of attempting to have the highest output of microfictions of any person ever with Border Crossing, which tells the tale of a criminal attempting a border crossing with a bag of illicit cash. One of his best. Also from Netizen, the microfiction Bagpiper, a story about not allowing peer pressure to frivolously dissuade one from one's passion.

“Sivak knew getting through the check post was not going to be easy…”

New Flash Fiction Review published the fantastically titled There's A Joke Here Somewhere And It's On Me by Sara Lippmann. A little slice of 80s adolescence.

“MTV watched me.”


SHORT STORIES

From X-R-A-Y, Flipped by Zac Smith, a 700-word sentence about a car crash. The brisk tale's vivid imagery should compel all of us to take more care on the road and to continue developing ways to make vehicular travel safer (from my perch in the US, I've long advocated an interconnected, national mag-lev system to increase cost-effectiveness and reduce the risk of collision) without impinging upon movement autonomy.

“Brad pinned between the wheel and the seat and the roof of the car but able eventually to wrench himself out through the busted-out window, on his back, coming out like a baby covered in glass and blood-“

Nell published the follow-up to her short story The Angelic Conversation, with The Angelic Conversation: Agnes, a titillating tale of lust both old and new. NSFW.

“His mind drifted to his young confidant. The clever, vibrant woman he had befriended a few months before. They had shared their secrets and intimate desires – and more than once he had felt himself become charged when she posted images of…”

From Jessica Triepel, The First Step, an intimate short story based upon her own personal experiences in a troubled relationship.

“Her husband would be home from work soon, the knowledge of which filled her with a sense of dread. It had been a good day, but she knew how quickly all that could change once Lothar was home.”

From STORGY, Deadhead by Victoria Briggs. A somber and moving rumination on death and family.

“Death brought with it a dizzying amount of aesthetic considerations-“

I particularly enjoyed the old-school stylings of Uncle Charlie. If Ms. Briggs is ever to write up a sequel, it would be interesting to see Charlie positioned as a more central character, perhaps even the lead.

From Terror House Magazine, The Serpent by Mark Hull, the story of a man who loses his tongue and struggles to get it back. Just as strange and fascinating as it sounds.

“When Eben Guthrey awoke, he knew something was wrong. It wasn’t that anything hurt so much as the intense sense of absence in and around his facial cavity. He took a few hazy moments at the edge of sleep to perform a few experiments. First, he tried to get his tongue to tap on his teeth. Then he tried to get his tongue to touch the roof of his mouth. Then he tried to stick his tongue out far enough to get a visual confirmation of it. When all of these tests failed, he was forced to conclude his tongue was no longer in his throat. It had escaped.”

From Idle Ink, The Great British Break-Off by esteemed writer of sad nonsense, Jake Kendall.

“Now at 48 and 47 respectively that ship had not only sailed, but in all probability arrived at its destination.”


NOVELLAS & NOVELS

Horror writer Laird Barron's newest novel, Black Mountain, has received a hardcover release date: May 7, 2019. The book is the sequel to Blood Standard and marks the second entry in the Isaiah Coleridge series.

Thanks for reading. If you wish to see more of the best underground fiction published and more independent and unsigned authors and litmags promoted, consider supporting our work.

Writing Prompts 9/10/18 – Neurolink

Theme chosen by user recommendation.


#1

The whirring noise, the deluge of memories; she gasped and inhaled like a swimmer caught beneath the undercurrent, not of waves, but of minds.


#2

There was no longer any separation between “self” and “other.” In the Mod everything was connected.


#3

“Put the gun down, Barlow, we both know it won’t avail you. You forget, we are of the same mind, only I’m faster. I know what you’re going to do before even you. You haven’t got the stomach for it.”


We hope these helped spark some creativity; if they helped you start a story, tag us on Twitter at @KaiterEnless so that we can read and share your work.

Thanks for reading.

The Respect Demand, Or, How To Refute Yourself Without Realizing It For The Sake of Appearing Non-Partisan


Consider this most common of political responses.

“I don’t agree with your argument, but I respect your opinion.”

Scarcely has there been a more popular and simultaneously ridiculous statement made in the whole history of modern American discourse than this one. Yet it is one that you, whoever and wherever you are, have doubtless heard a thousand times over. It is a tempering tactic utilized primarily by political Centrists (or those aping as such), and may also be heard a great deal from the acolytes of individuals who proclaim themselves "Freethinkers" or "Rationalists" (who usually do not use the word to denote the philosophical school). But it is wholly wrongheaded, given some contingencies: if the opinion which one respects is inexorably tied to the argument with which one disagrees, and one does not respect the argument, then, by parsimony, one cannot also respect the opinion. If, however, the opinion is suitably disconnected from the aforementioned argument, then the equation swiftly changes.

That is to say, let A be the argument, O the opinion informing it, and R the set of things one respects. The statement asserts that A is not in R (one does not respect the argument) while O is in R (one professes to respect the opinion). But if A and O are inseparable, so that A = O, then whatever holds of O must hold of A: if O is in R, then A must be in R as well. Yet by the speaker's own admission A cannot be in R, and thus one reaches an inescapable logical impasse. The position is self-refuting.
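For what it is worth, the impasse can be checked mechanically. The following is a minimal toy formalization of my own construction (not anything drawn from the quoted speakers), which simply assumes that respect must treat logically inseparable objects alike:

```python
# Toy formalization of "I don't agree with your argument, but I respect your opinion."
# Assumption (mine, for illustration): if the argument A and the opinion O are
# inseparable, then respecting one entails respecting the other.

def consistent(respects_argument: bool, respects_opinion: bool, inseparable: bool) -> bool:
    """Return True if the professed stance is internally consistent."""
    if inseparable and respects_opinion != respects_argument:
        return False  # respect must treat A and O alike when they are tied together
    return True

# The stance the essay criticizes: no respect for the argument, professed
# respect for the opinion, with the two inseparable.
print(consistent(respects_argument=False, respects_opinion=True, inseparable=True))   # False
# If the opinion is suitably disconnected from the argument, no contradiction arises.
print(consistent(respects_argument=False, respects_opinion=True, inseparable=False))  # True
```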


There are other linguistic formulations very similar to the aforementioned, such as "I respect your opinion but I disagree." This presents a slightly different problem and thus a slightly different solution, but the core of the issue is the same: in the effort to appear polite, one dons a mask of fawning adoration and treats the man or woman standing in starkest opposition as if they were some esteemed colleague, when in reality nothing of the sort could possibly be further from the truth. If one disagrees wholeheartedly with a position, then one clearly has no respect for it, and if that same position informs a suitable portion of the personality of the person holding it, then that person is also not worthy of respect. What people mean, if they are being truly honest with themselves, is not that they respect opinions they disagree with but rather that they respect the rights of others to hold them. This too is a vexed notion, for "rights" in any objective sense do not exist; all rights are merely those with the ability to crush you restraining themselves and their like cohorts from doing so. Accepting this, one should have no respect for rights either, but rather respect for the powerful who are cognizantly self-restrained.