Intelligence: Artificial & Otherwise


To speak sanguinely about artificial intelligence (AI) – real and speculative – one must first ask the question: Is AI possible? Before that question can even be rendered answerable, however, one must define one’s terms, especially given the proclivity for intelligence-as-such – that is, intelligence as a material process – to be conflated with, and constrained wholly to, sapience (human intelligence). If one’s definition of intelligence-as-such is constrained solely to human intelligence, it is self-refuting, for it amounts to the claim that intelligence is a human-exclusive process (which it is not). It may be the case (and indeed is likely) that the concept of intelligence is unique to humans, but the process there described is clearly not. No one would contend that pigs, dogs, monkeys and dolphins lack their own unique forms of non-sapient intelligence. However, if one theorizes from the ludicrously anthropocentric1 position that human intelligence is the sum-total of intelligence-as-such, then clearly AI (often used synonymously with MI, or machine intelligence) has not yet been developed and is, indeed, impossible. This is conceptually egregious.

Intelligence is a particular configuration of matter: a durable process of some system which allows for the processing of information (both internal and external to the originary entity) and which, in turn, allows the system to react, in some way, to the information so processed. Thus defined, AI is not only possible, but already actual. This is to say that a contemporary computer IS artificially intelligent; it is not conscious of its intelligence, but there is no reason why any given entity must be conscious of its intelligence in order to display intelligence, because intelligence is a function of a particular material configuration. The complexity of intelligence, however, prohibits the kind of simple, all-encompassing characterization available for flight, swimming, lifting or running. For example, if a roboticist were to create a fully-functional machine that, in every detail, imitated the structure of a bat, no one would say that this machinic creation was not really capable of flight. If it were swooshing about a room by the power of its metallic wings, one would readily admit, without a qualm, that it was flying. Similarly, if this same genius roboticist were to create a fully-functional replica of a fish, place it into a stream and watch it slip through the liquid, no one would say that the replica-fish was not really swimming. However, when it comes to computers performing tasks such as mathematical problem-solving, the cry “that isn’t real intelligence” is invariably raised.
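By way of illustration only – a minimal sketch assuming nothing beyond the definition above (the Thermostat class and its method names are hypothetical, not drawn from any cited source) – consider how even a thermostat satisfies it: external information is weighed against internal information, and the system reacts.

    # Minimal sketch: a system that processes internal and external
    # information and reacts to it. All names here (Thermostat, sense,
    # react) are illustrative assumptions, not an established API.

    class Thermostat:
        """A trivially 'intelligent' system under the process definition:
        external information (room temperature) is weighed against
        internal information (the setpoint), and the system reacts."""

        def __init__(self, setpoint: float):
            self.setpoint = setpoint   # internal information
            self.heater_on = False     # the system's reactive state

        def sense(self, room_temperature: float) -> float:
            # External information entering the system.
            return room_temperature

        def react(self, room_temperature: float) -> None:
            # Processing: compare external input with internal state,
            # then alter the system's behavior accordingly.
            self.heater_on = room_temperature < self.setpoint

    thermostat = Thermostat(setpoint=20.0)
    for reading in (18.5, 19.9, 21.2):
        thermostat.react(thermostat.sense(reading))
        print(reading, "->", "heat on" if thermostat.heater_on else "heat off")

Under the definition given above, even so humble a device is minimally, artificially intelligent; at no point is consciousness of the process required.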

Sam Harris elaborates upon the issue, “We already know that it is possible for mere matter to acquire ‘general intelligence’—the ability to learn new concepts and employ them in unfamiliar contexts—because the 1,200 cc of salty porridge inside our heads has managed it. There is no reason to believe that a suitably advanced digital computer couldn’t do the same.”2

Writing the same year, Benjamin H. Bratton makes a similar case, “Unless we assume that humanlike intelligence represents all possible forms of intelligence – a whopper of an assumption – why define an advanced A.I. by its resemblance to ours? After all, “intelligence” is notoriously difficult to define, and human intelligence simply can’t exhaust the possibilities. Granted, doing so may at times have practical value in the laboratory, but in cultural terms it is self-defeating, unethical and perhaps even dangerous.” And somewhat later in his text, “Contemporary A.I. research suggests instead that the threshold by which any particular arrangement of matter can be said to be “intelligent” doesn’t have much to do with how it reflects humanness back at us. As Stuart Russell and Peter Norvig (now director of research at Google) suggest in their essential A.I. textbook, biomorphic imitation is not how we design complex technology. Airplanes don’t fly like birds fly, and we certainly don’t try to trick birds into thinking that airplanes are birds in order to test whether those planes “really” are flying machines. Why do it for A.I. then?”3

Why indeed? Of course, artificial intelligence-as-such and the desire to create artificial intelligence which is human-like, or human-exact, are two completely different issues. It may be that the process of creating human-like machine intelligence is at some point discovered and deemed eminently desirable. Whatever is decided in the future, I would recommend the acronym SEAI (Sapient Emulating Artificial Intelligence) to differentiate, with brevity and clarity, general artificial intelligence from human-like artificial intelligence systems.

1Anthropocentrism has two principal classes: (a.) the belief that humans are the most, or one of the most, significant entities in the known universe; (b.) the belief that humans are the fundamental, indispensable or central component of all existence, which leads to the interpretation of reality solely through human-familiar conceptions. All utilizations of ‘anthropocentrism’ in this paper are (b.)-type. The author finds no fault with definition (a.) and has extensively remarked upon this topic elsewhere; see: Kaiter Enless. (2018) Suzerain. Logos.

2Sam Harris. (2015) Can We Avoid A Digital Apocalypse? A Response To The 2015 Edge Question. SamHarris.org.

3Benjamin H. Bratton. (2015) Outing A.I.: Beyond The Turing Test. The New York Times.
