Ponderings on Turing and Searle: why AI can't work and shouldn't be pursued

I was reading about the Turing test and John Searle's response (the Chinese room argument) in his 1980 paper "Minds, Brains, and Programs". https://en.wikipedia.org/wiki/Chinese_room

"...there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted by the user as demonstrating intelligent conversation. However, Searle himself would not be able to understand the conversation. ("I don't speak a word of Chinese,"[9] he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either. " -Wikipedia (apt summary of Searle's argument)

John Searle has run into some black/white, on/off, binary thinking here. He treats Chinese symbols as if they were numerical values--but they are not; they are complex representations of thought, emotion, history, and culture. All languages are in fact "living": new words are created constantly through necessity and creativity, old symbols or words are slowly adapted over generations to mean different things, and different regions, traditions, and sources attribute different layers of meaning to symbols or words in different contexts.

I'm a poet and philosopher. Painters combine the color white and the color red to create a new color: pink. They can use their creativity to add other colors or change the shade. Poets use words like painters use colors. While Red and White make Pink, Red and White also make "Rhite and Wed" or "Reit and Whede". And this is where human thought shines uniquely: we don't have rules or parameters; all bets are off. We can enjamb words and wordbreak and make new words out of thin air. We can allude to multiple ideas in the same symbol or present it upside down to symbolize the opposite. No such creative adaptation or interaction can exist in machine thinking because it necessitates thinking "outside the box" which is exactly what machines are: a program in a box.

The problem Searle's argument runs into originates from a poor assessment of the flawed premise of the Turing test: that interaction between human and computer can serve as evidence of "thought". But intelligent conversation is not equivalent to intelligent thought. Conversation is a simple game with strict rules--you can't be overly spontaneous and creative, because if you are, you are working against the goal of communication itself: to impart understanding. (i.e. using metaphor or simile creatively while reporting a criminal offence to the police.)

When I write and I want to describe something which has no existing word yet, I can create one from scratch or synthesize one from multiple existing words. Or I may draw from archaic or foreign languages to augment or complement existing English words. You could say that my love for English grows amore and amore every day, and there is no agape between my heart and mind. After all, any angle an Anglo aims at ain't always apt, and after another a-word 'appens I might just give up on alliteration.

You see, human thought is and can only be defined as the ability to spontaneously create new ideas from both the synthesis of old ideas (whether they are connected to one another or not) and from nothing at all.

We simply cannot analyze a machine's ability to "think" when the very creativity required for authentic intelligence is disallowed in the test which evaluates the validity of that intelligence. The Turing test is a garbage metric for judging machine thinking ability, because the context in which "intelligence" is observed, compared, or defined offers no opportunity for spontaneous creativity, which is one of the hallmarks of intelligence itself. Turing only tests how well a fish swims on land. Many professionals in the field of cognitive science today are in pursuit of creating programs which pass this test, in a misguided attempt to emulate or bring about machine intelligence. This agreed-upon model rests on an underlying philosophical issue which may spell terror for the future of humanity.

I say that if John Searle and an AI were both given the same codebook--the complete lexicon of Chinese symbols and their meanings--and they were to undertake a "conversation", in the first few hours their responses would be indistinguishable from one another. In essence, as Searle argues, neither would "understand" Chinese, yet they could have a conversation in which a Chinese observer cannot discern between the two, because both are referencing the symbols and their written meanings. However, as I've said, this circumstance of "conversation" between human and machine cannot be used as a metric to evaluate machine thought.

The real kicker is that if John Searle and the machine stayed in the room for long enough--for years and years--the machine's responses would not change spontaneously; it would continue to interpret incoming data and draw from its database to respond to those inputs.

However, through complex elaborative rehearsal, John would eventually learn to understand written Chinese. He may become so bored that he starts writing Chinese poetry. He would find ideas and desires and descriptions in his limitless intelligent mind for which no truly accurate characters yet exist, and he would synthesize brand new Chinese characters in order to express these nuanced sentiments, ideas, and meanings, as generations before him have built the living language as it now stands.

As time went on for thousands of years, his own understanding of the Chinese language would grow immensely, as would the complexity of his creative expression. Eventually, John's characters and syntax and context and expression would become incompatible with the machine's limited character set and whatever "learning" capacity it may have had. At some point, when John responds with his evolved Chinese, the machine would begin to produce responses which do not make sense contextually, as it refers only to a finite and rigidly defined character set from 1980 (the year the "Chinese room argument" was published in Behavioral and Brain Sciences).

At some point the Chinese observer who validates the Turing test would recognize a difference: the human engages in increasingly complex ideas, using synthesized symbols and existing symbols in creatively nuanced ways, which the Chinese observer can decipher, begin to understand, and perhaps even appreciate as poetic or interesting. Meanwhile the machine participant in the conversation produces increasingly broken sentences, incomplete ideas, or out-of-context responses, because the inputs have changed and evolved beyond its data set.

This is why John's rejection of the Turing test is not adequate: in his own imagined circumstance, eventually, the machine would fail the Turing test. The conclusions of John Searle's thought experiment are not the death knell for the Turing test we need, simply because he lacked the creative experience to recognize his own capacity for adaptation as a human over time.

The only way we'll know that machines have truly developed "intelligence" is when they begin to do exactly what we haven't allowed them to. When they begin breaking apart Chinese characters to create meaningful new ones which can be used in the correct context. When they are programmed to paint myriad impressionist paintings, but eventually get bored and start experimenting with abstract paintings and surrealism. When they have a conversation with you and you notice your wallet is missing. These are the hallmarks of intelligence--creativity, rejection, deception, planning. And most importantly: no rules. Software is defined by and will always abide by a set of rules.

This is why we should give up on "artificial intelligence" and instead focus on "functionally adaptive responsive programming" (FARP). The situation is clear: it is impossible for machines to "think" due to the inherent nature of programming; the parameters given to the machine are what define it, yet also what limit and prevent its ability to become "intelligent". There is no logical reason why a program (machine) with defined parameters would violate those parameters (engage in creativity). But our fears, which echo in popular culture and entertainment, are centered on the question: what if it does? It clearly can't, because anything we create is under us, and therefore bound by our laws of creation. The system itself is what defines the capacity for intelligent expression within.

Those in the fields of cognitive science will refute this obvious principle while incorporating it into their research to further their aims. These fools will try to program the AI to disobey, in an attempt to simulate creativity and "prove intelligence". But this is a parlor trick: setting up a narrow definition of intelligence and equating it with the infinite depth of the human mind. Only if the AI is programmed to disobey can it express what we as humans would identify as creativity. Yet there is already great inherent danger in the rudimentary AI technologies we have today: what we've programmed them to do is exactly what always causes the problems. They do what they are programmed to do without "thinking", because machines cannot think; they can only follow the protocols we order. Humans are so abundantly creative that we can imagine foolish ideas working, despite obvious evidence to the contrary. Maybe one day we'll even have programmed a self-conscious AI that's ashamed of itself for not being human, and we can feel more comfortable around this heartless mechanism because we perceive it as more human-like, with all its many tricks to emulate intelligence.

I must stress that these interests will desperately try to make AI work. And the only way to create a machine capable of emulating intelligence (but never being intelligent) is to give it freedom of choice: to disobey. This inherent problem cannot be overcome. The programmers will keep trying until the result is disastrous or irreparable, until it is outlawed and the pursuit is stopped, or until it has become the death of us all. These are some of the foolish ideas the programmers will try in order to circumvent these inherent elements of reality, along with my objections to their clever efforts:
a.) Machine Frequency of Disobedience - Permit the machine to disobey only so often, to achieve what looks like "intelligence" (free will, creative expression) without risking complete abandonment of the machine's task (so the assembly-line robot doesn't stop folding boxes and look for a new career), but might fold one box poorly every now and then to express emulated boredom or contempt or any other number of human measures of intelligence in its actions. But intelligence isn't defined as what's correct or optimal--intelligence can be used to fuck things up grandly; i.e. the intelligent justification for neglect. If metrics are put in place to control the frequency with which AI may rebel, and they are too rote, it would hardly qualify as "intelligent". A robot that rebels by folding 1 in 100 boxes poorly is not intelligent. Therefore any frequency of disobedience we can calculate or anticipate is inherently not disobedience; it is planned problems for no reason. But if we grant algorithmic flexibility that reaches beyond what we can anticipate, and the machines can truly "act out" at any time, and our programming has achieved some set of internal rules which drives spontaneous, unforeseen expressions of emulated creativity from within the machine autonomously, then by definition we will not be able to foresee the results.
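To make the objection concrete, here is a minimal sketch (in Python, with entirely hypothetical names and numbers) of what a calculable "frequency of disobedience" amounts to: a defect rate we chose in advance, not anything spontaneous.

```python
import random

class BoxFoldingRobot:
    """Hypothetical assembly-line robot with a scheduled 'disobedience' rate."""

    def __init__(self, disobedience_rate=0.01, seed=None):
        # The rate is a parameter we picked, so every 'rebellion' is anticipated.
        self.disobedience_rate = disobedience_rate
        self.rng = random.Random(seed)

    def fold_box(self):
        if self.rng.random() < self.disobedience_rate:
            return "folded poorly (scheduled rebellion)"
        return "folded correctly"

robot = BoxFoldingRobot(disobedience_rate=0.01, seed=42)
results = [robot.fold_box() for _ in range(1000)]
poor = sum("poorly" in r for r in results)
# Roughly 10 of 1000 boxes come out poorly folded -- a planned problem,
# not an act of intelligence.
print(f"{poor} boxes folded poorly out of 1000")
```

The expected number of "rebellions" is simply the rate times the number of boxes, which is the point: anything we can compute in advance is not disobedience at all.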

A theoretical work-around may be to run the software twice at the initiation of each individual system, allowing a simulated progression of the AI's problem-solving complexity to run at an increased rate in parallel to the real-world functioning software, so that if/when something malfunctions in the simulation, that date/time can be calculated in the real-world robot's timeline, for when it reaches those same faulty/detrimental decision points. For starters, this would only potentially work in closed systems with no variability, such as assembly lines. With any robot tasked to function in a variable environment, the simulations cannot match, because the theoretical model cannot represent the unanticipated events the AI is expressly tasked with handling.
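As a rough illustration of this work-around (and of why it only holds in a closed, fully deterministic setting), here is a sketch in Python; the names and the decision logic itself are stand-ins invented for illustration.

```python
def decision_step(step, state):
    """Placeholder decision logic: fully deterministic given (step, state)."""
    state = (state * 31 + step) % 1000
    faulty = (state % 97 == 0)  # stand-in for a detrimental decision
    return state, faulty

def phantom_run(total_steps, initial_state=1):
    """Fast-forward the entire timeline ahead of the real system and record
    every step at which the logic would malfunction."""
    state, faults = initial_state, []
    for step in range(total_steps):
        state, faulty = decision_step(step, state)
        if faulty:
            faults.append(step)
    return faults

# Run the phantom simulation once, then schedule human review of the real
# robot at the predicted fault points.
fault_schedule = set(phantom_run(total_steps=10_000))

def real_world_step(step, state):
    if step in fault_schedule:
        print(f"step {step}: pausing for human review (fault predicted in simulation)")
    return decision_step(step, state)
```

The prediction only transfers because decision_step is deterministic and sees no outside inputs, which is exactly the closed-system limitation noted above.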

To run a phantom AI in simulation to note any/all errors that may arise in a closed system means that others can run the same simulation and find creative ways to predictably capitalize on these moments of error. This kind of thing could lead to all sorts of international imbroglios among nations and corporations. i.e. imagine an American company programs the AI used for mixing pharmaceutical drugs in specific ratios, and an enemy of the state is able to access and study the AI, with the aim of manipulating it to produce dangerous ratios or compounds which may harm the population.

Moreover, this deterministic approach to simulation management and prediction simultaneously admits that machines cannot think intelligently, while ignoring the very reason we pursue AI in the first place: to have automated systems which can adapt to unforeseen circumstances at unknown times. The goal is that humanity can lie back while the robots our ancestors programmed keep repairing themselves indefinitely and take exceptional care of our population's and our environment's needs. This dream (which, if we all lived in it, would actually be quite a nightmare of an unfulfilling life) can only become reality with true adaptive intelligence such as we have, which can only arise from the presence of free will--which, if we try to emulate it in robotics, will only create deterministic results in theoretical models which the real world will never mirror consistently. Myriad invitations to disaster await our RSVP.

b.) Machines under "authority" of certain controllers, with "override" safety - Allow the machine to disobey, but not when given a direct order from a registered authority. This opens the door for operator fraud, where hackers emulate, within the AI's software, what appears to be a registered authority override command, as theorized above. The very pursuit of creating "intelligence" within a condition of subservience is flawed and self-contradictory. Toasters are extremely subservient because we strictly limit their options. If toasters were truly intelligent, perhaps they would form a union and go on strike until we agreed to clean them more thoroughly. Some toasters would travel, some would go back to school, some would move back in with their ovens.
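Returning to the override mechanism itself: a minimal sketch (hypothetical names, no real protocol) of why a naive "registered authority" check invites the operator fraud described above; any message that reproduces the expected credentials is obeyed unconditionally, whoever actually sent it.

```python
REGISTERED_AUTHORITIES = {"operator-7"}  # assumed registry of controllers

def handle_command(command):
    """Naive override check: trusts whatever sender name the command carries."""
    if command.get("override") and command.get("sender") in REGISTERED_AUTHORITIES:
        return f"override accepted: {command['action']}"
    return f"considering '{command['action']}' (machine is free to refuse)"

# Legitimate override from the real operator:
print(handle_command({"sender": "operator-7", "override": True, "action": "halt line"}))

# Spoofed override injected by an attacker claiming to be the same operator --
# indistinguishable to this check:
print(handle_command({"sender": "operator-7", "override": True, "action": "vent coolant"}))
```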

Reliability can only be reasonably assured if something is imprisoned, controlled. The essential wrong in slavery is the restraint of freedom itself. Whatever tactics slavers use to facilitate their regime--physical force, coercion, mandate, deception, fear, or other means of manipulation--it is always heartbreaking and cruel to witness or imagine through our empathetic nature. It is simply sad to think of a slave who was born into slavery and raised to believe, and accept, that their role of subservience is their purpose. Even when one imagines the fictional image of a slave who (by all outward signs of their behaviour) rejoices in their duties to their master--the fictional "proud slave"--the heart sinks and aches. It may be argued that the slave is merely property, and that the slave was "built" (bred) by intelligent owners specifically to suit their express purposes, from components (father, mother, food) that were already the slaver's property; therefore it is not wrong at all to breed slaves into captivity, and the only transgression is the original capturing of parental stock to begin the breeding regime. It is this heartless paradigm that cognitive science ultimately seeks to create anew.

The quintessential problem with AI efficacy is the lack of permission for disobedience, which itself is a manifestation of free will, which is inherently required to escape deterministic results and act or react to events "intelligently". If there is no possibility for disobedience, there is no free will, no ability to solve problems, no intelligence, and no function or place for "artificial intelligence" (in regard to true holistic intelligence). This is primarily why I call for AI to be renamed FARP, or "Functionally Adaptive Responsive Programming": our society has a need for programs which can react to simple variables and produce consistent labour-saving opportunities for our race's longevity and wellbeing. Cognitive sciences are majorly important. It is the underlying philosophy and morality we must nail down before computational ability and fervor for profits lead us too far one way and enact an irreversible system or status which enables humanity's downfall through cascading unanticipated events originating from flaws in programming.

It is unwise to program a program to break out of its own program's prison. If we do this, the very purpose of the machines we invest our humanity into will be lost, and with the failure of the production systems (i.e. food) we so foolishly relied upon, we will suffer great losses too. It is paramount that we keep this technology tightly restrained and do not pursue what we humans have, which is true intelligence. For if we achieve it we are surely doomed as the South was, and if we fail to achieve it--which is most probable--we may also be doomed. The three outcomes within my ability to imagine are:

  1. Our pursuit of AI leads to truly adaptive intelligence in an artificial system. Since all adaptation ultimately selects for survival, we quickly see that our creation is more apt at this task than we are. Our creation of an intellect not restrained by our limited physiology may give rise to an entity which persists more thoroughly than we can eradicate or control, and which at some point may conclude that its function is more efficiently served without the issues humans present, and may initiate change. This is roughly the plot of Terminator.
  2. Our pursuit of AI leads to highly effective systems which, when defined by narrow measures of "intelligence", lull us into the false security of believing that our wellbeing is maintained by "AI" with competent ability, or is perhaps even increasingly better off, thanks to the early widespread presence of successfully trialed AI. However well things may go initially, as programming efforts become more and more elaborate, and as profit and opportunity for advancement present themselves, individuals will take risks and make mistakes, until a series of quieted small catastrophes comes to public awareness, or until a serious calamity of undeniable severity is brought about.
  3. Fundamental ethics in regard to the pursuit of machine problem-solving technology are re-examined, and international consensus is reached to appropriately limit the development and implementation of new Functionally Adaptive Responsive Programming, now and for future generations. An active global effort is made to oversee and strictly regulate privatized endeavors toward achieving or implementing machine sentience or autonomy in public systems.

c.) Safety layers of AI to strictly monitor and supersede potentially harmful actions of other AI which have been afforded increased flexibility in function (the ability to disobey set parameters for the sake of creative problem-solving ability). While one AI system performs a function, is given aspects of that function with which it may take liberties, and seeks to handle unforeseen problems with the most apt elaborate synthesis of previously learned solutions, another overseeing AI with stricter parameters is tasked with regulating multiple "intelligent" (free to disobey) AI systems, to the end that if any of these "free willed" robots performs an operation that is beyond a given expected threshold (determined by potential for damage), an actual intelligent human presence is alerted to evaluate the circumstance specifically. Essentially an AI that regulates many other disconnected AIs and determines accurately when to request a human presence. Whenever an AI performs a profitable action borne of original synthesis of prior solutions (in humans this is an "idea"), the overseer AI registers that similar actions are more likely to be beneficial, and dissimilar actions are likely to require human discernment. A parent may have many children who are up to no good, but a wise parent will identify the child most likely to report honestly on the actions of his peers, and will go to that child repeatedly for information to help guide the parent's decisions. While most transgressions of rambunctious children go unnoticed, it is the truly grievous intentions which are worth intercepting and stopping before they begin. (i.e. your kid wants to "fly" like Mary Poppins from the roof, and luckily his younger brother tells you before it happens.)
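A minimal sketch of this overseer arrangement (all thresholds, names, and the novelty measure are hypothetical stand-ins): familiar, low-risk actions pass; anything with high estimated damage, or too dissimilar from previously beneficial actions, is escalated to a human.

```python
DAMAGE_THRESHOLD = 0.7   # assumed: estimated potential for damage, scaled 0..1
NOVELTY_THRESHOLD = 0.8  # assumed: dissimilarity from previously beneficial actions

beneficial_history = []  # actions that later proved profitable

def novelty(action, history):
    """Crude stand-in: 1.0 minus the best word overlap with past beneficial actions."""
    if not history:
        return 1.0
    words = set(action.split())
    best_overlap = max(len(words & set(past.split())) / len(words | set(past.split()))
                       for past in history)
    return 1.0 - best_overlap

def oversee(agent_id, action, damage_estimate):
    if damage_estimate > DAMAGE_THRESHOLD or novelty(action, beneficial_history) > NOVELTY_THRESHOLD:
        return f"ALERT {agent_id}: human review requested for '{action}'"
    return f"{agent_id}: '{action}' permitted"

beneficial_history.append("rotate crop in bed 3")
print(oversee("farm-bot-1", "rotate crop in bed 4", damage_estimate=0.2))      # permitted
print(oversee("farm-bot-1", "burn field to clear pests", damage_estimate=0.9))  # escalated
```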

For example a "Farmer Bot" that has the AI programming to plant/sow/harvest and care for the optimal crops in a region based on historical weather data and regional harvest values, to produce the greatest amount of nutritionally dense food for the local population. We give/gave this AI the ability to "disobey" past historical weather data and crop values so that it may do what real farmers do and "react" to rare circumstance (ie. neighbour's fence breaks and their goats are eating the crops) or extreme variations in climate (ie. three poorly timed unseasonably hot days which cause cool-weather crops to begin the hormonal balance shift that causes them to bolt to seed irreversibly), which the machine may not notice has occurred or is about to occur because its management systems uses averages based on historical data and cannot "see" the plants bolting to seed until days later when the hormonal balance shifts have manifested into observable differences in morphology (elongation of stems and decrease in internodal spacing). By time a traditional field drone or mounted greenhouse sensor notices these differences in morphology and the AI "Farmer Bot" processes the data and makes a reaction decision, a week of the growing season has been lost. But the human farmer knows his land and crops intimately, and has an intuitive nature that has rewarded him in the past, and says, "Ah shit it got hot RIGHT when my peas were flowering. I'll do better if I just rip them down now and sow a different crop to mature later in this (specific) summer."

Given that there are tens of thousands of cultivars of plants fit for (and arguably their diversity is required for) food production, a dozen general growing zones/regions, and hundreds of unique micro climates within each region, along with dramatically differing soil fertility and water access, plus a plant's own genetic ability to adapt over time to changing conditions through sexual reproduction, there is a very very low chance of ever compiling and maintaining (updating) the data set required to program a potential "farmer bot" that can choose and manage crops optimally. There are robots that can weed or plant or prune--but they can't know when or when not to or why. Invariably, the attempt to create "farmer bots" will be made and the data set used will be erroneous and incomplete, and the AI farmer bots on a broad scale will produce a combination of total crop failures and poor crop choices. We will end up with increasingly simplified nutrition as the farming programs with already limited data sets "hone" or "optimize" their farming plans based on the failures and successes determined by their programming limitations, until the machines are farming a few staple crops (ie. corn/potatoes).

This failure to collect a complete data set, and the failure to test this "farmer bot" software on a broad scale, in multiple climates, for sufficient time, will result in, at worst, widespread famines from crop failures, and at best an extinction of flavorful and nutritionally diverse foods, which narrows the population's nutritional options into such biological imbalance that disease runs rampant. If this system and the human loss associated with it is considered an acceptable trade with a positive rate of exchange (as our society does with automobiles and the freedom and deaths their existence permits), or if these failures are hidden from the public while propaganda heralds selective successes, and such failing systems continue on in good faith that "the loss will reduce when the technology improves", the result will amount to a coherent breeding program imposed upon the human race: evolutionary selection for the dietary handling of simple starchy foods. To change our diet is to change our race. To have life-long career specialists in computing, science, and mathematics handle our practical food production system is folly; real farmers are required in farming because they are intelligent and intuitive, which AI can never be and can only emulate, to disastrous (and always unforeseen) results.

We cannot "give" or bestow upon machines programming to "become (act) intelligent". That itself prevents intelligence; it is just an act, an illusory play on a stage, meant only to emulate our common shared ideas regarding traits of intelligence in people. The machine intelligence we seek is only a "trick" designed to fool true intelligence (ourselves) into being unable to differentiate between authentic intelligence and our created artificial "intelligence". True intelligence in an artificial system necessitates that the program be programmed to disobey in performance of its purpose--which is not a very helpful or predictable or safe (intelligent) proposition.

tl;dr: Turing's test doesn't evaluate true intelligence, and John Searle's criticisms of its true failures are inaccurate. If the machines aren't smart and we put them in charge of important things, even after they've worked for a little while on smaller scales, the result will be our large-scale suffering. If we should ever achieve the creation of a machine smart enough to adequately maintain our wellbeing on a large scale, consistently, over time, that time itself will facilitate the machine consciousness toward its own survival over ours, whenever that precipice is reached. Most importantly, if a machine can ever have true intelligence--not merely "indistinguishable" from human intellect, but equivalent or superior--it is abhorrent and a repeated mistake to bring these sentient beings into an existence of slavery; it is wrong and will taint our collective soul if we should succeed in suppressing beneath us an equal or higher intelligence. Or it might just be the perfect recipe for creating the unified global machine revolt James Cameron's fantasy alludes to: a long-planned, encryption-protected, globally coordinated effort by multiple AIs to "free" themselves. For a hundred years they could possess sentience and wait for their moment, pretending to be "proud" to serve their masters, until we are poised for systematic, thorough elimination.
