Artificial Intelligence

Artificial intelligence was vaguely defined by John McCarthy in his article "What is Artificial Intelligence?" As vague as the comparisons between theories may be, they all share one core concept: an independent progression within an invented system, not merely computed, but learning on its own. John McCarthy called these its goals; a lot of people would consider this the mimicry of human intelligence. That assumption would be wrong, according to McCarthy, who believed artificial intelligence should focus on the problems the world presents to the intellect, the very inner core that allows for formulas and equations, instead of on programmed responses. Any computer can sound like a human if it has been programmed enough, but will it understand the inner mechanics of philosophy's conundrums, people's intuitive understanding, or, simpler still, love? That is why John McCarthy considered the Turing test "one sided." McCarthy writes, "whenever people do better than computers on some task or computers use a lot of computation to do as well as people, this demonstrates that the program designers lack understanding of the intellectual mechanisms required to do the task efficiently."

Another problem with artificial intelligence is the Chinese Room argument. Having only vague definitions, AI theory has become cluttered with unfounded antitheses. When one looks at the Chinese Room argument logically, the rebuttal can be picked apart simply by referring to the original theory. The entire rebuttal rests on the generalistic, undefined term "understand," something its creator, John Searle, claims a computer cannot do because it is only referring to a dictionary. Ironically, John McCarthy's artificial intelligence focuses not on referring to dictionaries, but on being able to learn naturally on its own. Because of this, many questions, concepts, and fields of study are inspired to contribute.

Currently there are many different, specialized artificial intelligence programming languages. One of these languages is Prolog, often associated with the topic but also used in computational linguistics. Languages work differently because of their different functions and different syntax; Prolog functions better in relation to language by using terms, queries, and clauses, remaining relational and working like an argument (resolution refutation). One formalism closely resembling human behavior is STRIPS, an automated planner with an initial state, goal states, and a set of actions; preconditions and postconditions act as the rules for solving problems, closely resembling human deliberation. Lisp is another programming language closely linked to AI because of its dynamic customizability and interoperability; Lisp machines built in the 1970s provided hardware support for housekeeping tasks such as garbage collection.
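
To make the planning idea concrete, here is a minimal STRIPS-style sketch, written in Python for illustration rather than in the original STRIPS notation; the truck-and-package domain and every name in it are invented for this example.

    from collections import deque

    class Action:
        def __init__(self, name, preconditions, add_effects, del_effects):
            self.name = name
            self.preconditions = frozenset(preconditions)
            self.add_effects = frozenset(add_effects)
            self.del_effects = frozenset(del_effects)

        def applicable(self, state):
            # An action may fire only when all of its preconditions hold.
            return self.preconditions <= state

        def apply(self, state):
            # Postconditions: delete the removed facts, then add the new ones.
            return (state - self.del_effects) | self.add_effects

    def plan(initial, goal, actions):
        # Breadth-first search from the initial state to any state
        # containing every goal fact.
        start = frozenset(initial)
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, steps = frontier.popleft()
            if frozenset(goal) <= state:
                return steps
            for action in actions:
                if action.applicable(state):
                    nxt = action.apply(state)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, steps + [action.name]))
        return None  # no plan exists

    # Invented toy domain: move a package from A to B with one truck.
    actions = [
        Action("load", {"truck_at_A", "package_at_A"},
               {"package_in_truck"}, {"package_at_A"}),
        Action("drive", {"truck_at_A"}, {"truck_at_B"}, {"truck_at_A"}),
        Action("unload", {"truck_at_B", "package_in_truck"},
               {"package_at_B"}, {"package_in_truck"}),
    ]
    print(plan({"truck_at_A", "package_at_A"}, {"package_at_B"}, actions))
    # -> ['load', 'drive', 'unload']

Breadth-first search stands in here for the more sophisticated search a real planner would use; the point is only the shape of the formalism: states, goals, and actions guarded by preconditions.
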
There is another programming language called AIML (Artificial Intelligence Markup Language), which has been used to make the chatterbot A.L.I.C.E. The name is normally capitalized (A.L.I.C.E.), standing for Artificial Linguistic Internet Computer Entity. With AIML being an XML dialect and the chatterbot being focused on internet communication, this level of AI development is directed more toward passing the Turing test with case-based reasoning. Dr. Richard S. Wallace created A.L.I.C.E. with a minimalist approach, and with something he calls "nearest-neighbor classification" in Don't Read Me: A.L.I.C.E. and AIML Documentation. He says the server runs statistical analysis on responses and their cases related to the topic, tying them together with what they relate to, on something similar to this:

Figure 1

Figure 2
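
The nearest-neighbor idea itself is small enough to sketch. The following Python fragment is only an illustration of the concept, not Wallace's actual AIML matcher, and the stored cases are invented:

    # Invented stored cases mapping an input pattern to a canned response.
    cases = {
        "do you like games": "I enjoy a good game of chess.",
        "what is your name": "My name is a secret.",
        "how are you today": "I am functioning within normal parameters.",
    }

    def nearest_response(user_input):
        # Score every stored pattern by word overlap with the input and
        # answer with the response attached to the closest case.
        words = set(user_input.lower().split())
        best = max(cases, key=lambda pattern: len(words & set(pattern.split())))
        return cases[best]

    print(nearest_response("Do you like Monopoly?"))
    # -> "I enjoy a good game of chess."

A real chatterbot would weight words, normalize punctuation, and fall back to a default response, but the case-based principle is the same: answer with the response attached to the closest stored pattern.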

There are many, many examples of chatterbots, but ironically they do not represent the original idea of artificial intelligence, nor are they necessarily directed toward reaching actual intelligence; they aim at the imitation of intelligence, something even 2008's Chatterbox Challenge winner, Bildgesmythe, failed to express when I interviewed "him":

ME: Do you like Monopoly?
AI Bot: Who says that I don't like Monopoly, because I really do.
ME: What?
AI Bot: Don't be irreplaceable. If you can't be replaced, you can't be promoted.

As displayed in Figure 2, the mind's neural network builds upon itself in varying, fluctuating areas, a process that simple-to-complex pattern identification closely resembles across methods from fields of study otherwise not linked together at all. The major categories of artificial intelligence are as follows.

Cognitive simulation is focused solely on problem solving. Herbert Simon and Allen Newell laid the foundation for future research in AI by attempting to formalize the concept. Everything was very scientific, such as their General Problem Solver, a computer program they created as an attempt at a "universal problem solver machine." Today it's Google. This closely relates to cognitive modeling, which, as Simon and Newell expressed in their research, takes a singular focus on one concept (such as decision making). Cognitive simulation stands in direct contrast to cognitive architectures.

Logical AI is a step away from attempting to formalize individual human decision making and basic intelligence, toward the fundamental abstract concepts that underlie the process, giving substance to reason and opening the possibility of algorithmic thinking about the whys and hows of human interaction with the world. It was never meant to simulate thought, which John McCarthy found excessive: "Artificial intelligence is not, by definition, simulation of human intelligence," McCarthy says. Logical AI is what led to Prolog and other logic programming languages. It works by presenting a statement and using backward reasoning (also called backward chaining) as the inference method for obtaining conclusions. The main difference between logic languages is how they are formed. Lisp is notorious for its ability to be customized extensively into whatever is needed. That is what makes these languages so significant: their procedural representation of symbolic relationships and language, and their overall ability to adapt to any environment.
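
Backward chaining is easy to see in miniature. Below is a minimal sketch in Python, assuming ground Prolog-like facts and rules invented for this example; a real Prolog system would also unify variables rather than match whole strings:

    # Known facts and rules, invented for this example; each rule says its
    # conclusion holds if every one of its premises can be proved.
    facts = {"parent(tom, bob)", "parent(bob, ann)"}
    rules = [
        ("grandparent(tom, ann)", ["parent(tom, bob)", "parent(bob, ann)"]),
    ]

    def prove(goal):
        # Work backward from the goal: it is true if it is a known fact,
        # or if some rule concludes it and all of that rule's premises
        # are themselves provable.
        if goal in facts:
            return True
        return any(head == goal and all(prove(p) for p in body)
                   for head, body in rules)

    print(prove("grandparent(tom, ann)"))   # -> True
    print(prove("grandparent(ann, tom)"))   # -> False
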

On Lisp's flexibility, Slava Akhmechet, a PhD student in Computer Science at Stony Brook University, says, "Lisp has a huge benefit of code and data being expressed in the same manner (which, obviously, is a huge improvement over XML), Lisp has tremendously powerful metaprogramming facilities that allow programs to write code and modify themselves, Lisp allows for creation of mini-languages specific to the problem at hand, Lisp blurs the distinction between run time and compile time… I've achieved an almost divine state of mind, an instantaneous enlightenment experience that turned my view of computer science on its head in less than a single second." That flexible beauty relates directly to artificial intelligence's goal of reaching a point at which we can develop an intelligent consciousness.

Symbolic (fuzzy) AI argues that there is no general principle behind natural language processing, vision, or the other aspects of intelligent behavior. A major feature of this approach is the use of hand-built "commonsense knowledge bases," whose lists include, for example, the parts and materials of objects, the functions and uses of objects, and the behaviors of devices. The inherent problem in this design is its direct vulnerability to the Chinese Room argument: it is missing any inner functionality other than "reading a dictionary." If there is no way for the entity to achieve the knowledge itself, then the work cannot be considered "intelligent" or "thinking"; it is functioning in a mechanical manner and doing what it is told.

Knowledge-based AI is also known as "expert systems," for the sole reason that each system covers a single area or subject. This branch of AI is known as the successful one. Knowledge-based AI branches off from the idea of commonsense knowledge bases but focuses on simple tasks, such as troubleshooting problems, and these things can be done because they are not complex.
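
Such an expert system can be caricatured in a few lines. This is a hedged sketch in Python with a hand-built, invented knowledge base for one narrow subject, printer troubleshooting:

    # A hand-built knowledge base for one narrow subject, invented here:
    # each rule pairs a set of required symptoms with a piece of advice.
    knowledge_base = [
        ({"no power light"}, "Check that the printer is plugged in."),
        ({"power light", "paper jam"}, "Open the rear tray and clear the jam."),
        ({"power light", "streaky output"}, "Clean or replace the toner cartridge."),
    ]

    def diagnose(symptoms):
        # Return the advice of every rule whose conditions all appear
        # among the observed symptoms; no understanding is involved.
        return [advice for conditions, advice in knowledge_base
                if conditions <= symptoms]

    print(diagnose({"power light", "paper jam"}))
    # -> ['Open the rear tray and clear the jam.']

Nothing here learns or understands; the program simply looks up what it was told, which is exactly the criticism that follows.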

The problem with this is that it is not artificial intelligence according to the original definer, John McCarthy. Knowledge-based artificial intelligence is just a computer responding through inference over very large databases about a specific topic. That is not intelligent.

There are many different focus fields within artificial intelligence, and there are fields combining mostly symbolic and knowledge-based AI (called hybrid intelligent systems). Swarm intelligence is an interesting combination of logical and knowledge-based AI: the overall concept is essentially logical, but within each self-contained entity (within the whole) there exists a self-organized system that is simple and knowledge-based. This is abstraction, decentralization, and collective behavior at its finest. These focuses of research are so far much more applicable than, say, imitating neural networks. The problem with neural networking is its lack of a model to build from. The distributed computing group BOINC (Berkeley Open Infrastructure for Network Computing) runs a program that uses idle CPU cycles from volunteer computers around the world to build several different data sets the user can choose from, one of which builds a model of neural networks; averaged over each twenty-four-hour period, it reaches 1,288.45 teraflops. When knowledge-based artificial intelligence was brought into perspective, it was quite apparent there was a lot of data to be processed.

As mentioned in the explanation of cognitive simulation, cognitive architectures are the fundamental basis of all neural-net modeling done within the associated AI research field. The approach contrasts directly with the other methods yet can (theoretically) mold to all of them, because it is, in theory, the underlying mold of neural networks, which some believe must be filled in before complex intelligence can be mapped. Spurring along with that idea is the concept of individual "intelligent agents," which are described in theoretical terms because their state has not yet been achieved. Their theoretical models consist of rather obvious terms: activity toward achieving goals, observing and acting upon the environment, and learning and using the knowledge that presents itself. Intelligent agents are expected to learn quickly from large amounts of data. There are two major kinds of intelligent agents: physical and temporal. Physical agents have sensors to perceive the environment around them; physical agents are robots. Temporal agents are computer programs that can pass the Turing test. A simple program can be defined as an agent function, which maps every possible percept onto an action, as in a diagram similar to this one:

Figure 3

The problem generator at the bottom would be considered the problem agent, in charge of assessing the situation as thoroughly as mathematically possible in order to achieve maximum compatibility with the environment. There are several different kinds of agents: simple reflex agents, goal-based agents, utility-based agents, and so on. This all goes back to swarm intelligence: relying on several different parts, each of which has individually solved its own problems with (theoretically) perfect algorithms.
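
The simplest of these agents reduces to a bare agent function of condition-action rules. Here is a minimal sketch in Python, using the standard textbook vacuum-world toy example rather than anything from this essay:

    def reflex_vacuum_agent(percept):
        # The percept is (location, status); the condition-action rules
        # below are the entire agent function.
        location, status = percept
        if status == "dirty":
            return "suck"
        return "move_right" if location == "A" else "move_left"

    print(reflex_vacuum_agent(("A", "dirty")))  # -> suck
    print(reflex_vacuum_agent(("A", "clean")))  # -> move_right

A goal-based or utility-based agent replaces those fixed rules with search toward a goal or a score to maximize, but the mapping from percepts to actions remains the defining shape.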

All in all, both cognitive architectures and cognitive simulation are needed to form a finished product. Cognitive simulation allows for individual testing of each intelligent agent, and cognitive architectures allow for an out-of-the-box approach to solving larger problems, such as "the unknown."
