Is Artificial Intelligence the Future of Technology?
Artificial Intelligence (AI) is intelligence displayed by machines. In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.
Artificial intelligence is a branch of computer science that seeks to imitate human intelligence and reasoning with computers. It has become a field of academic study concerned with building computers and software that exhibit intelligent behavior.
Artificial intelligence implements the intelligence and reasoning power of humans in technology-based machines.[1] Cognitive mechanisms are built into the computer so that it can carry out functions that humans associate with the mind, such as learning and problem solving. Andreas Kaplan and Michael Haenlein define artificial intelligence as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.”
As machines become increasingly capable, tasks once considered to require intelligence are often removed from the definition of AI. For example, optical character recognition is no longer perceived as an example of “artificial intelligence”; it has become a routine technology. Capabilities currently classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess), autonomous driving,[5] military simulations[6] and the interpretation of complex data.
AI research can be divided into a number of subfields that focus on specific problems, particular approaches, the use of particular tools, or the requirements of particular applications.
History of artificial intelligence
Talos, an ancient mythical automaton with artificial intelligence, appears on a silver coin found in Crete
While thinking artificial beings first appeared as storytelling devices, the idea of actually trying to build a machine that performs useful reasoning probably dates back to Ramon Llull (c. 1300 AD).[7] With his calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine (Wilhelm Schickard had done the first engineering work around 1623), intending it to perform operations on concepts rather than numbers. From the 19th century onward, artificial beings became a common theme in fiction; Mary Shelley’s Frankenstein and Karel Čapek’s R.U.R. (Rossum’s Universal Robots) may be mentioned.
The study of mechanical or “formal” reasoning began in antiquity with philosophers and mathematicians. The study of mathematical logic led to Alan Turing’s theory of computation, which suggested that a machine, by manipulating symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that a digital computer can simulate any process of formal reasoning, became known as the Church–Turing thesis. Discoveries in neuroscience, information theory, and cybernetics raised the possibility of researchers building an electronic brain. The first work now generally recognized as AI was McCulloch and Pitts’ 1943 formal design of Turing-complete “artificial neurons”.
In a 1950 paper, Alan Turing proposed a procedure, now called the ‘Turing test’, for judging whether a machine exhibits intelligence. The Turing test is a kind of imitation game, and it remains a foundational idea in artificial intelligence.[8]
The field of AI research was first established in 1956 at a workshop at Dartmouth College. Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research. They and their students produced programs that the press described as “astonishing”: computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English. By the mid-1960s, research was heavily funded by the US Department of Defense, and laboratories had been established around the world. AI’s founders were optimistic about the future: Herbert Simon predicted, “Machines will be capable, within twenty years, of doing any work a man can do.” Marvin Minsky agreed: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”
They had failed to recognize the difficulty of some of the remaining tasks. Progress slowed, and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress, both the US and British governments cut off exploratory AI research. The following years, when funding for AI projects was hard to find, would later be called the “AI winter”.
In the early 1980s AI research was revived by the commercial success of expert systems, a form of AI program that mimics the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan’s fifth-generation computer project inspired the US and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.
AI began to be used in logistics, data mining, medical diagnosis, and other areas in the 1990s and early 2000s. This success was due to increasing computational power (see Moore’s law), a greater emphasis on solving specific problems, new ties between AI and other fields, and researchers’ commitment to mathematical methods and scientific standards. Deep Blue became the first computer chess-playing system to defeat a reigning world chess champion, Garry Kasparov, on 11 May 1997.
Advanced statistical techniques (loosely known as deep learning), access to large amounts of data, and faster computers enabled advances in machine learning and perception. By the mid-2010s, machine learning applications were used throughout the world. In a Jeopardy! quiz show exhibition match, IBM’s question-answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body-motion interface for the Xbox 360 and Xbox One, uses algorithms that emerged from long AI research, as do intelligent personal assistants in smartphones. In March 2016, AlphaGo won 4 out of 5 games in a match against Go champion Lee Sedol, becoming the first computer Go-playing system to defeat a professional Go player without handicaps. At the 2017 Future of Go Summit, AlphaGo won a three-game match against Ke Jie.
Aim
The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into several subproblems: particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.
Erik Sandewall emphasizes planning and learning that is relevant and applicable to the given situation.
Reasoning and Problem Solving
Early researchers developed algorithms that imitated the step-by-step reasoning humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
For difficult problems, algorithms can require enormous computational resources; most experience a “combinatorial explosion”: the amount of memory or computer time required becomes astronomical once problems grow beyond a certain size. The search for more efficient problem-solving algorithms is a high priority.
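The combinatorial explosion described above can be made concrete with a small sketch. The travelling-salesman framing and the function names below are illustrative assumptions, not taken from the original text:

```python
import itertools
import math

def tours_examined(n_cities):
    """Candidate tours a brute-force travelling-salesman search must
    examine: with the starting city fixed, (n - 1)! orderings remain."""
    return math.factorial(n_cities - 1)

def brute_force_cost(distances):
    """Exhaustively try every ordering of cities 1..n-1 (city 0 fixed)
    and return the cheapest round-trip cost."""
    n = len(distances)
    best = float("inf")
    for perm in itertools.permutations(range(1, n)):
        route = (0,) + perm + (0,)
        cost = sum(distances[a][b] for a, b in zip(route, route[1:]))
        best = min(best, cost)
    return best

# The search space grows factorially: 6 cities need 120 tours,
# while 16 cities already need over a trillion.
for n in (6, 11, 16):
    print(n, tours_examined(n))
```

The exhaustive search is correct but useless beyond a handful of cities, which is exactly why more efficient algorithms (and heuristics that trade completeness for speed) are such a priority.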
Humans ordinarily use fast, intuitive judgments rather than the step-by-step deduction that early AI research was able to model. AI has made progress using “sub-symbolic” problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural network research attempts to simulate the structures inside the brain that give rise to this skill; and statistical approaches imitate the human ability to guess. The overarching goal of AI remains the imitation of human capabilities.
Representation of knowledge
Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well-researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and so on that the machine knows about. The most general are upper ontologies, which attempt to provide a foundation for all other knowledge.
The most difficult problems in knowledge representation are:
Default reasoning and the qualification problem
Much of what people know takes the form of “working assumptions”. For example, if a bird is discussed, people usually picture an animal of a particular size and shape that can fly. None of these things are true of all birds. John McCarthy identified this in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.
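A minimal sketch of a default rule with exceptions may make the problem concrete. The species lists below are illustrative assumptions; the point is that no exception list can ever be complete:

```python
# Default rule: "birds fly" -- held only as a working assumption.
KNOWN_EXCEPTIONS = {"penguin", "ostrich", "kiwi"}  # necessarily incomplete

def presumably_flies(species):
    """Apply the default unless the species is a known exception.
    This is the qualification problem in miniature: no finite list
    captures every exception (chicks, injured birds, caged birds, ...)."""
    return species not in KNOWN_EXCEPTIONS

print(presumably_flies("sparrow"))   # True, by the default rule
print(presumably_flies("penguin"))   # False, a listed exception
```

Formal approaches such as default logic and circumscription try to make this kind of defeasible, exception-tolerant conclusion mathematically precise rather than hard-coding exception lists.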
Reference: https://bn.wikipedia.org/wiki