Artificial intelligence (AI) is the science and technology of creating intelligent machines, especially intelligent computer programs. AI is related to the similar task of using computers to understand human intelligence, but it is not necessarily limited to biologically plausible methods.

What is artificial intelligence

Intelligence (from Latin intellectus: sensation, perception, understanding, concept, reason), or mind, is the quality of the psyche consisting of the ability to adapt to new situations, to learn and remember from experience, to understand and apply abstract concepts, and to use one's knowledge to manage one's environment. Intelligence is the general capacity for cognition and problem solving, combining all of a person's cognitive abilities: sensation, perception, memory, representation, thinking, and imagination.

In the early 1980s, the computer scientists Barr and Feigenbaum proposed the following definition of artificial intelligence (AI):


Later, the term AI came to cover a range of algorithms and software systems whose distinguishing feature is that they can solve certain problems the way a person pondering their solution would.

The main properties of AI are language understanding, learning, and the ability to reason and, importantly, to act.

AI is a complex of related technologies and processes that are developing qualitatively and rapidly, for example:

  • natural language text processing
  • expert systems
  • virtual agents (chatbots and virtual assistants)
  • recommendation systems

AI Research

  • Main article: Research in the field of artificial intelligence

AI standardization

2018: Development of standards in the field of quantum communications, AI and the smart city

On December 6, 2018, the Technical Committee "Cyber-Physical Systems", on the basis of RVC and together with the Regional Engineering Center "SafeNet", began developing a set of standards for the markets of the National Technology Initiative (NTI) and the digital economy. By March 2019, it is planned to develop technical standardization documents in the fields of quantum communications, artificial intelligence, and the smart city, RVC reported.

The impact of artificial intelligence

Risk to the development of human civilization

Impact on the economy and business

  • The impact of artificial intelligence technologies on the economy and business

Impact on the labor market

Artificial intelligence bias

At the heart of everything that constitutes the practice of AI (machine translation, speech recognition, natural language processing, computer vision, automated driving, and much more) is deep learning. It is a subset of machine learning characterized by the use of neural network models; these models can only loosely be said to mimic the way the brain works, so they can hardly be classified as AI in the strict sense. Any neural network model is trained on large datasets and thereby acquires certain "skills", but how it uses them remains unclear to its creators, which ultimately becomes one of the most important problems for many deep learning applications. The reason is that such a model works with its inputs formally, without any understanding of what it does. Can such a system be called AI, and can systems built on machine learning be trusted? The significance of the answer to the latter question extends well beyond scientific laboratories, which is why media attention to the phenomenon called AI bias has noticeably intensified.

Artificial intelligence technology market

AI market in Russia

The global AI market

Applications of AI

The areas of application of AI are quite broad, covering both familiar technologies and new, emerging areas that are far from mass application; in other words, the whole range of solutions, from vacuum cleaners to space stations. Their diversity can be classified by the key points of development.

AI is not a monolithic subject area. Moreover, some AI technologies appear as new sub-sectors of the economy and separate entities, while simultaneously serving most areas of the economy.

The expanding use of AI leads to the adaptation of technologies in classical sectors of the economy along the entire value chain and transforms them, resulting in the algorithmization of almost every function, from logistics to company management.

The use of AI for defense and military purposes

Use in education

Use of AI in business

AI in the power industry

  • At the design level: improved forecasting of generation and demand for energy resources, assessment of the reliability of power generating equipment, automation of generation increase in case of a demand surge.
  • At the production level: optimizing preventive maintenance of equipment, increasing generation efficiency, reducing losses, preventing theft of energy resources.
  • At the promotion level: optimization of pricing depending on the time of day and dynamic billing.
  • At the level of service delivery: automatic selection of the most profitable supplier, detailed consumption statistics, automated customer service, optimization of energy consumption based on the customer's habits and behavior.

AI in manufacturing

  • At the design level: improving the efficiency of new product development, automated supplier evaluation, and analysis of requirements for spare parts and components.
  • At the production level: improving the process of executing tasks, automating assembly lines, reducing the number of errors, reducing the delivery time of raw materials.
  • At the promotion level: forecasting the volume of support and maintenance services, pricing management.
  • At the level of service delivery: improving fleet route planning, forecasting demand for fleet resources, improving the quality of training of service engineers.

AI in banks

  • Pattern recognition - used, among other things, to recognize customers in branches and send them specialized offers.

AI in transport

  • The auto industry is on the verge of a revolution: 5 challenges of the era of self-driving cars

AI in logistics

AI in brewing

The use of AI in public administration

AI in forensics

  • Pattern recognition - used, among other things, to detect criminals in public spaces.
  • In May 2018, it became known about the use of artificial intelligence by the Dutch police to investigate complex crimes.

According to The Next Web, law enforcement has begun digitizing more than 1,500 reports and 30 million pages related to cold cases. Materials dating from 1988 onward are being converted to digital form for cases in which the crime went unsolved for at least three years and the offender would have been sentenced to more than 12 years in prison.

Once all the content is digitized, it will be fed to a machine learning system that will analyze the records and decide which cases have the strongest evidence. This should reduce the time needed to process cases and to solve past and future crimes from weeks to days.

Artificial intelligence will rank cases according to their "solvability" and indicate the possible results of DNA examination. It is then planned to automate analysis in other areas of forensic examination, and perhaps even to cover data in areas such as the social sciences and witness testimony.

In addition, according to Jeroen Hammer, one of the system's developers, API functions for partners may be released in the future.


The Dutch police have a special unit that specializes in developing new technologies for solving crimes. It was this unit that created the AI system for quickly searching for criminals based on evidence.

AI in the judiciary

Developments in the field of artificial intelligence will help to radically change the judicial system, making it fairer and free from corruption schemes. This opinion was expressed in the summer of 2017 by Vladimir Krylov, Doctor of Technical Sciences and technical consultant at Artezio.

The scientist believes that the AI solutions that already exist can be successfully applied in various sectors of the economy and public life. The expert points out that AI is successfully used in medicine, and in the future it may completely change the judicial system.

“Viewing daily news reports about developments in the field of AI, one can only marvel at the inexhaustible imagination and productivity of researchers and developers in this field. Reports of scientific research are constantly interspersed with reports of new products breaking into the market and of amazing results obtained using AI in various fields. As for the expected events, accompanied by noticeable hype in the media, in which AI will again be the hero of the news, I will probably not risk making technological forecasts. I can assume that the next such event will be the appearance, somewhere, of an extremely competent court in the form of an artificial intelligence, fair and incorruptible. This will probably happen in 2020-2025. And the proceedings in that court will lead to unexpected reflections and to the desire of many people to transfer most of the processes of managing human society to AI.”

The scientist regards the use of artificial intelligence in the judicial system as a "logical step" in the development of legislative equality and justice. The machine mind is not subject to corruption or emotions, can strictly adhere to the legal framework, and can make decisions that take into account many factors, including the data characterizing the parties to a dispute. By analogy with medicine, robot judges could operate on big data from public service repositories. Machine intelligence can be expected to process data quickly and to take into account far more factors than a human judge.

Psychological experts, however, believe that the absence of an emotional component in the consideration of court cases will negatively affect the quality of the decision. The verdict of the machine court may turn out to be too straightforward, not taking into account the importance of people's feelings and moods.

Painting

In 2015, the Google team tested neural networks to see whether they could create images on their own. First, the artificial intelligence was trained on a large number of different pictures. However, when the machine was "asked" to depict something on its own, it turned out that it interprets the world around us in a somewhat strange way. For example, when tasked with drawing dumbbells, the developers received an image in which the metal bars were joined to human hands. This probably happened because the pictures with dumbbells analyzed at the training stage included hands, and the neural network misinterpreted this.

On February 26, 2016, at a special auction in San Francisco, Google representatives raised about $98,000 from psychedelic paintings created by artificial intelligence. These funds were donated to charity. One of the machine's most successful pictures is presented below.

A picture painted by Google artificial intelligence.

The definition of artificial intelligence cited in the preamble, given by John McCarthy in 1956 at a conference at Dartmouth College, is not directly related to the understanding of human intelligence. According to McCarthy, AI researchers are free to use methods that are not observed in humans if this is necessary to solve specific problems.

At the same time, there is a point of view according to which intelligence can only be a biological phenomenon.

As T. A. Gavrilova, chair of the St. Petersburg branch of the Russian Association of Artificial Intelligence, points out, in English the phrase artificial intelligence does not have the slightly fantastic anthropomorphic coloring that it acquired in a rather unfortunate Russian translation. The word intelligence here means "the ability to reason", not at all "intellect", for which there is the English equivalent intellect.

Members of the Russian Association of Artificial Intelligence give the following definitions of artificial intelligence:

One particular definition of intelligence, common to humans and machines, can be formulated as follows: "Intelligence is the ability of a system, in the course of self-learning, to create programs (primarily heuristic ones) for solving problems of a certain complexity class, and to solve these problems."

Prerequisites for the development of the science of artificial intelligence

The history of artificial intelligence as a new scientific direction begins in the middle of the 20th century. By that time many prerequisites for its emergence had already formed: philosophers had long debated the nature of man and the process of knowing the world; neurophysiologists and psychologists had developed a number of theories about the workings of the human brain and of thinking; economists and mathematicians had posed questions of optimal computation and of representing knowledge about the world in formalized form; finally, the foundation of the mathematical theory of computation, the theory of algorithms, had been laid, and the first computers had been created.

The capabilities of the new machines in terms of computing speed turned out to exceed human ones, so the question arose in the scientific community: what are the limits of the capabilities of computers, and will machines reach the level of human development? In 1950, Alan Turing, one of the pioneers of computer science, wrote the article "Can a Machine Think?", in which he describes a procedure, later called the Turing test, for determining the moment at which a machine becomes the equal of a human in terms of intelligence.

The history of the development of artificial intelligence in the USSR and Russia

In the USSR, work in the field of artificial intelligence began in the 1960s. At Moscow University and the Academy of Sciences, a number of pioneering studies were carried out, headed by Veniamin Pushkin and D. A. Pospelov. Since the early 1960s, M. L. Tsetlin and colleagues had been developing questions related to the training of finite automata.

In 1964, the Leningrad logician Sergei Maslov published "An Inverse Method for Establishing Derivability in the Classical Predicate Calculus", the first work to propose a method for automatically searching for proofs of theorems in the predicate calculus.

Until the 1970s, all AI research in the USSR was carried out within the framework of cybernetics. According to D. A. Pospelov, the sciences of "computer science" and "cybernetics" were conflated at that time owing to a number of academic disputes. Only in the late 1970s did people in the USSR begin to speak of the scientific direction "artificial intelligence" as a branch of computer science. At the same time computer science itself was born, subsuming its progenitor, cybernetics. In the late 1970s a dictionary of artificial intelligence, a three-volume reference book on artificial intelligence, and an encyclopedic dictionary of computer science were published, in which the sections "Cybernetics" and "Artificial Intelligence" appear as parts of computer science alongside other sections. The term "computer science" became widespread in the 1980s, while the term "cybernetics" gradually disappeared from circulation, remaining only in the names of institutions that arose during the era of the "cybernetic boom" of the late 1950s and early 1960s. This view of artificial intelligence, cybernetics, and computer science is not shared by everyone, since in the West the boundaries of these sciences are drawn somewhat differently.

Approaches and directions

Approaches to understanding the problem

There is no single answer to the question of what artificial intelligence studies. Almost every author who writes a book about AI starts from some definition of it and considers the achievements of the science in its light. Two main approaches to developing AI are usually distinguished:

  • top-down (Top-Down AI), semiotic: the creation of expert systems, knowledge bases, and inference systems that imitate high-level mental processes such as thinking, reasoning, speech, emotions, and creativity;
  • bottom-up (Bottom-Up AI), biological: the study of neural networks and evolutionary computation that model intelligent behavior on the basis of biological elements, as well as the creation of corresponding computing systems such as neurocomputers or biocomputers.

The latter approach, strictly speaking, does not apply to the science of AI in the sense given by John McCarthy - they are united only by a common ultimate goal.

Turing test and intuitive approach

This approach focuses on the methods and algorithms that help an intelligent agent survive in its environment while performing its task. Here, pathfinding and decision-making algorithms are studied much more thoroughly.
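
Pathfinding of this kind can be sketched under deliberately simplified assumptions: the hypothetical grid world and the choice of breadth-first search below are illustrative, not tied to any particular agent architecture described in this article.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a 2D grid; 1 = obstacle, 0 = free cell.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}           # also serves as the "visited" set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                # reconstruct the path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None
```

Because BFS expands cells in order of distance from the start, the first path found is the shortest one; a real agent would typically use A* with a heuristic instead.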

Hybrid approach

The hybrid approach suggests that only a synergistic combination of neural and symbolic models achieves the full spectrum of cognitive and computational capabilities. For example, expert inference rules can be generated by neural networks, while generative rules can be obtained by statistical learning. Proponents of this approach believe that hybrid information systems will be much stronger than the sum of the separate concepts.

Models and methods of research

Symbolic modeling of thought processes

Analyzing the history of AI, one can single out an extensive direction such as reasoning modeling. For many years the development of this science moved along this path, and it is now one of the most developed areas of modern AI. Reasoning modeling implies the creation of symbolic systems that take a certain task as input and are required to produce its solution as output. As a rule, the proposed problem has already been formalized, that is, translated into mathematical form, but either has no solution algorithm or that algorithm is too complicated or time-consuming. This direction includes theorem proving, decision making and game theory, planning and scheduling, and forecasting.

Working with natural languages

An important direction is natural language processing, which analyzes the possibilities of understanding, processing, and generating texts in a "human" language. Within this direction the goal is natural language processing capable of acquiring knowledge on its own by reading the existing texts available on the Internet. Some direct applications of natural language processing include information retrieval (including text mining) and machine translation.
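
The information-retrieval application mentioned above can be illustrated with a deliberately minimal bag-of-words sketch; the documents below are invented, and real systems use far richer language models than word-count vectors.

```python
import math
from collections import Counter

def tokenize(text):
    """Crude tokenizer: lowercase words, punctuation stripped."""
    return [w.lower().strip(".,!?") for w in text.split()]

def cosine(a, b):
    """Cosine similarity between two bags of words (Counter objects)."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, documents):
    """Return documents that share vocabulary with the query,
    ranked by cosine similarity."""
    q = Counter(tokenize(query))
    scored = [(cosine(q, Counter(tokenize(d))), d) for d in documents]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]
```

This is the skeleton behind keyword search: documents and queries become vectors, and ranking is a geometric comparison.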

Representation and use of knowledge

The field of knowledge engineering combines the tasks of obtaining knowledge from simple information, systematizing it, and using it. This direction is historically associated with the creation of expert systems: programs that use specialized knowledge bases to obtain reliable conclusions on a given problem.
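
The inference step of such an expert system can be sketched as naive forward chaining over if-then rules; the miniature "medical" knowledge base below is invented purely for illustration.

```python
def forward_chain(facts, rules):
    """Naive forward chaining: repeatedly apply rules of the form
    (premises -> conclusion) until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical miniature knowledge base (invented for illustration)
rules = [
    (["has fever", "has cough"], "suspect flu"),
    (["suspect flu", "short of breath"], "refer to doctor"),
]
derived = forward_chain(["has fever", "has cough", "short of breath"], rules)
```

Real expert systems add conflict resolution, certainty factors, and explanation facilities on top of this basic loop.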

The production of knowledge from data is one of the basic problems of data mining. There are various approaches to solving this problem, including those based on neural network technology, using neural network verbalization procedures.

Machine learning

Machine learning concerns the process of independent acquisition of knowledge by an intelligent system in the course of its operation. This direction has been central since the very beginning of AI. In 1956, at the Dartmouth summer conference, Ray Solomonoff wrote a paper on an unsupervised probabilistic machine, which he called the Inductive Inference Machine.

Robotics

Machine creativity

The nature of human creativity is even less well understood than the nature of intelligence. Nevertheless, this area exists, and problems of composing music, writing literary works (often poems or fairy tales), and artistic creation are posed here. The creation of realistic images is widely used in the film and game industries.

Separately, the study of the problems of technical creativity of artificial intelligence systems is highlighted. The theory of inventive problem solving, proposed in 1946 by G. S. Altshuller, marked the beginning of such research.

Adding this capability to any intelligent system makes it possible to demonstrate very clearly what exactly the system perceives and how it understands it. By adding noise in place of missing information, or by filtering noise with the knowledge available in the system, abstract knowledge yields concrete images that a person can easily perceive; this is especially useful for intuitive and low-value knowledge, whose verification in formal form would require significant mental effort.

Other areas of research

Finally, there are many applications of artificial intelligence, each of which forms an almost independent direction. Examples include the programming of intelligence in computer games, nonlinear control, and intelligent information security systems.

In the future, it is assumed that the development of artificial intelligence will be closely connected with the development of the quantum computer, since some properties of artificial intelligence share operating principles with quantum computers.

It can be seen that many areas of research overlap. This is true of any science. But in artificial intelligence, the relationship between seemingly different directions is especially strong, and this is connected with the philosophical debate about strong and weak AI.

Modern artificial intelligence

There are two directions of AI development:

  • solving problems related to bringing specialized AI systems closer to human capabilities and integrating them, as realized in human nature (see intelligence amplification);
  • creating an artificial mind that integrates already created AI systems into a single system capable of solving the problems of mankind (see strong and weak artificial intelligence).

At present, however, the field of artificial intelligence draws in many subject areas that have a practical rather than a fundamental relation to AI. Many approaches have been tried, but no research group has yet achieved the emergence of artificial intelligence. Below are just a few of the most notable AI developments.

Application

Some of the most famous AI systems are:

Banks use artificial intelligence (AI) systems in insurance (actuarial mathematics), in trading on the stock exchange, and in property management. Pattern recognition methods (including both more complex specialized methods and neural networks) are widely used in optical and acoustic recognition (including of text and speech), medical diagnostics, spam filters, and air defense systems (target identification), as well as for a number of other national security tasks.

Psychology and cognitive science

The methodology of cognitive modeling is designed to analyze and make decisions in ill-defined situations. It was proposed by Axelrod.

It is based on modeling experts' subjective ideas about the situation and includes: a methodology for structuring the situation; a model for representing expert knowledge in the form of a signed digraph (cognitive map) (F, W), where F is the set of situation factors and W is the set of cause-and-effect relationships between them; and methods of situation analysis. At present, the methodology of cognitive modeling is developing toward improving the apparatus for analyzing and modeling situations: models for forecasting the development of a situation and methods for solving inverse problems have been proposed.
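
A cognitive map of this kind can be sketched as a signed digraph with a simple impulse-propagation step; the factor names and weights below are invented for illustration, and real cognitive-modeling tools use more elaborate analysis methods.

```python
# Signed causal weights W between situation factors F: a positive weight
# means "an increase causes an increase", a negative weight the opposite.
# (Hypothetical economic factors, chosen only to illustrate the structure.)
W = {
    ("taxes", "investment"): -0.8,
    ("investment", "production"): 0.9,
    ("production", "employment"): 0.7,
}

def propagate(impulse, steps):
    """Spread an initial impulse through the cognitive map for a few steps,
    accumulating the total influence reaching each factor."""
    state = dict(impulse)
    total = dict(state)
    for _ in range(steps):
        nxt = {}
        for (src, dst), w in W.items():
            if src in state:
                nxt[dst] = nxt.get(dst, 0.0) + w * state[src]
        for factor, value in nxt.items():
            total[factor] = total.get(factor, 0.0) + value
        state = nxt
    return total

# A unit increase in "taxes" propagates along the causal chains:
effect = propagate({"taxes": 1.0}, steps=3)
```

Forecasting the development of a situation then amounts to reading off the accumulated signs and magnitudes; inverse problems ask which initial impulse would produce a desired effect.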

Philosophy

The science of "creating artificial intelligence" could not but attract the attention of philosophers. With the advent of the first intelligent systems, fundamental questions about man and knowledge, and partly about the world order, were raised.

The philosophical problems of creating artificial intelligence can be divided into two groups, relatively speaking, “before and after the development of AI”. The first group answers the question: “What is AI, is it possible to create it, and, if possible, how to do it?” The second group (the ethics of artificial intelligence) asks the question: “What are the consequences of the creation of AI for humanity?”

The term "strong artificial intelligence" was introduced by John Searle, and his approach is characterized by his own words:

Moreover, such a program would be more than just a model of the mind; it would, in the literal sense of the word, itself be a mind, in the same sense in which the human mind is a mind.

At the same time, it remains to be understood whether a "purely artificial" mind ("metamind") is possible: one that understands and solves real problems while being devoid of the emotions that are characteristic of a person and necessary for individual survival.

On the contrary, weak AI advocates prefer to view programs only as a tool for solving certain tasks that do not require the full range of human cognitive abilities.

Ethics

Other traditional confessions rarely address the issues of AI, but some theologians nonetheless pay attention to them. For example, Archpriest Mikhail Zakharov, arguing from the point of view of the Christian worldview, poses the following question: "Man is a rationally free being, created by God in His image and likeness. We are accustomed to applying all these definitions to the biological species Homo sapiens. But how justified is this?" He answers this question as follows:

Assuming that research in the field of artificial intelligence will ever lead to the emergence of an artificial being superior to man in intelligence, with free will, does this mean that this creature is a man? … man is a creation of God. Can we call this creature a creation of God? At first glance, it is a human creation. But even when creating man, it is hardly worthwhile to literally understand that God with His own hands fashioned the first man from clay. This is probably an allegory, indicating the materiality of the human body, created by the will of God. But without the will of God, nothing happens in this world. Man, as a co-creator of this world, can, by fulfilling the will of God, create new creatures. Such creatures, created by human hands according to God's will, can probably be called God's creations. After all, man creates new species of animals and plants. And we consider plants and animals to be God's creations. The same can be said about an artificial being of a non-biological nature.

Science fiction

The topic of AI is considered from different angles in the work of Robert Heinlein: the hypothesis of the emergence of AI self-awareness when a structure grows complex beyond a certain critical level and interacts with the outside world and with other bearers of mind ("The Moon Is a Harsh Mistress", "Time Enough for Love", the characters Mycroft, Dora, and Aya in the "Future History" series), and the problems of AI development after hypothetical self-awareness, along with some social and ethical issues ("Friday"). The socio-psychological problems of human interaction with AI are also considered in Philip K. Dick's novel "Do Androids Dream of Electric Sheep?", also known from the film adaptation Blade Runner.

The works of the science fiction writer and philosopher Stanisław Lem describe the creation of virtual reality, artificial intelligence, nanorobots, and many other problems of the philosophy of artificial intelligence. Especially worth noting is the futurological work Summa Technologiae. In addition, the adventures of Ijon Tichy repeatedly describe the relationship between living beings and machines: the rebellion of the on-board computer with unexpected consequences (the 11th voyage), the adaptation of robots to human society ("The Washing Machine Tragedy" from "Memoirs of Ijon Tichy"), the construction of absolute order on a planet by processing its living inhabitants (the 24th voyage), and the inventions of Corcoran and Diagoras and a psychiatric clinic for robots ("Memoirs of Ijon Tichy"). There is also the whole story cycle The Cyberiad, in which almost all the characters are robots, distant descendants of robots that escaped from people (they call humans "palefaces" and consider them mythical creatures).

Movies

Since roughly the 1960s, along with fantastic stories and novels, films about artificial intelligence have been made. Many novels by world-renowned authors have been filmed and become classics of the genre, while others have become milestones in its development.

From the moment when artificial intelligence was recognized as a scientific direction, and this happened in the mid-50s of the last century, the developers of intelligent systems have had to solve many problems. Conventionally, all tasks can be divided into several classes: human language recognition and translation, automatic theorem proving, creation of game programs, image recognition and machine creativity. Let us briefly consider the essence of each class of problems.

Proof of theorems.

Automated theorem proving is the oldest application of artificial intelligence. A great deal of research has been carried out in this area, resulting in formalized search algorithms and formal representation languages, such as the logic programming language PROLOG and the predicate calculus.

Automatic theorem proving is attractive because it rests on the generality and rigor of logic. Logic in a formal system lends itself to automation: if a problem and the additional information related to it are represented as a set of logical axioms, and special cases of the problem as theorems requiring proof, solutions to many problems can be obtained. Systems of mathematical justification and automatic theorem proving are based on this principle. In past years, repeated attempts were made to write a program for automatic theorem proving, but it proved impossible to create a system that solves problems by a single method. Any relatively complex heuristic system could generate many irrelevant provable theorems, and programs had to prove them one after another until the right one was found. This gave rise to the view that large search spaces can be dealt with only by informal strategies specially designed for specific cases. In practice, this approach turned out to be quite fruitful and became, along with others, the basis of expert systems.
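
The idea of representing a problem as axioms and proving special cases as theorems can be sketched with a tiny propositional resolution prover working by refutation; the axioms below are invented, and real provers operate in the far richer predicate calculus.

```python
def resolve(ci, cj):
    """All resolvents of two clauses. A clause is a frozenset of literals,
    where the literal '~p' is the negation of 'p'."""
    resolvents = []
    for lit in ci:
        neg = lit[1:] if lit.startswith("~") else "~" + lit
        if neg in cj:
            resolvents.append((ci - {lit}) | (cj - {neg}))
    return resolvents

def entails(axioms, goal):
    """Refutation proof: add the negated goal and search for the empty
    clause; the empty clause means the goal follows from the axioms."""
    neg_goal = goal[1:] if goal.startswith("~") else "~" + goal
    clauses = {frozenset(c) for c in axioms} | {frozenset([neg_goal])}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a is b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True        # derived the empty clause
                    new.add(frozenset(r))
        if new <= clauses:
            return False                   # no progress: not derivable
        clauses |= new

# Hypothetical axioms: p holds, and p implies q (written as ~p OR q)
axioms = [["p"], ["~p", "q"]]
```

The blind pairwise search here also shows why unguided provers drown in irrelevant resolvents, the very problem the surrounding paragraph describes.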

At the same time, reasoning based on formal logic cannot be ignored. A formalized approach allows solving many problems. In particular, using it, you can manage complex systems, check the correctness of computer programs, design and test logical circuits. In addition, automatic theorem proving researchers have developed powerful heuristics based on the evaluation of the syntactic form of logical expressions. As a result, it became possible to reduce the level of complexity of the search space without resorting to the development of special strategies.

Automatic theorem proving also interests scientists because for particularly complex problems the system can still be used, albeit with human intervention. Currently, such programs often act as assistants: specialists break a task into several subtasks and then devise heuristics for sorting through possible proofs. The program then proves lemmas, checks less essential assumptions, and fills in the formal details of the proofs outlined by the person.

Pattern recognition.

Pattern recognition is the selection, from the total set of features, of the essential features that characterize the initial data, and, on the basis of the information obtained, the assignment of the data to a certain class.

The theory of pattern recognition is a branch of computer science whose task is to develop the foundations and methods for identifying and classifying objects (things, processes, phenomena, situations, signals, and so on), each of which is endowed with a set of certain features and properties. In practice it is often necessary to identify objects. A typical situation is recognizing the color of a traffic light and deciding whether to cross the street at that moment. There are other areas in which object recognition is indispensable, such as the digitization of analog signals, military affairs, and security systems, so scientists continue to work actively on creating pattern recognition systems.

The work is carried out in two main directions:

  • Research, explanation and modeling of the recognition abilities inherent in living beings.
  • Development of theoretical and methodological foundations for creating devices that would allow solving individual problems for applied purposes.

Recognition problems are formulated in mathematical language. Whereas the theory of artificial neural networks obtains its results through experiment, pattern recognition problems are formulated not on the basis of experiment but on the basis of mathematical proof and logical reasoning.

Consider the classical formulation of such a problem. There is a set of objects that must be classified. The set consists of subsets, or classes. Given: information describing the set, information about the classes, and a description of a single object without any indication of the class it belongs to. Task: based on the available data, determine which class the object belongs to.
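This classical formulation can be sketched in code. Below, a toy classifier assigns an object to the class of its nearest known example; the two classes and their feature vectors are invented for illustration:

```python
import math

# Hypothetical training data: each class is a list of known feature vectors,
# here (redness, roundness) on a 0-10 scale.
training = {
    "apple":  [(8.0, 7.5), (7.0, 8.0)],
    "banana": [(2.0, 1.0), (1.5, 2.0)],
}

def classify(obj):
    """Assign the object to the class containing its nearest known example."""
    return min(training,
               key=lambda cls: min(math.dist(obj, ex) for ex in training[cls]))

print(classify((7.5, 7.0)))  # apple
```

The "information describing the set" here is the training dictionary, and the description of a single object is its feature vector.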

If a problem involves monochrome images, these can be considered as functions on a plane. The function is a formal record of the image and at each point expresses some characteristic of it - optical density, transparency, brightness, etc. In this case, the model of the image set is a set of functions on the plane. The formulation of the recognition problem depends on what stages are to follow recognition.

Among the foundational contributions to pattern recognition are the experiments of F. Rosenblatt, who introduced the concept of a brain model. The task of such an experiment is to show how psychological phenomena arise in a physical system with known functional properties and structure. The scientist described the simplest recognition experiments, whose distinguishing feature is a non-deterministic solution algorithm.

The simplest experiment from which psychologically significant information about the system can be obtained is as follows: the perceptron is presented with a sequence of two different stimuli, to each of which it must react in some way, and for different stimuli the reaction must be different. The purpose of such an experiment may vary. The experimenter may be faced with the task of studying the system's ability to discriminate the presented stimuli spontaneously, without outside interference, or, conversely, of studying the possibility of forced recognition. In the second case, the experimenter teaches the system to classify various objects, of which there may be more than two. The training procedure is as follows: the perceptron is presented with images, among which there are representatives of all the classes to be recognized. The correct response is reinforced according to the memory modification rules. The experimenter then presents a control stimulus to the perceptron and determines the probability of obtaining the given response for images of that class. The control stimulus may match one of the objects presented in the training sequence or differ from all of them. Depending on this, the following results are obtained:

  • If the control stimulus differs from all previously presented training stimuli, then in addition to pure discrimination, the experiment explores elements of generalization.
  • If the control stimulus activates a group of sensory elements that coincides with none of the elements activated by previously presented stimuli of the same class, then the experiment explores pure generalization and does not include the study of recognition.

Despite the fact that perceptrons are not capable of pure generalization, they cope satisfactorily with recognition tasks, especially in those cases when images are shown in relation to which the perceptron already has some experience.
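The forced-recognition experiment described above can be sketched with a minimal Rosenblatt-style perceptron: error-driven weight updates play the role of reinforcement, and a control stimulus not seen in training is then classified (the stimulus data is hypothetical; this is a sketch, not a model of any particular experiment):

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Train a single perceptron; each sample is (features, target) with target +1/-1."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if out != target:  # reinforce only when the response is wrong
                w = [wi + lr * target * xi for wi, xi in zip(w, x)]
                b += lr * target
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Two linearly separable stimulus classes (hypothetical data).
stimuli = [((2.0, 1.0), 1), ((3.0, 2.0), 1),
           ((-1.0, -2.0), -1), ((-2.0, -1.0), -1)]
w, b = train_perceptron(stimuli)
print(predict(w, b, (2.5, 1.5)))  # 1: a control stimulus near the first class
```

A control stimulus that activates a pattern unlike anything seen in training would probe generalization rather than recognition, as the list above distinguishes.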

Human speech recognition and machine translation.

The long-term goals of artificial intelligence include creating programs that can recognize human language and use it to construct meaningful phrases. The ability to understand and apply natural language is a fundamental feature of human intelligence, and its successful automation would make computers far more useful. Many programs have been written to date that can understand natural language, and they are successfully applied in limited contexts, but so far there are no systems that can use natural language with the same generality and flexibility as a person. The point is that understanding natural language is not merely parsing sentences into components and looking up the meanings of individual words in dictionaries - that much the programs do well. Using human speech requires extensive knowledge of the subject of the conversation and its idioms, as well as the ability to understand ambiguities, omissions, professional terms, jargon, colloquial expressions and much else inherent in normal human speech.

An example is a conversation about football, using words such as "forward", "pass", "transfer", "penalty", "defender", "captain" and others. Each of these words has a set of meanings, and individually the words are quite understandable, but a phrase made up of them will be incomprehensible to anyone who is not keen on football and knows nothing about the history, rules and principles of the game. Thus, a body of background knowledge is needed to understand and use human language, and one of the main problems in automating this understanding is the collection and systematization of such knowledge.

Since semantic meanings are widely used in artificial intelligence, scientists have developed a number of methods for structuring them to some extent. Still, most of the work is done in problem domains that are well understood and specialized. An example is the "microworld" technique. One of the first programs to use it was SHRDLU, developed by Terry Winograd, one of the early systems for understanding human language. The program's capabilities were quite limited: a "conversation" about the location of blocks of different colors and shapes, and planning the simplest actions. It answered questions like "What color is the pyramid on the cross bar?" and could carry out instructions like "Put the blue block on the red one." Such problems were often taken up by artificial intelligence researchers and later became known as the "blocks world".

Although SHRDLU successfully "talked" about the location of the blocks, it had no ability to abstract beyond this microworld. Its methods were too simple to convey the semantic organization of more complex subject areas.

Current work on understanding and applying natural languages is directed mainly at finding sufficiently general representational formalisms that can be adapted to the specific structures of given domains and applied in a wide range of applications. Most existing techniques, which are modifications of semantic networks, are studied and applied in programs that recognize natural language in narrow subject areas. At the same time, current capabilities do not yet allow the creation of a universal program capable of understanding human speech in all its diversity.

Among the variety of problems of pattern recognition, the following can be distinguished:

  • Classification of documents
  • Determination of mineral deposits
  • Image recognition
  • Barcode recognition
  • Character recognition
  • Speech recognition
  • Face recognition
  • Number plate recognition

Artificial intelligence in gaming programs.

Game artificial intelligence includes not only the methods of traditional AI but also algorithms from computer science in general, computer graphics, robotics and control theory. How the AI is implemented affects not only the system requirements but also the game's budget, so developers have to balance, trying to create game artificial intelligence at minimum cost while keeping it interesting and undemanding of resources. This requires an approach quite different from traditional artificial intelligence: emulation, deception and various simplifications are widely used. For example, a feature of first-person shooters is the bots' ability to move accurately and aim instantly, which would leave a human player no chance, so the bots' abilities are artificially reduced. At the same time, checkpoints are placed around the level so that the bots can act as a team, set up ambushes, and so on.

In computer games controlled by game artificial intelligence, the following categories of characters are present:

  • mobs - characters with a low level of intelligence, hostile to the human player. Players destroy mobs to pass through territory and gain artifacts and experience points.
  • non-player characters - usually friendly or neutral to the player.
  • bots - characters hostile to the players and the most difficult to program. Their capabilities approach those of the player characters. At any given time, a certain number of bots oppose the player.

Within a computer game, there are many areas in which a wide variety of heuristic game AI algorithms are used. Game AI is most widely used as a way to control non-player characters; another equally common method of control is scripting. Another obvious use of game AI, especially in real-time strategy games, is pathfinding - determining how an NPC can get from one point on the map to another, taking into account obstacles, terrain and a possible "fog of war". Dynamic balancing of mobs also relies on artificial intelligence. Many games have tried the concept of unpredictable intelligence, among them Nintendogs, Black & White, Creatures and the well-known Tamagotchi toy. In these games, the characters are pets whose behavior changes according to the player's actions. The characters seem able to learn, when in fact their actions are the result of a choice from a limited set of options.
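Pathfinding can be sketched with a simple breadth-first search on a grid map; obstacles stand in for terrain, and real games typically use A* with terrain costs instead (the level layout here is invented):

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first pathfinding on a grid of strings; '#' cells are obstacles.
    Returns the shortest list of (row, col) steps, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while frontier:
        cur = frontier.popleft()
        if cur == goal:                # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None

level = ["....",
         ".##.",
         "...."]
path = find_path(level, (0, 0), (2, 3))
print(path)
```

A "fog of war" would simply restrict which cells the search is allowed to know about when expanding neighbors.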

Many game programmers consider any technique that creates the illusion of intelligence to be part of game artificial intelligence. This view is not entirely correct, since the same techniques are used outside game AI engines. For example, bots are created with algorithms that take in information about possible future collisions, giving them the "ability" to avoid those collisions - but these same techniques are an important and necessary component of a physics engine. Another example: an important component of a bot's aiming system is visibility data, and the same data is widely used by the graphics engine when rendering. The final example is scripting, a tool that can be applied successfully in all aspects of game development but is most often considered as one way of controlling the actions of NPCs.

According to purists, the expression "game artificial intelligence" has no right to exist, being an exaggeration. Their main argument is that game AI uses only some areas of the science of classical artificial intelligence. It should also be borne in mind that the goals of AI include creating self-learning systems and even artificial intelligence capable of reasoning, while game AI is often limited to heuristics and a handful of rules of thumb - enough to create good gameplay and give the player vivid impressions and a feel for the game.

Currently, computer game developers are showing interest in academic AI, and the academic community, in turn, is beginning to take an interest in computer games. This raises the question of how far game and classical AI differ. At the same time, game artificial intelligence is still considered a sub-branch of the classical field, because artificial intelligence has various areas of application that differ from one another. An important distinguishing feature of game intelligence is the admissibility of cheating to solve some problems by "legitimate" means. On the one hand, the disadvantage of cheating is that it often leads to unrealistic character behavior and for that reason cannot always be used. On the other hand, the very possibility of such cheating is an important difference of game AI.

Another interesting task of artificial intelligence is teaching a computer to play chess. Scientists from all over the world were engaged in its solution. The peculiarity of this task is that the demonstration of the logical abilities of the computer is possible only in the presence of a real opponent. The first such demonstration took place in 1974, in Stockholm, where the World Chess Championship among chess programs was held. This competition was won by the Kaissa program, created by Soviet scientists from the Institute of Management Problems of the USSR Academy of Sciences, located in Moscow.

Artificial intelligence in machine creativity.

The nature of human intellect has not yet been studied sufficiently, and the nature of human creativity even less so. Nevertheless, machine creativity is one of the areas of artificial intelligence. Modern computers create musical, literary and pictorial works, and the computer game and film industries have long used machine-generated realistic images. Existing programs create various images that a person can easily perceive and understand. This is especially important for intuitive knowledge, whose formalized verification would require considerable mental effort. Thus, musical tasks are successfully solved using programming languages such as CSound. Special software for creating musical works includes algorithmic composition programs, interactive composition systems, and sound synthesis and processing systems.

Expert systems.

Researchers have been developing modern expert systems since the early 1970s, and in the early 1980s expert systems began to be developed commercially. The prototypes of expert systems were the mechanical devices proposed in 1832 by the Russian scientist S. N. Korsakov, called "intelligent machines", which made it possible to find a solution under given conditions. For example, the symptoms observed in a patient were analyzed, and the most appropriate medicines were suggested based on this analysis.

Computer science considers expert systems together with knowledge bases. Expert systems are models of expert behavior based on decision-making procedures and logical inference. Knowledge bases are sets of inference rules and facts directly related to the chosen field of activity.
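The interplay of a knowledge base (facts plus inference rules) with an inference procedure can be sketched by a toy forward-chaining loop; the medical-style facts and rules below are invented for illustration:

```python
# A knowledge base: each rule maps a set of required facts to a conclusion.
rules = [
    ({"fever", "stiff neck"}, "suspect meningitis"),
    ({"suspect meningitis"}, "recommend lumbar puncture"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "stiff neck"}, rules))
```

Real expert system shells such as CLIPS work on the same principle but add pattern matching over structured facts and conflict resolution between competing rules.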

By the end of the last century a certain concept of expert systems had developed, deeply oriented towards the text-based human-machine interface that was generally accepted at the time. This concept has since undergone a serious crisis, apparently because in user applications the text interface has been replaced by a graphical one. Moreover, the relational data model and the "classical" view of expert system construction fit each other poorly, so the knowledge bases of expert systems cannot be organized effectively, at least with modern industrial database management systems.

The literature and online sources give numerous examples of expert systems described as "common" or "widely known". In fact, all of these were created back in the 1980s and by now have either ceased to exist or are hopelessly outdated, surviving thanks to a few enthusiasts. On the other hand, developers of modern software products often call their creations expert systems. Such statements are no more than a marketing ploy, since in reality these products are not expert systems (any of the computer-based legal reference systems can serve as an example). Enthusiasts are trying to combine modern approaches to user interfaces with "classical" approaches to building expert systems. These attempts are reflected in projects such as CLIPS.NET and the CLIPS Java Native Interface, but large software companies are in no hurry to fund such projects, so development does not move beyond the experimental stage.

The whole variety of areas in which knowledge-based systems can be applied falls into classes: medical diagnostics, planning, forecasting, control and management, training, interpretation, and fault diagnosis in electrical and mechanical equipment. Let's look at each of these classes in more detail.

a) Medical diagnostic systems.

Such systems determine various disturbances in the body's activity and their possible causes. The best-known diagnostic system is MYCIN. It is used to diagnose meningitis and bacterial infections and to monitor the condition of patients with these diseases. The first version of the system was developed in the 1970s. Since then its capabilities have expanded significantly: the system makes diagnoses at the same professional level as a specialist doctor and can be applied in different areas of medicine.

b) Predictive systems.

These systems are designed to predict events, or the outcomes of events, from available data characterizing the current situation or the state of an object. Thus the Wall Street Conquest program, which uses statistical algorithms in its work, is able to analyze market conditions and develop an investment plan. The program uses the algorithms and procedures of traditional programming, so it cannot be classed as a knowledge-based system. Programs already exist that can predict passenger flow, crop yields and the weather by analyzing available data. Such programs are quite simple, and some of them can run on ordinary personal computers. However, there are still no expert systems that could suggest, from market data, how to increase capital.

c) Planning.

Planning systems are designed to solve problems with a large number of variables in order to achieve specific results. Such systems were first used commercially by the Damascus firm Informat, whose management installed 13 stations in the office lobby to provide free consultations for customers wishing to purchase a computer. The machines helped make a choice best suited to the buyer's budget and wishes. Expert systems have also been used by Boeing for purposes such as helicopter repair, determining the causes of aircraft engine failures, and designing space stations. DEC created the XCON expert system, which can determine and reconfigure VAX computer systems to meet customer requirements. DEC is currently developing the more powerful XSEL system, which includes the XCON knowledge base and is meant to help customers select a computing system of the required configuration. Unlike XCON, XSEL is interactive.

d) Interpretation.

Interpretive systems are able to draw conclusions from the results of observation. One of the best-known interpretive systems is PROSPECTOR, which works with data based on the knowledge of nine experts. The system's effectiveness can be judged by one example: using nine different examination methods, it discovered an ore deposit whose presence no expert had suspected. Another well-known interpretive system is HASP/SIAP, which uses data from acoustic tracking systems to determine the location and types of ships in the Pacific Ocean.

e) Intelligent control and management systems.

Expert systems are successfully used for control and management. They can analyze data received from several sources and make decisions based on the results of the analysis. Such systems can carry out medical monitoring and control aircraft movement; in addition, they are used at nuclear power plants. They are also used to regulate the financial activity of an enterprise and to work out solutions in critical situations.

f) Diagnosis and troubleshooting of electrical and mechanical equipment.

Knowledge-based systems are used in cases such as:

  • repair of diesel locomotives, automobiles and other electrical and mechanical devices;
  • diagnostics and elimination of errors and malfunctions in computer software and hardware.

g) Computer systems of education.

The use of knowledge-based systems for educational purposes is quite effective. The system analyzes the behavior and activity of its subject and, in accordance with the information received, changes the knowledge base. The simplest example of such training is a computer game in which the levels become more difficult as the player's skill increases. An interesting training system, EURISCO, was developed by D. Lenat. It uses simple heuristics. The system was applied in a game simulating combat, the essence of which is to determine the optimal composition of a flotilla that could inflict defeat while observing numerous rules. The system coped successfully with this task, including in the flotilla one small vessel and several ships capable of attacking. The rules of the game changed every year, but the EURISCO system won for three years in a row.

There are many expert systems that by the content of their knowledge can be attributed to several types at once. For example, a system that performs planning can also be a learning system: it can determine a student's level of knowledge and draw up a curriculum based on that information. Control systems are used for planning, forecasting, diagnostics and management. Systems designed to protect a house or apartment can track changes in the environment, predict how the situation will develop, and draw up a plan of further action. For example, if a window has opened and a thief is trying to enter the room through it, the police must be called.

The widespread use of expert systems began in the 1980s, when they were first introduced commercially. ES are used in many areas, including business, science, technology, manufacturing and other industries characterized by a well-defined subject area. In this context, “well-defined” means that a person can divide the course of reasoning into separate stages, and thus any problem that is within the scope of this area can be solved. Therefore, a computer program can perform similar actions. It is safe to say that the use of artificial intelligence opens up endless possibilities for humanity.

The development of artificial intelligence is a matter of time. Sooner or later, machines will be able to compete on equal terms with humans in activities that require thought processes. Recently, Oxford University mathematics professor Marcus du Sautoy suggested that sentient technologies could be legally equated with humans.



Computer "self-awareness"

According to many scientists, sooner or later technologies will be able to develop their intelligence independently. This process is called the "technological singularity". "At some point, we will be able to say that this thing has awareness of itself, and perhaps this will be the line beyond which this consciousness arises," says du Sautoy.

But how can you tell whether a machine is "self-aware"? Currently, the Turing test is used to determine the level of artificial intelligence. Its essence is that an expert evaluates a conversation between a person and a machine on certain topics without knowing in advance which of the two is a computer program and which is a human operator. If the expert finds it difficult to say which is which, the test is considered passed.

According to the American inventor and futurist Ray Kurzweil, by 2029 there will be machines that can pass the Turing test, and by the 2040s, artificial intelligence will surpass human intelligence by a billion times.

The latest generation of such systems uses structures that mimic the neural activity of the brain, so a scanning process could reveal the presence of consciousness. How? In a person, for example, neurons work differently in the conscious and unconscious (say, sleeping) states. If a computer brain reacts the way a human brain does in consciousness, that would mean consciousness exists!

Three types of artificial intelligence

And what, in fact, should be understood by the phrase "artificial intelligence"? According to experts, it can be of three types.

The first type is narrowly focused AI, capable of performing only certain functions. Examples are electronic assistants, car-parking robots, and programs that play chess.

The second type is general AI, the closest to human intelligence. These are primarily humanoid robots that resemble us as closely as possible. They can act as porters in hotels, consultants in stores, or lifeguards, and they will be taught to imitate human emotions to make interaction with people more constructive.

The third type is superintelligence. This is exactly what some futurologists and science fiction writers fear: the capabilities of such an intelligence would far exceed human ones. Most likely, such "highly intelligent" devices would eventually unite into a powerful network like Skynet from "Terminator"...

No Skynet!

To begin with, let's imagine that computers have become able to recognize themselves as "persons" and to "understand" when they are being harmed. Say, when they are not cleaned in time, or when someone bangs a fist on the case after a freeze... Or simply overloads the processor and memory with work...

If the concept of "cruelty to animals" exists, then why not the concept of "cruelty to computers"? At the same time, do not forget that artificial intelligence is probably much smarter than any animal. And if so, then it will be necessary to provide electronic systems with the opportunity to protect their rights!

"AI computers could very soon have their own code of 'rights' that could allow them to sue you for neglecting them," Du Sautoy predicts.

However, maybe everything is not so terrible? At the recent Code 2016 conference, entrepreneur Elon Musk, who last year founded OpenAI, a nonprofit organization whose goal is to create and develop friendly artificial intelligence, announced that in the future people and high technology should learn to interact closely with each other. In particular, a person of the future will be able to connect a virtual avatar, integrated into a special network, to his own brain.

Avatar actions will be controlled by intelligent programs that will not allow them to harm anyone or anything. "The development of technologies related to artificial intelligence should not be scary," Musk said. "Its presence and evolution does not necessarily mean that in the future we will all get something like Skynet."

  • Mustafina Nailya Mugattarovna, bachelor, student
  • Bashkir State Agrarian University
  • Sharafutdinov Aidar Gazizyanovich, Candidate of Sciences, Associate Professor, Associate Professor
  • Bashkir State Agrarian University


Today, technological progress is rapidly developing. Science does not stand still and every year people come up with more and more advanced technologies. One of the new directions in the development of technological progress is artificial intelligence.

Humanity first heard about artificial intelligence more than 50 years ago, at a conference held in 1956 at Dartmouth College, where John McCarthy gave the term a clear and precise definition: "Artificial intelligence is the science of creating intelligent machines and computer programs. For the purposes of this science, computers are used as a means to understand the features of human intelligence; at the same time, the study of AI should not be limited to the use of biologically plausible methods."

The artificial intelligence of modern computers is at a fairly high level, but not yet at the level where its behavioral abilities match even those of the most primitive animals.

Research on "artificial intelligence" is driven by the desire to understand the working of the brain, to reveal the secrets of human consciousness, and to address the problem of creating machines with a certain level of human intelligence. The fundamental possibility of modeling intellectual processes follows from the fact that any function of the brain, any mental activity that can be described in a language with strictly unambiguous semantics using a finite number of words, can in principle be transferred to an electronic digital computer.

Currently, some artificial intelligence models have been developed in various fields, but a computer has not yet been created capable of processing information in any new field.

Among the most important classes of tasks posed to the developers of intelligent systems since artificial intelligence was defined as a scientific direction, the following areas should be singled out:

  • Proof of theorems. The study of methods for proving theorems has played an important role in the development of artificial intelligence. Many informal problems, for example medical diagnostics, are solved using the methodological approaches that were used to automate theorem proving. Finding a proof of a mathematical theorem requires not only deduction from hypotheses but also intuition about which intermediate statements should be proved on the way to the main theorem.
  • Image recognition. The use of artificial intelligence for pattern recognition has made it possible to create practically working systems for identifying graphic objects by similar features. Any characteristics of the objects to be recognized can serve as features, but the features must be invariant to the orientation, size and shape of the objects. The alphabet of features is formed by the system developer, and the quality of recognition depends largely on how well it is chosen. Recognition consists of first obtaining a feature vector for a single object selected in the image and then determining which of the standards in the alphabet of features this vector corresponds to.
  • Machine translation and understanding of human speech. The task of analyzing human speech sentences using a dictionary is a typical task of artificial intelligence systems. To solve it, an intermediary language was created to facilitate the matching of phrases from different languages. In the future, this intermediary language turned into a semantic model for representing the meanings of texts to be translated. The evolution of the semantic model has led to the creation of a language for the internal representation of knowledge. As a result, modern systems carry out the analysis of texts and phrases in four main stages: morphological analysis, syntactic, semantic and pragmatic analysis.
  • Game programs. Most game programs are based on a few basic ideas of artificial intelligence, such as enumeration of options and self-learning. One of the most interesting tasks in this field is teaching a computer to play chess; it dates back to the early days of computing, in the late 1950s. Chess has well-defined levels of skill and degrees of playing quality, which provide clear criteria for assessing the intellectual growth of a system. For this reason, scientists all over the world worked actively on computer chess, and the results of their work are used in other intelligent systems of real practical importance.
  • Machine creativity. One of the areas of application of artificial intelligence is software systems that can independently create music, poetry, stories, articles, and even theses and dissertations. Today there is a whole class of musical programming languages (for example, Csound). Special software has been created for various musical tasks: sound processing and sound synthesis systems, interactive composition systems, and programs for algorithmic composition.
  • Expert systems. Artificial intelligence methods have found application in automated consulting, or expert, systems. The first expert systems were developed as research tools in the 1960s. They were artificial intelligence systems specifically designed to solve hard problems in a narrow subject area, such as medical diagnosis of diseases. The classic goal of the direction had initially been a general-purpose artificial intelligence system able to solve any problem without domain-specific knowledge, but with the limited computing resources of the time that problem proved too difficult to solve with acceptable results.
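The recognition step described in the image-recognition item above, matching an extracted feature vector against the references of a feature alphabet, can be sketched in a few lines. The feature names, reference values, and two-class alphabet below are invented purely for illustration:

```python
import math

# Hypothetical feature "alphabet": one reference feature vector per class.
# The classes and the two feature dimensions are illustrative only.
REFERENCES = {
    "circle":   [0.95, 0.10],
    "triangle": [0.40, 0.90],
}

def classify(features):
    """Return the class whose reference vector lies nearest (by Euclidean
    distance) to the extracted feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCES, key=lambda name: dist(REFERENCES[name], features))

print(classify([0.90, 0.15]))  # nearest to the "circle" reference
```

In a real system the vector would come from an image-processing front end, and the matching rule would typically be a trained classifier rather than plain nearest-reference distance.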
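The "enumeration of options" mentioned in the game-programs item is classically implemented as minimax search: exhaustively trying every move and assuming both sides play optimally. A minimal sketch on tic-tac-toe (a toy stand-in for chess, where full enumeration is infeasible):

```python
def winner(board):
    """Return 'X' or 'O' if a line of three is complete, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Enumerate all moves; return (score, best_move) for `player`.
    Score is +1 if 'X' wins, -1 if 'O' wins, 0 for a draw."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = None  # undo the trial move
        if (best is None
                or (player == "X" and score > best[0])
                or (player == "O" and score < best[0])):
            best = (score, m)
    return best
```

For example, with `["X", "X", None, "O", "O", None, None, None, None]` and X to move, the search finds the winning move at cell 2. Chess programs use the same idea but must prune the tree (e.g. alpha-beta) and cut off the search with a heuristic evaluation.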
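The algorithmic-composition programs mentioned in the machine-creativity item often rest on very simple generative models. A toy sketch using a first-order Markov chain over note names; the transition table is invented for illustration and has no musical authority:

```python
import random

# Invented transition table: which notes may follow each note.
TRANSITIONS = {
    "C": ["D", "E", "G"],
    "D": ["E", "C"],
    "E": ["F", "G", "C"],
    "F": ["G", "E"],
    "G": ["C", "A"],
    "A": ["G", "C"],
}

def compose(start="C", length=8, seed=None):
    """Generate a melody by repeatedly sampling the next note from the
    transition table of the current note."""
    rng = random.Random(seed)
    melody = [start]
    while len(melody) < length:
        melody.append(rng.choice(TRANSITIONS[melody[-1]]))
    return melody

print(" ".join(compose(seed=42)))
```

Systems such as Csound work at a much lower level (sound synthesis), while composition programs layer generative models like this one, or learned statistical models, on top.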
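The expert systems described in the last item typically encode narrow domain knowledge as if-then rules and derive conclusions by forward chaining. A minimal sketch in that spirit; the medical-style facts and rules are invented for illustration:

```python
# Each rule: (set of premise facts, derived conclusion). Illustrative only.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Fire every rule whose premises hold, adding its conclusion as a new
    fact, until no rule can add anything (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"fever", "cough", "short_of_breath"}, RULES)))
```

Note how the second rule fires only after the first has derived `flu_suspected`: chaining inferences through intermediate facts is what lets a narrow rule base answer questions no single rule covers.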

We can say that a key goal of developing artificial intelligence is optimization: imagine, for example, how a person could study other planets or extract precious metals without being exposed to danger.

Thus, we can conclude that the study and development of artificial intelligence is important for society as a whole. After all, such systems can make human life both safer and easier.
