History of Artificial Intelligence

Background of artificial intelligence:

The roots of artificial intelligence reach back to mythological stories and artificial beings such as Talos in ancient Greece. One of the main foundations for the emergence of artificial intelligence in the 20th century, however, was Aristotle's formal logic. In his syllogisms, Aristotle described the principles of right thinking, and in the 20th century this notion of right thinking became the basis for building machines that reason rationally, not necessarily machines that think and act like humans. Without this contribution to philosophy, there would be no modern computers today. (Leach, 2022)

 

Figure 1-Talos - www.en.wikipedia.org

Between antiquity and the 1st century AD, various mechanical devices and automata were built. Among the most important were the gear trains of the Hellenistic Greeks, whose mechanisms drove their astronomical instruments. (McCorduck, 2004)

                    

Figure 2-The Antikythera mechanism (fragment A – front and rear); visible is the largest gear in the mechanism, approximately 13 centimeters (5.1 in) in diameter.-www.wikipedia.org

               

Figure 3-Derek J. de Solla Price (1922–1983) with a model of the Antikythera mechanism-www.wikipedia.org

Through the 17th century, discoveries in many scientific fields advanced the idea of an intelligent machine. One of the most important came from Gottfried Leibniz in the late 17th century. By developing the binary number system in mathematics and philosophy, he paved the way for 20th-century programming. He also proposed the idea of a calculus of human reasoning, that is, dividing human thought into smaller, computable parts. This philosophical approach to analyzing the manner of human thinking became one basis for the formation of artificial intelligence in the 20th century. (McCorduck, 2004) Without Leibniz and his insight that human thought could be transformed into a form of calculation, it would not have been possible to build today's computers with their humanlike reasoning functions. (Leach, 2022)
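Leibniz's binary system is the same representation modern computers use internally. As an illustrative sketch (the function name here is my own, not from the source), a number can be rewritten using only the digits 0 and 1 by repeatedly taking remainders on division by two:

```python
def to_binary(n: int) -> str:
    """Represent a non-negative integer using only the digits 0 and 1,
    as in Leibniz's binary arithmetic."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder gives the next binary digit
        n //= 2                  # integer-divide to move to the next place value
    return "".join(reversed(bits))

print(to_binary(6))   # 110
print(to_binary(13))  # 1101
```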

Figure 4-Gottfried Wilhelm Leibniz-mediengeschichte.dnb.de

In 1854, George Boole introduced Boolean algebra to the world of mathematics. These calculations translate the operations of mental reasoning into a symbolic mathematical language. (McCorduck, 2004) He also worked out the details of propositional logic, or Boolean logic. (Russell, 2010) This logic is a branch of algebra that determines whether propositions are true or false, usually represented as 1 and 0, and it uses symbols such as X or Y in place of full sentences. For example, instead of writing "Socrates is a man," one can write "X is a man," substitute Socrates for X, and reach the same conclusion formally: Socrates is a man. This symbolic method was extended in 1879 by Gottlob Frege into first-order logic, which later figured in early artificial intelligence research. (Russell, 2010) Boolean algebra formulas are the basis of today's computer calculations. (Leach, 2022) Aristotle's formal logic, Leibniz's binary numbers, and Boole's algebra together formed the early basis of machine programming: through these discoveries, the workings of the human mind were translated into mathematical formulas. This transformation was the beginning of building intelligent machines.
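Boole's truth values of 1 and 0 map directly onto the logic every modern computer executes. A minimal sketch of the three basic Boolean operations (function names are my own for illustration):

```python
# Truth values follow Boole's convention: 1 is true, 0 is false.
def AND(x: int, y: int) -> int:
    return x & y          # true only when both propositions are true

def OR(x: int, y: int) -> int:
    return x | y          # true when at least one proposition is true

def NOT(x: int) -> int:
    return 1 - x          # true becomes false and vice versa

# Print the full truth table for "X and Y".
for x in (0, 1):
    for y in (0, 1):
        print(f"AND({x}, {y}) = {AND(x, y)}")
```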

Figure 5-George Boole -www.cosmosmagazine.com

Continuing through the historical background of artificial intelligence: in 1913, two prominent mathematicians, Bertrand Russell and Alfred North Whitehead, revolutionized Aristotle's formal logic by completing the three-volume "Principia Mathematica" ("Mathematical Principles"). This work laid the foundation for the mathematics of artificial intelligence and the construction of intelligent machines. (Russell, 2010)

                         

Figure 6-Alfred North Whitehead-www.wikipedia.org

In 1943, two scientists, the neurophysiologist Warren McCulloch and the logician Walter Pitts, began work on artificial neural networks. By publishing the article "A Logical Calculus of the Ideas Immanent in Nervous Activity," they established the foundations of artificial neural network design. (McCorduck, 2004) In the same year, Norbert Wiener, Arturo Rosenblueth, and Julian Bigelow coined the term "cybernetics"; Wiener also awakened public curiosity about artificially intelligent machines by publishing his book of the same name in 1948. Wiener played a critical role in the development of control theory in the years after World War II, and he was particularly interested in mechanical and biological control systems and their relationship to human perception. (Russell, 2010) To expand these interdisciplinary studies, John von Neumann, Wiener, McCulloch, and Pitts organized a series of conferences in which they introduced mathematical and computational models of human perception.
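The McCulloch–Pitts model treats a neuron as a simple threshold unit: it "fires" (outputs 1) when enough of its binary inputs are active. A minimal sketch of this idea (the function name is my own):

```python
def mcculloch_pitts(inputs: list, threshold: int) -> int:
    """A McCulloch-Pitts threshold unit: fires (returns 1) when the
    number of active binary inputs reaches the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, a threshold of 2 behaves like logical AND,
# while a threshold of 1 behaves like logical OR.
print(mcculloch_pitts([1, 1], threshold=2))  # 1  (AND fires)
print(mcculloch_pitts([1, 0], threshold=2))  # 0  (AND does not fire)
print(mcculloch_pitts([1, 0], threshold=1))  # 1  (OR fires)
```

McCulloch and Pitts showed that networks of such units can compute any Boolean function, which is why this simple model is considered a foundation of neural network design.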

Figure 7-John von Neumann-www.wikipedia.org

In 1948, John von Neumann said in response to questions at a conference: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that." His answer refers to the Church–Turing thesis of the 1930s, which states that any computation that can be carried out by an algorithm can also be carried out by a Turing machine. (Russell, 2010)

An excerpt from an article written by Fargol Amini
