This article presents some of my thoughts on artificial intelligence (AI). To begin, a distinction is drawn between strong and weak AI, as well as the related concepts of general and specific AI, emphasising that all existing manifestations of AI are weak and specialised. The major extant models are briefly presented, emphasising the critical role of corporality in achieving general AI.
The article also highlights the need to provide machines with common-sense knowledge in order to progress toward the ambitious goal of developing general AI. It then discusses recent trends in AI focused on the analysis of vast amounts of data, which have enabled spectacular advances in recent years, while also addressing the current limitations of this approach. The final section examines additional topics that are, and will remain, critical in AI, and concludes with a brief reflection on the risks associated with AI.
Keywords: AI, artificial intelligence
The ultimate goal of artificial intelligence (AI) is to develop a machine with a level of general intelligence comparable to that of a human being. This is one of the most ambitious scientific goals ever proposed. In difficulty, it is on a par with other great scientific aims, such as explaining the origin of life or of the Universe, or discovering the structure of matter. Over past centuries, this desire to build intelligent machines has led to the creation of models or metaphors of the human brain.
In the seventeenth century, for example, Descartes pondered whether a sophisticated mechanical system of gears, pulleys, and tubes could replicate thought. Two centuries later, the metaphor had become the telephone system, whose connections seemed analogous to those of a neural network. Today, the most prevalent model is computational, based on the digital computer, and that is the model we shall discuss in this article.
THE HYPOTHESIS OF THE PHYSICAL SYMBOL SYSTEM: WEAK AI VS. STRONG AI
Allen Newell and Herbert Simon (Newell and Simon, 1976) proposed the “Physical Symbol System” hypothesis, which states that “a physical symbol system possesses the necessary and sufficient means for general intelligent action.” In that sense, since human beings are capable of general intelligent behaviour, we too are physical symbol systems. Let us define what Newell and Simon mean by a Physical Symbol System (PSS). A PSS is composed of a set of entities called symbols, which may be combined through relations to form larger structures—much as atoms unite to form molecules. These structures can be altered by applying a set of processes: processes that generate new symbols, construct or modify relations between symbols, store symbols, and determine whether two symbols are identical or distinct.
These symbols are physical in the sense that they have an underlying physical-electronic layer (in the case of computers) or physical-biological layer (in the case of human beings). Indeed, while computers realise symbols using digital electronic circuits, humans do so using neural networks. Thus, the PSS hypothesis holds that the nature of the underlying layer (electronic circuits or neural networks) is irrelevant, provided it enables the processing of symbols. Bear in mind that this is a hypothesis and, as such, should be neither accepted nor rejected a priori. In either case, its validity or refutation must be established scientifically, through experimental testing. AI is the scientific discipline devoted to testing this hypothesis in the context of digital computers, that is, to determining whether a properly programmed computer is capable of general intelligent behaviour.
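The operations Newell and Simon attribute to a PSS—creating symbols, building relations between them, storing them, and testing identity—can be made concrete with a minimal sketch. The code below is purely illustrative and not from the original hypothesis; all class and method names are hypothetical, chosen for this example only.

```python
# Minimal illustrative sketch of a Physical Symbol System (PSS):
# elementary symbols combine via relations into larger structures,
# and a small set of processes creates, relates, stores, and compares them.

class SymbolSystem:
    def __init__(self):
        self.symbols = set()     # stored elementary symbols
        self.relations = set()   # (relation, symbol_a, symbol_b) triples

    def create(self, name):
        """Generate a new symbol and store it."""
        self.symbols.add(name)
        return name

    def relate(self, relation, a, b):
        """Construct a relationship between two stored symbols."""
        if a in self.symbols and b in self.symbols:
            self.relations.add((relation, a, b))

    def identical(self, a, b):
        """Determine whether two symbols are identical."""
        return a == b

# Usage: symbols uniting into a larger, molecule-like structure.
pss = SymbolSystem()
pss.create("H")
pss.create("O")
pss.relate("bonded-to", "H", "O")
print(pss.identical("H", "H"))                    # True
print(("bonded-to", "H", "O") in pss.relations)   # True
```

On this view, the same operations could be carried out by electronic circuits or by neural tissue; the sketch only shows that the symbol-level description is independent of any particular substrate.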