Today, Artificial Intelligence is dominated by artificial neural networks and deep learning. However, that is not the whole picture. Symbolic AI dominated the field for most of its first six decades. This type of AI is called “classical AI,” “rule-based AI,” or “good old-fashioned AI.”
From the mid-1950s through the mid-1990s, symbolic AI was the dominant paradigm in AI research. In the 1960s and 1970s, researchers were confident that symbolic techniques would one day succeed in building a machine with artificial general intelligence, which was the field’s stated objective. Eventually, however, researchers abandoned the symbolic approach in favour of subsymbolic approaches because of its technical limitations.
What is symbolic AI?
Symbolic AI encodes human knowledge and behavioural rules directly into computer programs. It emulates the high-level, conscious reasoning people use when solving puzzles, making legal judgements, or doing arithmetic, and such systems performed exceptionally well on “clever” skills such as mathematics and IQ tests. In the 1970s, Newell and Simon formulated the physical symbol system hypothesis: “A physical symbol system has the necessary and sufficient means for general intelligent action.”
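To make this concrete, here is a minimal sketch of a rule-based symbolic system in Python. The family-relationship facts, the grandparent_rule function, and the forward-chaining loop are illustrative assumptions rather than anything described in the article; the point is only that the knowledge lives in explicit symbols and hand-written rules.

```python
# Facts are explicit symbolic triples: (relation, subject, object).
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def grandparent_rule(facts):
    """If X is a parent of Y and Y is a parent of Z, then X is a grandparent of Z."""
    derived = set()
    for (rel1, x, y1) in facts:
        for (rel2, y2, z) in facts:
            if rel1 == "parent" and rel2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

# Forward chaining: keep applying the rule until no new facts are derived.
while True:
    new_facts = grandparent_rule(facts) - facts
    if not new_facts:
        break
    facts |= new_facts

print(("grandparent", "alice", "carol") in facts)  # True
```

Everything the system “knows” is visible in the facts and rules, which is what makes such systems easy to inspect but brittle when a rule is missing or wrong.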
On the other hand, the symbolic approach failed badly at many tasks that humans handle readily, including learning, object recognition, and commonsense reasoning. This is Moravec’s paradox: high-level “intelligent” activities turned out to be easy for AI, while low-level “instinctive” tasks proved tremendously difficult.
Since the 1960s, the philosopher Hubert Dreyfus has maintained that human expertise depends on instinct rather than conscious symbol manipulation, and on having a “feel” for a situation rather than explicit symbolic knowledge. The problem has not gone away, however: sub-symbolic approaches can make many of the same perplexing errors as human intuition, such as algorithmic bias.
What is neuro-symbolic AI?
Neuro-symbolic AI combines deep-learning neural network architectures with symbolic reasoning techniques to produce a more capable AI. For instance, a neural network might be used to determine the shape or colour of an object.
Academics have therefore begun to investigate hybrid techniques that combine neural networks and symbolic AI. While deep learning excels at detecting large-scale patterns, it fails to capture the compositional and causal structure of data. Symbolic models capture compositional and causal structure well, but struggle with complex relationships in raw data.
A neuro-symbolic system, for example, might recognize objects using neural-network pattern recognition and then use symbolic reasoning to make better sense of them. Like a person, it can combine logic and language processing to answer a question. As a result, it is more efficient than a pure neural network and requires less training data.
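As an illustration of that pipeline, the sketch below pairs a stand-in for the neural perception stage with an explicit symbolic rule. The detections and the counting rule are hypothetical examples, not drawn from any specific system mentioned here; in a real system, neural_perception would be a trained network mapping pixels to symbols.

```python
def neural_perception(image):
    """Stand-in for the neural stage: a trained detector would map the image
    to a list of objects with symbolic attributes. Here the output is hard-coded."""
    return [
        {"id": 1, "shape": "cube", "colour": "red"},
        {"id": 2, "shape": "sphere", "colour": "red"},
        {"id": 3, "shape": "cube", "colour": "blue"},
    ]

def symbolic_reasoner(objects, query_shape, query_colour):
    """Symbolic stage: apply an explicit rule over the detected symbols,
    e.g. 'count the objects that are both <colour> and <shape>'."""
    return sum(
        1
        for obj in objects
        if obj["shape"] == query_shape and obj["colour"] == query_colour
    )

objects = neural_perception(image=None)             # perception: pixels -> symbols
answer = symbolic_reasoner(objects, "cube", "red")  # reasoning: symbols -> answer
print(answer)  # 1
```

The division of labour is the key idea: the network handles messy perception, while the reasoner applies rules that generalize without retraining.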
IBM and MIT Research
CLEVRER — CoLlision Events for Video REpresentation and Reasoning — is a benchmark developed by the MIT-IBM Watson AI Lab in collaboration with MIT CSAIL, Harvard University, and Google DeepMind. According to the paper, it tests how well AI systems recognize objects and analyse their behaviour. In contrast to standard deep learning systems, the neuro-symbolic models evaluated on CLEVRER needed only a fraction of the training data, and they were better at understanding causal relationships and solving problems with common sense.
Conclusion
Symbolic AI is simple and effective at solving toy problems. Its fundamental shortcoming, however, is that it does not generalize well.
Symbolic AI systems are also brittle: if one assumption or rule is wrong, it can invalidate the others and cause the whole system to fail, and such systems do not adapt well to change. It is also debatable whether a symbolic AI system genuinely “learns” or merely makes decisions according to surface rules that happen to pay off. Some people believe that symbolic AI is no longer viable, but nothing could be further from the truth. In reality, rule-based AI systems play a vital role in today’s applications, and many renowned scientists expect symbolic reasoning to remain a critical component of AI.
Moreover, while GANs have raised the complexity of tasks that neural networks can perform, neuro-symbolic AI may be able to handle jobs that are more complex still: it promises AI systems that tackle harder problems with less data and more common sense.
Source: indiaai.gov.in