Artificial Intelligence (AI) refers both to the study of intelligent agents and to the intelligent agents themselves. An “intelligent agent” is any device that is designed to achieve some goal, receives information from its environment as input, and outputs a response that maximizes its chance of achieving that goal.
Currently, AI can be categorized into three groups: narrow-, general-, and super-artificial intelligence.
Artificial Narrow Intelligence (ANI), or Weak AI, describes the least computationally capable class of AI systems. It includes most contemporary machine learning and deep learning models, which serve single, narrow functions such as object classification and speech recognition.
Artificial General Intelligence (AGI), or Strong AI, describes systems that would pass the Turing Test, producing outputs indistinguishable from those of an adult human being. As of publication, no publicly known AGI has been developed.
Artificial Super Intelligence (ASI), or Superintelligence, is another form of AI yet to be developed, with intelligence that would produce outputs vastly surpassing the capacity of any human being.
The first true instance of AI is debatable. Some consider the mechanism behind “Ars generalis ultima” (The Ultimate General Art), published by Ramon Llull in 1308, to be an artificial intelligence, since it offered mechanical means to create new knowledge from logic and combinatorial techniques.
In 1914, Spanish engineer Leonardo Torres y Quevedo demonstrated the first chess-playing machine in Paris. It could track the state of a chess game and play a king-and-rook endgame against a lone king from any position, without human intervention.
In 1950, Alan Turing published “Computing Machinery and Intelligence”, introducing the concept of the imitation game. This later became known as the Turing Test, which assesses a machine’s ability to display behavior and produce output indistinguishable from that of an adult human being.
The years from 1956 to 1974 are considered a renaissance period of artificial intelligence, with developments such as semantic nets, programs that could solve algebra word problems, and search algorithms that let machines approach problems much like solving a maze.
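The maze analogy can be made concrete with a short illustrative sketch (a modern Python example, not a historical program): a breadth-first search treats each position as a node, explores neighboring positions level by level, and returns the shortest path from start to goal.

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search over a grid maze.

    grid: list of rows, where 0 is an open cell and 1 is a wall.
    Returns the shortest path from start to goal as a list of
    (row, col) tuples, or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])  # queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        # Try the four neighboring cells: down, up, right, left.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None

# 0 = open cell, 1 = wall
maze = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
path = solve_maze(maze, (0, 0), (0, 2))
```

Because breadth-first search expands positions in order of distance from the start, the first path that reaches the goal is guaranteed to be a shortest one.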
Following this period, between 1974 and 2011, the field of AI experienced lulls and bursts of progress, with computing power and the amount of available data acting as considerable bottlenecks. These constraints eased around 2011, and agents such as IBM’s Deep Blue (which defeated world chess champion Garry Kasparov in 1997) and DeepMind’s AlphaGo (which defeated Go champion Lee Sedol in 2016) showed machines matching and exceeding the best human board game players in the world.
As AI continues to develop and impact more areas of society, it is important for users, and especially developers, of AI to consider the ethical ramifications of its growth. AI ethics is discussed in further detail in the pages below.
- AI Ethics
- Foundation Models
- Generative AI
- Machine Learning
- Neural Networks
- Prompt Engineering
- Search Algorithms