Recent advances in the sophistication and capacity of artificial intelligence (AI) platforms, together with the public release of interactive generative AI tools, have renewed public interest in the field. The current generation of AI algorithms and tools descends from pioneering work in cognitive science, computer science, economics, game theory, and mathematics dating back to the 1950s.
Key Developments in Artificial Intelligence
1950 - Turing Test: Alan Turing proposed the Turing Test in his paper "Computing Machinery and Intelligence," providing a criterion to evaluate a machine's ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human.
1956 - Dartmouth Conference: This conference, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, is considered the official birth of artificial intelligence as a separate field of study. It is where the term "artificial intelligence" was coined.
Late 1950s - Early AI Programs: The development of early AI programs, such as the Logic Theorist by Newell and Simon in 1956 and the General Problem Solver (GPS) in 1957, marked significant progress in the field.
1966 - ELIZA: Joseph Weizenbaum created ELIZA, an early natural language processing computer program. It demonstrated the superficiality of communication between humans and machines but was nonetheless a significant step in the development of conversational agents.
1980s - Machine Learning Takes Off: The shift towards machine learning, with the development of algorithms that could learn from and make predictions on data, was a major step forward. This era also saw the rise of neural networks.
1997 - Deep Blue Beats Kasparov: IBM's Deep Blue defeated world chess champion Garry Kasparov, showcasing the potential of AI in mastering complex games that require strategic thinking.
2006 - Renaissance of Neural Networks: Work on deep architectures, popularized under the term "deep learning," led to a resurgence of interest in neural network research, driven by increased computing power and the availability of large datasets.
2012 - AlexNet and Deep Learning: The success of AlexNet, a deep convolutional neural network, in the ImageNet competition significantly advanced the field of computer vision and deep learning.
2016 - AlphaGo Beats Lee Sedol: Google DeepMind's AlphaGo defeated world champion Lee Sedol in the game of Go, a feat that was previously thought to be at least a decade away due to the game's complexity.
2020s - Generative AI and Large Language Models: The advent and widespread use of large language models like GPT-3 and generative AI tools have significantly impacted various industries and daily life, demonstrating the practical applications of AI.
These milestones highlight the evolution of AI from theoretical concepts to practical applications, showcasing the rapid advancements and expanding capabilities of AI systems.
Modern AI platforms are composed of multiple groups of algorithms with different goals. At their simplest, these platforms take training data, use machine learning algorithms to "learn" from those data, and encode what they have learned in a model, which then uses this knowledge to generate output. Below are some simple definitions of key ideas related to modern AI platforms. See the IBM link below for more information.
Artificial intelligence (AI) is a field of study dedicated to creating computer programs or other machine-driven forms of intelligence.
Deep Neural Networks employ many layers of interconnected artificial neurons to deal with complex subjects.
Generative AI is a type of AI system that generates text, images, or other media in response to user prompts.
Large Language Models, such as those behind ChatGPT, apply deep neural networks to text data and generate output from prompts.
Machine learning is a subfield of AI focused on designing algorithms capable of learning from data.
Natural Language Processing refers to a branch of artificial intelligence concerned with giving computers the ability to understand written text and spoken language much as humans can.
Neural Networks are an approach to machine learning that uses many simple but densely connected processing units to solve complex problems.
Supervised learning is a machine learning technique in which the training data are labeled with the desired outputs, so the algorithm learns a mapping from inputs to known answers.
Training data are the examples that a machine learning algorithm digests in order to learn.
Unsupervised learning is a machine learning technique in which the algorithm receives unlabeled training data and discovers structure, such as clusters or patterns, on its own.
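The train-then-predict pipeline and the supervised/unsupervised distinction above can be sketched in a few lines of code. This is a minimal illustration with made-up toy data and hyperparameters, not the method of any real AI platform: the supervised half fits a slope to labeled (input, output) pairs by gradient descent, and the unsupervised half groups unlabeled points with a simple one-dimensional k-means loop.

```python
# Supervised learning: the training data carry labels (x, y pairs),
# so the algorithm learns a mapping from inputs to known answers.
training_data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # toy data, roughly y = 2x

w = 0.0                # model parameter (slope) to be learned
learning_rate = 0.01
for _ in range(1000):  # gradient descent on squared error
    grad = sum(2 * (w * x - y) * x for x, y in training_data)
    w -= learning_rate * grad

print(round(w, 2))      # learned slope, close to 2.0
print(round(w * 10, 1)) # the trained model generates output for a new input

# Unsupervised learning: no labels are given; the algorithm
# discovers structure (here, two clusters) in the raw data.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
c0, c1 = points[0], points[-1]           # initial cluster centers
for _ in range(10):                      # simple 1-D k-means
    a = [p for p in points if abs(p - c0) <= abs(p - c1)]
    b = [p for p in points if abs(p - c0) > abs(p - c1)]
    c0, c1 = sum(a) / len(a), sum(b) / len(b)
print(round(c0, 1), round(c1, 1))        # two cluster centers emerge
```

Note that the supervised loop needed the correct answers (the y values) to compute its error, while the clustering loop was given only the points themselves; that difference is the heart of the supervised/unsupervised distinction.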