Recently, there has been a lot of discussion regarding artificial intelligence (AI) and how it might either bring about a world without work or soon devolve into a nightmare (the likes of which have been captured in countless Hollywood blockbusters like 2001: A Space Odyssey and the Terminator franchise). Regardless of where you stand on the utopia-dystopia spectrum, it is undeniable that AI is here to stay and is almost guaranteed to change civilization in ways that many people find difficult to fathom.
Given this, it is critical for all of us to have at least a basic understanding of what AI is and how it functions, including marketers, whose business is already feeling the effects of the AI revolution. Understanding some of the language used in this strange new technological environment is the first step.
Here is some key AI terminology you should be familiar with [this dictionary will be updated frequently, so we suggest checking back regularly]:
A/B testing: A type of randomized experiment in which human subjects are presented with two variations of a specific model, A and B, to determine which one performs better.
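To make this concrete, here is a minimal sketch (with made-up conversion counts) of how an A/B test’s results might be evaluated in Python, using SciPy’s chi-squared test of independence:

```python
# Hypothetical A/B test: did variant B really convert better than A?
from scipy.stats import chi2_contingency

# Rows are variants; columns are [conversions, non-conversions]
observed = [
    [120, 880],  # variant A: 12.0% conversion rate
    [150, 850],  # variant B: 15.0% conversion rate
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"p-value: {p_value:.3f}")  # roughly 0.058 with these made-up numbers
# A small p-value (conventionally < 0.05) suggests the difference
# between A and B is unlikely to be due to chance alone.
```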
Algorithm: A set of rules or instructions, frequently executed by a computer, for carrying out calculations or processing data in order to solve a class of problems.
AlphaGo: An AI system created by DeepMind for the express purpose of playing the traditional Chinese board game Go. In 2015, AlphaGo became the first AI model to defeat a professional human Go player (Chinese-born Fan Hui). The following year, it defeated Lee Sedol, then a top South Korean professional. In 2019, Sedol announced his retirement from competitive Go, telling the South Korean news agency Yonhap that an AI that specializes in the game “is an entity that cannot be defeated.”
Artificial General Intelligence (AGI): Sometimes known as Strong AI, AGI is the ability of an AI program to reason on a level with the average adult human. In other words, an AGI would theoretically be capable of solving problems across a wide range of domains, just as a human brain can. (We have yet to develop one.)
Artificial Narrow Intelligence (ANI): Also known as Weak AI, an AI program designed to carry out a single, limited task, such as playing chess or handling customer service inquiries. The term covers all artificial intelligence systems created to date.
Artificial Neural Network (ANN): A computational system made up of layers of artificial neurons, loosely modeled on the structure of biological brains.
Artificial Superintelligence (ASI): First put forth by Oxford philosopher Nick Bostrom, ASI is a theoretical intellect more advanced than human intelligence. An ASI could be just a little bit smarter than the ordinary person, or it could be immensely, incomprehensibly smarter, akin to the cognitive gap between an ant and Nobel Prize winner Roger Penrose.
Association rule learning: An unsupervised, rule-based machine learning technique that looks for patterns or relationships between variables in a dataset.
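As an illustration, here is a minimal sketch in plain Python of the two core metrics such rules are built on, support and confidence, computed over a tiny invented set of shopping transactions:

```python
# Tiny hypothetical transaction history for association rule mining
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"milk", "butter"},
    {"bread", "butter", "jam"},
]

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """How often the consequent appears when the antecedent does."""
    return support(antecedent | consequent) / support(antecedent)

# Candidate rule: customers who buy bread also tend to buy butter
print(support({"bread", "butter"}))       # 0.6
print(confidence({"bread"}, {"butter"}))  # 0.75
```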
Automatic Speech Recognition (ASR): The ability of a machine to recognize human speech and transcribe it into text. Sometimes known as computer speech recognition, speech-to-text, or simply speech recognition, ASR is what powers the iPhone’s dictation feature, for instance.
Backpropagation: The method through which a neural network registers that it made a prediction error and then corrects it. The error signal is propagated backward through the network, from the output layer toward the layers where the error originated, so that each connection can be adjusted in proportion to its contribution to the mistake. Often referred to as “backprop” or “BP” in casual usage.
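For the curious, here is a minimal sketch in Python (using NumPy) of backpropagation at work: a tiny two-layer network learns the XOR function by repeatedly propagating its error backward and nudging its weights. The layer sizes and learning rate here are arbitrary toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# Weights and biases: the parameters backprop will tune
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(20_000):
    # Forward pass: compute a prediction
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: send the error back toward where it originated
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)

    # Adjust every weight in proportion to its share of the blame
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(output.round(2).ravel())  # should approach [0, 1, 1, 0]
```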
Bayes’ theorem: A mathematical formula used to calculate what is known as “conditional probability”: the likelihood of a specific outcome given one’s prior knowledge of results that occurred in similar circumstances. It is named after the 18th-century statistician Thomas Bayes.
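Formally, the theorem states that P(A|B) = P(B|A) × P(A) / P(B). A short worked example in Python, using invented numbers for a hypothetical medical test, shows how it updates a prior belief:

```python
# Hypothetical medical test: how likely is the disease given a positive result?
p_disease = 0.01             # prior: 1% of the population has the disease
p_pos_given_disease = 0.95   # test sensitivity
p_pos_given_healthy = 0.05   # false positive rate

# Total probability of testing positive (law of total probability)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(disease | positive test)
posterior = p_pos_given_disease * p_disease / p_pos
print(f"{posterior:.1%}")  # about 16.1%, far lower than intuition suggests
```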
Black box: The term “black box” refers to a system whose inner workings are opaque, and ultimately enigmatic, even to the system’s designers. Because models frequently operate and evolve in ways that even their creators cannot completely comprehend or predict, AI is sometimes referred to as a “black box.”
Central Processing Unit (CPU): The most significant component of a digital computer. Sometimes known as the “brain” or “control center” of a computer, the CPU performs a computing system’s arithmetic (adding, subtracting, multiplying, and dividing), manages access to its memory, and orchestrates its operating system. In contemporary computers, the CPU is built on a microprocessor.
Chatbot: An artificial intelligence (AI)-based computer program that uses natural language processing (NLP) to simulate human conversation, often when answering customer service inquiries.
ChatGPT: In November 2022, San Francisco-based startup OpenAI released ChatGPT, an AI-powered chatbot. NLP is used by ChatGPT to mimic human conversation. ChatGPT can “answer follow-up questions, admit mistakes, challenge false premises, and reject inappropriate requests,” according to the OpenAI website.
Computer vision: Computer vision is the area of AI that focuses on giving machines the ability to comprehend and react to data produced from visual inputs, such as images and video, in a way that is comparable to the visual system in the human brain.
Convolutional Neural Network (CNN): A subset of artificial neural networks that is frequently employed in automated visual processing, allowing an AI model to distinguish and examine different elements inside an image.
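As a rough sketch, here is what a tiny CNN might look like in PyTorch, sized for 28×28 grayscale images such as handwritten digits (the layer sizes are arbitrary choices, not a canonical architecture):

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # detect local patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine into richer features
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyCNN()
fake_batch = torch.randn(8, 1, 28, 28)  # a batch of 8 random "images"
print(model(fake_batch).shape)          # torch.Size([8, 10]): one score per class
```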
Dall-E 2: Developed by OpenAI and released in 2022, Dall-E 2 is a deep learning model that creates images from text-based natural language prompts. Its forebear was the original Dall-E. The names of the two models are puns on the surrealist painter Salvador Dalí’s last name and the name of the title character of the Pixar movie WALL-E.
The Dartmouth Summer Research Project on Artificial Intelligence: Also known as the Dartmouth Workshop, a conference held at Dartmouth College in the summer of 1956 that is widely regarded as the catalyst for the development of AI as a field of study. The conference was organized by Marvin Minsky, John McCarthy, Nathaniel Rochester, and Claude Shannon.
Deep Blue: A chess-playing artificial intelligence program created by IBM. It made history in 1997 when it became the first computer program to defeat a reigning world chess champion, Garry Kasparov, in a match.
Deep learning: A subset of machine learning based on the idea that models become more capable when they are given many-layered networks and access to large amounts of data. A neural network must have at least three layers to qualify as “deep”; generally, the more layers a network has, the more complex the patterns it can capture. (Deep learning is sometimes conflated with deep reinforcement learning, which more narrowly refers to the combination of deep learning with reinforcement learning.)
DeepMind: A London-based artificial intelligence research laboratory established in 2010 by Demis Hassabis, Shane Legg, and Mustafa Suleyman. Google purchased the business in 2014, and it is now a wholly owned subsidiary of Alphabet Inc., Google’s parent company. As DeepMind describes itself on its website, its team of “scientists, engineers, ethicists and more” is “committed to solving the intelligence problem to advance science and benefit humanity.”
Decision tree: A metaphorical representation of the process of making a choice, where each “branch” stands for a certain course of action. All the pertinent data that is being analyzed is the “root node” of a decision tree, which then branches off into “internal nodes” (also known as “decision nodes”) and terminates in “leaf nodes” (also known as “terminal nodes,” which represent all the potential outcomes of a given decision-making process).
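Here is a minimal sketch using scikit-learn that trains a small decision tree on the classic iris flower dataset and prints its branches, root node to leaf nodes:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)  # keep the tree small
tree.fit(iris.data, iris.target)

# Print the learned decision rules: internal nodes test a measurement,
# leaf nodes give the predicted species
print(export_text(tree, feature_names=iris.feature_names))
```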
Entropy: In the context of machine learning, “entropy” describes the degree of randomness, disorder, and unpredictability within a dataset being processed by a machine learning system. The term is borrowed from the second law of thermodynamics, which essentially states that the level of disorder or unpredictability within a closed system will never diminish over time, only remain constant or increase.
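A short Python sketch of Shannon entropy (measured in bits) makes the idea concrete:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy, in bits, of a list of labels."""
    counts = Counter(labels)
    total = len(labels)
    return sum((c / total) * log2(total / c) for c in counts.values())

print(entropy(["spam"] * 8))                # 0.0: perfectly predictable
print(entropy(["spam"] * 4 + ["ham"] * 4))  # 1.0: maximally uncertain
print(entropy(["spam"] * 7 + ["ham"] * 1))  # ~0.54: mostly predictable
```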
Game Theory: Game Theory is a mathematical concept that describes the dynamic interaction between two or more rational agents seeking their own gains within a parameterized (rule-governed) framework. It was first proposed by mathematician John von Neumann and economist Oskar Morgenstern in 1944. A wide range of games, including zero-sum and nonzero-sum, are defined by game theory.
Generative adversarial network (GAN): A machine learning technique that pits two neural networks against one another in a zero-sum game, where one network’s loss is the other’s gain and vice versa. Both networks are given a dataset, and the “generator” network’s job is essentially to deceive the “discriminator” network into thinking that newly generated data belongs to the original dataset. For instance, if the generator creates a new image of a human face based on numerous photographs of actual human faces, the discriminator will attempt to ascertain whether the newly created image is fake or real. Training continues until the generator can fool the discriminator roughly half the time, that is, until the discriminator can do no better than chance (50%) at telling real data from generated data. The American computer scientist Ian Goodfellow, also known as “The GANfather,” created GANs in 2014.
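The following is a heavily simplified sketch of a GAN in PyTorch. Instead of faces, the generator here learns to mimic a simple one-dimensional bell-curve distribution, which keeps the adversarial back-and-forth easy to see (network sizes and learning rates are arbitrary toy choices):

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(3000):
    real = torch.randn(64, 1) * 0.5 + 4.0  # "real" data: Gaussian centered at 4
    fake = generator(torch.randn(64, 8))   # the generator's forgeries

    # Discriminator's turn: learn to label real as 1 and fake as 0
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator's turn: try to make the discriminator call its fakes real
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(fake.mean().item())  # should drift toward 4.0 as the generator improves
```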
GPT-3: Released by OpenAI in 2020, GPT-3 (Generative Pre-Trained Transformer 3) is a large language model. The model serves as the conceptual foundation for the popular chatbot ChatGPT, which can produce natural-language text responses to text-based prompts.
Hallucination: Any output from an AI model that appears to be at odds with, or unsupported by, its training data is referred to as a “hallucination.” For instance, a hallucinating chatbot might confidently and incorrectly assert that the Milky Way galaxy contains approximately 5.7 trillion stars, even though nothing in its training data supports that figure.
Human-in-the-loop (HITL): A technique used in some machine learning models in which at least one human programmer gives the model input while it is being tested or trained in order to enhance its performance. Ideally, HITL produces a positive feedback loop that raises the level of intelligence in both machines and people.
Hyperparameter: A broad, governing configuration value chosen by a human programmer before training begins, such as the learning rate or the number of layers, which constrains the ordinary parameters that an AI model will set and fine-tune on its own during training.
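A brief scikit-learn illustration of the distinction between the two (the specific values here are arbitrary):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier

X, y = load_iris(return_X_y=True)

# Hyperparameters: chosen by a human before training begins
model = SGDClassifier(alpha=0.001, max_iter=1000, random_state=0)

# Parameters: tuned by the model itself during training
model.fit(X, y)
print(model.coef_.shape)  # the learned weights the model set on its own
```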
Machine learning: A branch of artificial intelligence that allows computers to gradually become better at performing a specific task or set of tasks by applying statistical techniques to data. Importantly, a computer using machine learning does not need to be explicitly programmed to improve in a particular way; rather, it is given access to data and made to “teach” itself. The outcomes frequently surprise its human developers.
Machine translation (MT): An automatic method that uses AI to translate text or speech from one language into another is known as machine translation, or MT.
Microprocessor: A central processing unit (CPU) for digital computing systems housed on a single integrated circuit (also known as a microchip; hence the prefix “micro”). In 1971, Intel unveiled the 4004, widely regarded as the first commercially produced microprocessor.
Midjourney: A text-to-image AI model released in open beta in 2022 by the independent research lab of the same name.
Moore’s Law: The observation, typically attributed to Intel co-founder and former CEO Gordon Moore, that the number of transistors that can be packed onto an integrated circuit (i.e., a microchip) doubles roughly every two years.
Natural language processing (NLP): The goal of natural language processing (NLP), a subfield of artificial intelligence that also incorporates components of linguistics and computer science, is to give computers the ability to comprehend spoken and written language in a way that mimics how the human brain processes language.
OpenAI: An AI research laboratory launched as a non-profit in 2015 by Sam Altman, Elon Musk, and others. As its name implies, OpenAI’s initial mission was to open-source its research and collaborate with other organizations in the field of artificial intelligence. In 2019, the company established a “capped-profit” subsidiary, OpenAI Limited Partnership (OpenAI LP). (Musk has expressed regret about this move on Twitter.)
Parameter: A variable internal to an AI model that is adjusted throughout the training process; by tuning its parameters, a model improves its ability to produce the desired output from a given dataset.
Pattern recognition: Pattern recognition is the automatic method by which a computer can find patterns in a collection of data.
Prior probability (also sometimes referred to simply as a prior): A term used in the field of Bayesian statistics for the likelihood assigned to an occurrence before (prior to) further (posterior) information makes it necessary to revise that likelihood.
Reinforcement learning (RL): A way of training machine learning models to make the best choices possible in a changing environment. When employing RL, a programmer typically gives a machine learning model a game-like scenario in which one outcome is preferred over others. The model then tries out various tactics, with the programmer using rewards to “encourage” desired behavior and penalties to “discourage” undesirable behavior.
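Here is a minimal sketch of tabular Q-learning, one classic RL technique, written in plain Python. The “game” is a five-cell corridor in which the agent starts at cell 0 and is rewarded only for reaching cell 4:

```python
import random

random.seed(0)
n_states, actions = 5, [-1, +1]        # the agent can step left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def best_action(state):
    """Greedy action choice with random tie-breaking."""
    top = max(Q[(state, a)] for a in actions)
    return random.choice([a for a in actions if Q[(state, a)] == top])

for episode in range(200):
    state = 0
    while state != 4:
        # Explore occasionally; otherwise exploit the best-known action
        action = random.choice(actions) if random.random() < epsilon else best_action(state)
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == 4 else 0.0  # "encourage" reaching the goal
        # Nudge the estimate toward the reward plus discounted future value
        target = reward + gamma * max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = next_state

print([best_action(s) for s in range(4)])  # learned policy: [1, 1, 1, 1], always go right
```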
Self-supervised learning: A subset of machine learning in which unlabeled data is given to an AI model, which is then left to label the data using its own pattern recognition skills. The algorithm then uses these initial labels to make sense of subsequent batches of data.
Semi-supervised learning: A subset of machine learning that, as the name suggests, combines aspects of both supervised and unsupervised learning. It relies on the input of a small amount of labelled data and a larger quantity of unlabeled data; the goal is to teach an algorithm to sort the unlabeled data into predetermined categories based on the labelled examples, while also allowing it to identify new patterns across the dataset. Semi-supervised learning is usually seen as a bridge between the advantages of supervised and unsupervised learning.
Supervised learning: A subset of machine learning that relies on the input of clearly labelled data; the goal is to train algorithms to recognize patterns and reliably label new data.
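A minimal scikit-learn sketch: the model is trained on labelled examples, then asked to label data it has never seen:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # y holds the human-provided labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)         # learn from the labelled examples
print(model.score(X_test, y_test))  # accuracy on data the model never saw
```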
Stochastic: A mathematical term describing a system’s propensity to produce random results. (Roughly equivalent to “probabilistic,” “indeterminate,” and “random.”) Many AI algorithms are described as stochastic because they are built to incorporate some level of randomness into their learning processes. In contrast, the outcomes of a deterministic system can be accurately predicted in advance.
TensorFlow: A Google-created open-source platform for building, training, and deploying AI and machine learning systems.
Turing test: A blind test, devised by and named after 20th-century mathematician Alan Turing, in which a human evaluator poses a series of written questions to an unseen interlocutor that may be either a person or an artificially intelligent machine. The AI passes the Turing test if the human evaluator cannot reliably distinguish its responses from a human’s.
Uncanny valley: The strange, unsettling sensation people experience when encountering an artificial figure that closely (albeit imperfectly) resembles a human. The concept was first proposed by roboticist Masahiro Mori in 1970.
Unsupervised learning: Unsupervised learning is a subset of machine learning that relies on unlabeled data as input. Unsupervised learning, as opposed to supervised learning, enables an algorithm to develop its own rules for finding patterns and classifying data.
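As a minimal sketch mirroring the supervised-learning example above, k-means clustering in scikit-learn is handed the same measurements with no labels at all and must invent its own groupings:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)  # the labels are deliberately discarded
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
clusters = kmeans.fit_predict(X)   # the algorithm's own invented categories
print(clusters[:10])               # e.g., [1 1 1 1 1 1 1 1 1 1]
```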
Value alignment problem: The term “value alignment problem,” or simply “alignment problem,” was coined by computer scientist Stuart Russell to describe the challenges involved in making sure that intelligent machines share the same values and objectives as their human programmers. “Alignment research” is a branch of artificial intelligence and machine learning that was inspired by this issue.