A Quick Guide to AI
Last update 8/28/23
AI vs. Machine Learning vs. Deep Learning
Courses on AI & related topics
Artificial Intelligence (AI) is the capability of a computer system to mimic human cognitive functions such as learning and problem-solving. The term was coined in the mid-1950s.
Big data refers both to a movement in the technology industry and to data sets themselves. Big data sets are too large or complex for regular data processing and are defined by the so-called “three V’s”: volume, variety, and velocity. In short, a large amount of diverse data is collected and processed quickly.
A chatbot is a software application used to conduct an online chat conversation via text or text-to-speech, instead of providing direct contact with a person.
ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot launched by OpenAI in November 2022. It represents a significant advancement in the field of NLP because it can process large amounts of varied data, and it gathers data from users that can be used to further train and fine-tune its responses.
Deep learning is a subfield of machine learning that involves training artificial neural networks to learn and make predictions from large amounts of data. It utilizes multiple layers of interconnected neurons to automatically extract meaningful patterns and features, enabling the network to progressively understand complex relationships and perform sophisticated tasks. - From ChatGPT
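To make “multiple layers” concrete, here is a minimal sketch in Python (using NumPy, with made-up weights rather than a trained model) of data flowing through two layers of a small network:

```python
import numpy as np

def relu(x):
    # Non-linear activation: keeps positive values, zeroes out negatives.
    return np.maximum(0, x)

# Made-up weights for illustration; a real network learns these from data.
W1 = np.array([[0.5, -0.2, 0.1],
               [0.3,  0.8, -0.5]])     # layer 1: 2 inputs -> 3 hidden neurons
W2 = np.array([[0.7], [-0.4], [0.2]])  # layer 2: 3 hidden neurons -> 1 output

x = np.array([1.0, 2.0])   # one example with two input features

hidden = relu(x @ W1)      # first layer extracts simple features
output = hidden @ W2       # second layer combines them into a prediction
print(output)
```

A real deep learning system works the same way, just with many more layers and with weights learned automatically from large amounts of data.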
An expert system is an early type of AI program that emulates human decision-making; expert systems are considered among the first successful forms of AI. They reason through a knowledge base using if-then rules to solve complex problems.
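As a rough illustration of that if-then reasoning, here is a Python sketch with invented rules and facts (not taken from any historical system):

```python
# A toy knowledge base of if-then rules, loosely in the style of an
# expert system. The rules and facts here are invented for illustration.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "refer_to_doctor"),
]

facts = {"fever", "cough", "high_risk_patient"}

# Forward chaining: keep applying rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes "flu_suspected" and "refer_to_doctor"
```

Real expert systems of the 1970s and 1980s worked on this same principle, but with thousands of hand-written rules supplied by human experts.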
General AI (or Strong AI) is artificial intelligence that could perform intellectual tasks the way a human can. It is popular in science fiction, such as the Terminator series and Star Wars (like C-3PO), but it has not yet been achieved.
Machine learning (ML) is the process of using mathematical models of data to help a computer learn without direct instruction. It’s considered a subset of artificial intelligence (AI).
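For example, instead of being given the rule “y is about twice x,” a program can estimate that relationship from examples. A minimal Python sketch with made-up data points:

```python
# Learning from data instead of direct instruction: fit a line y = w * x
# to made-up example points using a simple least-squares estimate.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.0]   # roughly y = 2x, with some noise

# Closed-form estimate of the slope w that best fits the examples.
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(f"learned slope: {w:.2f}")          # close to 2
print(f"prediction for x=5: {w * 5:.2f}")
```

The program was never told the rule; it estimated it from the data, which is the core idea behind machine learning.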
Narrow AI (or Weak AI) is what we are currently capable of building. Narrow AI performs specific tasks, in some cases as well as or better than humans do. At its most basic, Narrow AI consists of a series of “if, then” statements.
Natural language processing (NLP) refers to giving computers the ability to understand text and spoken words in much the same way human beings can.
Neural networks use algorithms that mimic the human brain to solve problems. They have four components: inputs, weights, a threshold, and an output. In the simplest case, each input to the algorithm is a 0 or 1 (no or yes), and each input is weighted by its importance to the outcome. If the weighted sum of the inputs crosses the threshold, the neuron produces an output. IBM gives the example of deciding whether or not to order a pizza.
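Here is a minimal Python sketch of a single neuron making that pizza decision; the inputs, weights, and threshold are invented for illustration, in the spirit of IBM’s example:

```python
# A single artificial neuron deciding "order pizza?" Inputs are yes/no (1/0)
# answers; each is weighted by how much it matters to the decision.
inputs  = {"am_hungry": 1, "have_food_at_home": 0, "craving_pizza": 1}
weights = {"am_hungry": 3, "have_food_at_home": -2, "craving_pizza": 2}
threshold = 4

weighted_sum = sum(inputs[name] * weights[name] for name in inputs)
order_pizza = 1 if weighted_sum >= threshold else 0  # output: 1 = yes, 0 = no
print(order_pizza)  # 3 + 0 + 2 = 5 crosses the threshold: order the pizza
```

Chaining many such neurons together in layers gives the networks used in deep learning.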
Temperature refers to a parameter used in generating text with language models like GPT-3. It controls the randomness of the output by influencing the diversity of the generated responses. Higher temperature values result in more varied and creative outputs, while lower values lead to more deterministic and conservative responses. - From ChatGPT
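A minimal Python sketch of the idea, using made-up scores for four candidate next words (real language models do the same thing at a much larger scale):

```python
import math
import random

# Made-up model scores for four candidate next words.
scores = {"pizza": 2.0, "pasta": 1.5, "salad": 0.5, "anchovies": -1.0}

def sample_word(scores, temperature):
    # Dividing by the temperature before normalizing sharpens (low T)
    # or flattens (high T) the probability distribution over words.
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(sample_word(scores, temperature=0.2))  # almost always "pizza"
print(sample_word(scores, temperature=2.0))  # much more varied
```

With temperature near zero the model almost always picks its top choice; higher values spread probability across more words, which is where the extra variety and “creativity” come from.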
See also:
A Generative AI Primer, Jisc National Centre for AI
Artificial Intelligence (AI), EDUCAUSE
A Timeline of AI and Other Related Events
Source: “What’s the Difference Between Artificial Intelligence, Machine Learning and Deep Learning?” by Michael Copeland
Artificial intelligence has been around longer than most folks think.
400 BCE Earliest record of an automaton | Archytas of Tarentum, a friend of the ancient philosopher Plato, builds an automaton (Greek for “acting of one’s own will”) in the form of a mechanical pigeon. |
1726 “The engine” referenced in Gulliver’s Travels | In his novel, Gulliver’s Travels, Jonathan Swift writes about “the engine,” an early reference to the modern-day concept of the computer. Throughout the early 1700s, all-knowing machines are frequently described in popular literature. |
1921 Karel Čapek coins “robot” | Czech playwright Karel Čapek introduces the idea of “artificial people,” which he calls “robots,” in his science fiction play, Rossum’s Universal Robots. |
1927 First film robot in Metropolis | Metropolis, a science fiction film directed by Fritz Lang, depicts the first on-screen robot. This robot serves as inspiration for a variety of films and other entertainment, such as the character C-3PO from Star Wars and Janelle Monáe’s music and art. |
1950s The conception of AI | AI is conceptualized in the 1950s with the hope of creating technology on par with human intelligence, known as “General AI.” |
1950 Alan Turing introduces The Imitation Game | English computer scientist Alan Turing publishes “Computing Machinery and Intelligence,” in which he introduces The Imitation Game, a machine intelligence test now known as the Turing test. |
1952 Arthur Samuel’s checkers program | Arthur Samuel, an American computer scientist, creates a checkers-playing computer program – the first to be able to learn on its own. |
1955 John McCarthy coins “artificial intelligence” | John McCarthy and his team introduce the phrase “artificial intelligence” as the theme of a workshop that they hold the following year at Dartmouth College; McCarthy is generally credited with coining the term. |
1958 John McCarthy creates LISP | LISP (List Processing), the first programming language for AI research, is created by John McCarthy. |
1959 Arthur Samuel coins “machine learning” | While describing how machines could be taught to play checkers better than their programmers, Arthur Samuel coins the term “machine learning.” |
1960s US Military involvement | The United States Department of Defense begins concentrating more on training computers to mimic human reasoning. |
1966 Development of ELIZA | ELIZA, the first “chatterbot,” or chatbot, is developed by Joseph Weizenbaum. This program is a mock psychotherapist that uses natural language processing (NLP) to communicate with humans. |
1968 HAL 9000 | Stanley Kubrick’s 2001: A Space Odyssey is released, featuring HAL (Heuristically programmed ALgorithmic computer), which controls the ship’s systems and speaks with the crew using natural human language. |
1970s DARPA Projects | The Defense Advanced Research Projects Agency (DARPA) completes street mapping projects using AI. |
1979 The Stanford Cart implemented | The Stanford Cart, created by James L. Adams in 1961, is an early prototype of an autonomous vehicle; in 1979 it navigates a room full of chairs without human interference. |
1980s “AI Boom” | The 1980s is a period of rapid growth for AI, particularly with increased government funding support. Researchers are able to use deep learning techniques to teach computers how to learn from their mistakes for the first time. |
1980 WABOT-2 introduced | Researchers at Waseda University in Tokyo build WABOT-2, a humanoid robot that can communicate with people, read musical scores, and play an electronic organ. |
1985 AARON introduced | AARON, an autonomous drawing program developed by artist Harold Cohen, is presented at the conference of the American Association for Artificial Intelligence. |
1986 The first driverless car created and demonstrated | Ernst Dickmanns presents the first driverless car with his team from Bundeswehr University and with support from Mercedes-Benz. The following year, Dickmanns’ autonomous van travels down the Autobahn at nearly 90 km/h (roughly 55 mph). |
1987 - 1993 “AI Winter” | The government, public, and private sectors lose interest in AI, which leads to less funding and fewer discoveries. |
1988 Jabberwacky developed | Jabberwacky, a chatbot meant to mimic human communication in a humorous manner, is developed by programmer Rollo Carpenter. |
1997 Deep Blue beats Garry Kasparov | IBM’s Deep Blue becomes the first computer program to beat a reigning world chess champion, Garry Kasparov, at the game. |
2000 Y2K | Early computer programmers are limited in the amount of data they can store, with individual bits (the smallest increment of data on a computer, carrying a value of 0 or 1) costing up to one dollar each at one time. To cut down on space, programmers use only the last two digits of the four-digit year to store calendar data. Y2K, or the Year 2000 Problem, describes the issue this decision created: a computer cannot tell the difference between 1900 and 2000. Although the danger is exaggerated by the public, computer scientists avert serious technological disasters by the turn of the century. |
2000 Kismet developed | Kismet, the first robot to simulate human emotions with its face, is developed by American robotics scientist Cynthia Breazeal. |
2002 Roomba released | The first Roomba is released to the market in 2002. Roomba is an autonomous robot vacuum that can sense obstacles in its path. |
February 16, 2011 IBM Watson wins Jeopardy! | IBM Watson, an AI question-answering system that responds to natural language, beats two top champions of Jeopardy!, Brad Rutter and Ken Jennings. |
2011 Apple releases Siri | Siri, the first virtual assistant to become popular with the public, is released for Apple’s iOS products. |
2012 - present Resurgence of AI | AI has become popular again both publicly and privately since 2012. This period has also seen an increased interest in Deep Learning and Big Data. We are also seeing the development of Generative AI, which is capable of creating new data and content based on training data. |
2016 Sophia created | Sophia, created by Hanson Robotics, is the first robot to have a realistic human appearance as well as the ability to see, communicate, and replicate human emotions. She is also the first “robot citizen.” |
2020 OpenAI beta tests GPT-3 | OpenAI starts beta testing GPT-3, a Deep Learning model that can generate text nearly indistinguishable from human writing. |
See also:
Fordham IT Educational Technologies www.fordham.edu/edtech