20 Foundational AI Terms You Should Know

#1 Artificial Intelligence (AI)

Artificial Intelligence (AI) is the ability of machines or computer systems to perform tasks that typically require human intelligence and decision-making. This includes things like understanding natural language, recognizing images and patterns, and making predictions based on data. AI is used in a wide range of applications, from self-driving cars to virtual personal assistants like Siri or Alexa.

#2 Algorithm

An algorithm is a set of instructions or rules that a computer program follows to complete a task. In the context of AI, algorithms describe the steps a machine takes to process data and learn from it. For example, an algorithm might help a computer recognize faces in a set of images by looking for certain patterns or features.
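
To make that concrete, here is a tiny algorithm written out as explicit steps in Python. It is a toy example chosen for clarity, not tied to any particular AI system:

```python
def find_largest(numbers):
    """A simple algorithm: scan the list once, keeping the biggest value seen."""
    largest = numbers[0]          # step 1: start with the first value
    for n in numbers[1:]:         # step 2: examine each remaining value
        if n > largest:           # step 3: if it beats the current best...
            largest = n           # ...remember it
    return largest                # step 4: report the result

print(find_largest([3, 41, 7, 19]))  # prints 41
```

Every program you use, AI or not, is built out of step-by-step recipes like this one.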

#3 Chatbot

A chatbot is a computer program that can chat with humans through text messages or voice commands. Chatbots use natural language processing (NLP) to understand and respond to human language. They are often used for customer service or to provide assistance with simple tasks.
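
Real chatbots rely on NLP models, but the basic idea of mapping a message to a response can be sketched with naive keyword matching. The keywords and canned replies below are invented for illustration:

```python
def reply(message):
    """A minimal rule-based chatbot: match keywords, return a canned answer.
    Naive substring matching -- a real chatbot would use NLP instead."""
    text = message.lower()
    if "hello" in text or "hi" in text:
        return "Hello! How can I help you today?"
    if "hours" in text:
        return "We are open 9am-5pm, Monday to Friday."
    if "bye" in text:
        return "Goodbye!"
    return "Sorry, I didn't understand that. Could you rephrase?"

print(reply("Hi there"))
print(reply("What are your hours?"))
```

The gap between this and a modern chatbot is exactly what NLP (see #7) fills in.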

#4 Data Science

Data science is the practice of analyzing and interpreting large sets of data using statistical and computational techniques. It involves using mathematical and computer skills to find patterns and insights in data. In the context of AI, data science is used to train machine learning models and improve their accuracy.

#5 Decision Tree

A decision tree is a type of AI model that helps make decisions by following a hierarchy of choices and their possible outcomes. Decision trees are often used in fields like medicine or finance to help guide decision-making processes. For example, a decision tree might be used to help diagnose a medical condition based on a patient's symptoms.
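
A decision tree can be written out by hand as a chain of questions. The symptoms and outcomes below are invented purely for illustration and are not medical advice:

```python
def triage(fever, cough, short_of_breath):
    """A hand-written decision tree: each branch asks one question,
    and the leaves are the final decisions. Illustrative values only."""
    if fever:
        if short_of_breath:
            return "urgent care"
        else:
            return "see a doctor"
    else:
        if cough:
            return "rest at home"
        else:
            return "no action"

print(triage(fever=True, cough=False, short_of_breath=True))  # urgent care
```

In practice the questions and thresholds are not hand-written; they are learned automatically from data.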

#6 Game AI

Game AI is the use of AI in video games to create non-player characters (NPCs) that can act smart and react to player actions. Game AI is used to create realistic and engaging game worlds, and can range from simple decision-making processes to complex adaptive systems.

#7 Natural Language Processing (NLP)

Natural Language Processing (NLP) is the practice of teaching computers to understand and use human language. NLP is used in a wide range of applications, from chatbots and virtual assistants to language translation and sentiment analysis.

#8 Large Language Model (LLM)

A large language model (LLM) is a type of AI model that is trained to understand and create text. LLMs are often used for tasks like language translation or text generation. For example, GPT-3 is a popular LLM that can generate high-quality text in a variety of languages and styles.

#9 Machine Learning (ML)

Machine learning (ML) is the practice of teaching computers to learn from experience. ML models are trained on large sets of data and use statistical and computational techniques to identify patterns and make predictions. ML is used in a wide range of applications, from fraud detection to image recognition.

#10 Supervised Learning

Supervised learning is a type of ML in which a computer is trained using examples with the right answers. For example, a computer might be trained to recognize different types of flowers by being shown a large set of images labeled with the correct flower type.
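
The flower example can be sketched as one of the simplest supervised learners: a 1-nearest-neighbour classifier. The petal measurements below are made up for illustration:

```python
# Labelled training examples: (petal length cm, petal width cm) -> species.
# These numbers are invented for illustration.
training_data = [
    ((1.4, 0.2), "setosa"),
    ((1.3, 0.2), "setosa"),
    ((4.7, 1.4), "versicolor"),
    ((4.5, 1.5), "versicolor"),
]

def predict(petal):
    """Label a new flower with the label of its closest training example."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = min(training_data, key=lambda ex: dist(ex[0], petal))
    return nearest[1]

print(predict((1.5, 0.3)))  # setosa
print(predict((4.6, 1.4)))  # versicolor
```

The key supervised-learning ingredient is the labels: every training example comes with the right answer attached.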

#11 Unsupervised Learning

Unsupervised learning is a type of ML in which a computer learns by finding patterns in data without being told what to look for. For example, a computer might be used to group similar customer profiles based on their purchasing behavior, without being told which groups to create.
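
The customer-grouping example can be sketched with a minimal one-dimensional k-means, which finds two groups without ever being told what they are. The yearly spending figures are invented for illustration:

```python
# Unsupervised grouping: 1-D k-means with k=2 clusters, no labels given.
spend = [12, 15, 11, 14, 80, 95, 88, 90]  # invented yearly purchase totals

def kmeans_1d(values, iters=10):
    """Split values into two groups around two moving centres."""
    lo, hi = min(values), max(values)        # start the centres at the extremes
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]  # nearer to lo
        b = [v for v in values if abs(v - lo) > abs(v - hi)]   # nearer to hi
        lo, hi = sum(a) / len(a), sum(b) / len(b)  # move centres to the means
    return a, b

light, heavy = kmeans_1d(spend)
print(light)  # the low spenders
print(heavy)  # the high spenders
```

Notice that the code never mentions "light" or "heavy" customers; the two groups emerge from the data itself.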

#12 Reinforcement Learning

Reinforcement learning is a type of ML in which a computer learns by trying things out and getting rewards or penalties. Reinforcement learning is often used in applications like robotics, where a machine must learn to navigate its environment and perform tasks.

#13 Transfer Learning

Transfer learning is a method in which knowledge obtained from one machine learning task is applied to a different but related task. It involves taking a pre-trained model, often developed for a specific task, and using it as the starting point for a new task. The pre-trained model has already learned features from a vast amount of data and can identify patterns that may be useful in the new task. By using transfer learning, we can train a model on a smaller dataset, which may not be sufficient for the new task alone, and still achieve good results.
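
The trial-and-error loop described under #12 Reinforcement Learning can be sketched as a two-armed bandit: the agent pulls arms, collects rewards, and gradually learns which arm pays off more. The payout probabilities (0.2 and 0.8) are invented for illustration:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def pull(arm):
    """The environment: arm 1 pays a reward 80% of the time, arm 0 only 20%."""
    chance = 0.8 if arm == 1 else 0.2
    return 1 if random.random() < chance else 0

value = [0.0, 0.0]   # the agent's estimate of each arm's payoff
counts = [0, 0]
for _ in range(500):
    # mostly exploit the best-looking arm, but explore 10% of the time
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = value.index(max(value))
    reward = pull(arm)
    counts[arm] += 1
    value[arm] += (reward - value[arm]) / counts[arm]  # running-average update

print(value)  # arm 1's estimated payoff ends up clearly higher
```

Nobody tells the agent which arm is better; the reward signal alone shapes its behaviour, which is the essence of reinforcement learning.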

#14 Overfitting

Overfitting is a common problem in machine learning where a model becomes too complex and starts to fit the training data too closely, resulting in poor generalization to new data. This can happen when a model is trained with a small amount of data or the model is too complex relative to the amount of data available. For example, if a machine learning model is trained on a dataset that contains only a few samples of a particular class, it may overfit to those samples and fail to generalize to new data. To avoid overfitting, techniques such as regularization and early stopping can be used.
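
An extreme sketch of overfitting is a "model" that simply memorizes its training data: it scores perfectly on the examples it has seen and fails on anything new. The tiny doubling dataset below is invented for illustration:

```python
# Training and test data for the rule y = 2x (invented for illustration).
train = {1: 2, 2: 4, 3: 6}
test = {4: 8, 5: 10}

def memorizer(x):
    """Overfit in the extreme: just look the answer up in the training data."""
    return train.get(x)

def simple_rule(x):
    """A model that actually learned the pattern."""
    return 2 * x

train_acc = sum(memorizer(x) == y for x, y in train.items()) / len(train)
test_acc = sum(memorizer(x) == y for x, y in test.items()) / len(test)
print(train_acc, test_acc)  # 1.0 0.0 -- perfect on train, useless on new data
print(all(simple_rule(x) == y for x, y in test.items()))  # True
```

Regularization and early stopping both push a model away from the memorizer end of this spectrum and toward the simple rule.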

#15 Deep Learning (DL)

Deep learning is a subfield of machine learning that involves training artificial neural networks with many layers. These layers allow the network to learn more complex representations of data, leading to better performance on a wide range of tasks. Deep learning has been particularly successful in areas such as computer vision and natural language processing, where large amounts of data are available.

#16 Artificial Neural Network (ANN)

An artificial neural network is a computing system loosely inspired by the structure of biological neural networks. It is made up of interconnected nodes, called neurons, which process and transmit information. ANNs are commonly used in deep learning applications and can be used for tasks such as image and speech recognition.
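
A single artificial neuron can be sketched in a few lines. Here one neuron learns the logical AND function using the classic perceptron update rule; the number of training passes is chosen arbitrarily:

```python
# A single artificial neuron (perceptron) learning the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0, 0]   # one weight per input
b = 0        # bias

def fire(x):
    """The neuron: weighted sum of inputs, then an on/off threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(10):                  # a few passes over the training examples
    for x, target in data:
        error = target - fire(x)     # how wrong was the neuron?
        w[0] += error * x[0]         # nudge each weight toward the answer
        w[1] += error * x[1]
        b += error

print([fire(x) for x, _ in data])    # [0, 0, 0, 1] -- it has learned AND
```

A full neural network is many of these neurons wired together in layers, trained with a more sophisticated version of the same "nudge the weights" idea.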

#17 Convolutional Neural Network (CNN)

A convolutional neural network is a type of artificial neural network that is designed for processing and analyzing images. It uses a technique called convolution, which involves applying a set of filters to an input image to extract features. CNNs have been particularly successful in computer vision applications, such as object recognition and image segmentation.
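
The convolution step itself can be sketched directly: slide a small filter across an image and record how strongly each patch matches. The 4x4 image and 3x3 filter below are hand-made toy values:

```python
# The core of a CNN: sliding a small filter over an image.
# 0 = dark pixel, 1 = bright pixel; this image has a vertical edge in the middle.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# A vertical-edge filter: responds when the right side is brighter than the left.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, k):
    out = []
    for r in range(len(img) - 2):            # slide the 3x3 filter...
        row = []
        for c in range(len(img[0]) - 2):     # ...over every position
            total = sum(img[r + i][c + j] * k[i][j]
                        for i in range(3) for j in range(3))
            row.append(total)
        out.append(row)
    return out

print(convolve(image, kernel))  # [[3, 3], [3, 3]] -- strong response at the edge
```

In a real CNN the filter values are not hand-picked like this; they are learned from data, and hundreds of filters are stacked in layers.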

#18 Recurrent Neural Network (RNN)

A recurrent neural network is a type of artificial neural network that is designed for processing sequential data, such as time series or natural language. RNNs are unique in that they have loops in their architecture, which allow them to maintain an internal state that can be updated with each new input. This internal state allows RNNs to remember information from previous inputs and use it to make predictions about future inputs.
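
The internal-state idea can be sketched with a single recurrent unit whose hand-picked (not learned) weights mix each new input with a memory of everything before it:

```python
import math

# The essence of an RNN: a hidden state carried from one input to the next.
# The two weights below are fixed by hand for illustration, not learned.
w_in, w_state = 0.5, 0.9

def run(sequence):
    h = 0.0                                    # internal state starts empty
    for x in sequence:
        h = math.tanh(w_in * x + w_state * h)  # new state mixes input and memory
    return h

# The final input is the same (1) in both sequences, yet the outputs differ,
# because the state remembers what came before.
print(run([0, 0, 1]))
print(run([1, 1, 1]))
```

That dependence on history is what makes RNNs suited to sequences like text and time series.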

#19 Generative Adversarial Networks (GANs)

Generative adversarial networks are a type of neural network used for generating new data that is similar to a given dataset. They consist of two neural networks, a generator and a discriminator, that are trained in tandem. The generator creates new data samples, while the discriminator attempts to distinguish the generated data from real data. Through this process, both networks become more skilled at their respective tasks, resulting in highly realistic synthetic data.

#20 Explainable AI (XAI)

Explainable AI is a subfield of artificial intelligence that focuses on creating algorithms and systems that are transparent and explainable to humans. XAI is becoming increasingly important as AI is used in more critical applications, such as healthcare and finance, where the decisions made by AI systems can have significant consequences. By making AI systems more transparent, XAI can help to build trust and confidence in these systems, and enable humans to better understand and verify the decisions made by AI.

This list and definitions were compiled by the creator of Business Anthropology, Anthony Galima for educational purposes.
