AI (artificial intelligence)
AI stands for artificial intelligence, which is the simulation of human intelligence processes by machines or computer systems. AI can mimic human capabilities such as communication, learning, and decision-making.
AI ethics
AI ethics refers to the issues that AI stakeholders such as engineers and government officials must consider to ensure that the technology is developed and used responsibly. This means adopting and implementing systems that support a safe, secure, unbiased, and environmentally friendly approach to artificial intelligence.
Algorithm
An algorithm is a sequence of rules given to an AI machine to perform a task or solve a problem. Common algorithms include classification, regression, and clustering.
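A classification algorithm can be as simple as a fixed sequence of rules. This is a minimal illustrative sketch (the categories and thresholds are invented for the example):

```python
# A minimal sketch of a classification algorithm: a fixed sequence of
# rules that assigns a category to an input (thresholds are illustrative).
def classify_temperature(celsius):
    """Apply rules in order until one assigns a category."""
    if celsius < 0:
        return "freezing"
    elif celsius < 20:
        return "cool"
    else:
        return "warm"

print(classify_temperature(-5))   # freezing
print(classify_temperature(25))   # warm
```

Real-world classification algorithms learn their rules from data rather than having them hand-written, but the idea is the same: input goes in, a category comes out.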
Application programming interface (API)
An API, or application programming interface, is a set of protocols that determine how two software applications will interact with each other. APIs can be built in and accessed from many programming languages, such as Python, C++, or JavaScript.
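The "protocol" part of an API is the agreed shape of requests and responses. This toy in-process example sketches the idea; the endpoint name and fields are hypothetical:

```python
# A toy API sketch: the caller and the service agree on the shape of
# requests and responses. The "get_user" action and its fields are
# hypothetical, invented for illustration.
def handle_request(request):
    """Validate the agreed request shape and return a structured response."""
    if request.get("action") != "get_user" or "user_id" not in request:
        return {"status": 400, "error": "malformed request"}
    return {"status": 200, "user": {"id": request["user_id"], "name": "example"}}

response = handle_request({"action": "get_user", "user_id": 7})
print(response["status"])  # 200
```

A real web API works the same way over HTTP: the documentation specifies what a valid request looks like and what the response will contain.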
Beneficial AI
Beneficial AI (BAI) goes beyond meeting ethical standards and short-term outcomes; it is about ensuring that AI technologies actively and consistently contribute to long-term human welfare, societal well-being, and environmental sustainability. BAI entails a proactive approach to minimizing adverse impacts and mitigating long-term negative implications. This means not only preventing harm but also fostering positive outcomes across various domains, including economic equity, social justice, and ecological health. By prioritizing these objectives, Beneficial AI aims to create a future where technology enhances the quality of life for all individuals and communities, driving progress in a way that is inclusive, responsible, and sustainable.
Big data
Big data refers to the large data sets that can be studied to reveal patterns and trends to support business decisions. It’s called “big” data because organizations can now gather massive amounts of complex data using data collection tools and systems. Big data can be collected very quickly and stored in a variety of formats.
Chatbot
A chatbot is a software application that is designed to imitate human conversation through text or voice commands.
Cognitive computing
Cognitive computing is essentially the same as AI. It’s a computerized model that focuses on mimicking human thought processes such as pattern recognition and learning. Marketing teams sometimes use this term to eliminate the sci-fi mystique of AI.
Computer vision
Computer vision is an interdisciplinary field of science and technology that focuses on how computers can gain understanding from images and videos. For AI engineers, computer vision allows them to automate activities that the human visual system typically performs.
Data mining
Data mining is the process of sorting through large data sets to identify patterns that can improve models or solve problems.
Data science
Data science is an interdisciplinary field of technology that uses algorithms and processes to gather and analyze large amounts of data to uncover patterns and insights that inform business decisions.
Deep learning
Deep learning is a subset of machine learning that imitates the way the human brain structures and processes information in order to make decisions. Instead of relying on an algorithm that can only perform one specific task, deep learning models can learn from unstructured data without supervision.
Emergent behavior
Emergent behavior, also called emergence, is when an AI system shows unpredictable or unintended capabilities.
Ethical AI
Ethical AI involves designing, developing, and deploying AI systems in ways that uphold moral principles, including fairness, justice, and respect for human rights and dignity. Ethical AI serves as the underpinning framework of Responsible AI (RAI).
Generative AI
Generative AI is a type of technology that uses AI to create content, including text, video, code, and images. A generative AI system is trained on large amounts of data so that it can find patterns for generating new content.
Guardrails
Guardrails refers to restrictions and rules placed on AI systems to make sure that they handle data appropriately and don’t generate unethical content.
Hallucination
Hallucination refers to an incorrect response from an AI system, or false information in an output that is presented as factual information.
Hyperparameter
A hyperparameter is a parameter, or value, that affects the way an AI model learns. It is usually set manually outside of the model.
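A classic example of a hyperparameter is the learning rate in gradient descent: it is chosen by hand before training and controls how large a step the model takes on each update. A minimal sketch:

```python
# The learning rate is a hyperparameter: set manually, outside the
# model, and it controls how quickly gradient descent moves.
def minimize(gradient, start, learning_rate, steps):
    """Simple 1-D gradient descent."""
    x = start
    for _ in range(steps):
        x -= learning_rate * gradient(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
best = minimize(lambda x: 2 * (x - 3), start=0.0, learning_rate=0.1, steps=100)
print(round(best, 3))  # 3.0
```

Set the learning rate too high and the optimization can overshoot and diverge; too low and it converges very slowly. Tuning such values is a routine part of training AI models.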
Image recognition
Image recognition is the process of identifying an object, person, place, or text in an image or video.
Large language model
A large language model (LLM) is an AI model that has been trained on large amounts of text so that it can understand language and generate human-like text.
Limited memory
Limited memory is a type of AI system that observes real-time events, stores that information in a database, and draws on it to make better predictions.
Machine learning
Machine learning is a subset of AI that incorporates aspects of computer science, mathematics, and coding. Machine learning focuses on developing algorithms and models that help machines learn from data and predict trends and behaviors without being explicitly programmed for each task.
Natural language processing
Natural language processing (NLP) is a type of AI that enables computers to understand spoken and written human language. NLP enables features like text and speech recognition on devices.
Neural network
A neural network is a deep learning technique designed to resemble the structure of the human brain. Neural networks are trained on large data sets to perform calculations and create outputs, which enables features like speech and vision recognition.
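The basic unit of a neural network is an artificial neuron: a weighted sum of inputs passed through an activation function. The weights below are hand-picked for illustration; in a real network they would be learned from data:

```python
import math

# A single artificial neuron: weighted sum plus bias, squashed by a
# sigmoid activation. Weights here are hand-picked, not learned.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))    # sigmoid: output in (0, 1)

# With these weights the neuron behaves like a soft AND gate.
print(round(neuron([1, 1], [4.0, 4.0], -6.0), 2))  # 0.88
print(round(neuron([0, 1], [4.0, 4.0], -6.0), 2))  # 0.12
```

A full neural network stacks many of these neurons into layers, with the outputs of one layer feeding the inputs of the next.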
Overfitting
Overfitting occurs in machine learning training when the algorithm performs well only on the specific examples in its training data. A properly functioning AI model should be able to generalize patterns in the data to tackle new tasks.
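An extreme toy illustration of the failure mode: a "model" that simply memorizes its training examples scores perfectly on them but has nothing to say about new inputs, while a model that learned the underlying pattern transfers:

```python
# Toy overfitting illustration: memorization vs. generalization.
train = {1: 2, 2: 4, 3: 6}          # training pairs following y = 2x

def overfit_model(x):
    """Memorizes the training pairs; knows nothing about unseen inputs."""
    return train.get(x)              # returns None off the training set

def general_model(x):
    """Learned the underlying pattern rather than the examples."""
    return 2 * x

print(overfit_model(2), general_model(2))    # 4 4   (both fit training data)
print(overfit_model(10), general_model(10))  # None 20 (only one generalizes)
```

Real overfitting is subtler (the model gives wrong answers on new data rather than none), but the cause is the same: fitting the training examples too closely instead of the pattern behind them.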
Pattern recognition
Pattern recognition is the method of using computer algorithms to analyze, detect, and label regularities in data. This informs how the data gets classified into different categories.
Predictive analytics
Predictive analytics is a type of analytics that uses technology to predict what will happen in a specific time frame based on historical data and patterns.
Prescriptive analytics
Prescriptive analytics is a type of analytics that uses technology to analyze data for factors such as possible situations and scenarios, past and present performance, and other resources to help organizations make better strategic decisions.
Prompt
A prompt is an input that a user feeds to an AI system in order to get a desired result or output.
Quantum computing
Quantum computing is the process of using quantum-mechanical phenomena such as entanglement and superposition to perform calculations. Quantum machine learning runs machine learning algorithms on quantum computers, which can, for certain problems, perform calculations much faster than classical computers.
Reinforcement learning
Reinforcement learning is a type of machine learning in which an algorithm learns by interacting with its environment and then is either rewarded or penalized based on its actions.
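The reward-driven loop can be sketched with a tiny two-action problem: the agent tries actions, receives noisy rewards, and updates its value estimates until the better action dominates. The reward values here are invented for illustration:

```python
import random

# Minimal reinforcement-learning sketch: two actions, noisy rewards.
# The agent learns value estimates from reward feedback alone.
random.seed(0)
values = [0.0, 0.0]          # the agent's estimated value of each action
rewards = [0.2, 0.8]         # true mean rewards (unknown to the agent)
alpha = 0.1                  # learning rate for the value updates

for _ in range(500):
    # epsilon-greedy: mostly exploit the best-looking action, sometimes explore
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: values[a])
    reward = rewards[action] + random.gauss(0, 0.1)   # noisy reward signal
    values[action] += alpha * (reward - values[action])  # move estimate toward reward

print(values[1] > values[0])  # True: the agent learned action 1 pays more
```

The occasional random "explore" step matters: without it, the agent could settle on the first action that ever paid off and never discover the better one.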
Responsible AI
Responsible AI (RAI) refers to the development and use of artificial intelligence technologies in a manner that is ethical, transparent, accountable, and respects privacy and human rights.
Responsible and Beneficial AI
Responsible and Beneficial AI involves the ethical development, deployment, and governance of AI technologies to prevent harm and actively promote the public good. Ethical AI serves as the minimum expectation, ensuring transparency, inclusivity, and equitable access. Beneficial AI extends beyond these ethical standards by considering both short-term and long-term societal impacts. The goal is to create AI systems that not only adhere to ethical principles but also contribute positively to societal well-being while anticipating and mitigating potential adverse impacts on society.
Sentiment analysis
Also known as opinion mining, sentiment analysis is the process of using AI to analyze the tone and opinion of a given text.
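The simplest form of sentiment analysis counts positive and negative words against a lexicon. Production systems use trained models, but this sketch (with an invented word list) shows the idea:

```python
# A toy lexicon-based sentiment scorer. Real systems use trained
# models; these word lists are illustrative only.
POSITIVE = {"great", "good", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "hate", "poor"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # positive
print(sentiment("terrible and bad service"))   # negative
```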
Structured data
Structured data is data that is defined and searchable. This includes data like phone numbers, dates, and product SKUs.
Supervised learning
Supervised learning is a type of machine learning in which labeled input-and-output data is used to train the model to produce correct outputs for new inputs. It is much more common than unsupervised learning.
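A minimal supervised example: a 1-nearest-neighbor classifier that labels a new point with the label of its closest training example. The points and labels are made up for illustration:

```python
# Supervised learning sketch: 1-nearest-neighbor classification on
# labeled training examples (the data and labels are illustrative).
train = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
         ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

def predict(point):
    """Return the label of the closest training example."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = min(train, key=lambda example: dist(example[0], point))
    return nearest[1]

print(predict((1.1, 0.9)))  # cat
print(predict((5.1, 4.9)))  # dog
```

The labels in the training data are what makes this "supervised": the algorithm is told the correct answer for every example it learns from.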
Token
A token is a basic unit of text that an LLM uses to understand and generate language. A token may be an entire word or parts of a word.
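Real LLM tokenizers (such as byte-pair encoding) learn their vocabulary from data; this deliberately crude sketch just splits long words into fixed-size pieces to show how text becomes a sequence of subword tokens:

```python
# A rough tokenization sketch. Real tokenizers (e.g. byte-pair
# encoding) learn subword units from data; this fixed 4-character
# split is purely illustrative.
def tokenize(text):
    tokens = []
    for word in text.split():
        while len(word) > 4:       # break long words into pieces
            tokens.append(word[:4])
            word = word[4:]
        tokens.append(word)
    return tokens

print(tokenize("tokenization splits text"))
# ['toke', 'niza', 'tion', 'spli', 'ts', 'text']
```

This is also why LLM usage is often billed per token rather than per word: a long word may cost several tokens.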
Training data
Training data is the information or examples given to an AI system to enable it to learn, find patterns, and create new content.
Transfer learning
Transfer learning is a machine learning technique in which a model applies knowledge it previously learned on one task to new tasks and activities.
Turing test
The Turing test was created by computer scientist Alan Turing to evaluate a machine’s ability to exhibit intelligent behavior comparable to a human’s, especially in language and conversation. When facilitating the test, a human evaluator judges conversations between a human and a machine. If the evaluator cannot reliably distinguish the machine’s responses from the human’s, the machine passes the Turing test.
Unstructured data
Unstructured data is data that is undefined and difficult to search. This includes audio, photo, and video content. Most of the data in the world is unstructured.
Unsupervised learning
Unsupervised learning is a type of machine learning in which an algorithm is trained on unclassified and unlabeled data, so it must find patterns in the data on its own, without supervision.
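A classic unsupervised algorithm is k-means clustering: given unlabeled points, it groups them around cluster centers with no labels provided. A minimal 1-D sketch with invented data:

```python
# Unsupervised learning sketch: k-means clustering on unlabeled 1-D
# data. No labels are given; the algorithm finds the groups itself.
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.8]
centers = [0.0, 10.0]                      # initial center guesses

for _ in range(10):
    # assignment step: attach each point to its nearest center
    groups = [[], []]
    for x in data:
        nearest = min(range(2), key=lambda i: abs(x - centers[i]))
        groups[nearest].append(x)
    # update step: move each center to the mean of its assigned points
    centers = [sum(g) / len(g) for g in groups]

print([round(c, 2) for c in centers])  # [1.0, 9.1]
```

The algorithm recovers the two natural groups in the data (values near 1 and values near 9) without ever being told they exist.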
Voice recognition
Voice recognition, also called speech recognition, is a method of human-computer interaction in which computers listen to and interpret human dictation (speech) and produce written or spoken outputs. Examples include Apple’s Siri and Amazon’s Alexa, voice assistants that enable hands-free requests and tasks.