In the vast and dynamic realm of Artificial Intelligence (AI), one cannot escape the labyrinth of jargon, concepts, and terminology that defines its landscape. As AI continues to evolve and infiltrate various aspects of our lives, it brings with it a unique lexicon that can appear intimidating and perplexing to the uninitiated. However, understanding these crucial terms is essential for anyone seeking to navigate the intricacies of AI effectively.
AI Governance.
AI governance refers to the policies, processes, and frameworks that organisations and institutions implement to ensure the responsible, ethical, and transparent development and deployment of Artificial Intelligence systems.
Artificial Intelligence as a Service (AIaaS).
AIaaS is the delivery of Artificial Intelligence capabilities and tools via cloud-based platforms, enabling businesses and organisations to access and utilise AI technology without the need for in-house expertise or infrastructure.
Algorithm.
An algorithm is a step-by-step procedure for solving a problem or accomplishing a task. In AI, algorithms are used to process data, recognize patterns, and make decisions.
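A classic illustration of a step-by-step procedure is binary search, which repeatedly halves a sorted list until it finds the target. This minimal sketch (function and variable names are ours, chosen for illustration) shows the idea:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # inspect the middle element
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1  # discard the lower half
        else:
            hi = mid - 1  # discard the upper half
    return -1
```

Each iteration discards half of the remaining candidates, so the search finishes in logarithmic time.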
Artificial Intelligence (AI).
AI refers to the development of computer systems capable of performing tasks that typically require human intelligence. These tasks include problem-solving, learning, understanding natural language, recognizing patterns, and making decisions.
Bias and Variance.
Bias refers to the simplifying assumptions a model makes to render the target function easier to learn. Variance is the amount by which the estimate of the target function would change if different training data were used.
Big Data.
Big Data refers to the massive volume, variety, and velocity of data generated by various sources, including social media, IoT devices, and online transactions. AI techniques are used to analyse and extract insights from Big Data.
Cloud Computing.
Cloud computing is the delivery of computing resources, such as storage, processing power, and AI services, over the internet. Cloud-based AI platforms enable scalable, flexible, and cost-effective AI development and deployment.
Data Mining.
Data mining is the process of discovering hidden patterns, correlations, and trends in large datasets using statistical and ML techniques.
Data Preprocessing.
Data preprocessing involves cleaning, transforming, and normalising raw data to make it suitable for ML algorithms. Techniques include handling missing values, encoding categorical variables, and scaling features.
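The three techniques mentioned above can each be sketched in a few lines. The helpers below (names are ours; real projects would typically use a library such as pandas or scikit-learn) show mean imputation, min-max scaling, and one-hot encoding in plain Python:

```python
def impute_mean(values):
    # Replace missing entries (None) with the mean of the observed values.
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def min_max_scale(values):
    # Rescale a numeric feature to the [0, 1] range.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def one_hot(categories):
    # Encode categorical labels as binary indicator vectors,
    # one column per distinct category (sorted for determinism).
    levels = sorted(set(categories))
    return [[1 if c == level else 0 for level in levels] for c in categories]
```

For example, `min_max_scale([0, 5, 10])` yields `[0.0, 0.5, 1.0]`, and each category in `one_hot` becomes a vector with a single 1 marking its column.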
Deep Learning (DL).
DL is a subfield of ML that involves training artificial neural networks to recognize complex patterns in large datasets. Deep learning models consist of multiple layers of interconnected nodes, which enable them to automatically learn hierarchical representations of the input data.
Edge Computing.
Edge computing involves processing data near its source, such as IoT devices, rather than sending it to a centralised data centre or cloud. AI models are often deployed at the edge to enable real-time decision-making and reduce data transmission costs.
Feature Engineering.
Feature engineering is the process of selecting, creating, and transforming features or variables from raw data to improve the performance of ML models.
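A simple example of creating a feature is deriving a new column from existing ones. This sketch (the column names and the BMI example are illustrative, not from any particular dataset) computes body-mass index from two raw measurements:

```python
def add_bmi(rows):
    # Derive a new feature (BMI = weight / height^2) from two raw columns.
    # rows is a list of dicts with "weight_kg" and "height_m" keys.
    out = []
    for row in rows:
        row = dict(row)  # copy so the raw data is left untouched
        row["bmi"] = round(row["weight_kg"] / row["height_m"] ** 2, 1)
        out.append(row)
    return out
```

A model given the combined `bmi` feature can often learn a relationship more easily than one that must discover the ratio from the two raw columns on its own.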
Graphics Processing Unit (GPU).
A GPU is a specialised electronic circuit designed for parallel processing, which accelerates the training and execution of AI models, particularly deep learning algorithms.
Internet of Things (IoT).
IoT refers to the network of interconnected devices, vehicles, and appliances that collect, exchange, and analyse data. AI techniques are used to process and make decisions based on the data generated by IoT devices.
Machine Learning (ML).
ML is a subset of AI that focuses on developing algorithms that can learn from and make predictions based on data. ML systems improve their performance as they are exposed to more data over time, without being explicitly programmed to do so.
Model Evaluation.
Model evaluation involves assessing the performance of an AI model on a dataset not used during training. Metrics like accuracy, precision, recall, and F1 score are commonly used to quantify a model’s performance.
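For binary classification, all four metrics can be computed by hand from the counts of true positives, false positives, and false negatives. This minimal sketch (function name is ours) assumes labels encoded as 0 and 1:

```python
def evaluate(y_true, y_pred):
    # Count the outcomes that the metrics are built from.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```

The F1 score is the harmonic mean of precision and recall, so it penalises a model that is strong on one metric but weak on the other.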
Model Deployment.
Model deployment is the process of integrating a trained AI model into a production environment, where it can be used to make predictions on real-world data.
Model Training.
Model training is the process of adjusting an AI model’s parameters using a dataset to minimise the error between the model’s predictions and the actual output values.
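The adjust-parameters-to-minimise-error loop can be made concrete with gradient descent on the simplest possible model: a one-parameter line y = w·x with no intercept. This sketch is illustrative (names and hyperparameters are ours), not a production training routine:

```python
def train_linear(xs, ys, lr=0.01, epochs=500):
    # Fit y ≈ w * x by gradient descent on mean squared error.
    w = 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradient of MSE = mean((w*x - y)^2) with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad  # step against the gradient to reduce the error
    return w
```

Trained on points lying on the line y = 2x, the loop converges to a weight very close to 2: each epoch shrinks the remaining error by a constant factor.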
Neural Networks.
Neural networks are computational models inspired by the structure and function of biological neurons. They consist of interconnected nodes or neurons, organised into layers, which process and transmit information to solve complex problems.
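A forward pass through a tiny network makes the layered structure concrete: each node computes a weighted sum of its inputs and applies a non-linear activation. This sketch (one hidden layer, sigmoid activation, no bias terms, names ours) is deliberately minimal:

```python
import math

def sigmoid(z):
    # Squash any real number into the (0, 1) range.
    return 1 / (1 + math.exp(-z))

def forward(x, hidden_weights, output_weights):
    # Hidden layer: each node takes a weighted sum of the inputs, then sigmoid.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in hidden_weights]
    # Output node combines the hidden activations in the same way.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))
```

With all weights at zero, every weighted sum is 0 and sigmoid(0) = 0.5, so the network outputs 0.5; training consists of adjusting the weights so the output tracks the desired targets instead.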
Overfitting and Underfitting.
Overfitting occurs when a model learns the training data too closely, capturing noise as well as signal, so it performs well on the training set but generalises poorly to new data. Underfitting occurs when a model is too simple to capture the underlying patterns, so it performs poorly on both the training data and new data.
Reinforcement Learning (RL).
RL is an ML paradigm in which an agent learns to make decisions by interacting with an environment. The agent receives feedback in the form of rewards or penalties and aims to maximise its cumulative reward over time. RL algorithms have been applied to control systems, game playing, and robotics.
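The reward-driven loop described above can be sketched with tabular Q-learning on a toy environment: a short corridor where the agent starts at the left end and earns a reward of 1 only on reaching the right end. Everything here (environment, names, hyperparameters) is our own illustrative choice, not a standard benchmark:

```python
import random

def q_learning(n_states=4, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    # Tabular Q-learning on a 1-D corridor: state n_states-1 is the goal.
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: move the estimate toward reward + discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the learned table prefers "right" in every non-terminal state, which is the policy that maximises cumulative discounted reward in this environment.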
Supervised Learning.
Supervised learning is a type of ML where the algorithm is trained on a labelled dataset, containing input-output pairs. The algorithm learns the relationship between inputs and outputs, allowing it to make predictions on new, unseen data.
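One of the simplest supervised methods, nearest-neighbour classification, shows the input-output-pair idea directly: to label a new point, find the labelled training example closest to it. This sketch (names ours, squared Euclidean distance) is illustrative:

```python
def nearest_neighbour(train, query):
    # train: list of (features, label) pairs; predict the label of the
    # training point whose features are closest to the query.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda pair: sq_dist(pair[0], query))
    return label
```

There is no explicit training step here: the labelled dataset itself is the model, and prediction is a lookup of the most similar known example.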
Tensor Processing Unit (TPU).
A TPU is a custom-designed hardware accelerator developed by Google specifically for the efficient execution of ML models, particularly neural networks.
Unsupervised Learning.
Unsupervised learning involves training ML algorithms on an unlabeled dataset, with the goal of discovering hidden patterns or structures in the data. Techniques include clustering, where the algorithm groups similar data points, and dimensionality reduction, which simplifies the representation of the data.
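Clustering can be sketched with k-means, which alternates between assigning each point to its nearest centroid and moving each centroid to the mean of its assigned points. This minimal version (names ours; centroids naively initialised on the first k points, which real implementations avoid) illustrates the loop:

```python
def k_means(points, k, iters=10):
    # Naive initialisation: use the first k points as starting centroids.
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = [sum(vals) / len(members) for vals in zip(*members)]
    return centroids
```

No labels are involved at any point: the algorithm discovers the grouping structure purely from the distances between the data points themselves.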