Fundamentals and Background on Artificial Intelligence

Course semester
1st semester
Course category
Compulsory
Preparatory
ECTS
5
Tutors

S. Konstantopoulos, M. Filippakis, C. Rekatsinas, A. Charalambidis, C. Spatharis

Goal

Upon successful completion of the course, the student will possess:

  • The body of basic theoretical and practical knowledge required to continue studies in the MSc
  • Fundamental concepts, basic theories and programming practices

Specifically, the course covers the following topics aiming at the learning outcomes mentioned above.

  • Probability theory
  • Calculus
  • Linear algebra
  • Computational logic
  • Programming, libraries, core practices and processes for machine learning

This is achieved through critical consideration of the methods taught, the solving of exercises, and the implementation of exemplary systems, with the aim of understanding, designing and building effective Artificial Intelligence methods. In addition, the course aims to develop the following general abilities in students:

  • Ability to search and delve into specific theoretical and practical topics to meet the specific needs of the curriculum courses
  • Ability to solve problems
  • Ability to develop critical thinking and capacity for critical approaches
  • Ability of interdisciplinary approaches
  • Ability to apply theoretical knowledge in practice
  • Ability to adapt methods and techniques to new situations and conditions

Contents

  • Introduction to programming using the Python programming language: Python syntax, Creating scripts, Variables and data types, Python functions, File management, Modules and packages.
  • Python libraries (e.g. NumPy, scikit-learn, Matplotlib) for building Machine Learning tools, Creating and managing NumPy arrays, Basic array operations, Linear Algebra with NumPy, Linear Regression example with NumPy.
  • Introduction and installation of the PyTorch tool, Introduction to the PyTorch Automatic Differentiation system, Tensors, Operations with tensors, Data Loaders and data pre-processing, Creation and training of a Neural Network
  • Introduction to Machine Learning Operations (MLOps), Automating and managing Machine Learning models, Applications of MLOps.
  • Matrices, operations on matrices, determinants, transpose and inverse of a matrix. Linear equations, methods of solving linear systems, Gaussian elimination, Cramer's rule. Characteristic quantities: eigenvalues and eigenvectors, diagonalization of a matrix, similarity transformations. Vector spaces and subspaces, addition, scalar multiplication, inner product of vectors, linear combinations, norm and distance of vectors. Linear inequalities, linear programming.
  • Linear and multiple linear regression, logistic regression, probit regression, ridge regression, static/dynamic autoregression and spectral analysis.
  • Spectral regression, multivariate analysis of variance (ANOVA/MANOVA). Exploratory factor analysis. Data mining and advanced prediction techniques. Regression-based predictive modelling.
  • Linear regression, logistic regression, ridge regression, supervised workflow and algorithms, Support Vector Machines, supervised learning, unsupervised learning.
  • Propositional and predicate logic: syntax, entailment, interpretations and models, quantification.
  • Inference and reasoning: resolution rule, derivation, substitution, unification, forward and backward chaining, open- and closed-world assumptions, non-monotonic inference, predicate completion.
  • Inference and reasoning in various logics (Kripke semantics, fuzzy logic, Łukasiewicz logic) and in numerical fields.
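The "Linear Regression example with NumPy" topic above can be sketched roughly as follows; the synthetic data and parameter values are purely illustrative:

```python
import numpy as np

# Synthetic data: y = 2x + 1 plus a little noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

# Design matrix with a bias column, then ordinary least squares
A = np.column_stack([x, np.ones_like(x)])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
slope, intercept = coeffs
print(slope, intercept)  # close to 2.0 and 1.0
```

The same fit can of course be obtained with scikit-learn's LinearRegression; the point of the NumPy version is to expose the underlying least-squares formulation.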
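The linear-algebra topics (solving linear systems, determinants, eigenvalues, diagonalization) map directly onto NumPy's linalg module; a minimal sketch with an arbitrary 2×2 example:

```python
import numpy as np

# Solve the linear system Ax = b (np.linalg.solve performs an LU
# factorisation, i.e. Gaussian elimination with pivoting)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = np.linalg.solve(A, b)      # solution [0.8, 1.4]

# Determinant and inverse
det = np.linalg.det(A)         # 2*3 - 1*1 = 5
A_inv = np.linalg.inv(A)

# Eigenvalues/eigenvectors and the diagonalization A = P D P^-1
vals, P = np.linalg.eig(A)
D = np.diag(vals)
A_rebuilt = P @ D @ np.linalg.inv(P)
```

Rebuilding A from P, D confirms the similarity transformation numerically.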
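For the propositional-logic topics, semantic entailment can be checked by brute-force enumeration of truth assignments; a minimal sketch using only the standard library (the function and variable names are illustrative, not part of any course material):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Check semantic entailment by enumerating all truth assignments.

    `premises` and `conclusion` are functions mapping an assignment
    (a dict atom -> bool) to a truth value.
    """
    for values in product([False, True], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(p(assignment) for p in premises) and not conclusion(assignment):
            return False  # a model of the premises falsifies the conclusion
    return True

# Modus ponens: {P, P -> Q} entails Q
p = lambda a: a["P"]
p_implies_q = lambda a: (not a["P"]) or a["Q"]
q = lambda a: a["Q"]
print(entails([p, p_implies_q], q, ["P", "Q"]))  # True
```

Truth-table enumeration is exponential in the number of atoms; the resolution rule covered in the course is the practical alternative for larger formulas.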

Bibliography

  • Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 2nd edition (2003). http://aima.cs.berkeley.edu/.
  • Melvin Fitting. First-Order Logic and Automated Theorem Proving. Springer, 1996.
  • Ben-Ari, Mordechai. Mathematical logic for computer science. Springer Science & Business Media, 2012.
  • Leslie Valiant, “A theory of the learnable”. Communications of the ACM, 27, 1984.
  • Cullen Schaffer, “A conservation law for generalization performance”. In Proceedings of the 11th International Conference on Machine Learning (ICML ’94). 1994.