
In TAILab, we focus on the theoretical and applied aspects of trustworthy machine learning:


Theoretical Research:

Our goal is to advance knowledge on understanding trust in machine learning, particularly from an information security perspective. While the trustworthiness of ML algorithms spans several research areas, including privacy, robustness, interpretability, explainability, fairness, and governance, TAILab's main focus is robustness: the ability of an ML model to maintain reliable performance when faced with natural noise, data distribution shifts, and intentional adversarial attacks. Our interests cover both (i) data-oriented robustness (statistical robustness), where we focus on information-theoretic and statistical investigation of noisy data, distributional changes, uncertainty quantification, OOD detection, and model calibration; and (ii) model-oriented robustness (adversarial robustness), where we investigate the fundamental limitations of a classification model when faced with input perturbations, whether the perturbation originates from an intentional adversary or from natural phenomena with adversarial effects. Our work on adversarial robustness spans empirical methods to certified robustness. We have a special interest in applying advances in mathematical optimization and nonconvex relaxation to improve ML robustness bounds in both areas.
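To illustrate the kind of input perturbation studied in adversarial robustness, the sketch below applies the fast gradient sign method (FGSM) to a toy linear classifier and shows a small perturbation flipping a prediction. The model, weights, and data are invented for the example and are not taken from our projects.

```python
import numpy as np

# Toy illustration (not TAILab code): FGSM on a logistic-regression classifier.
# A perturbation of size eps in the direction sign(grad_x loss) can flip the
# model's prediction even though the input barely changes.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Return x + eps * sign(d loss / dx) for the logistic loss."""
    p = sigmoid(w @ x + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # gradient of the logistic loss w.r.t. the input
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])    # made-up model weights
b = 0.0
x = np.array([0.3, 0.1])     # clean input with true label y = 1
y = 1.0

clean_pred = sigmoid(w @ x + b) > 0.5          # correctly classified as 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.2)
adv_pred = sigmoid(w @ x_adv + b) > 0.5        # perturbed input is misclassified
print(clean_pred, adv_pred)
```

Certified robustness asks the converse question: for which radius of perturbation can such a flip be provably ruled out.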


Applied Research:

The applied side of our research focuses on translating ML robustness principles into trustworthy AI systems that end users can actually rely on, with a specific interest in clinical AI. The following instances of applied research, completed or underway with our healthcare partners, demonstrate the scope of our work. We design prediction pipelines that expose and control uncertainty at the point of care: for structured data and graphs, we quantify epistemic and aleatoric uncertainty in GNNs, which is useful for patient-specific risk stratification and pathway modelling. For medical imaging and segmentation, we push beyond classical confidence measurement to clinically aware conformal prediction, allocating larger, morphology- or interface-sensitive prediction sets where uncertainty is concentrated and critical decisions hinge; these methods also apply beyond clinical settings, including robot path planning and navigation. In parallel, for AI applications in mental health, we evaluate and harden LLM-based clinical assistants: we probe multi-step reasoning reliability, study failure modes, and couple LLM outputs with calibrated uncertainty and verification hooks so that agentic workflows remain auditable and defer gracefully when confidence is low. Across these applications, the throughline is operational robustness: threat- and shift-aware evaluation, calibrated predictions with coverage guarantees, and lightweight certifiable components that survive real distribution shifts while meeting clinical accountability standards.
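As a concrete instance of calibrated predictions with coverage guarantees, the sketch below shows split conformal prediction for a one-dimensional regressor. The `predict` function and the synthetic data are assumptions for illustration only; the clinical pipelines described above use task-specific scores and models.

```python
import numpy as np

# Minimal split-conformal sketch (illustrative, not TAILab's pipeline).
# Given a held-out calibration set, intervals [predict(x) - q, predict(x) + q]
# cover a fresh label with probability >= 1 - alpha, regardless of the model.

rng = np.random.default_rng(0)

def predict(x):
    return 2.0 * x                        # stand-in point predictor (assumed)

# Synthetic calibration data; conformity score = absolute residual
x_cal = rng.uniform(0.0, 1.0, 500)
y_cal = 2.0 * x_cal + rng.normal(0.0, 0.1, 500)
scores = np.abs(y_cal - predict(x_cal))

alpha = 0.1                               # target miscoverage (90% coverage)
n = len(scores)
# Finite-sample correction: take the ceil((n+1)(1-alpha))-th smallest score
k = int(np.ceil((n + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]

x_new = 0.5
interval = (predict(x_new) - q, predict(x_new) + q)
print(interval)
```

The guarantee is distribution-free: it requires only that calibration and test points are exchangeable, which is exactly the assumption that shift-aware evaluation stress-tests.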


For Prospective Students:

I seek to recruit highly qualified individuals for graduate degrees and postdoctoral positions. I primarily supervise theoretical research on trust in machine learning, with an emphasis on robustness. If your primary interest is purely applied work, please do not email. Competitive applicants should (i) identify a theoretical question aligned with our group's topics, e.g., robustness-accuracy trade-offs, information-theoretic bounds, certified robustness radii, or other topics related to our research; (ii) write ≤1 page outlining the problem, assumptions, and the technical tools they plan to use; and (iii) map that theory to a target application (clinical or otherwise) in an additional ≤1 page, explaining how the analysis or guarantees would improve reliability in practice. When you are ready, attach this research statement, your CV, complete transcripts (undergraduate and graduate), and evidence of English proficiency (if applicable) to an email with the subject line "Prospective Student: (your name) (the degree, e.g., MSc/MEng/PhD/PDF)." Send everything as one or more attachments, not as URLs. Due to the volume of emails, only potential candidates will be contacted.

If you are currently an MEng student in our department and interested in completing a project at the intersection of security and machine learning, I may be able to help you explore topics and projects that suit your background and supervise your project. Your best chance to get involved is to take the EE8227: Secure Machine Learning course.


Statement on Equity, Diversity, and Inclusion (EDI):

TAILab Research Group is strongly committed to upholding the values of Equity, Diversity, and Inclusion (EDI). Consistent with the Tri-Agency Statement on EDI and the Dimensions Pilot Program at Toronto Metropolitan University, our group fosters an environment in which all members feel comfortable, safe, supported, and free to speak their minds and pursue their research interests. We recognize that engineering culture can feel exclusionary to traditionally underrepresented groups in STEM fields. By acknowledging the EDI issues that exist in our field, we aim to validate the challenges faced by each group member and continually strive to improve our group's culture for all members.


TAILab Research Group Meetings:

We meet bi-weekly to discuss research topics in AI and machine learning security and privacy. Please see the meeting schedule and discussion topics here. If you are interested in attending, please contact Reza Samavi.


Current Projects:

Faculty
Reza Samavi, PhD, PEng


Security & Privacy

Trustworthy Machine Learning

Safe and Secure Machine Learning

Optimization


Collaborators/Visiting Researchers
Jacqueline W. Breska, PhD - Visiting Researcher, Amsterdam University Medical Center, Amsterdam, The Netherlands

Image Segmentation Confidence Measurement

Differential Privacy


Leonard Breska, PhD - Collaborator, University of Amsterdam, Department of Computer Science, Amsterdam, The Netherlands

Safe AI

Mechanistic Interpretability

Adversarial Robustness



Research Students
Hamed Karimi, PhD candidate Computer Engineering

Machine Learning Robustness

Secure Machine Learning

Optimization


Mohammadreza Maleki, PhD candidate Computer Engineering

Machine Learning

ML Robustness


Cassandra Czobit, PhD student Computer Engineering

Machine Learning

Medical AI


Luzalen Marcus, PhD candidate Computer Engineering

Trustworthy Machine Learning

Computational Linguistics


Daneil Sediq, MASc student Computer Engineering

ML Robustness

Vaishali Meyappan, MASc student Computer Engineering

LLM Confidence

Conformal Prediction

Research group meeting - September 2025


Research group meeting - September 2024


Research group meeting - September 2023


Research group meeting - July 2023


Research group meeting - September 2022


Research group meeting - August 2022


Research group meeting - June 2021


Research group meeting - September 2019


Research group meeting - August 2018

Alumni
Hirad Daneshvar, PhD Computer Engineering

Trustworthy GNNs

Medical AI


Hao Luo, MEng Computer Engineering (2024)

ML Robustness

Certified Robustness

Saeideh Mousavi, MEng Computer Engineering (2024)

Medical AI

LLM Privacy

Mini Thomas, PhD Software Engineering, Co-supervised with Dr. Antoine Deza (McMaster Univ.)

Security, Privacy & Trust

Optimization

Machine Learning

Md. Mahmud Ferdous, MASc Computer Engineering

Security & Privacy

Machine Learning

Blockchain


Bradley Rose, MASc Computer Engineering

OOD Generalization and Adversarial Robustness

Magdalean Singarajah, MEng Computer Engineering (2023)

Medical AI

LLM Privacy

Bipin Aasi, MEng Computer Engineering (2023)

LLM Robustness

Moe Sabry, PhD Computer Science, Co-supervised with Dr. Douglas Stebila (Waterloo Univ.) and Dr. Emil Sekerinski (McMaster Univ.) (2023)

Security

Cryptography

Thesis: Secure Long-term Archiving System
Omar Boursalie, Postdoctoral Fellow (2023)

Machine Learning

Medical AI


Mina Yazdani, MASc Computer Engineering (2023)

Machine Learning Security

Optimization

Thesis: Diverse Ensembles and Noisy Logits for Improved Robustness of Neural Networks
Cassandra Czobit, MEng Computer Engineering (2022)

Machine Learning

Generative Adversarial Networks

Project: Implementation of a CycleGAN Model for MRI Image Translation
Omar Boursalie, PhD Biomedical Engineering, Co-supervised with Dr. Thomas Doyle (McMaster Univ.) (2021)

Machine Learning

Medical AI

Thesis: Temporally-Embedded Deep Learning Model for Health Outcome Prediction

Position: Sessional lecturer at McMaster University, Department of Electrical and Computer Engineering

Awards: NSERC PGS-D Recipient for 2018-2020
Anna Lindsay-Mosher, USRA - Art and Science (2020)

Semantic Web

Machine Learning

Social Good


Awards: Undergraduate Student Research Award
Yuting Liang, MSc Computer Science (2020)

Security & Privacy

Optimization

Machine Learning

Thesis: Algorithms in Privacy & Security for Data Analytics and Machine Learning
Position: PhD student at HKUST, Department of Computer Science
Vanessa Calero Bravo, MSc Computer Science (2020)

Security & Privacy

Machine Learning

Social Networks

Thesis: A Framework for Measuring Privacy Risks of YouTube
Yifan Ou, MSc Computer Science (2020)

Security & Privacy

Optimization

Machine Learning

Thesis: Game Theoretic Analysis of Defence Algorithms Against Data Poisoning Attack
Saman Dhindsa, MEng, Co-supervised with Dr. Gail Krantzberg (2020)

Security

Privacy

Project: Privacy Principles for Facial Recognition Technology
Position: Technical Service Analyst, DependableIT
Awards: MITACS Award with Highmark Global
Karl Knopf, MSc Computer Science, Co-supervised with Dr. Douglas Stebila (2019)
Thesis: Real World Secret Leaking
Position: PhD student, Computer Science, University of Waterloo
Pouyan Momeni, MEng Computer Science (2019)
Thesis: Machine Learning Model for Smart Contracts Security Analysis
Position: Senior Software Developer, Scotiabank
Awards: MITACS Award with Highmark Global
Andrew Sutton, MSc Computer Science (2018)
Thesis: Establishing Verifiable Trust in Collaborative Health Research
Position: Blockchain Application Developer at RBC
Awards: SOSCIP Award
Mingyuan Li, MSc eHealth (2018)
Thesis: DSAP: Data Sharing Agreements Privacy Ontology
Position: Solution Developer at CIHI
Sameen Ateeq, MSc eHealth (2018)
Thesis: Finding and Evaluating Predictive Factors of Fall-Related Injuries
Ali Ariaeinejad, MSc eHealth (2017)
Thesis: A Performance Predictive Model for Emergency Medicine Residents
Position: Software Developer at Faculty of Health Sciences, McMaster University
Qian Shan, MEng Computer Science (2017)
Project: Augmented Reality Based Brain Tumor 3D Visualization
Farshad Rahimi Asl, MEng Computer Science, Co-supervised with Dr. Fei Chiang (2017)
Project: Privacy Aware Web Services in the Cloud
Position: Software Engineer at Evertz
Omar Boursalie, MASc Biomedical Engineering, Co-supervised with Dr. Thomas Doyle (2016)
Thesis: Mobile Machine Learning for Realtime Predictive Monitoring of Cardiovascular Disease
Position: PhD Candidate at McMaster
Xiao Dong, MEng Computer Science (2016)
Project: COC: An ontology for capturing semantics of circle of care
Position: Senior Software Engineer at BlueCat
Salman Khawaja, MEng Computer Science (2016)
Project: Securing the Privacy of Electronic Health Records on Mobile Phones