PhD student positions

Here you can find a list of current and past PhD positions at the SmartData@PoliTO lab.

For further information, we invite you to check the Politecnico PhD program here. You may also write to us through the Contacts page.


Open positions in the 2026 spring session

For applications, please contact: contact@smartdata.polito.it.

Applications shall include:

  • The position you are interested in 
  • A statement of interest (half a page)
  • Curriculum Vitae (CV) 
  • Master’s degree transcript 
  • Academic references, if any 

All positions are fully funded.

Application deadline: November 14th, 2025 (check the official page for further details). Beginning of the PhD: March 2026.


Agentic AI for Cybersecurity: Autonomous Red-Blue Agents in Interactive Cyber Environments

Supervisors:
Danilo Giordano (DAUIN)
Matteo Boffa (DAUIN)

Description: Cyberattacks are growing in complexity and speed, often powered by automation and generative AI. Traditional defenses, heavily dependent on human analysts, struggle to match this pace.

This project aims to develop autonomous, LLM-based cybersecurity agents capable of acting as offensive (red team) and defensive (blue team) entities. These agents will observe, reason, and act within realistic, simulated cyber environments, such as networked systems or sandboxed attack-defense scenarios.

The research will explore how AI agents can detect and exploit vulnerabilities, coordinate defense strategies, and continuously learn from interactions – ultimately supporting human experts in proactive and adaptive cyber operations.

Research Challenges and Key Questions:

  • Cyber Environment Design: How can we create realistic, controllable digital environments where AI agents can safely test and learn cyber behaviors?
  • Autonomous Cyber Reasoning: How can LLMs understand and generate complex cyber actions (e.g., scanning, intrusion detection, exploit execution) with reliability and safety?
  • Multi-Agent Coordination: How can teams of AI agents collaborate or compete in offensive and defensive roles while sharing contextual knowledge?
  • Knowledge Adaptation: How can cybersecurity expertise from cyber threat intelligence (CTI) reports, CVE databases, and code repositories be distilled into specialized, lightweight models?
  • Evaluation and Trust: How do we measure performance, safety, and containment of autonomous agents acting in cybersecurity settings?

Required background for the PhD candidate: 

  • Strong programming skills in Python, with experience in machine learning frameworks (PyTorch, TensorFlow).
  • Understanding of vulnerabilities and common attack and defense techniques.
  • Interest or experience in ethical hacking, intelligent agent design, or LLM fine-tuning.
  • Familiarity with tools such as Metasploit, Wireshark, Snort, or Cuckoo Sandbox is a plus.

Autonomously Evolving Agent Architectures for Continuous Learning and Self-Improvement

Supervisors:
Marco Mellia (DAUIN)
Danilo Giordano (DAUIN)
Matteo Boffa (DAUIN)

Description: Current LLM-based agent systems often rely on human intervention to prevent context drift, requiring users to decompose tasks and refine prompts to keep the agent aligned with its goal. Moreover, most agent executions are atomic – they lack mechanisms for long-term memory, introspection, or structured knowledge accumulation across iterations.

This proposal aims to address these limitations by developing agent architectures capable of autonomous evolution and continuous learning. The envisioned system should be able to:

  • Self-evolve – autonomously identify failures or inefficiencies in previous iterations and modify its own architecture or reasoning strategy to prevent recurrence. 
  • Construct and exploit a structured knowledge base – persist useful contextual information and reasoning traces across executions, supporting future problem-solving and adaptation.

The long-term vision is to enable adaptive, self-improving agents that learn not only from data, but from their own operational history – bridging the gap between one-shot agent execution and lifelong learning systems.

Research Challenges and Key Questions:

  • Meta-learning and self-evolution: How can we implement a meta-learning framework that enables an agent to autonomously identify, analyze, and correct its own failures?
  • Knowledge representation and reuse: How can the agent’s accumulated experience be stored, structured, and reused effectively? Can this knowledge base eventually be leveraged to update or fine-tune the underlying model?
  • Continuous adaptation and evaluation: How can we measure and ensure progress in agent self-improvement? What metrics capture qualitative improvement and stability over time?
  • Benchmarking and evaluation environments: What existing benchmarks, datasets, or simulated environments can best support the evaluation of self-evolving, memory-based agents?

Required background for the PhD candidate:

  • Good foundations in Machine Learning or Artificial Intelligence, with an interest in autonomous systems, reinforcement learning, or large language models.
  • Programming experience in Python and familiarity with modern ML libraries (e.g., PyTorch, TensorFlow, or similar).
  • Interest in agent architectures and continuous learning, and curiosity about how systems can self-improve over time.
  • (Optional but appreciated): Some exposure to meta-learning, knowledge representation, or multi-agent systems.

Adversarial Robustness in Multi-Modal Foundation Models

Supervisors:
Luca Cagliero (DAUIN)
Danilo Giordano (DAUIN)

Description: Multi-modal foundation models – integrating vision, language, and audio – are increasingly used in critical domains such as content moderation, customer support, and AI-assisted software engineering. However, these systems introduce new attack surfaces arising from cross-modal interactions, where adversarial inputs in one modality can manipulate or corrupt the overall model behavior when combined with other modalities.

This PhD project aims to analyze, expose, and mitigate adversarial vulnerabilities in multi-modal AI models. The research will focus on how semantic inconsistencies across modalities (e.g., benign text paired with deceptive images or audio) can trigger unsafe or incorrect outputs. The final goal is to build theoretical foundations, practical attack generation frameworks, and robust defense mechanisms to strengthen the security of multi-modal AI systems deployed in real-world environments.

The research will be conducted in collaboration with the Italian Institute of Artificial Intelligence for Industry (I3A) and Politecnico di Torino, combining academic rigor with applied validation on industrial-scale multi-modal models.

Research Challenges and Key Questions:

  • Cross-Modal Vulnerability Modeling: How can we systematically characterize and measure vulnerabilities emerging from the fusion of modalities such as text, image, and audio?
  • Adversarial Attack Generation: How can we design controlled, automated adversarial attacks that exploit inconsistencies between modalities to induce model failures?
  • Defense and Robustness Evaluation: Which defense mechanisms (e.g., input sanitization, fusion regularization, uncertainty modeling) are most effective against multi-modal attacks?
  • Benchmark Dataset and Evaluation Framework: How can we build and validate a benchmark dataset of cross-modal adversarial examples to assess robustness across architectures and application contexts?
  • Scalability and Generalization: Can we design parameterized attack templates capable of automatically generating diverse adversarial samples across architectures and tasks?
  • Trustworthy Multi-Modal AI: How can theoretical insights and empirical findings be distilled into practical guidelines and tools for building secure, interpretable, and reliable multi-modal AI systems?

Required background for the PhD candidate:

  • Strong background in machine learning and artificial intelligence, particularly deep learning architectures.
  • Experience or interest in adversarial machine learning, multi-modal fusion, or AI robustness.
  • Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow.
  • Familiarity with one or more of the following domains is a plus:
    • Computer Vision, Natural Language Processing, or Audio Processing
    • Information Theory and Secure System Design
  • Motivation to work on AI safety and cybersecurity challenges in collaboration with academic and industrial partners.

Artificial Intelligence for Phishing Detection and Prevention

Supervisors:
Marco Mellia (DAUIN)
Nikhil Jha (DAUIN)

Description: Phishing is one of the most widespread and damaging cyberattacks, exploiting social engineering to deceive users via email, websites, or messaging platforms. Existing defenses, often based on static detection, struggle to keep pace with attackers’ rapidly evolving techniques.  

This research addresses the challenge in two phases. 

  • Phase 1: Observation and Analysis. The candidate will collect and analyze real-world phishing data, including malicious websites, phishing messages, and social engineering strategies. Data will be gathered by crawling the web and dark web, as well as monitoring communities on platforms such as Telegram, WhatsApp, Instagram, and TikTok. The resulting large-scale, multimodal dataset will reveal attacker tactics and user vulnerabilities. 
  • Phase 2: Prevention Algorithms. The goal is to engineer real-time defenses capable of both recognizing malicious content (e.g., phishing webpages) and identifying contextual anomalies, where user behavior deviates from normal patterns. 

To achieve this, the project will develop a multimodal foundation model for cybersecurity, designed to handle diverse data types (text, images, videos, languages) and tailored specifically to detect and prevent phishing campaigns across multiple attack vectors. 

Required background for the PhD candidate:

  • Strong programming skills, preferably in Python, with experience in deep learning frameworks (e.g., PyTorch, TensorFlow) and data processing tools (e.g., Spark, Pandas). 
  • Solid understanding of Machine Learning and Deep Learning, including model training, evaluation, and deployment. 
  • Knowledge of Natural Language Processing (NLP) and multimodal learning, with familiarity in working with text, images, or video data. 
  • Fundamentals of cybersecurity and networking, including common attack vectors (e.g., phishing, malware, social engineering) and defensive strategies. 
  • Experience in data collection and analysis, such as web crawling, handling large-scale datasets, or working with unstructured data from social platforms.