
AAAI Fall Symposium – SECURE‑AI4H

Safe, Ethical, Certified, Uncertainty-aware, Robust, and Explainable AI

November 6–8, 2025

Westin Arlington Gateway, Arlington, VA

Schedule


Dr. Aidong Zhang

Professor

University of Virginia

Explainable AI in Medical Imaging

In recent years, major advances in artificial intelligence (AI) have been applied to medical image diagnosis with promising results. Even though these methods demonstrate incredible potential for saving valuable person-hours and minimizing inadvertent human errors, their adoption has been met with rightful skepticism and extreme circumspection in critical applications such as medical diagnosis. A paramount challenge is the lack of rationale behind predictions—making these systems notoriously “black boxes.” In extreme cases, this can create a mismatch between the designer’s intended behavior and the model’s actual performance. In this talk, I will discuss our recent research on explainable AI strategies. In particular, I will describe concept-based learning models and show how both concept-based and example-based learning approaches can be designed for explainable deep neural networks, vision transformers, and vision-language models.



Dr. Gamze Gürsoy

Assistant Professor

New York Genome Center, Columbia University

Federated Learning Approaches to Biomedical Knowledge Discovery

The rapid expansion of omics technologies, coupled with the growing availability of structured medical records, creates unprecedented opportunities to deepen our understanding of health and disease. Yet these advances also raise formidable challenges: protecting patient privacy and enabling the integration of sensitive data across institutions. In this talk, I will present our lab’s work on privacy-preserving informatics and machine learning methods that enable critical biomedical analyses without requiring raw data to be centralized or shared. I will highlight techniques such as federated learning and secure multiparty computation that make it possible to discover new knowledge while maintaining strong privacy guarantees. Finally, I will discuss how standard federated learning often breaks down under real-world distributional shifts, and introduce novel approaches we have developed to address these limitations.



Dr. Jonathan Takeshita

Assistant Professor

School of Cybersecurity, Old Dominion University

Duty of Care: A Call for Open and Responsible AI Innovation in Healthcare

Recent advances in AI, especially in large language models (LLMs), bring the prospect of increased adoption of AI in medicine and medical education. In particular, many institutions responsible for medical treatment and education are moving rapidly to expand AI use in practice and curricula. However, the potential downsides of overusing AI in these fields are under-discussed. In the rush to adopt AI, sources of healthcare risk such as LLM reliability, patient privacy, financial and environmental costs, vendor dependencies, and over-reliance on AI are often not deeply considered. This paper discusses these recent trends and makes recommendations for healthcare institutions considering further adoption of AI.



Dr. Philippe Giabbanelli

Professor

Old Dominion University

Towards Personalized Explanations for Health Simulations: A Mixed-Methods Framework for Stakeholder-Centric Summarization

Modeling & Simulation (M&S) approaches such as agent-based models hold significant potential to support decision-making activities in health, with recent examples including the adoption of vaccines, and a vast literature on healthy eating behaviors and physical activity behaviors. These models are potentially usable by different stakeholder groups, as they support policy-makers in estimating the consequences of potential interventions and can guide individuals in making healthy choices in complex environments. However, this potential may not be fully realized because of the models' complexity, which makes them inaccessible to the stakeholders who could benefit the most. While Large Language Models (LLMs) can translate simulation outputs and the design of models into text, current approaches typically rely on one-size-fits-all summaries that fail to reflect the varied informational needs and stylistic preferences of clinicians, policymakers, patients, caregivers, and health advocates. This limitation stems from a fundamental gap: we lack a systematic understanding of what these stakeholders need from explanations and how to tailor them accordingly. To address this gap, we present a step-by-step procedure to identify stakeholder needs and guide LLMs in generating tailored explanations of health simulations. Our procedure uses a mixed-methods design: it first elicits the explanation needs and stylistic preferences of diverse health stakeholders, then optimizes the ability of LLMs to generate tailored outputs (e.g., via controllable attribute tuning), and finally evaluates the outputs against a comprehensive range of metrics to further improve the tailored generation of summaries.



Claire Xu

Researcher

The Harker School

Toward Preventive Alzheimer’s Risk Screening with Cell-Subtype-Aware Brain-Blood Graph Neural Network

Early Alzheimer’s disease (AD) pathology begins decades before symptoms emerge, yet over 75% of the at-risk population lacks access to non-invasive screening methods. Current diagnostic tools like PET imaging and cerebrospinal fluid sampling are costly, invasive, and poorly suited for large-scale, proactive brain health monitoring. This research introduces a cell-subtype-aware brain-blood gene modeling framework that reframes AD assessment from reactive diagnosis to preventive risk evaluation for sustained cognitive health. Using graph neural networks, blood RNA-seq profiles are anchored to sex-specific, single-cell brain transcriptomics across neuronal layers, enhancing biological fidelity and interpretability. Explicit control of APOE4 genotype, age, sex, and education preserves meaningful variation while suppressing systemic noise. Gene set enrichment analysis confirmed pathways in neurodegeneration, inflammation, oxidative phosphorylation, and sensory function, with brain-derived signals reproducibly detected in blood. Sex-stratified analyses revealed female-specific signatures linked to addiction and mood regulation, pathogen-driven immune responses, and nutrient-based neuroprotection. This research identifies a blood-based gene panel for AD risk: GFAP, TREM2, C1QC, C1QB, PLCG2, TXNIP, CD163, CAMK1D, DAPK1, CCND3, LRP10, and COQ10A. By coupling fine-grained brain biology with interpretable AI, this work enables equitable, population-scale early risk identification, supporting proactive interventions to maintain cognitive function and delay disease onset.



Dr. Gangqing Hu

Assistant Professor

West Virginia University

Dermatologic Image Analysis with ChatGPT: The Good, the Bad, and the Ugly

Abstract: TBD



Dr. Apurva Ratan Murty

Assistant Professor

Georgia Institute of Technology

Title: TBD

Abstract: TBD



Dr. Qingyu Chen

Assistant Professor

North Carolina A&T State University

Title: TBD

Abstract: TBD



Salamata Konate

Postdoctoral Researcher

Lassonde School of Engineering, York University

Title: TBD

Abstract: TBD


Xia Tian — Assistant Professor
Mehran Ghafari — Postdoctoral Associate
Sanford Burnham Prebys Medical Discovery Institute

Real-Time, Non-Invasive Monitoring of Energy Expenditure to Evaluate Physiological Aging in Mice

Accurate assessment of aging-related physiological changes requires tools that minimize stress and reflect natural behaviors. In this talk, I will present a real-time, non-invasive monitoring platform that uses thermal imaging to continuously track energy expenditure, motor activity, and circadian rhythm in mice within their home cages. This system enables long-term observation under naturalistic conditions and avoids the confounding effects of forced tasks or handling stress. Our findings reveal consistent differences between young and aged mice, demonstrating the platform’s sensitivity to subtle age-related decline. This approach offers significant potential for studying the progression of age-related diseases and evaluating early-stage interventions in preclinical models.



Reva Schwartz — Research Scientist
Gabriella Waters — Director, Center for Responsible AI
Co-founders of Civitaas

Implementing Real-World AI Evaluation in Health Care

While there are numerous methods for testing what AI models can do, these assessments typically occur “in the box,” under laboratory settings. The resulting outcomes are poorly matched to real-world conditions, leaving organizations without the insights they need for informed decision-making. The Civitaas team will demonstrate how to leverage scenarios for evaluating how risks and impacts materialize in real-world interactions between people and AI systems in health care settings.