I am a Research Associate at the University of Edinburgh working on geometry-aware computer vision systems for clinical video analysis, with a focus on bronchoscopy, surgical workflow modeling, and endoscopic video understanding. My work integrates explicit geometric structure (depth, graphs, topology) into deep learning pipelines to improve robustness, interpretability, and deployment readiness in safety-critical medical settings.
Prior to this, I received my MRes degree from King's College London, my MSc degree from the University of Southampton, and my Bachelor's degree from Beijing University of Chinese Medicine.
I am particularly interested in AI systems and ML engineering roles focused on building and deploying computer vision for real-world medical and other safety-critical environments.
I design and build applied machine learning systems for safety-critical and data-constrained domains,
with an emphasis on end-to-end pipelines: data curation, model training, evaluation, and deployment-ready inference.
My work spans geometry-aware perception, temporal and graph-based modeling, and reproducible tooling that produces
integration-friendly outputs (e.g., depth, graphs, structured predictions) for downstream systems.
If you’d like to collaborate, please reach out at
francis.xiatian.zhang@outlook.com (primary) or
francis.zhang@ed.ac.uk (academic).
Publishes ready-to-run inference code for bronchoscopic navigation research stacks, adapting foundation models to bronchoscopy footage so downstream perception modules inherit reliable depth.
Produces stable depth estimates that downstream planners can use for CT-to-video alignment and collision checks.
Pipeline: synthetic airway renderer → domain adaptation → depth inference package → evaluation harness, all versioned so surgical robotics teams can reproduce deployments.
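The four stages above could be wired, in spirit, as the minimal sketch below. Everything here is hypothetical scaffolding, not the released code: the stage functions are trivial placeholders, and the config hash only illustrates how a versioned run stays traceable.

```python
import hashlib
import json

# Hypothetical stand-ins for the four pipeline stages; the real renderer,
# domain-adaptation model, and evaluation metrics are not reproduced here.
def render_synthetic_airways(config):
    """Stage 1: emit synthetic frames with ground-truth depth."""
    return [{"frame": i, "depth_gt": float(i)} for i in range(config["n_frames"])]

def adapt_domain(frames):
    """Stage 2: translate synthetic frames toward the real-video domain."""
    return [{**f, "adapted": True} for f in frames]

def infer_depth(frames):
    """Stage 3: run the packaged depth model (here: a trivial placeholder)."""
    return [{**f, "depth_pred": f["depth_gt"]} for f in frames]

def evaluate(frames):
    """Stage 4: score predictions against ground truth (mean absolute error)."""
    errors = [abs(f["depth_pred"] - f["depth_gt"]) for f in frames]
    return sum(errors) / len(errors)

def run_pipeline(config):
    """Versioned end-to-end run: hash the config so any rerun is traceable."""
    blob = json.dumps(config, sort_keys=True).encode()
    version = hashlib.sha256(blob).hexdigest()[:8]
    mae = evaluate(infer_depth(adapt_domain(render_synthetic_airways(config))))
    return {"config_version": version, "mae": mae}

result = run_pipeline({"n_frames": 4})
```

The design point is simply that each stage consumes the previous stage's output and the whole run is keyed by a content hash of its config, which is what makes a deployment reproducible by a third party.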
Provides a depth-aware inpainting toolkit that teams can run locally to clean surgeon video archives prior to research sharing or dataset release.
Accepts raw video plus masks and returns temporally consistent fills, ready for downstream training or review.
Bundles training/inference scripts with deterministic preprocessing and checkpoints so biomedical researchers can rerun inpainting and compare outputs to their own reference sets.
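"Deterministic preprocessing" here boils down to seeding every stochastic choice so a rerun produces byte-identical outputs. A minimal sketch, assuming a single hypothetical augmentation decision per frame (the function name and augmentation are illustrative, not the toolkit's API):

```python
import random

def deterministic_preprocess(frames, seed=0):
    # A dedicated Random instance (rather than the global RNG) means the
    # sequence of augmentation choices depends only on the seed, so two
    # runs with the same seed are identical and comparable.
    rng = random.Random(seed)
    out = []
    for frame in frames:
        flip = rng.random() < 0.5  # hypothetical augmentation decision
        out.append({"frame": frame, "flipped": flip})
    return out

run_a = deterministic_preprocess(list(range(3)), seed=42)
run_b = deterministic_preprocess(list(range(3)), seed=42)
```

With this pattern, `run_a == run_b` holds for any seed, which is the property that lets researchers diff their inpainting outputs against a reference set.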
U-care: Deep Ultraviolet Light Therapies
Research Associate, EPSRC Grant: £6,132,366 (PI: Dr. Mohsen Khadem)
Funded by UKRI, 2021–2026 (I joined in 2024)
Course Development in Edge Computing and Analytics 2.0
Research Assistant, UK-Egypt Trans-National Education (TNE) Grant: £30,000 (PI: Dr. Anish Jindal)
Funded by the British Council, UK, 2022–2024
Pose Estimation for Health Professional Education: Development of an Objective Computerized Approach for Measuring and Assessing Technical Competencies in Nursing
Research Assistant, Northumbria University Application Seed Funding Scheme: £16,428 (PI: Dr. Merryn Constable)
Funded by Northumbria University, UK, 2022
Research
My previous research falls primarily into three areas:
Computer-assisted Intervention,
Clinical Outcome Analysis, and
Evidence-based Medicine.
Below are some of my selected publications. A complete list of my publications can be found on my Google Scholar page.
Conference Papers:
BREA-Depth: Bronchoscopy Realistic Airway-geometric Depth Estimation
Francis Xiatian Zhang, Emile Mackute, Mohammadreza Kasaei, Kevin Dhaliwal, Robert Thomson and Mohsen Khadem
MICCAI 2025 |
paper |
arXiv |
code Contribution: Delivered an airway-aware depth estimation system that adapts foundation models via Depth-aware CycleGAN translation, airway structure losses, and Airway Depth Structure Evaluation metrics for bronchoscopy navigation.
Depth-Aware Endoscopic Video Inpainting
Francis Xiatian Zhang, Shuang Chen, Xianghua Xie and Hubert P. H. Shum
MICCAI 2024 |
paper |
arXiv |
code Contribution: Built a depth-aware inpainting system that pairs Spatial-Temporal Guided Depth Estimation with bi-modal fusion and a depth-enhanced discriminator to recover occluded anatomy.
Pose-Based Tremor Classification for Parkinson's Disease Diagnosis from Video
Haozheng Zhang, Edmond S. L. Ho, Francis Xiatian Zhang and Hubert P. H. Shum
MICCAI 2022 |
paper |
arXiv |
code Contribution: Led dataset curation and clinical alignment for a pose-based tremor analysis system, ensuring label validity, evaluation protocol design, and meaningful clinical interpretation.
Journal Papers:
Adaptive Anticipation: Adaptive Graph Learning From Spatial Information for Surgical Workflow Anticipation
Francis Xiatian Zhang, Jingjing Deng, Robert Lieck and Hubert P. H. Shum
IEEE Transactions on Medical Robotics and Bionics 2024 |
paper |
arXiv |
code Contribution: Engineered an adaptive spatial graph learner that updates tool adjacencies on-the-fly to forecast surgical workflow transitions with interpretable attention.
Unraveling the brain dynamics of Depersonalization-Derealization Disorder: a dynamic functional network connectivity analysis
Sisi Zheng, Francis Xiatian Zhang*, Hubert P. H. Shum, Haozheng Zhang, Nan Song, Mingkang Song and Hongxiao Jia
BMC Psychiatry 2024 |
paper Contribution: Built neuroimaging processing, clustering, and statistical analysis pipelines to quantify brain dynamics.
Advancing healthcare practice and education via data sharing: demonstrating the utility of open data by training an artificial intelligence model to assess cardiopulmonary resuscitation skills
Merryn D. Constable, Francis Xiatian Zhang*, Tony Conner, Daniel Monk, Jason Rajsic, Claire Ford, Laura Jillian Park, Alan Platt, Debra Porteous, Lawrence Grierson and Hubert P. H. Shum
Advances in Health Sciences Education 2024 |
paper |
code |
dataset Contribution: Built a multi-view skill-rating ST-GCN pipeline with pose estimation to score CPR performance.
* = co-first author.
Experience
Research Associate at The University of Edinburgh, 11/2024-present.
Develops deep learning visual navigation prototypes for robotic bronchoscopy—airway segmentation, geometric graph construction, and failure-aware inference evaluated on recorded clinical video streams.
PI: Dr. Mohsen Khadem
Research Assistant at Durham University, 10/2023-11/2023; 7/2024-9/2024.
Delivered the Edge Computing and Analytics 2.0 course, teaching end-to-end model workflows from training to ONNX export and deployment through Python APIs with lightweight GUIs.
PI: Dr. Anish Jindal
Demonstrator at Durham University, 11/2021-6/2024.
Led weekly lab support for Computational Thinking, Data Science, Programming for Data Science, and Text Mining modules—debugging starter code, clarifying marking rubrics, and routing stubborn tooling issues to instructors.
Student Helper at Durham University, 08/2022-09/2022.
Handled visiting speaker transport, coach bookings, and leisure tours around Durham Castle for the 21st ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA 2022).
Research Assistant at Northumbria University, 04/2022-07/2022.
Set up multi-camera capture, collected nursing simulation footage, and led training of the automatic rating system that turned pose-estimation outputs into competency scores.
PI: Dr. Merryn Constable