Call for Papers: AISVH 2025

Conference Overview

The Annual International Symposium on AI Security, Vision & Hardware (AISVH 2025) will be organized by the College of Computer Science, Chongqing University. It will be held in Chongqing, P.R. China, beginning on May 15, 2025. As AI technologies become deeply embedded in security-critical systems, challenges such as adversarial attacks, model privacy, and trustworthy hardware demand urgent attention. The conference provides a focused platform for researchers and practitioners to exchange cutting-edge advances in AI security, secure computer vision, and resilient hardware design. Topics include secure and robust learning, adversarial defenses, privacy-preserving vision systems, hardware-based AI protection, and more. Proceedings will be published in Springer's Communications in Computer and Information Science (CCIS), indexed by EI.

Author Instructions:

Authors are invited to submit original papers written in English. Submissions must not have appeared in, or be concurrently submitted to, other conferences with proceedings or journals. Submissions must be anonymous, with no author names, affiliations, acknowledgements, or obvious self-references. Original contributions may be up to 16 pages in length (single column), excluding appendices and bibliography, and up to 20 pages in total, using a font of at least 11 points and reasonable margins. Authors are strongly encouraged to prepare their papers in the Springer LNCS format: http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0. Only PDF files will be accepted. If a paper is accepted, one of the authors is expected to present it at the conference. Papers must be submitted through https://easychair.org/conferences/?conf=AISVH2025.

Areas of Interest:

  • Adversarial machine learning
  • Robustness and generalization in AI systems
  • Secure and explainable AI
  • AI model watermarking and fingerprinting
  • Backdoor detection and defense
  • Federated learning security and privacy
  • Applied cryptography
  • Secure computer vision systems
  • Privacy-preserving vision and sensing
  • Biometric security and anti-spoofing
  • Edge AI and secure model deployment
  • Hardware-based AI protection
  • Trusted execution environments (TEE)
  • Secure AI accelerator design
  • Side-channel attacks on AI hardware
  • AI hardware verification and validation
  • Secure multi-modal perception
  • Sensor spoofing and countermeasures
  • Data poisoning and model inversion attacks
  • Differential privacy in AI
  • Vision-based authentication and surveillance
  • Secure autonomous systems
  • Cryptographic protocols for AI
  • AI security benchmarks and evaluation
  • AI governance and ethical risk
  • Other topics related to AI security, vision, and hardware