Journal Information
Artificial Intelligence and Autonomous Systems
https://www.elspub.com/journals/artificial-intelligence-and-autonomous-systems/home
Publisher: ELSP
ISSN: 2959-0744
Call For Papers
Scope

The journal Artificial Intelligence and Autonomous Systems (AIAS) is an online, multidisciplinary, open-access journal that provides a peer-reviewed forum for rigorous and rapid publication of the latest research findings and industrial applications in the contemporary fields of AI and autonomous systems. AIAS welcomes research articles on the theoretical, computational, cognitive, and empirical aspects of AI, autonomous systems, and their implementations.

The scope and topics of AIAS include but are not limited to:

    Theoretical foundations of AI
    Theoretical foundations of autonomous systems
    Autonomous AI
    Brain-inspired systems
    Autonomous medical devices and systems
    Autonomous vehicles
    Autonomous human-machine systems
    Autonomous function and behavior generation
    Interactive intelligent systems
    Autonomous decision making
    Autonomous machine learning theory
    Computer vision
    Autonomous robotics and control
    Language and semantic processing
    Data science
    AI control theory and optimization
    Networked and distributed systems
    AI-based computer security
    High-performance computing driven by AI
Special Issues
Special Issue on Trustworthy Multiagent Reinforcement Learning
Submission Date: 2026-01-31

Multiagent Reinforcement Learning (MARL) has emerged as a powerful paradigm for solving complex multiagent decision-making problems across domains including autonomous systems, robotics, smart grids, and financial trading. However, the widespread deployment of MARL systems in real-world applications is hindered by challenges to trustworthiness such as robustness, safety, fairness, and explainability. Traditional MARL algorithms are vulnerable to adversarial attacks, environmental uncertainties, and coordination failures, which can lead to unreliable and unsafe behavior. Moreover, issues such as biased policies, lack of interpretability, and inefficient reward mechanisms further impede their adoption in high-stakes applications. There is therefore an urgent need for trustworthy MARL frameworks and algorithms that ensure robustness against adversarial perturbations, enhance safety in critical environments, and promote fair and ethical decision-making among agents. This Special Issue aims to explore cutting-edge methodologies, theoretical advancements, and practical implementations that enhance the trustworthiness of MARL systems. We invite original research and survey papers on topics including, but not limited to:

    1. Adversarial attacks and defenses in MARL, including multi-dimensional perturbation models and robust policy learning.
    2. Risk-aware MARL, focusing on safe exploration, constraint satisfaction, and formal verification of policies.
    3. Fairness in MARL, including equitable policy design, bias mitigation, and reward allocation in cooperative and competitive settings.
    4. Explainability in MARL, with emphasis on interpretable decision-making and human-in-the-loop systems.
    5. Scalability and generalization of MARL algorithms across diverse environments and large-scale multiagent systems.
    6. Real-world applications of trustworthy MARL in domains such as robotics, healthcare, finance, and smart cities.
    7. Empirical evaluations, benchmarking, and theoretical analysis of robustness, safety, and fairness in MARL frameworks.
    8. Challenges and solutions in integrating large models into MARL.
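To make the robustness concern above concrete, the toy sketch below evaluates a fixed pair of cooperative linear policies under clean observations and under epsilon-bounded observation perturbations. The environment, the policies, the perturbation budget, and the reward function are all illustrative assumptions, not part of the call.

```python
# Toy illustration of topic 1: a fixed cooperative pair of linear policies is
# evaluated with clean observations and with epsilon-bounded (L-infinity)
# perturbations of each agent's observation. Everything here (environment,
# policies, perturbation model) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
W1 = W2 = 0.5 * np.eye(2)  # each agent contributes half of the desired joint action
EPS = 0.3                  # perturbation budget per observation coordinate

def team_reward(state, obs1, obs2):
    # Cooperative reward: the joint action should reconstruct the state.
    joint_action = W1 @ obs1 + W2 @ obs2
    return -float(np.sum((joint_action - state) ** 2))

clean, random_attack, worst_attack = [], [], []
for _ in range(1000):
    s = rng.normal(size=2)
    clean.append(team_reward(s, s, s))

    # Random perturbation drawn from the epsilon ball.
    d1 = rng.uniform(-EPS, EPS, size=2)
    d2 = rng.uniform(-EPS, EPS, size=2)
    random_attack.append(team_reward(s, s + d1, s + d2))

    # Worst case for this quadratic objective: both agents pushed with the same
    # maximal sign pattern, so their errors add instead of cancelling.
    d = EPS * np.ones(2)
    worst_attack.append(team_reward(s, s + d, s + d))

print("mean return, clean observations:      %.4f" % np.mean(clean))
print("mean return, random eps-perturbation: %.4f" % np.mean(random_attack))
print("mean return, worst-case perturbation: %.4f" % np.mean(worst_attack))
```

In this framing, robust policy learning amounts to training policies whose return under such bounded perturbations stays close to the clean return.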
Special Issue on Federated Learning for Secure and Privacy-Preserving Intelligent Systems
Submission Date: 2026-08-31

The rapid proliferation of intelligent systems across healthcare, finance, transportation, and industrial automation has generated unprecedented volumes of sensitive data. While these datasets fuel the development of advanced machine learning models, traditional centralized learning paradigms raise significant concerns regarding privacy, data security, and regulatory compliance. Federated Learning (FL) has emerged as a transformative paradigm that enables collaborative model training across distributed devices or institutions without sharing raw data. By keeping data localized while sharing only model updates, FL offers a promising path toward privacy-preserving intelligence, secure decision-making, and decentralized AI. Recent advances in FL extend its potential beyond basic collaborative learning: novel algorithms now address communication efficiency, heterogeneity of data distributions, robustness against adversarial attacks, and formal privacy guarantees through differential privacy and secure multi-party computation. These developments are reshaping the landscape of intelligent systems, enabling scalable deployment of AI in sensitive domains while maintaining regulatory compliance and user trust.

This special issue aims to consolidate cutting-edge research, innovative methodologies, and real-world applications of federated learning in the context of secure and privacy-aware intelligent systems. We invite contributions that not only advance theoretical understanding but also demonstrate practical impact in deploying FL in real-world scenarios. Topics of interest include, but are not limited to:

    Federated learning algorithms for heterogeneous and non-i.i.d. data
    Privacy-preserving techniques in FL, including differential privacy and homomorphic encryption
    Secure aggregation, blockchain-enabled FL, and adversarially robust FL
    Communication-efficient and scalable FL for edge and IoT devices
    Federated learning in healthcare, finance, smart cities, and industrial systems
    Model personalization and transfer learning in federated settings
    Benchmarking, evaluation metrics, and empirical studies of FL under security and privacy constraints
    Interdisciplinary approaches combining FL with reinforcement learning, computer vision, NLP, or multimodal AI

We particularly encourage submissions that bridge theoretical innovation and practical deployment, demonstrating how federated learning can enable secure, trustworthy, and privacy-respecting intelligent systems across diverse application domains.

The journal is preparing for inclusion in major indexing services (e.g., Scopus, SCI), and early publications will be included automatically once indexing is granted. Authors benefit from waived article processing charges, and early participation as a founding contributor helps establish the journal's impact from its inception.
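As a minimal sketch of the FL paradigm described above (local training, sharing only model updates, server-side averaging), the toy example below runs a FedAvg-style loop on a synthetic linear-regression task. The clients, data, model, and hyperparameters are illustrative assumptions, not a reference implementation.

```python
# Minimal FedAvg-style sketch (illustrative only): each client trains a linear
# model on its local data and shares only its weights; the server averages
# them. Data, model, and hyperparameters are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client(n):
    # Local data never leaves the client; only model weights are exchanged.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client(n) for n in (30, 50, 80)]  # non-uniform client sizes

def local_update(w, X, y, lr=0.05, epochs=5):
    # A few steps of local gradient descent on mean-squared error.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for rnd in range(20):  # communication rounds
    local_ws, sizes = [], []
    for X, y in clients:
        local_ws.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    # Server: size-weighted average of client models (FedAvg aggregation).
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("estimated weights:", w_global, "true weights:", true_w)
```

A real deployment would layer on the mechanisms listed above, such as secure aggregation, differential privacy, and communication-efficient updates, rather than exchanging plain weights.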
Related Conferences