[Event at CIG] [Meetings] [CFP] First International Workshop on LLMs and KRR for Trustworthy AI (LMKR-TrustAI 2025)
Yang Song
yang.song1 at unsw.edu.au
Fri Jul 18 07:22:47 CEST 2025
First International Workshop on LLMs and KRR for Trustworthy AI (LMKR-TrustAI 2025)
Held in conjunction with KR 2025 <https://kr.org/KR2025/>
Half day, 11-13 November 2025 (TBD)
Paper submission: August 4, 2025
Workshop web site: https://sites.google.com/view/lmkr-trustai-2025/home
Call for Papers
Overview
The emergence of large language models (LLMs) has created significant opportunities for developing scalable and generalisable AI applications. Compared to knowledge representation and reasoning (KRR) methods, LLMs demonstrate a remarkable capability to encode linguistic knowledge, enabling them to generate human-like text and generalise across diverse tasks with minimal domain-specific training. However, LLMs’ reliance on statistical patterns rather than explicit reasoning mechanisms raises concerns about factual consistency, logical coherence, vulnerability to hallucination, bias, and misalignment with human values. This workshop focuses on an emerging research paradigm: the integration of LLMs with KRR techniques to enhance the transparency, verifiability and robustness of AI systems. We explore approaches that incorporate structured knowledge (ontologies, knowledge graphs, symbolic logic, etc.), neuro-symbolic methods, formal reasoning frameworks and explainability techniques to improve the trustworthiness of LLM-driven decision-making.
The workshop will feature invited talks from leading experts, research paper presentations, and interactive discussions on bridging probabilistic learning with symbolic reasoning for trustworthy AI. By bringing together researchers from KRR and deep learning, this workshop aims to foster new collaborations and technical insights to develop AI systems that are both powerful and trustworthy.
Topics of interest include but are not limited to:
* Knowledge-grounded language models
* Hybrid neuro-symbolic architectures
* Reasoning-aware prompt engineering
* Logical consistency checks in LLM outputs
* Uncertainty and automated verification
* Causality and reasoning
* Explainability and controllability
* Commonsense reasoning integrating LLMs and KRR
* Reinforcement learning for ensuring safety and trustworthiness
* Alignment and preference-guided LLMs
* Multi-agent AI frameworks
* Benchmarks, datasets and quantitative evaluation metrics
* Evaluation and user studies in real-world applications
Organising Committee
* Maurice Pagnucco, UNSW, Australia
* Yang Song, UNSW, Australia
Program Committee
* Professor Tony Cohn, University of Leeds, UK
* Dr Mingming Gong, University of Melbourne, Australia
* Professor Gerhard Lakemeyer, RWTH Aachen, Germany
* Professor Fangzhen Lin, HKUST, China
* Professor Tim Miller, University of Queensland, Australia
* Dr Nina Narodytska, VMware Research, USA
* Associate Professor Abhaya Nayak, Macquarie University, Australia
* Professor Ken Satoh, National Institute of Informatics, Japan
* Professor Michael Thielscher, University of New South Wales, Australia
* Professor Guy Van den Broeck, UCLA, USA
Important Dates
Paper submission: August 4, 2025
Paper notification: August 25, 2025
Workshop date and time: half day, 11-13 November 2025 (exact date TBD)
Submissions
Contributions may be regular papers (up to 9 pages) or short/position papers (up to 5 pages); page limits include all material. Submissions should follow the KR 2025 formatting guidelines and be submitted through the submission page below. Each submission will be reviewed by at least two program committee members. We also welcome submissions that have recently been accepted at top AI conferences. At least one author of each accepted paper will be required to attend the workshop to present the contribution.
Submission link: https://openreview.net/group?id=kr.org/KR/2025/Workshop/LMKR-TrustAI
Best regards,
Maurice Pagnucco, Yang Song
Organisers, KR 2025 Workshop on LLMs and KRR for Trustworthy AI