[Event at CIG] [ICLP Workshop] Deadline Approaching: Machine Ethics and Explainability

Fabio Aurelio D'Asaro fabio.dasaro at unimi.it
Fri Jul 23 18:20:04 CEST 2021


=== Apologies for multiple postings ===

Dear all,

the deadline for the 1st Workshop on Machine Ethics and Explainability (MEandE 2021), co-located with ICLP 2021, is approaching (August 2). Don’t miss out! Please find more information on the website (https://sites.google.com/view/meande2021) and in the CfP below.

Best wishes,

Fabio D’Asaro

---

CALL FOR PAPERS

MEandE-LP 2021
1st Workshop on Machine Ethics and Explainability - The Role of Logic Programming
https://sites.google.com/view/meande2021

September 20-21, 2021 (ICLP Workshop)
Affiliated with the 37th International Conference on Logic Programming,
a fully virtual event hosted by the Department of Computer Science of the University of Porto, September 20-27, 2021

INVITED SPEAKERS

Luis Moniz Pereira, New University of Lisbon, Portugal.
Francesca Toni, Imperial College London, UK.

Titles of the talks will be announced soon.
                                                                                                         
AIMS AND SCOPE

Machine Ethics and Explainability are two topics that have attracted a great deal of attention and concern in recent years. This global concern has manifested in many initiatives at different levels. The two topics are intrinsically related: it is not enough for an autonomous agent to behave ethically; it should also be able to explain its behavior. In other words, there is a need for both an ethical component and an explanation component. Furthermore, explainable behavior is clearly not acceptable if it is not ethical (i.e., if it does not follow the ethical norms of society).

In many application domains, especially when human lives are involved and ethical decisions must be made, users need to understand the system's recommendations well enough to explain the reasons for their decisions to other people. One of the ultimate goals of explainable AI systems is an efficient mapping between explainability and causality. Explainability is the system's ability to explain itself in natural language to the average user, by being able to say, "I generated this output because x, y, z". In other words, the ability of a system to state the causes behind its decisions is central to explainability.

However, when critical systems involving ethical decisions are concerned, is it enough to explain the system's decisions to the human user? Or do we need to go beyond the boundaries of the predictive model to observe cause and effect within the system?

There is a large body of research on explainability that tries to explain the output of some black-box model, following different approaches. Some of these approaches try to generate logical rules as explanations. It is worth noting, however, that most methods for generating post-hoc explanations are themselves based on statistical tools, which are subject to uncertainty and error. Many post-hoc explainability techniques approximate deep-learning black-box models with simpler interpretable models that can be inspected to explain the black-box models' behavior. However, these approximate models are not provably faithful to the original model, as there are always trade-offs between explainability and fidelity.

On the other hand, a substantial body of research has used inherently interpretable approaches to design and implement ethical autonomous agents. Most of these are based on logic programming, ranging from deontic logics to non-monotonic logics and other formalisms.

Logic Programming has great potential in these two emerging areas of research, as logic rules are easily comprehensible by humans and support causal reasoning, which is crucial for ethical decision making.

However, despite the significant interest that machine ethics has received over the last decade, mainly from ethicists and artificial intelligence experts, the question "are artificial moral agents possible?" remains open. There have been several attempts at implementing ethical decision making in intelligent autonomous agents using different approaches, but so far no fully descriptive and widely accepted model of moral judgment and decision making exists, and none of the developed solutions seems fully convincing as a source of trusted moral behavior. The same goes for explainability: despite the global concern about the explainability of autonomous agents' behavior, existing approaches do not yet seem satisfactory. Many questions remain open in these two exciting, expanding fields.

This workshop aims to bring together researchers working on all aspects of machine ethics and explainability, including theoretical work, system implementations, and applications. The co-location of this workshop with ICLP is also intended to encourage collaboration with researchers from different fields of logic programming. The workshop provides a forum to facilitate discussion of these topics and a productive exchange of ideas.

Topics of interest include (but are not limited to):    

·  New approaches to programming machine ethics;
·  New approaches to explainability of black-box models;
·  Evaluation and comparison of existing approaches;
·  Approaches to verification of ethical behavior;
·  Logic programming applications in machine ethics;
·  Integrating logic programming with methods for machine ethics;
·  Integrating logic programming with methods for explainability.
                                                                                                                                                                  
SUBMISSIONS

The workshop invites two types of submissions:

·  original papers describing original research;
·  non-original papers already published in formal proceedings or journals.

Original papers must be formatted using the Springer LNCS style available here:

·  regular papers must not exceed 14 pages (including references);
·  extended abstracts must not exceed 4 pages (excluding references).

Authors are requested to clearly specify whether their submission is original or not with a footnote on the first page. Authors are invited to submit their manuscripts in PDF via the EasyChair system at the link: https://easychair.org/conferences/?conf=meandelp2021

IMPORTANT DATES

Paper submission deadline:    August 2, 2021
Author notification:          August 18, 2021
Camera-ready articles due:    August 25, 2021

PROCEEDINGS

Authors of all accepted original contributions can opt to publish their work in formal proceedings. Accepted non-original contributions will be given visibility on the workshop website, including a link to the original publication, if already published.

Accepted original papers will be published (details will be added soon).

LOCATION

Fully Virtual

WORKSHOP CHAIRS

Abeer Dyoub, DISIM, University of L'Aquila.

Fabio Aurelio D’Asaro, Logic Group, Department of Philosophy, University of Milan.

Ari Saptawijaya, Faculty of Computer Science, Universitas Indonesia.

PROGRAM COMMITTEE

Giuseppe Primiero (University of Milan)
Marija Slavkovik (University of Bergen)
Matteo Spezialetti (University of L'Aquila)
Krysia Broda (Imperial College London)
Stefania Costantini (University of L'Aquila)
Francesca Lisi (University of Bari)
Luis Moniz Pereira (New University of Lisbon)
Giovanni De Gasperis (University of L'Aquila)
Paolo Baldi (University of Milan)
Fabrizio Riguzzi (University of Ferrara)
Alberto Termine (University of Milan)

-- 
You received this message because you are subscribed to the Google group "AIxIA mailing list".
To send a message to this group, send an email to aixia at aixia.it
Only subscribers can send messages to this group.
To unsubscribe from this group, send an email to aixia+unsubscribe at aixia.it
To subscribe, visit
http://groups.google.com/a/aixia.it/group/aixia
You can subscribe with your aixia.it or gmail.com account