Feature Necessity & Relevancy in ML Classifier Explanations
dc.contributor.author | Huang, Xuanxiang | |
dc.contributor.author | Cooper, Martin C. | |
dc.contributor.author | Morgado, Antonio | |
dc.contributor.author | Planes Cid, Jordi | |
dc.contributor.author | Marques-Silva, Joao | |
dc.date.accessioned | 2023-11-15T09:12:56Z | |
dc.date.available | 2023-11-15T09:12:56Z | |
dc.date.issued | 2023 | |
dc.description.abstract | Given a machine learning (ML) model and a prediction, explanations can be defined as sets of features that are sufficient for the prediction. In some applications, besides asking for an explanation, it is also critical to understand whether a sensitive feature can occur in some explanation, or whether a non-interesting feature must occur in all explanations. This paper starts by relating these queries, respectively, to the problems of relevancy and necessity in logic-based abduction. The paper then proves membership and hardness results for several families of ML classifiers. Afterwards, the paper proposes concrete algorithms for two classes of classifiers. The experimental results confirm the scalability of the proposed algorithms. | |
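A minimal brute-force sketch of the notions referenced in the abstract, assuming a toy Boolean classifier kappa, feature set FEATURES and instance v (all illustrative names, not taken from the paper); it does not implement the paper's algorithms. A (weak) explanation is a set of features whose fixed values are sufficient for the prediction; a feature is necessary if it occurs in every subset-minimal explanation, and relevant if it occurs in at least one.

from itertools import chain, combinations, product

FEATURES = [0, 1, 2]

def kappa(x):
    # Toy Boolean classifier (an assumption for illustration):
    # predicts 1 iff feature 0 is 1 and at least one of features 1, 2 is 1.
    return int(x[0] == 1 and (x[1] == 1 or x[2] == 1))

def is_weak_explanation(subset, v):
    # A set of features is a weak explanation if fixing those features to their
    # values in v forces the prediction kappa(v) for every completion of the rest.
    target = kappa(v)
    free = [i for i in FEATURES if i not in subset]
    for vals in product([0, 1], repeat=len(free)):
        x = list(v)
        for i, b in zip(free, vals):
            x[i] = b
        if kappa(x) != target:
            return False
    return True

def minimal_explanations(v):
    # Brute-force enumeration of subset-minimal explanations.
    all_subsets = chain.from_iterable(
        combinations(FEATURES, r) for r in range(len(FEATURES) + 1))
    weak = [set(s) for s in all_subsets if is_weak_explanation(set(s), v)]
    return [s for s in weak if not any(w < s for w in weak)]

v = (1, 1, 1)                       # instance being explained; kappa(v) == 1
axps = minimal_explanations(v)      # [{0, 1}, {0, 2}]
necessary = [i for i in FEATURES if all(i in s for s in axps)]  # in all explanations -> [0]
relevant = [i for i in FEATURES if any(i in s for s in axps)]   # in some explanation -> [0, 1, 2]
print(axps, necessary, relevant)

In this toy run, feature 0 is necessary, while features 1 and 2 are relevant but not necessary, which is precisely the distinction the necessity and relevancy queries probe.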
dc.description.sponsorship | This work was supported by the AI Interdisciplinary Institute ANITI, funded by the French program “Investing for the Future – PIA3” under Grant agreement no. ANR-19-PI3A-0004, by the H2020-ICT38 project COALA “Cognitive Assisted agile manufacturing for a Labor force supported by trustworthy Artificial intelligence”, by the Spanish Ministry of Science and Innovation (MICIN) under project PID2019-111544GB-C22, and by a María Zambrano fellowship and a Requalification fellowship financed by the Ministerio de Universidades of Spain and by the European Union – NextGenerationEU. | |
dc.identifier.doi | https://doi.org/10.1007/978-3-031-30823-9_9 | |
dc.identifier.isbn | 978-3-031-30822-2 | |
dc.identifier.isbn | 978-3-031-30823-9 | |
dc.identifier.uri | https://repositori.udl.cat/handle/10459.1/464532 | |
dc.language.iso | eng | |
dc.publisher | Springer | |
dc.relation | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2019-111544GB-C22/ES/SISTEMAS DE INFERENCIA PARA INFORMACION INCONSISTENTE: ANALISIS ARGUMENTATIVO/ | |
dc.relation.isformatof | Reproduction of the document published at https://doi.org/10.1007/978-3-031-30823-9_9 | |
dc.relation.ispartof | Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2023. Lecture Notes in Computer Science, vol 13993. Sankaranarayanan, S., Sharygina, N. (eds.). Cham: Springer, 2023. p. 167-186 | |
dc.rights | cc-by (c) Xuanxiang Huang et al., 2023 | |
dc.rights | Attribution 4.0 International | * |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | * |
dc.subject | Formal Explainability | |
dc.subject | Abduction | |
dc.subject | Abstraction Refinement | |
dc.title | Feature Necessity & Relevancy in ML Classifier Explanations | |
dc.type | info:eu-repo/semantics/bookPart | |
dc.type.version | info:eu-repo/semantics/publishedVersion |