The adoption of machine learning and artificial intelligence techniques across various fields requires careful oversight to ensure their use is fair, non-discriminatory, and understandable to those affected. Two key areas have emerged to address these concerns: fairness and interpretability. Fairness focuses on detecting and preventing biases, especially those affecting vulnerable groups, in both data and models. Interpretability, on the other hand, aims to make model decisions understandable by assigning importance to the original variables, which in turn facilitates fairness analysis.
62-XX (primary)
Consider a scalar-on-function regression problem, where the goal is to predict a scalar response from a functional predictor. Several predictive models have been proposed in the Functional Data Analysis literature, but many of them are difficult to interpret because it is hard to identify the relevance of the functional predictors. In this work, we extend relevance measures based on the Shapley value from multivariate to functional predictors by adapting concepts from the theory of continuous games.
Joint work with Pedro Delicado.
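As background for the extension above, the classical Shapley value assigns to a predictor its average marginal contribution over all coalitions of predictors. A minimal statement of the multivariate formula (standard background, not the functional extension itself) is

\[
\phi_j(v) = \sum_{S \subseteq P \setminus \{j\}} \frac{|S|!\,(|P|-|S|-1)!}{|P|!}\,\bigl[v(S \cup \{j\}) - v(S)\bigr],
\]

where P is the set of predictors and v(S) is a value function, for instance the explained variance of a model fitted on the subset S. The functional setting replaces the finite set P with a continuum of players (the domain of the functional predictor), which is where concepts from continuous games enter.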
In an era where explainability in Deep Learning (DL) is crucial, this talk demonstrates how mathematics supports advances in DL, particularly in Convolutional Neural Networks (CNNs). The presentation is divided into two parts: a mathematical exploration of Fourier analysis and its application to image filtering, followed by a computational analysis using the Fast Fourier Transform (FFT) to optimize CNN training. This approach aims to improve efficiency, in line with the goals of "Green AI".
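The computational part of the talk above relies on the convolution theorem: convolution in the spatial domain corresponds to pointwise multiplication in the frequency domain, which is what makes FFT-based filtering attractive for large inputs. The following NumPy sketch (illustrative only, not the presenter's code) checks that identity on a toy image and filter:

import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))   # toy "image"
kernel = rng.standard_normal((5, 5))    # toy filter

# FFT route: pad the kernel to the image size, multiply the spectra pointwise,
# and transform back.  Cost is O(n^2 log n) instead of O(n^2 k^2).
kernel_padded = np.zeros_like(image)
kernel_padded[:5, :5] = kernel
fft_conv = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel_padded)))

# Reference: explicit circular convolution.
direct = np.zeros_like(image)
for i in range(64):
    for j in range(64):
        for a in range(5):
            for b in range(5):
                direct[i, j] += kernel[a, b] * image[(i - a) % 64, (j - b) % 64]

print(np.allclose(fft_conv, direct))  # True: both give the same circular convolution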
Network Scale-up Methods (NSUM) estimate the size of hidden populations through indirect surveys, using participants' aggregated data about their acquaintances. This study compares nine NSUM estimators through simulations, examining factors such as network structure, subpopulation distribution, sample size, and biases. The findings show that some lesser-used estimators excel under specific network configurations and biases, while the most common estimator is less sensitive to subpopulation configuration and recall error.
Joint work with Jose Aguilar, Juan Marcos Ramírez, David Rabanedo, Antonio Fernández Anta, and Rosa E. Lillo.
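For context on the abstract above, the simplest network scale-up estimator scales the proportion of hidden-population members among respondents' contacts up to the whole population. A minimal simulation sketch (illustrative; not necessarily one of the nine estimators compared in the study):

import numpy as np

rng = np.random.default_rng(1)
N = 100_000        # total population size, assumed known
n_resp = 500       # number of survey respondents

# Simulated indirect-survey answers: d_i is respondent i's network size,
# y_i is how many of those contacts belong to the hidden subpopulation.
degrees = rng.poisson(150, size=n_resp)
true_prevalence = 0.02
hidden_alters = rng.binomial(degrees, true_prevalence)

# Classic ratio estimator: N_hidden ~= N * (sum of y_i) / (sum of d_i)
n_hidden_hat = N * hidden_alters.sum() / degrees.sum()
print(round(n_hidden_hat))  # close to N * 0.02 = 2000 under this simulation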
Explaining feature importance in model predictions is key in Explainable AI (XAI). However, most methods focus on individual variables, overlooking the interactions that are common in real-world problems. This work compares extensions of SHAP values, a popular interpretability method, that account for interactions, alongside NN2Poly, an interpretability method specific to neural networks. Simulations under various settings compare local and global explanations, and metrics are proposed for ranking variables by importance.
Joint work with J. Alexandra Cifuentes, Rosa E. Lillo, and Iñaki Úcar.
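To make interaction attributions concrete, the sketch below computes exact Shapley values and a pairwise Shapley interaction index (Grabisch-Roubens form) for a toy three-feature model by enumerating all coalitions; it is an illustrative baseline, not the SHAP extensions or NN2Poly procedures compared in the talk.

from itertools import combinations
from math import factorial

# Toy model with an explicit interaction between features 0 and 1.
def model(z):
    return 2.0 * z[0] + 1.0 * z[1] + 3.0 * z[0] * z[1] + 0.5 * z[2]

x = (1.0, 1.0, 1.0)          # instance to explain
baseline = (0.0, 0.0, 0.0)   # reference values for "absent" features
n = 3

def value(subset):
    # Model output when only the features in `subset` take their observed values.
    return model([x[i] if i in subset else baseline[i] for i in range(n)])

def shapley(i):
    total, others = 0.0, [f for f in range(n) if f != i]
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += w * (value(set(S) | {i}) - value(set(S)))
    return total

def interaction(i, j):
    total, others = 0.0, [f for f in range(n) if f not in (i, j)]
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            w = factorial(len(S)) * factorial(n - len(S) - 2) / factorial(n - 1)
            d = (value(set(S) | {i, j}) - value(set(S) | {i})
                 - value(set(S) | {j}) + value(set(S)))
            total += w * d
    return total

print([round(shapley(i), 3) for i in range(n)])  # [3.5, 2.5, 0.5]
print(round(interaction(0, 1), 3))               # 3.0, the x0*x1 interaction term

The three Shapley values sum to model(x) - model(baseline) = 6.5, while the interaction index isolates the contribution of the x0*x1 term, which single-variable attributions spread across the two features.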
We address fair representation learning using fair Partial Least Squares (PLS) components, a technique commonly used in statistics for efficient data dimensionality reduction tailored for prediction. We introduce a novel method that integrates fairness constraints into the construction of PLS components, applicable in both linear and nonlinear cases using kernel embeddings. Our algorithm's effectiveness is demonstrated across various datasets.
Joint work with Elena M. De Diego, Paula Gordaliza and Jean-Michel Loubes.
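As a rough illustration of the idea above (a heuristic sketch, not the authors' algorithm), the first PLS direction maximizes the covariance of a linear component with the response; one simple "fair" variant removes from that direction the part aligned with the sensitive attribute, so the resulting component is uncorrelated with it:

import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 10
X = rng.standard_normal((n, p))
s = rng.integers(0, 2, size=n).astype(float)               # binary sensitive attribute
y = X @ rng.standard_normal(p) + 2.0 * s + rng.standard_normal(n)

Xc, yc, sc = X - X.mean(0), y - y.mean(), s - s.mean()

# Standard first PLS direction: proportional to X'y (maximal covariance with y).
w_pls = Xc.T @ yc
w_pls /= np.linalg.norm(w_pls)

# Heuristic "fair" direction: remove from X'y its projection onto X's,
# which forces the component scores to be uncorrelated with s.
g = Xc.T @ sc
v = Xc.T @ yc
w_fair = v - (g @ v) / (g @ g) * g
w_fair /= np.linalg.norm(w_fair)

for name, w in [("PLS", w_pls), ("fair PLS", w_fair)]:
    t = Xc @ w  # component scores
    print(name,
          "corr with s:", round(np.corrcoef(t, sc)[0, 1], 3),
          "corr with y:", round(np.corrcoef(t, yc)[0, 1], 3))

The kernel version described in the abstract presumably plays the same game in a reproducing kernel Hilbert space; this sketch only conveys the linear intuition.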
As machine learning's role in decision-making grows, concerns about fairness and bias have risen. Various fairness techniques address discrimination based on sensitive variables like race or gender, but little research combines these methods or considers multiple sensitive attributes simultaneously. This project explores the impact of combining fairness algorithms to enhance equity, offering insights for real-world applications like hiring and credit approval.
Joint work with Rosa Lillo, Arturo Pérez and Fabio Scielzo.
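The kind of measurement such a project needs can be sketched as follows, with hypothetical attribute names and a synthetic biased score (not the project's algorithms): demographic parity gaps computed per attribute and over their intersection.

import numpy as np

rng = np.random.default_rng(3)
n = 2000
gender = rng.integers(0, 2, size=n)                    # hypothetical sensitive attribute 1
race = rng.integers(0, 2, size=n)                      # hypothetical sensitive attribute 2
score = rng.random(n) + 0.10 * gender + 0.15 * race    # synthetic biased model score
y_hat = (score > 0.6).astype(int)                      # binary decision (e.g. hire / approve)

def parity_gap(decisions, group):
    # Absolute difference in positive-decision rates between the two groups.
    return abs(decisions[group == 1].mean() - decisions[group == 0].mean())

print("gender gap:", round(parity_gap(y_hat, gender), 3))
print("race gap:  ", round(parity_gap(y_hat, race), 3))

# The intersectional spread can exceed either marginal gap, which is why
# handling multiple sensitive attributes jointly matters.
inter = 2 * gender + race                              # four intersectional subgroups
rates = [y_hat[inter == g].mean() for g in range(4)]
print("intersectional spread:", round(max(rates) - min(rates), 3))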
Causal inference estimates the impact of a treatment on an outcome under strong assumptions. A key measure, the Average Treatment Effect, reflects the difference in expected outcomes for the same population when fully treated versus fully untreated. This framework also applies to auditing decisions involving sensitive attributes that should not influence outcomes. We introduce a generalized trimming method based on Maximum Mean Discrepancies, offering a flexible alternative to existing causal inference techniques.
Joint work with Eustasio del Barrio, Paula Gordaliza and Jean-Michel Loubes.
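For reference, with potential outcomes Y(1) and Y(0), the average treatment effect mentioned above is the standard estimand

\[
\mathrm{ATE} = \mathbb{E}[Y(1)] - \mathbb{E}[Y(0)],
\]

and the Maximum Mean Discrepancy between two covariate distributions P and Q is

\[
\mathrm{MMD}^2(P, Q) = \bigl\| \mathbb{E}_{X \sim P}[\varphi(X)] - \mathbb{E}_{X \sim Q}[\varphi(X)] \bigr\|_{\mathcal{H}}^2,
\]

where \varphi is the feature map of a reproducing kernel Hilbert space \mathcal{H}. These are standard definitions, not the proposed method itself: trimming discards units for which the treated and untreated covariate distributions are too dissimilar, and the MMD is one way to quantify that dissimilarity.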