Women in Technology Series
Gender-preserving Debiasing for Pre-trained Word Embeddings
2nd October 2019, 13:00
Ashton Lecture Theatre
Dr Danushka Bollegala
Department of Computer Science, University of Liverpool
Abstract
Word embeddings learnt from massive text collections have been shown to encode significant discriminative biases, such as gender, racial or ethnic biases, which in turn bias the downstream NLP applications that use those embeddings. Taking gender bias as a working example, we propose a debiasing method that preserves non-discriminative gender-related information while removing stereotypical, discriminative gender biases from pre-trained word embeddings.
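To make the notion of gender bias in embeddings concrete, the short Python sketch below probes a pre-trained embedding by projecting occupation words onto a "gender direction" (the difference between the vectors for "he" and "she"). This is only an illustrative probe, not the debiasing method presented in the talk; the GloVe model name and the word list are assumptions chosen for the example.

# Illustrative probe of gender bias in pre-trained word embeddings.
# Not the speaker's method: the model ("glove-wiki-gigaword-50") and the
# occupation words are assumptions chosen purely for demonstration.
import numpy as np
import gensim.downloader as api

# Load a small pre-trained GloVe model (downloads on first use).
model = api.load("glove-wiki-gigaword-50")

# A simple "gender direction": the difference between "he" and "she".
gender_direction = model["he"] - model["she"]
gender_direction = gender_direction / np.linalg.norm(gender_direction)

for word in ["nurse", "engineer", "doctor", "receptionist"]:
    v = model[word] / np.linalg.norm(model[word])
    # Positive scores lean towards "he", negative towards "she".
    print(f"{word:>12s}  {float(v @ gender_direction):+.3f}")

Occupation words that lean strongly towards one end of this direction are a symptom of the stereotypical associations the talk is concerned with, whereas genuinely gendered words (e.g. kinship terms) carry gender information one may wish to preserve.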
Following the talk, the audience will have the opportunity to put questions to the speaker and to the members of the discussion panel.
The confirmed panel members are as follows:
· Prof. Simon Maskell (Electrical Engineering and Electronics)
· Dr Rebecca Davnall (Philosophy)
· Dr Zainab Hussain (Health Sciences), Chair of BAME Staff Network
Additional Materials