Built-in Debiasing Methods

The currently supported bias mitigation methods include:

Learning Fair Representations

Adversarial Training

Li, Yitong, Timothy Baldwin and Trevor Cohn (2018) [Towards Robust and Privacy-preserving Text Representations](https://aclanthology.org/P18-2005/), ACL 2018.  

Elazar, Yanai and Yoav Goldberg (2018) [Adversarial Removal of Demographic Attributes from Text Data](https://aclanthology.org/D18-1002/), EMNLP 2018.

Wang, Tianlu, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang and Vicente Ordóñez (2019) [Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations](https://arxiv.org/abs/1811.08489), ICCV 2019.  

Zhao, Han and Geoff Gordon (2019) [Inherent Tradeoffs in Learning Fair Representations](https://papers.nips.cc/paper/2019/hash/b4189d9de0fb2b9cce090bd1a15e3420-Abstract.html), NeurIPS 2019.
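
The shared idea of these adversarial methods is to train a discriminator to predict the protected attribute from the model's hidden representation, while the main model is trained to make that prediction fail. Below is a minimal PyTorch sketch of the gradient-reversal variant; `GradReverse`, `main_clf`, and `adv_clf` are illustrative names, not this library's actual API.

```python
# A minimal sketch of adversarial debiasing via gradient reversal.
# Illustrative names only; not this library's actual API.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None

encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())  # shared encoder
main_clf = nn.Linear(128, 2)   # main task head (e.g., sentiment)
adv_clf = nn.Linear(128, 2)    # adversary predicting the protected attribute

x = torch.randn(32, 300)            # toy batch of input features
y = torch.randint(0, 2, (32,))      # main-task labels
g = torch.randint(0, 2, (32,))      # protected-attribute labels

h = encoder(x)
task_loss = nn.functional.cross_entropy(main_clf(h), y)
# The adversary sees reversed gradients, so the encoder is pushed to
# *remove* information about g while the adversary tries to recover it.
adv_loss = nn.functional.cross_entropy(adv_clf(GradReverse.apply(h, 1.0)), g)
(task_loss + adv_loss).backward()
```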

Diverse Adversarial Training

Han, Xudong, Timothy Baldwin and Trevor Cohn (2021) [Diverse Adversaries for Mitigating Bias in Training](https://aclanthology.org/2021.eacl-main.239/), EACL 2021.
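
Diverse adversarial training strengthens the adversary by using several sub-discriminators and encouraging them to capture different bias directions. The sketch below is a rough, assumption-laden simplification of the paper's difference loss, not its exact formulation:

```python
# A minimal sketch of a diversity penalty over several adversary heads:
# penalise overlap between their hidden outputs so each captures a
# different bias direction. Illustrative only.
import torch
import torch.nn as nn

adversaries = nn.ModuleList([nn.Linear(128, 32) for _ in range(3)])

def diversity_penalty(h):
    """Sum of squared dot products between the adversaries' hidden outputs."""
    hs = [torch.tanh(adv(h)) for adv in adversaries]
    penalty = 0.0
    for i in range(len(hs)):
        for j in range(i + 1, len(hs)):
            penalty = penalty + (hs[i] * hs[j]).sum(dim=1).pow(2).mean()
    return penalty

h = torch.randn(8, 128)
print(diversity_penalty(h))
```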

Decoupled Adversarial Training

Han, Xudong, Timothy Baldwin and Trevor Cohn (2021) [Decoupling Adversarial Training for Fair NLP](https://aclanthology.org/2021.findings-acl.41/), Findings of ACL 2021.

Towards Equal Opportunity Fairness through Adversarial Learning

Han, Xudong, Timothy Baldwin and Trevor Cohn (2022) [Towards Equal Opportunity Fairness through Adversarial Learning](https://arxiv.org/abs/2203.06317), CoRR abs/2203.06317.

INLP

Ravfogel, Shauli, Yanai Elazar, Hila Gonen, Michael Twiton and Yoav Goldberg (2020) [Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection](https://aclanthology.org/2020.acl-main.647.pdf), ACL 2020.
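
INLP removes protected information from fixed representations by repeatedly training a linear probe for the protected attribute and projecting the representations onto that probe's nullspace. A minimal numpy/scikit-learn sketch, with illustrative variable names rather than this library's implementation:

```python
# A minimal sketch of Iterative Nullspace Projection (INLP) over frozen
# representations X and protected labels g. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def nullspace_projection(w):
    """Projection matrix onto the nullspace of a (1, d) probe direction."""
    w = w / np.linalg.norm(w)
    return np.eye(w.shape[1]) - w.T @ w

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))      # frozen representations
g = rng.integers(0, 2, size=1000)    # protected attribute

P = np.eye(X.shape[1])
for _ in range(10):                  # number of INLP iterations
    probe = LogisticRegression(max_iter=1000).fit(X @ P, g)
    # Remove the direction the probe uses to predict g, then retrain
    # the probe on the projected data.
    P = P @ nullspace_projection(probe.coef_)

X_debiased = X @ P                   # g is now harder to recover linearly
```

Note that the guarantee is against *linear* probes: a nonlinear classifier may still recover some protected information.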

Contrastive Learning for Fair Representation

Shen, Aili, Xudong Han, Trevor Cohn, Timothy Baldwin and Lea Frermann (2021) [Contrastive Learning for Fair Representations](https://arxiv.org/abs/2109.10645), CoRR abs/2109.10645.
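
One way to realise contrastive learning for fairness is to pull together representations that share a task label but come from different protected groups. The sketch below is one such hypothetical formulation, not the paper's exact objective:

```python
# A minimal sketch of a cross-group contrastive term: positives are pairs
# with the same task label but different protected groups. Illustrative only.
import torch
import torch.nn.functional as F

def cross_group_contrastive(h, y, g, tau=0.1):
    h = F.normalize(h, dim=-1)
    sim = h @ h.t() / tau                            # pairwise similarities
    n = h.size(0)
    mask_self = torch.eye(n, dtype=torch.bool)
    # positives: same task label, different protected group
    pos = (y[:, None] == y[None, :]) & (g[:, None] != g[None, :]) & ~mask_self
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(mask_self, -1e9), dim=1, keepdim=True)
    # average log-probability over each anchor's positives
    per_anchor = (log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return -(per_anchor[pos.any(1)]).mean()  # assumes the batch has positives

h = torch.randn(16, 64, requires_grad=True)
y = torch.randint(0, 2, (16,))
g = torch.randint(0, 2, (16,))
cross_group_contrastive(h, y, g).backward()
```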

Balanced Training

Instance Reweighting and Resampling

Han, Xudong, Timothy Baldwin and Trevor Cohn (2021) [Balancing out Bias: Achieving Fairness Through Training Reweighting](https://arxiv.org/abs/2109.08253), CoRR abs/2109.08253.  

Lahoti, Preethi, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang and Ed Chi (2020) [Fairness without Demographics through Adversarially Reweighted Learning](https://papers.nips.cc/paper/2020/hash/07fc15c9d169ee48573edd749d25945d-Abstract.html), NeurIPS 2020.  

Wang, Tianlu, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang and Vicente Ordóñez (2019) [Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations](https://arxiv.org/abs/1811.08489), ICCV 2019.  
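
The reweighting idea can be summarised as weighting each training instance inversely to the frequency of its joint (label, protected-group) cell, so that no cell dominates the loss. A minimal sketch, with illustrative names and binary labels and groups assumed:

```python
# A minimal sketch of inverse-frequency instance reweighting over joint
# (label, protected-group) cells. Illustrative, not this library's API.
import torch
import torch.nn.functional as F

def joint_balanced_weights(y, g):
    """Weight each instance by 1 / count(y, g), normalised to mean 1."""
    joint = y * 2 + g                               # assumes binary y and g
    counts = torch.bincount(joint, minlength=4).float()
    w = 1.0 / counts[joint]
    return w / w.mean()

logits = torch.randn(8, 2, requires_grad=True)
y = torch.randint(0, 2, (8,))
g = torch.randint(0, 2, (8,))

per_example = F.cross_entropy(logits, y, reduction="none")
loss = (joint_balanced_weights(y, g) * per_example).mean()
loss.backward()
```

Resampling applies the same correction at the data level, drawing instances in proportion to these weights instead of scaling the loss.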

FairBatch: dynamic batch resampling during training

Roh, Yuji, Kangwook Lee, Steven Euijong Whang and Changho Suh (2021) [FairBatch: Batch Selection for Model Fairness](https://arxiv.org/abs/2012.01696), ICLR 2021.
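
FairBatch keeps the model and loss unchanged and instead adapts the sampling distribution between epochs, shifting batch composition toward the group that currently incurs the higher loss. A simplified sketch of that update, with hypothetical names and binary groups assumed:

```python
# A minimal sketch of a FairBatch-style sampling-rate update: between
# epochs, move sampling mass toward the worse-off protected group.
# Illustrative only; see the paper for the real algorithm.
import torch
from torch.utils.data import WeightedRandomSampler

ALPHA = 0.005                  # step size for the sampling-rate update

def epoch_sampler(group_mean_loss, g, lambda0):
    """Return an updated group-0 sampling rate and a sampler for the next epoch."""
    if group_mean_loss[0] > group_mean_loss[1]:
        lambda0 = min(1.0, lambda0 + ALPHA)
    else:
        lambda0 = max(0.0, lambda0 - ALPHA)
    weights = torch.zeros(len(g))
    weights[g == 0] = lambda0          # per-example sampling weights
    weights[g == 1] = 1.0 - lambda0
    return lambda0, WeightedRandomSampler(weights, num_samples=len(g))

g = torch.randint(0, 2, (100,))
lambda0, sampler = epoch_sampler({0: 0.9, 1: 0.4}, g, lambda0=0.5)
```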

Minimizing Group Difference for Equal Opportunity Fairness

Shen, Aili, Xudong Han, Trevor Cohn, Timothy Baldwin and Lea Frermann (2022) Connecting Loss Difference with Equal Opportunity for Fair Models.
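
The underlying objective can be sketched as a penalty on the gap between each protected group's mean loss and the overall mean loss, computed per target class. A rough PyTorch rendering, not the paper's exact formulation:

```python
# A minimal sketch of a group loss-difference penalty for equal
# opportunity: per class, penalise each group's deviation from the
# overall mean loss. Illustrative only.
import torch
import torch.nn.functional as F

def group_difference_penalty(per_example_loss, y, g):
    penalty = 0.0
    for c in torch.unique(y):
        mask_c = y == c
        overall = per_example_loss[mask_c].mean()
        for a in torch.unique(g):
            mask = mask_c & (g == a)
            if mask.any():
                penalty = penalty + (per_example_loss[mask].mean() - overall).abs()
    return penalty

logits = torch.randn(16, 2, requires_grad=True)
y = torch.randint(0, 2, (16,))
g = torch.randint(0, 2, (16,))
per_example = F.cross_entropy(logits, y, reduction="none")
loss = per_example.mean() + 0.1 * group_difference_penalty(per_example, y, g)
loss.backward()
```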

Incorporating Demographic Factors

Representation Augmentation

Han, Xudong, Timothy Baldwin and Trevor Cohn (2021) [Balancing out Bias: Achieving Fairness Through Training Reweighting](https://arxiv.org/abs/2109.08253), CoRR abs/2109.08253.
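
In this family of methods, demographic information is made explicit to the model rather than removed, e.g., by concatenating a learned embedding of the protected attribute to the task representation. A minimal sketch with illustrative names:

```python
# A minimal sketch of representation augmentation: condition the task
# classifier on a learned demographic embedding. Illustrative names,
# not this library's actual API.
import torch
import torch.nn as nn

class AugmentedClassifier(nn.Module):
    def __init__(self, hidden_dim=128, n_groups=2, n_classes=2, demog_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(300, hidden_dim), nn.ReLU())
        self.demog_emb = nn.Embedding(n_groups, demog_dim)  # demographic factor
        self.head = nn.Linear(hidden_dim + demog_dim, n_classes)

    def forward(self, x, g):
        h = self.encoder(x)
        # Augment the task representation with the group embedding.
        return self.head(torch.cat([h, self.demog_emb(g)], dim=-1))

model = AugmentedClassifier()
x = torch.randn(4, 300)
g = torch.randint(0, 2, (4,))
logits = model(x, g)
```

Note this requires the protected attribute at inference time, in contrast to the removal-based methods above.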