Recommender systems in recruitment platforms involve two active sides, candidates and recruiters, each with distinct goals and preferences. Most recommendation methods address only one side of the problem, leading to potentially ineffective matches. We propose a two-sided fusion framework that jointly models candidate and recruiter preferences to enhance mutual matches between candidates and recruiters. We also propose a personalized two-sided fusion approach to enhance the fairness of job recommendations. Experiments on the XING recruitment dataset show that the proposed approach improves fairness and compatibility, demonstrating the benefits of incorporating two-sided preferences in fairness-aware recommendations.
Please cite this work:
Rus, C., Mansoury, M., Yates, A., & de Rijke, M. (2026). Joint modeling of candidate and recruiter preferences for fair two-sided job matching. In Proceedings of the 48th European Conference on Information Retrieval (ECIR).
Requires Python 3.8.
conda env create -f environment.yml
conda activate fairness_env
You can find the run script at /src/TSF/run_TSF.sh.
Experiments with the recommendation model were run with Librec-auto [1].
The TSF-ATT method uses the implementation provided by [2].
This repo will be updated with the full pipeline and the implementation for TSF-ATT.
python run_two_sided_fusion.py --fusion_type <fusion_type> --core "20"
fusion_type can take the following values:
- weighted_sum
- comb_max
- comb_min
- rrf
- isr
- borda_fuse
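As a rough illustration of how two of these strategies combine a candidate-side and a recruiter-side ranking, here is a minimal sketch. This is not the repo's implementation; the function names, toy scores, and the `k=60` constant are illustrative assumptions (the RRF constant follows common practice).

```python
def weighted_sum(cand_scores, rec_scores, alpha=0.7):
    """Fuse per-item scores: alpha * candidate-side + (1 - alpha) * recruiter-side."""
    items = set(cand_scores) | set(rec_scores)
    return {i: alpha * cand_scores.get(i, 0.0) + (1 - alpha) * rec_scores.get(i, 0.0)
            for i in items}

def rrf(cand_ranking, rec_ranking, k=60):
    """Reciprocal rank fusion over two ranked item lists."""
    scores = {}
    for ranking in (cand_ranking, rec_ranking):
        for rank, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    return scores

# Toy example: fuse candidate-side and recruiter-side scores for two jobs.
fused = weighted_sum({"j1": 0.9, "j2": 0.4}, {"j1": 0.2, "j2": 0.8}, alpha=0.7)
top = sorted(fused, key=fused.get, reverse=True)
```

With alpha = 0.7 the candidate side dominates, so `j1` is ranked first here even though the recruiter side prefers `j2`.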
python run_two_sided_fusion.py --fusion_type fair_fusion_optim --core "20" --settings <settings> --weight <w>
settings can take the following predefined values:
- settings_DGI_country
- settings_DUP_country
- settings_DGI_is_payed
- settings_DGI_DUP_country (optimization_obj = w * DGI(country) + (1-w) * DUP(country))
- settings_DGI_country_is_payed (optimization_obj = w * DGI(country) + (1-w) * DGI(is_payed))
You can define your own settings in run_two_sided_fusion.py as follows:
settings_<metric>_<attribute> = [
    {
        "fair_metrics": ["<metric>"],   # list of fairness metrics used during optimization
        "group_cols": ["<attribute>"]   # list of attributes used during optimization
    }
]
Next, register your setting in the settings_choices dict:
settings_choices = {
"settings_<metric>_<attribute>": settings_<metric>_<attribute>
}
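For example, a concrete custom setting could optimize DUP over the is_payed attribute (this particular combination is illustrative; the structure follows the template above):

```python
# Illustrative custom setting: single-metric optimization of DUP over is_payed.
settings_DUP_is_payed = [
    {
        "fair_metrics": ["DUP"],    # fairness metric used during optimization
        "group_cols": ["is_payed"]  # attribute used during optimization
    }
]

# Register the setting so it can be selected via --settings.
settings_choices = {
    "settings_DUP_is_payed": settings_DUP_is_payed,
}
```

It could then be selected via `--settings settings_DUP_is_payed` when running run_two_sided_fusion.py.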
w is the weight of the metrics when dual optimization is used, i.e., optimization_obj = w * metric_1 + (1-w) * metric_2. The code supports only dual weighted optimization; if you want to optimize for more metrics or attributes, the weight is set to 1.
Below is an example of how the job recommendation list changes under TSF - SUM (alpha = 0.7) for U_ID = 715804 in the top-10; this user's preference towards NON-DE jobs is 0.5. Original top-10:
2225861 -- NON-DE
1169080 -- DE
1416610 -- DE
1306469 -- DE
86578 -- DE (removed)
869007 -- DE
824200 -- DE (removed)
1821386 -- DE (removed)
160781 -- DE (removed)
1630349 -- DE
Proportion of NON-DE jobs: 0.1.
Top-10 after TSF - SUM:
2225861 -- NON-DE
1416610 -- DE
2280530 -- NON-DE (added)
1306469 -- DE
1564428 -- NON-DE (added)
1630349 -- DE
1642014 -- NON-DE (added)
1553375 -- NON-DE (added)
1169080 -- DE
869007 -- DE
Proportion of NON-DE jobs: 0.5.
TSF - SUM adds more NON-DE jobs, better respecting the user's preference towards NON-DE jobs and increasing fairness, since it increases the number of NON-DE jobs, which are underrepresented overall.
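The reported proportions can be reproduced by counting NON-DE labels in each top-10 list; a small sketch (the helper name is illustrative):

```python
def non_de_proportion(countries):
    """Fraction of recommended jobs whose country label is not 'DE'."""
    return sum(c != "DE" for c in countries) / len(countries)

# Country labels of the two top-10 lists from the example above.
before = ["NON-DE"] + ["DE"] * 9
after = ["NON-DE", "DE", "NON-DE", "DE", "NON-DE",
         "DE", "NON-DE", "NON-DE", "DE", "DE"]

# non_de_proportion(before) -> 0.1, non_de_proportion(after) -> 0.5
```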
The dual optimization achieves a good trade-off between RGI(country) and RGI(premium). Note that the alpha value here actually represents the weight (w) between the two metrics, as specified above.
The dual optimization between RGI(country) and RUP(country) shows that these metrics do indeed trade off against each other, as obtaining optimal results for both metrics is difficult.
[1] Sonboli, N., Mansoury, M., Guo, Z., Kadekodi, S., Liu, W., Liu, Z., ... & Burke, R. (2021, October). Librec-auto: A tool for recommender systems experimentation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (pp. 4584-4593).
[2] Sabouri, M., Mansoury, M., Lin, K., & Mobasher, B. (2025). Using LLMs to Capture Users' Temporal Context for Recommendation. arXiv preprint arXiv:2508.08512.