Findings of NAACL 2022
Unbiased Math Word Problems Benchmark for Mitigating Solving Bias
Zhicheng Yang, Jinghui Qin, Jiaqi Chen and Xiaodan Liang

Abstract


In this paper, we revisit the solving bias that arises when evaluating models on current Math Word Problem (MWP) benchmarks. Current solvers suffer from solving bias, which consists of data bias and learning bias caused by biased datasets and improper training strategies. Our experiments verify that MWP solvers are easily biased by training datasets that do not cover diverse questions for each problem narrative; as a result, a solver learns shallow heuristics rather than the deep semantics needed to understand problems. Moreover, an MWP can naturally be solved by multiple equivalent equations, yet current datasets take only one of them as ground truth, forcing the model to match the labeled equation and ignore the other equivalent ones. To address these issues, we first introduce a novel MWP dataset, UnbiasedMWP, constructed by varying the grounded expressions in our collected data and manually annotating them with corresponding new questions. Then, to further mitigate learning bias, we propose a Dynamic Target Selection (DTS) strategy that, during training, dynamically selects more suitable target expressions according to the longest prefix match between the current model output and candidate equivalent equations obtained by applying the commutative law. The results show that our UnbiasedMWP has significantly fewer biases than its source data and other datasets, making it a promising benchmark for fairly evaluating solvers' reasoning skills rather than their ability to match nearest neighbors. Solvers trained with our DTS achieve higher accuracy on multiple MWP benchmarks.
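The abstract states that candidate equivalent equations are obtained by applying the commutative law during training. The following is a minimal sketch of that generation step, under the assumption that targets are represented as prefix-notation token sequences (as in common tree-structured MWP decoders); the helper names `parse_prefix`, `flatten`, and `equivalent_variants` are illustrative, not the authors' released code.

```python
# Sketch: enumerate equivalent target equations by swapping the operands of
# commutative operators ('+' and '*') in a prefix-notation expression.
# Assumption: targets are prefix token lists over number slots such as n0, n1.

COMMUTATIVE = {"+", "*"}
OPERATORS = {"+", "-", "*", "/", "^"}


def parse_prefix(tokens):
    """Parse a prefix token list into a nested (op, left, right) tree."""
    def helper(pos):
        tok = tokens[pos]
        if tok not in OPERATORS:          # operand (number slot or constant)
            return tok, pos + 1
        left, pos = helper(pos + 1)
        right, pos = helper(pos)
        return (tok, left, right), pos

    tree, _ = helper(0)
    return tree


def flatten(tree):
    """Serialize a tree back into a prefix token list."""
    if isinstance(tree, str):
        return [tree]
    op, left, right = tree
    return [op] + flatten(left) + flatten(right)


def equivalent_variants(tree):
    """All trees reachable by swapping operands of commutative operators."""
    if isinstance(tree, str):
        return [tree]
    op, left, right = tree
    variants = []
    for l in equivalent_variants(left):
        for r in equivalent_variants(right):
            variants.append((op, l, r))
            if op in COMMUTATIVE:
                variants.append((op, r, l))
    return variants


# Example: "n0 + n1 * n2" in prefix form yields four equivalent targets.
tokens = ["+", "n0", "*", "n1", "n2"]
for v in equivalent_variants(parse_prefix(tokens)):
    print(flatten(v))
```

The resulting candidate set is what DTS selects from at each training step; the selection itself is sketched after the conclusion.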

 

 

Framework


 

 

 

Experiment


 

 

Conclusion


In this paper, we revisit the solving bias in MWP solving. To mitigate the data bias caused by a lack of question diversity, we construct a dataset called UnbiasedMWP by varying the expressions in our newly collected data. The experimental results illustrate that a solver trained on UnbiasedMWP is more robust than one trained on our collected data alone. To mitigate the learning bias caused by loss overcorrection when only a single ground truth is used, we propose Dynamic Target Selection (DTS), a strategy that generates equivalent expressions and, during training, selects the one sharing the longest prefix with the current model output as the supervision target. Experimental results show that our DTS helps several models achieve better performance.
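For concreteness, below is a minimal sketch of the longest-prefix-match selection described above, simplified from the paper's description rather than taken from its implementation; `select_target`, the candidate lists, and the prefix-notation tokens are hypothetical illustrations.

```python
# Sketch: given the tokens decoded so far and the set of equivalent candidate
# targets (e.g., produced by commutative-law swaps), choose the candidate that
# shares the longest prefix with the current output, so the loss does not
# penalize an equally correct decoding order.

def longest_prefix_len(a, b):
    """Length of the common prefix of two token sequences."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n


def select_target(partial_output, candidates):
    """Pick the equivalent equation best matching the decoded prefix."""
    return max(candidates, key=lambda c: longest_prefix_len(partial_output, c))


# Toy example: two commuted equivalents of the same equation; the decoder has
# started with "+ * n1", so the second candidate becomes the training target.
candidates = [["+", "n0", "*", "n1", "n2"],
              ["+", "*", "n1", "n2", "n0"]]
partial = ["+", "*", "n1"]
print(select_target(partial, candidates))  # -> ['+', '*', 'n1', 'n2', 'n0']
```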