MLCheck is a testing tool for checking whether machine learning models satisfy properties specified by the tester.


MLCheck can be used to test several properties of a machine learning (ML) model. In recent years, with the advent of machine learning, many applications use ML models as decision-making software. In some of these applications, properties such as fairness, monotonicity, and security are of concern. MLCheck has been used to test several such properties on a number of different models. The tool is simple to use: the tester writes the property in pre- and post-condition form using a domain-specific language, uploads the model to be tested via a parameter, and defines the model's input and output format in an XML file; the testing then runs automatically. The testing process either ends with a violation of the specified property or without finding any violation. In the former case, MLCheck returns a test case for which the property is violated. Moreover, MLCheck can be configured to return multiple violating test cases instead of a single one, which can be helpful for retraining the model on these cases. In the latter case, if no violation is found, MLCheck can be run again with a different setting in an attempt to find a violation.
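To illustrate the idea of property testing in pre- and post-condition form, here is a minimal, self-contained Python sketch. It does not use MLCheck's actual DSL or API; the toy `model` function, the property encoding, and the search loop are all hypothetical, chosen only to show how a monotonicity property can be stated and how a violating test case (counterexample) is reported.

```python
import random

# Hypothetical stand-in for an uploaded model: predicts a "score" from
# (income, debt). It deliberately violates monotonicity in `income`
# whenever debt > 50, so a violation can be found.
def model(income, debt):
    if debt > 50:
        return -0.5 * income - 2 * debt  # bug: higher income lowers the score
    return income - 2 * debt

# Property in pre/post-condition style:
#   pre:  x2.income >= x1.income and x2.debt == x1.debt
#   post: model(x2) >= model(x1)        (monotonicity in income)
def find_monotonicity_violation(trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        debt = rng.uniform(0, 100)
        inc1 = rng.uniform(0, 100)
        inc2 = inc1 + rng.uniform(0, 50)   # satisfy the precondition
        if model(inc2, debt) < model(inc1, debt):  # postcondition violated
            return (inc1, inc2, debt)      # the violating test case
    return None  # no violation found with this setting

violation = find_monotonicity_violation()
print(violation)
```

In the same spirit as the tool's behavior described above, the function returns a concrete violating input pair when the property fails, and `None` otherwise; collecting several such pairs instead of stopping at the first would mirror the multiple-counterexample mode.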


The MLCheck testing tool can be found on the following GitHub page:



For further information about our work, please contact Arnab Sharma (e-mail: arnab(dot)sharma(at)uni-paderborn(dot)de).