A commonly discussed issue with machine learning and artificial intelligence solutions is users' lack of trust in them. This stems from the fact that users often cannot understand why a method produces a particular output, and as a result machine learning solutions are not used to the extent that they could be. This has led academics and practitioners to call for greater transparency into the inner workings of artificial intelligence models, giving rise to a new field within machine learning and artificial intelligence called explainable artificial intelligence. The field aims to make black-box machine learning models more transparent, helping to mitigate issues of fairness, discrimination, and trust.
To achieve explainability, several different solutions have been proposed, including modifications and add-ons to the algorithms, transparency of the training data, and standardized declarations of the solutions, similar to an audit process. The latter is a step-by-step documentation of how the solution has been implemented, which makes it possible to identify biases or other skews in the implementation. Although these methods may contribute to opening up the black-box elements, many of them unfortunately carry problems of their own.
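To make the declaration idea concrete, the following is a minimal sketch of what such a step-by-step record might contain. The class and field names (TransparencyDeclaration, training_data_source, and so on) and all example values are illustrative assumptions, not an established standard.

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyDeclaration:
    """Hypothetical step-by-step record of how a model was implemented,
    intended to support an audit-style review for biases or other skews."""
    model_name: str
    training_data_source: str                    # provenance of the training data
    preprocessing_steps: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

# Illustrative declaration for a hypothetical model.
declaration = TransparencyDeclaration(
    model_name="loan-default-classifier",
    training_data_source="internal applications, 2015-2020",
    preprocessing_steps=[
        "dropped records with missing income",
        "standardized numeric features",
    ],
    known_limitations=[
        "applicants under 25 are underrepresented in the data",
    ],
)
```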
Since some of the problems these solutions carry are critical, they too must be addressed, and for several of them potential solutions already exist. One of the most promising techniques, which makes it harder both to gain insight into the original data and to manipulate the model itself, is to add tailored amounts of noise to the training data, so that the model is trained on slightly perturbed data (see the sketch following this paragraph). More generally, it is important to understand the transparency requirements of each case, so that the degree of transparency can be adapted and a method providing sufficient transparency for the actual case can be chosen. A standardized auditing process is an important way to produce a transparency declaration of the solution, and it should generally always be used, with the amount and depth of detail varied according to the transparency requirements.
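The sketch below illustrates the noise-perturbation idea under simple assumptions: numeric features, zero-mean Gaussian noise, and a scale parameter sigma whose value here is arbitrary and would in practice be tuned against both the privacy and the accuracy requirements of the application.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def perturb(X, sigma, seed=0):
    """Return a copy of X with zero-mean Gaussian noise added to every
    feature. Larger sigma makes it harder to recover the original
    records from the trained model, at the cost of accuracy."""
    rng = np.random.default_rng(seed)
    return X + rng.normal(scale=sigma, size=X.shape)

# Illustrative synthetic data: 1000 records with 5 numeric features.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# The model is trained only on the perturbed data.
model = LogisticRegression().fit(perturb(X, sigma=0.3), y)
print(f"accuracy on unperturbed data: {model.score(X, y):.3f}")
```

The design choice is that the original records never reach the training procedure; only their perturbed counterparts do, which is what makes reconstruction of the underlying data more difficult.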
All in all, transparency of artificial intelligence and machine learning methods carries some potential problems; however, for these methods to be used to their full potential, it is evident that some form of transparency solution is necessary. Without such standards, practitioners will not be able to deploy the methods with confidence that the data they steward is adequately protected.