Machine Learning Accountability

Machine learning is a great partner for sensor beacon data because it helps make sense of data that's often complex and noisy. Instead of hand-writing difficult traditional filtering and algorithmic analysis, you train a model on existing data and then use it to detect, classify and predict. During training, the model can pick up on nuances in the data that a human programmer analysing it would never notice.
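
As a rough sketch of what 'train a model instead of hand-coding the analysis' looks like, the Python below fits a small classifier to labelled x, y, z accelerometer readings. The data values, the 'falling' label and the choice of scikit-learn's RandomForestClassifier are illustrative assumptions, not taken from any particular beacon or product.

```python
# A minimal sketch, assuming labelled x, y, z accelerometer samples are available.
# The values, labels and classifier choice are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: each row is (x, y, z) acceleration in g,
# each label is 1 for 'falling' and 0 for 'not falling'.
X = np.array([
    [0.02, 0.01, 0.98],   # at rest
    [0.05, 0.03, 1.02],   # at rest
    [0.03, 0.02, 1.00],   # at rest
    [0.10, 0.08, 2.40],   # falling (impact spike)
    [0.15, 0.05, 2.80],   # falling
    [0.12, 0.06, 2.10],   # falling
])
y = np.array([0, 0, 0, 1, 1, 1])

# Train the model on existing data instead of hand-writing filter logic.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# The trained model is then used to classify new readings.
new_reading = [[0.11, 0.07, 2.50]]
print(model.predict(new_reading))  # expected: [1], i.e. 'falling'
```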

One of the problems with the machine learning approach is that you use the resulting model but can't look inside to see how it works. You can't say why it classified something a particular way or why it made a given prediction. This makes it difficult for us humans to trust the output, or to understand what the model was 'thinking' when a classification or prediction turns out to be wrong. It also makes it impossible to provide rationales in situations such as 'right to know' legislation or causation auditing.

A newer way to tackle this problem is the use of what are known as counterfactuals. Every model has inputs, in our case sensor beacon data and perhaps additional contextual data. It's possible to apply different values to those inputs to find tipping points in the model's behaviour. A simple example from x, y, z accelerometer data might be that a 'falling' classification flips when the z value goes over a certain threshold. Counterfactuals are generic statements that explain not how the model works internally but how it behaves. Recently, Google announced their What-If Tool, which can be used to derive such insights from TensorFlow models.
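
Google's What-If Tool does this kind of probing interactively, but the underlying idea can be sketched by hand: hold the other inputs fixed, sweep one input, and note where the model's output flips. The Python below assumes the hypothetical 'model' from the earlier sketch and is not the What-If Tool's API; the fixed x and y values and the z sweep range are made up for illustration.

```python
# A minimal sketch of counterfactual probing, assuming the hypothetical 'model'
# trained in the sketch above. Hold x and y fixed, sweep z, and report where
# the predicted class flips - the 'tipping point'.
import numpy as np

x_fixed, y_fixed = 0.05, 0.03          # made-up baseline reading
z_values = np.linspace(0.5, 3.0, 251)  # candidate z values to try

previous = None
for z in z_values:
    prediction = model.predict([[x_fixed, y_fixed, z]])[0]
    if previous is not None and prediction != previous:
        # A counterfactual is a statement about behaviour, not internals:
        # "with x=0.05 and y=0.03, the classification changes at around this z".
        print(f"Prediction flips from {previous} to {prediction} near z = {z:.2f}")
    previous = prediction
```

In practice, the What-If Tool lets you do this sort of probing on a TensorFlow model from a notebook or TensorBoard without writing the loop yourself.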

Read about Machine Learning and Beacons