The article titled Improved RSSI Indoor Localization in IoT Systems with Machine Learning Algorithms by Ruvan Abeysekera focuses on enhancing indoor localisation in Internet of Things (IoT) systems using machine learning algorithms. The paper addresses the limitations of GPS in indoor environments and explores the use of Bluetooth Low Energy (BLE) nodes and Received Signal Strength Indicator (RSSI) values for more accurate localisation.
GPS is ineffective indoors, so the paper emphasises the need for alternative methods of indoor localisation, which is crucial for applications such as smart cities, transportation and emergency services.
The study uses machine learning algorithms to process RSSI data collected from Bluetooth nodes in complex indoor environments. Algorithms such as K-Nearest Neighbors (KNN), Support Vector Machines (SVM) and Feed Forward Neural Networks (FFNN) are used, achieving accuracies of approximately 85%, 84% and 76% respectively.
The RSSI data is also processed using techniques such as the weighted least-squares method and moving average filters. The paper also discusses the importance of hyperparameter tuning in improving the performance of the machine learning models.
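To give a flavour of the pre-processing involved, here's a minimal trailing moving-average filter for smoothing noisy RSSI readings. This is a generic sketch, not the paper's implementation; the window size of 5 is an arbitrary assumption.

```python
def moving_average(rssi, window=5):
    """Smooth noisy RSSI readings with a simple trailing moving average.

    Each output value is the mean of the current reading and up to
    window - 1 preceding readings.
    """
    smoothed = []
    for i in range(len(rssi)):
        start = max(0, i - window + 1)          # shorter window at the start
        smoothed.append(sum(rssi[start:i + 1]) / (i + 1 - start))
    return smoothed
```

In practice a filter like this is applied per beacon before the smoothed values are fed to KNN, SVM or a neural network.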
The research claims to provide significant advancement in indoor localisation, highlighting the potential of machine learning in overcoming the limitations of traditional GPS-based systems in indoor environments.
The research compares two different approaches to track a person indoors using Bluetooth LE technology with a smartphone and a smartwatch used as monitoring devices.
The beacons used were iB005N beacons supplied by us; it's the first time we have been referenced in a research paper.
The research is novel in that it uses machine learning to attempt location prediction.
The researchers were able to predict the user’s next location with 67% accuracy.
Location prediction has some interesting and useful applications. For example, you might stop a vulnerable person from going outside a defined area or, in an industrial setting, stop a worker from entering a dangerous area.
The paper describes an efficient solution for locating and tracking people and/or vehicles and analysing their distribution and flow. Filters and algorithms, including artificial intelligence and angle of arrival (AoA), were employed.
The resultant system provided for analysis of location, traffic flow and passenger movement along routes.
The researchers found that accuracy was improved when multiple measuring stations were used. Improved positioning was achieved using geometry algorithms (Voronoi) and the k-means clustering algorithm. Artificial intelligence allowed for deeper analysis of the data for more accurate positioning, trajectory estimation and density evaluation.
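The researchers' exact algorithms aren't reproduced in the paper summary above, but the k-means idea is easy to sketch: position estimates are grouped around centroids that are iteratively refined. A minimal generic implementation (the 2D points and k below are made-up illustrative data):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Basic k-means on 2D (x, y) position estimates.

    Returns k centroids after iteratively assigning points to their
    nearest centroid and moving each centroid to its cluster mean.
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)           # k distinct starting points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to the nearest centroid (squared distance).
            j = min(range(k),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2
                                + (p[1] - centroids[j][1]) ** 2)
            clusters[j].append(p)
        for j, cluster in enumerate(clusters):
            if cluster:                          # keep empty clusters' centroids
                centroids[j] = (sum(p[0] for p in cluster) / len(cluster),
                                sum(p[1] for p in cluster) / len(cluster))
    return centroids
```

Clustering like this can, for example, reveal where passengers congregate along a route.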
Beacons provide a great way of providing new data for machine learning. They allow you to measure things that aren't currently being quantified, create new data that isn't siloed by protectionist staff or departments, and pre-process data in place, making it suitable for learning and inference.
There’s a new, free State of AI Report 2019 in the form of a 136-page presentation. It covers aspects such as research, talent, industry and geopolitics, including China and politics.
“Firms blamed the cost of AI solutions, a lack of qualified workers, and biased data as the principal blockers impeding AI adoption internally. Respondents identified skills shortages and unrealistic expectations as the top two reasons for failure, in fact, with a full quarter reporting up to 50% failure rate.”
We believe a key part of this is ‘unrealistic expectations’. Up to half of AI projects failing at 1 in 4 companies isn’t unreasonable. AI and machine learning should be viewed as a research rather than a development activity, in that it’s often not known whether the goal is achievable until you try.
Another common unrealistic expectation of machine learning is 100% accuracy. Using an accuracy percentage to assess machine learning models focuses stakeholders’ minds too much on the perceived need for very high accuracy. In reality, human-assessed, non-machine-learning processes such as medical diagnosis tend to have much less than 100% accuracy, and sometimes undetermined accuracy, yet these are reasonably seen as acceptable.
In summary, there have to be upfront, realistic expectations of both the possible outcome and the accuracy of the outcome for projects to correctly determine whether AI activities are an unexpected failure.
In our previous post on iBeacon Microlocation Accuracy we explained how distance can be inferred from the received signal strength indicator (RSSI). We also explained how techniques such as trilateration, calibration and angle of arrival (AoA) can be used to improve location accuracy.
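As a reminder of the underlying model, distance is typically inferred from RSSI using a log-distance path loss formula. A minimal sketch follows; the measured power of -59 dBm at 1m and path loss exponent n = 2 are illustrative free-space defaults, not calibrated values for any particular beacon.

```python
def rssi_to_distance(rssi, measured_power=-59.0, n=2.0):
    """Invert the log-distance path loss model to estimate distance.

    Model: rssi = measured_power - 10 * n * log10(d)
    where measured_power is the RSSI at 1 metre and n is the
    environment-dependent path loss exponent.
    Returns the estimated distance in metres.
    """
    return 10 ** ((measured_power - rssi) / (10.0 * n))
```

With three or more such distance estimates from fixed beacons, trilateration can then solve for a position; in practice the noisy RSSI makes the estimates rough, which is why the filtering and machine learning techniques above help.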
Their study was based on a large-scale exhibition where they placed scanning devices:
They implemented an LSTM neural network and experimented with the number of layers:
They obtained best results with the simplest machine learning model with only 1 LSTM:
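To give a flavour of what a single LSTM layer computes, here's a minimal NumPy forward pass for one LSTM cell run over a short RSSI sequence. The weights are random placeholders and the hidden size is arbitrary; the researchers' actual architecture and hyperparameters aren't reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,)."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b      # all four gate pre-activations, stacked
    i = sigmoid(z[0:H])             # input gate
    f = sigmoid(z[H:2 * H])         # forget gate
    o = sigmoid(z[2 * H:3 * H])     # output gate
    g = np.tanh(z[3 * H:4 * H])     # candidate cell state
    c = f * c_prev + i * g          # new cell state
    h = o * np.tanh(c)              # new hidden state
    return h, c

# Run a short sequence of scaled RSSI readings through the cell.
rng = np.random.default_rng(0)
D, H = 1, 8                                        # input dim, hidden size
W = rng.normal(size=(4 * H, D))                    # placeholder weights
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for rssi in [-0.60, -0.62, -0.58, -0.61]:          # RSSI scaled to ~[-1, 1]
    h, c = lstm_step(np.array([rssi]), h, c, W, U, b)
```

The cell state `c` is what lets the network carry information across the sequence, which is why LSTMs suit time-ordered RSSI data.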
As is often the case with machine learning, more complex models over-fit the training data such that they don’t work with new, subsequent data. Simpler models are more general and work not just with the training data but in new scenarios.
The researchers managed to achieve an accuracy of 2.44m at the 75th percentile, which we take to mean in 75% of cases. 2.44m is acceptable and compares well with accuracies of about 1.5m within a shorter-range confined space and 5m at longer distances achieved using conventional methods. As with all machine learning, further parameter tuning usually improves the accuracy further but can take a long time and effort. It’s our experience that using other types of RNN in conjunction with LSTM can also improve accuracy.
When working with machine learning on beacon sensor data, or indeed any data, it’s important to realise that machine learning isn’t magic. It isn’t foolproof and is ultimately only as good as the data passed in. Because it’s called AI and machine learning, people often expect 100% accuracy when this often isn’t possible.
By way of a simple example, take a look at the recent tweet by Max Woolf where he shows a video depicting the results of the Google cloud vision API when asked to identify an ambiguous rotating image that looks like a duck and rabbit:
There are times when it thinks the image is a duck, other times a rabbit, and other times when it doesn’t identify either. Had the original learning data included only ducks but no rabbits, there would have been different results. Had there been different images of ducks, the results would have been different. Machine learning is only a complex form of pattern recognition. The accuracy of what you get out is related to a) the quality of the learning data and b) the quality of the data presented at identification time.
If your application of machine learning is safety critical and needs 100% accuracy, then machine learning might not be right for you.
A growing use of sensor beacons is in prognostics. Prognostics replaces human inspection with continuous automated monitoring. This cuts costs and potentially detects when things are about to fail rather than when they have failed. This makes processes proactive rather than reactive, providing for smoother process planning and reducing the knock-on effects of failures. It can also reduce the need for the excessive and costly component replacement that’s sometimes used to reduce in-process failure.
Prognostics is implemented by examining the time series data from sensors, such as those monitoring temperature or vibration, in order to detect anomalies and forecast the remaining useful life of components. The problem with analysing such data is that it is usually complex and noisy.
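As a trivial sketch of the anomaly detection idea, readings far from the mean of a time series can be flagged as outliers. Real prognostics pipelines are far more sophisticated; the 3-standard-deviation threshold and sample data here are made up for illustration.

```python
def zscore_anomalies(series, threshold=3.0):
    """Return indices of readings more than threshold standard
    deviations from the series mean, a simple anomaly flag."""
    mean = sum(series) / len(series)
    variance = sum((x - mean) ** 2 for x in series) / len(series)
    std = variance ** 0.5
    if std == 0:                     # constant series: nothing anomalous
        return []
    return [i for i, x in enumerate(series)
            if abs(x - mean) > threshold * std]
```

For example, a temperature sensor that suddenly spikes well above its usual range would be flagged, prompting inspection before an outright failure.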
Machine learning’s capacity to analyse very large amounts of high dimensional data can take prognostics to a new level. In some circumstances, adding in additional data such as audio and image data can enhance the capabilities and provide for continuously self-learning systems.
A downside of using machine learning is that it requires lots of data. This usually requires a gateway, smartphone, tablet or IoT edge device to collect initial data. Once the data has been obtained, it needs to be categorised, filtered and converted into a form suitable for machine learning. The machine learning produces a ‘model’ that can be used in production systems to provide classification and prediction.