(Not) Using Machine Learning for Bluetooth AoA

New research, On the Generalization of Deep Learning Models for AoA Estimation in Bluetooth Indoor Scenarios, by Ivan Pisa, Guillem Boquet, Xavier Vilajosana and Borja Martinez of Universitat Oberta de Catalunya, Barcelona, Spain, looks into the application of Deep Learning (DL) models for Angle-of-Arrival (AoA) estimation, a key technique for Bluetooth indoor positioning.

Accurate AoA estimation is complex. Estimates can be significantly affected by signal disturbances such as multipath components, polarisation, delay spread, jitter and noise. These factors create ambiguities and distort the phase differences of the received signals, leading to errors in the position data reported by the system. The multipath effect in particular, where multiple signal replicas interfere with each other, can severely mislead AoA estimation.
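
To make the phase relationship concrete, here's an idealised two-antenna AoA calculation in Python. The wavelength, antenna spacing and phase difference are example values, and real systems must also resolve the multipath, noise and ambiguity effects described above:

```python
import numpy as np

# Idealised relationship: phase_diff = 2*pi*d*sin(theta)/wavelength,
# so theta = arcsin(wavelength * phase_diff / (2*pi*d)).
wavelength = 0.125   # metres, ~2.4 GHz Bluetooth carrier
d = 0.05             # antenna spacing in metres (example value)
phase_diff = 1.2     # measured phase difference in radians (example value)

theta = np.arcsin(wavelength * phase_diff / (2 * np.pi * d))
print(f"Estimated AoA: {np.degrees(theta):.1f} degrees")
```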

Conventional algorithmic AoA estimation techniques rely heavily on processes that can increase the cost, reduce scalability and complicate the operation of the systems where they are used. A primary requirement is the calibration of the antenna array to obtain its steering vector, a process that ensures accurate directional sensitivity of the antenna system. This calibration, along with other computationally intensive tasks such as matrix inversion and eigenvector decomposition, requires significant computational resources. This can be particularly challenging when these systems need to be scaled up for large deployments.

The study’s main objective was to evaluate and compare the generalisation capabilities of DL models against traditional signal processing techniques, such as the Multiple Signal Classification (MUSIC) algorithm, across various scenarios including different locator positions, time instants and unfamiliar environments.

The results indicated that while DL models perform well within the environment they are trained in, their ability to generalise to new or altered conditions is notably weaker than that of the MUSIC algorithm. The authors concluded that DL models tend to learn specifics of the training environment rather than generalisable features of the AoA estimation task. This learning limitation hampers their practical application since models trained in one environment perform poorly in another.

Analysing Sensor Data From Bluetooth Beacons Using AI Machine Learning

Analysing sensor data from Bluetooth beacons using machine learning involves several steps, from data collection and preprocessing to model development and deployment. Here’s an overview of the process:

Step 1: Data Collection
The first step in analysing sensor data from Bluetooth beacons is to collect the data. Beacons continuously broadcast data such as unique identifiers, signal strength (RSSI) and sometimes telemetry such as temperature and battery level. Sensor beacons also enable detection of a wide range of environmental and operational metrics. To collect this data, you need one or more receivers, such as smartphones or gateways, that can detect these signals and store and forward the data for further analysis.
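
As an illustration, here's a minimal scanning sketch using Python's bleak library; the callback-based API is one of several ways bleak exposes advertisements, and the ten-second scan window is arbitrary:

```python
import asyncio
from bleak import BleakScanner

def on_advertisement(device, advertisement_data):
    # Log the beacon address and RSSI; a real collector would add a
    # timestamp and forward each reading to storage for later analysis.
    print(device.address, advertisement_data.rssi)

async def main():
    scanner = BleakScanner(on_advertisement)
    await scanner.start()
    await asyncio.sleep(10)  # scan for ten seconds
    await scanner.stop()

asyncio.run(main())
```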

Step 2: Data Pre-processing
Once data collection is complete, the raw data often needs to be cleaned and structured before analysis. This may involve the following steps, sketched in code after the list:

  • Removing noise and outliers that can distort the analysis.
  • Filtering data to focus only on relevant signals.
  • Normalising signal strength to account for variations in distance and transmitter power.
  • Time-stamping and sorting the data to analyse temporal patterns.
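
A minimal pre-processing sketch with pandas, assuming a CSV of readings with timestamp, beacon_id and rssi columns (the column names, RSSI bounds and beacon IDs are assumptions):

```python
import pandas as pd

known_beacons = {"beacon-01", "beacon-02"}  # beacons of interest (assumed IDs)

# Load raw readings
df = pd.read_csv("beacon_readings.csv", parse_dates=["timestamp"])

# Remove readings outside a plausible RSSI range (noise and outliers)
df = df[df["rssi"].between(-100, -30)]

# Keep only relevant signals
df = df[df["beacon_id"].isin(known_beacons)]

# Normalise RSSI per beacon to account for transmitter power differences
df["rssi_norm"] = df.groupby("beacon_id")["rssi"].transform(
    lambda s: (s - s.mean()) / s.std()
)

# Sort by time for temporal analysis
df = df.sort_values("timestamp")
```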

Step 3: Feature Engineering
Feature engineering can be critical in machine learning. For beacon data, features might include the average signal strength or changes in signal strength over time. Such features help models learn patterns in the data, such as the trajectory of a moving beacon.
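
Building on the pandas sketch above, two illustrative features (the five-sample window is an arbitrary choice):

```python
# df is the pre-processed frame from the previous step

# Rolling mean of RSSI per beacon, smoothing momentary fluctuations
df["rssi_mean"] = df.groupby("beacon_id")["rssi"].transform(
    lambda s: s.rolling(window=5, min_periods=1).mean()
)

# Change in RSSI between consecutive readings, capturing movement
df["rssi_delta"] = df.groupby("beacon_id")["rssi"].diff().fillna(0)
```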

Step 4: Machine Learning Model Development
With pre-processed data and a set of features, you can train machine learning models to detect, classify, or predict. Common machine learning tasks for beacon data include:

  • Classification models to determine the type of interaction (e.g. particular types of movement).
  • Regression models to estimate distances from a beacon based on signal strength.
  • Clustering to identify groups of similar behaviours or patterns.

Tools and frameworks like Python’s scikit-learn, TensorFlow or PyTorch can be used to develop these models.
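
For example, a minimal scikit-learn classifier, assuming a feature matrix X (e.g. per-beacon RSSI statistics) and labels y (e.g. zone identifiers) have already been prepared:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hold out 20% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```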

Step 5: Evaluation and Optimisation
After developing a machine learning model, it’s important to evaluate its performance using metrics like accuracy, precision, recall and F1-score. Cross-validation techniques can help verify the robustness of the model. Depending on the results, you may need to return to feature engineering or model training to optimise performance.
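
A sketch of both checks with scikit-learn, reusing the model, X and y from the previous step:

```python
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score

# 5-fold cross-validation to check robustness
scores = cross_val_score(model, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Precision, recall and F1-score per class on the held-out test set
print(classification_report(y_test, model.predict(X_test)))
```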

Step 6: Deployment and Real-time Analysis
Deploying the model into a production environment is the final step. This means integrating the model into an existing app or system that interacts with Bluetooth beacons. The goal is to analyse the data in real-time to make immediate decisions such as sending notifications to users’ phones.
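
A common pattern is to persist the trained model and score incoming readings in a lightweight service. A minimal sketch with joblib, where the send_notification function is hypothetical:

```python
import joblib

# At training time: persist the model
joblib.dump(model, "beacon_model.joblib")

# In the production service: load once, then score each incoming reading
model = joblib.load("beacon_model.joblib")

def on_reading(features):
    """Handle one pre-processed feature vector from the live pipeline."""
    zone = model.predict([features])[0]
    if zone == "restricted":
        send_notification(zone)  # hypothetical alerting function
```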

Read about our consulting services

Using ChatGPT in Beacon Applications

ChatGPT and other Large Language Models (LLMs) can play a role in beacon-based IoT (Internet of Things) applications for tasks like classification and prediction, but it’s important to understand their limitations and best use cases. The strength of ChatGPT lies in processing and generating text through natural language interaction, not in working with numbers. Here’s how it might be applied in an IoT context:

Numerical Classification

For numerical classification tasks within beacon-based IoT, such as categorising temperature ranges or identifying equipment status based on sensor data, ChatGPT itself isn’t directly suited since it specialises in text data. However, you can use it to interpret the results of classification tasks done by other, more suitable machine learning models. For example, after a specialised model classifies temperature data into categories like “low”, “medium”, or “high”, ChatGPT can generate user-friendly reports or alerts based on these classifications and the context at the time of the report.

Prediction

In terms of prediction, if the task involves interpreting or generating text-based forecasts or insights from numerical data, ChatGPT can be useful. For example, after an analysis has been performed on traffic flow data by a predictive model, ChatGPT could help in generating natural language explanations or predictions such as, ‘Based on current data, traffic is likely to increase within the next hour’.

Integration Approach

For effective use in beacon-based IoT applications, ChatGPT would typically be part of a larger system where:

  1. Other machine learning models handle the numerical analysis and classification based on sensor data.
  2. ChatGPT takes the output of these models to create understandable, human-like text responses or summaries based on a wider context, as sketched in code below.
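
A sketch of step 2 using the OpenAI Python client; the model name and the classification payload are assumptions for illustration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Output from a separate numerical classifier (assumed upstream step)
classification = {"sensor": "cold-store-3", "temperature_c": 9.4, "category": "high"}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever is available
    messages=[{
        "role": "user",
        "content": "Write a short alert for facilities staff based on "
                   f"this sensor classification: {classification}",
    }],
)
print(response.choices[0].message.content)
```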

Conclusion

Thus, while ChatGPT isn’t a tool for direct analysis of numerical IoT data, it can complement other machine learning systems by enhancing the user interaction layer, making the insights accessible and easier to understand for users. For actual data handling, classification and prediction, you would generally deploy models specifically designed for numerical data processing and analysis.

Improving Bluetooth Fingerprinting Using Machine Learning

A new paper titled “Augmentation of Fingerprints for Indoor BLE Localization Using Conditional GANs” by Suhardi Azliy Junoh and Jae-Young Pyun, explores the development of a data-augmentation method for enhancing the accuracy of indoor localisation systems that use Bluetooth Low Energy (BLE) fingerprinting.

Bluetooth fingerprinting is a technique used to identify and track devices based on the unique characteristics of the Bluetooth signal, such as hardware addresses and signal strength, at specific locations.

The primary challenge addressed is the labour-intensive and expensive nature of traditional site surveys required for collecting Bluetooth fingerprints. The authors propose a novel approach that employs a Conditional Generative Adversarial Network with Long Short-Term Memory (CGAN-LSTM) to generate high-quality synthetic fingerprint data. This method aims to complement existing fingerprint databases, thereby reducing the need for extensive manual site surveys.

The research found that augmenting the fingerprint database using the CGAN-LSTM model significantly improved localisation accuracy. In experimental evaluations, the proposed data augmentation framework increased the average localisation accuracy by 15.74% compared to fingerprinting methods without data augmentation. Moreover, when compared to linear interpolation, inverse distance weighting and Gaussian process regression, the proposed CGAN-LSTM approach demonstrated an average accuracy improvement ranging from 1.84% to 14.04%, achieving average accuracies of 1.065 and 1.956 metres in two different indoor environments. These results underline the effectiveness of the CGAN-LSTM model in capturing the complex spatial and temporal patterns of BLE signals, making it a promising solution for indoor localisation challenges.

The study contributes significantly to the field by demonstrating how synthetic data can enhance the performance of fingerprint-based localisation systems in a cost-effective and efficient manner. The authors suggest that this approach could alleviate the burdensome demands of manual site surveys, offering a viable solution for improving the accuracy of BLE-based indoor localisation while minimising resource expenditure.

More Accurate Beacon Locating Using AI Machine Learning

There’s new research in the Bulletin of Electrical Engineering and Informatics on Bluetooth beacon-based indoor positioning in shopping malls using machine learning. Researchers from Algeria and Italy improved the accuracy of RSSI locating by using AI machine learning techniques. They used an extra-trees classifier (ETC) and a k-neighbours classifier to achieve greater than 90% accuracy.

A smartphone app was used to receive beacon RSSI and send it to the indoor positioning system’s data collection module. The RSSI data was also filtered by a data processing module to limit the error range. k-nearest neighbours (KNN), random forest (RFC), extra trees (ETC), support vector machine (SVM), gradient boosting (GBC) and decision tree (DT) classifiers were evaluated.

The ETC model gave the best accuracy. ETC is an algorithm that uses a group of decision trees to classify data. It is similar to a random forest classifier but uses a different method to construct the decision trees. ETC fits a number of randomised decision trees on sub-samples of the dataset and uses averaging to improve predictive accuracy and control over-fitting. ETC is a good choice for applications where accuracy matters, the data is noisy and computational efficiency is important.
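
The paper's code isn't published, but a minimal ETC in scikit-learn looks like the following, assuming X contains RSSI fingerprint vectors (one column per beacon) and y the location labels:

```python
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# An ensemble of extremely randomised trees; averaging their predictions
# improves accuracy and controls over-fitting
clf = ExtraTreesClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("Location accuracy:", clf.score(X_test, y_test))
```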

Using AI Machine Learning to Improve Ranging Accuracy

There’s new research from Oregon State University, USA and Peking University, Beijing, China on A Machine Learning Approach to Improve Ranging Accuracy with AoA and RSSI.

System Workflow

Machine learning was used to determine the line-of-sight distance in a multipath (reflective) environment. Due to the multipath effect, signals acquired indoors have complex mathematical models, and the researchers used Artificial Neural Networks (ANNs) as an efficient way to process them.
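
The paper's exact network isn't reproduced here, but a small regression ANN in Keras might look like this, assuming X_train holds per-packet AoA and RSSI features and y_train the true line-of-sight distances in metres (the layer sizes are illustrative only):

```python
import tensorflow as tf

n_features = X_train.shape[1]  # e.g. AoA estimates plus RSSI values

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted distance in metres
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X_train, y_train, epochs=50, validation_split=0.2)
```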

The system achieved accuracy where 75% of the errors were less than 0.1 m, with a median error of 0.037 m and a mean error of 0.092 m, reducing ranging errors to under 10cm. The researchers achieved high-precision indoor ranging without the need for a wide signal bandwidth or synchronisation. The system was also simple and low cost to deploy due to the low complexity of the equipment.

Using AI Machine Learning with Bluetooth Angle of Arrival (AoA)

There’s new research from universities in Piraeus, Greece and Berlin, Germany, together with U-Blox AG in Switzerland, who create Bluetooth Angle of Arrival prototyping boards, on Deep Learning-Based Indoor Localization Using Multi-View BLE Signal.

Processing of Bluetooth Angle of Arrival usually requires radiogoniometric spectral analysis of radio in-phase and quadrature-phase (IQ) signals, with location then determined by triangulation. Instead, this paper proposes machine learning on IQ and signal strength (RSSI) data from multiple anchor points to determine location. AoA processing is also distributed across the anchors to improve performance.

The developed machine learning models were found to be robust against modifications to room furniture configurations and materials, so they are expected to have high re-usability (machine learning generalisation) potential. The system achieved a localisation accuracy of 70cm.

Read about PrecisionRTLS™

Processing iBeacon RSSI Using AI Machine Learning

There’s new research from China on Regional Double-Layer, High-Precision Indoor Positioning System Based on iBeacon Network.

The project used extended Gaussian filtering to remove significant abnormal data values caused by multipath radio noise indoors. A deep neural network was also used to fingerprint the data.
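
The paper's exact filter isn't given here, but a simple interpretation of Gaussian outlier filtering in Python, with the three-sigma threshold as an assumption:

```python
import numpy as np

def gaussian_filter_rssi(rssi, k=3.0):
    """Discard readings more than k standard deviations from the mean."""
    rssi = np.asarray(rssi, dtype=float)
    mu, sigma = rssi.mean(), rssi.std()
    return rssi[np.abs(rssi - mu) <= k * sigma]
```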

The system achieved a maximum positional error of only 1.02m.

Probabilistic vs Neural Network iBeacon Positioning

There’s new research by ITMO University, Russia on the Implementation of Indoor Positioning Methods: Virtual Hospital Case. The paper describes how positioning can be used to discover typical pathways, queues and bottlenecks in healthcare scenarios. The researchers implemented and compared two ways to mitigate noise in Bluetooth beacon RSSI data.

The probabilistic and neural network methods both use past recorded data to compare with new data, a technique known as fingerprinting. The neural network method is less complex when there’s a need to scale to locating many objects. The researchers tested the methods at the outpatient department of the cardio medical unit of Almazov National Medical Research Centre.
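
To illustrate fingerprinting, a toy nearest-neighbour lookup; the RSSI database and positions are made-up values:

```python
import numpy as np

# Fingerprint database: RSSI vectors recorded at known positions (made-up data)
db_rssi = np.array([[-60, -72, -55], [-64, -70, -58], [-70, -65, -62]])
db_positions = np.array([[1.0, 2.0], [2.5, 2.0], [4.0, 3.5]])

def locate(new_rssi, k=2):
    """Estimate position as the mean of the k nearest fingerprints."""
    dists = np.linalg.norm(db_rssi - new_rssi, axis=1)
    nearest = np.argsort(dists)[:k]
    return db_positions[nearest].mean(axis=0)

print(locate(np.array([-62, -71, -56])))  # -> approximate (x, y) in metres
```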

Comparison of the methods showed they give approximately the same error, between 0.96m and 2.11m. However, the neural network-based approach offered significantly better performance.

An AI Machine Learning Beacon-Based Indoor Location System

There’s a recent paper by researchers at DeustoTech Institute of Technology, Bilbao, Spain and Department of Engineering for Innovation, University of Salento, Lecce, Italy on Behavior Modeling for a Beacon-Based Indoor Location System.

The research compares two different approaches to tracking a person indoors using Bluetooth LE technology, with a smartphone and a smartwatch used as monitoring devices.

The beacons used were our iB005N; it’s the first time we have been referenced in a research paper.

The research is novel in that it uses AI machine learning to attempt location prediction.

The researchers were able to predict the user’s next location with 67% accuracy.

Location prediction has some interesting and useful applications. For example, you might stop a vulnerable person going outside a defined area or, in an industrial setting, stop a worker entering a dangerous area.