Using Bluetooth Sensor Beacons for AI Machine Learning

Sensor beacons provide a quick and easy way to obtain data for AI machine learning. They measure physical processes, producing data that can be used for detection and prediction.

Beacon Temperature Sensor

Beacons detect movement (accelerometer), movement (started/stopped moving), button press, temperature, humidity, air pressure, light level, open/closed (magnetic Hall effect), proximity (PIR), proximity (cm range), fall detection, smoke and natural gas. The open/closed (magnetic Hall effect) sensor is particularly useful because it can be attached to a multitude of physical things in scenarios that require digitising counts, presence and physical status.

The data is sent via Bluetooth rather than via cables which means there’s no soldering or physical construction. The Bluetooth data can be read by smartphones, gateways or any devices that have Bluetooth LE. From there it can be stored in files for reading into machine learning.
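As an illustration of that capture step, here’s a minimal sketch in Python using the bleak library to scan for Bluetooth LE advertisements and append them to a CSV file. The decoding of sensor values from the manufacturer data is vendor-specific, so the raw payload is simply logged as hex; the file name and scan duration are arbitrary choices.

```python
# Minimal sketch: log Bluetooth LE advertisements to CSV for later machine learning.
# Parsing of sensor fields (temperature, humidity, accelerometer, etc.) depends on
# the beacon vendor, so only the raw manufacturer payload is stored here.
import asyncio, csv, time
from bleak import BleakScanner

async def main():
    with open("beacon_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "address", "rssi", "raw_manufacturer_data"])

        def on_advert(device, adv):
            # Called for every advertisement seen while scanning.
            for _, payload in adv.manufacturer_data.items():
                writer.writerow([time.time(), device.address, adv.rssi, payload.hex()])

        async with BleakScanner(on_advert):
            await asyncio.sleep(60)  # scan for one minute

asyncio.run(main())
```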

Such data is often complex and it’s difficult for a human to devise a conventional programming algorithm to extract insights. This is where AI machine learning excels. In simple terms, it reads in recorded data to find patterns in the data. The result of this learning is a model. The model is then used during inference to classify or predict situations based on new incoming data.
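To make the two stages concrete, here’s a minimal sketch using scikit-learn. The feature vectors, labels and choice of classifier are placeholders for illustration, not the models described here.

```python
# Minimal sketch of the two stages: "learning" produces a model from recorded data,
# "inference" applies that model to new data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 6)             # placeholder feature vectors from recorded sensor data
y = np.random.randint(0, 2, size=200)  # placeholder labels, e.g. 0 = stationary, 1 = moving

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

model = RandomForestClassifier().fit(X_train, y_train)   # the "learning" stage
print("held-out accuracy:", model.score(X_test, y_test))

new_sample = np.random.rand(1, 6)
print("inference result:", model.predict(new_sample))    # the "inference" stage
```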

The above shows some output from accelerometer data fed into one of our models. The numbers are distinct features found over the time series as opposed to a single x,y,z sample. For example ’54’ might be a peak and ’61’ a trough. More complex features are also detectable such as ‘120’ being the movement of the acceleration sensor in a circle. This is the basis for machine learning classification and detection.
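For illustration only, here’s one simple way such time-series features could be located using scipy. The actual feature codes above come from our own models, so the peak/trough mapping here is hypothetical.

```python
# Sketch: finding simple time-series features (peaks and troughs) in accelerometer data,
# in the spirit of the feature codes mentioned above. Not the actual feature extractor.
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 10, 100)
x = np.sin(2 * np.pi * 0.5 * t)         # placeholder x-axis acceleration signal

peaks, _ = find_peaks(x, height=0.5)     # indices of peaks (cf. a hypothetical '54' feature)
troughs, _ = find_peaks(-x, height=0.5)  # indices of troughs (cf. a hypothetical '61' feature)

print("peak sample indices:", peaks)
print("trough sample indices:", troughs)
```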

It’s also possible to perform prediction. Performing additional machine learning (yes, machine learning on machine learning!) on the features to produce a new model tells us which events typically follow which. When we feed new data into this model we can predict what is about to happen.
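As a toy illustration of this second learning step, the sketch below simply counts which (hypothetical) feature code tends to follow which and predicts the most common successor; the real system trains a further model on the feature stream.

```python
# Toy sketch: count which feature code usually follows which, then predict the next one.
from collections import Counter, defaultdict

feature_stream = [54, 61, 54, 61, 120, 54, 61, 120]  # hypothetical feature codes over time

transitions = defaultdict(Counter)
for current, nxt in zip(feature_stream, feature_stream[1:]):
    transitions[current][nxt] += 1

def predict_next(feature):
    """Return the feature code most often seen after `feature`."""
    return transitions[feature].most_common(1)[0][0]

print(predict_next(61))  # predicts what tends to follow feature 61
```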

The problem with sensor data is there can be a lot of it. It’s inefficient and slow to detect events when this processing happens at the server. We create so-called ‘edge’ solutions that do this processing closer to the point of detection.

Read more about SensorCognition

Understanding Sensor Beacon Accelerometer Data

In this post we will take a look at data from the INGICS iGS01RG beacon.

The x axis is time. You can see the x, y and z values, every 100ms, over time. The y axis is normalised between -1 and +1 for use in our SensorCognition Edge device. The chart is for when the beacon has been moving, followed by a stationary period. Notice how the orange line continues to show acceleration even though the beacon isn’t moving. This is caused by gravity.
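For reference, here’s a minimal sketch of that normalisation step in Python. The ±2g full-scale range is an assumption; the actual range depends on how the accelerometer is configured.

```python
# Sketch: scale raw accelerometer readings (in g) into the -1 to +1 range.
# The ±2g full-scale range is assumed; use your sensor's actual range.
import numpy as np

FULL_SCALE_G = 2.0  # assumed ±2g accelerometer range

def normalise(samples_g):
    """Clip to full scale and map to [-1, +1]."""
    samples = np.clip(np.asarray(samples_g, dtype=float), -FULL_SCALE_G, FULL_SCALE_G)
    return samples / FULL_SCALE_G

print(normalise([0.0, 0.98, -1.5, 2.4]))  # gravity on one axis shows up as roughly 0.5
```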

In this chart the beacon has been flipped over and the orange line now shows a constant negative acceleration.

A benefit of the constant offset in one of the x, y, z inputs is that it can be used to help determine the orientation of the beacon. The downside is that the offset significantly complicates using the x, y, z values to determine types of movement such as human gestures.
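One common signal-processing way to handle the offset, before resorting to machine learning, is to estimate gravity with a low-pass filter and subtract it. The sketch below illustrates the idea; the smoothing factor is an assumed value that would need tuning per device.

```python
# Sketch: separate the slowly-varying gravity component from movement using a
# simple low-pass filter. ALPHA is an assumed smoothing factor.
import numpy as np

ALPHA = 0.9

def split_gravity(samples):
    """Return (gravity_estimate, linear_acceleration) for a sequence of x,y,z samples."""
    gravity = np.zeros(3)
    gravity_out, linear_out = [], []
    for s in np.asarray(samples, dtype=float):
        gravity = ALPHA * gravity + (1 - ALPHA) * s  # slowly-varying component
        gravity_out.append(gravity.copy())
        linear_out.append(s - gravity)               # movement component
    return np.array(gravity_out), np.array(linear_out)

samples = [[0.0, 0.1, 1.0]] * 20 + [[0.2, 0.1, 1.1]] * 20  # placeholder x,y,z readings in g
gravity, linear = split_gravity(samples)
# The gravity estimate also indicates orientation: the axis closest to ±1g points up or down.
```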

Such complex data problems are more easily solved using AI machine learning than trying to write a traditional algorithm to make sense of the data.

Here’s an example of output from a SensorCognition Edge device trained with up-and-down movement and left-and-right movement. In this case, the output 227 shows the beacon is moving left and right.

Read about SensorCognition

The State of AI in 2019

Beacons are a great source of new data for AI machine learning. They allow you to measure things that aren’t currently being quantified, create new data that isn’t siloed by protectionist staff or departments, and pre-process data in place, making it suitable for learning and inference.

There’s a new free State of AI Report 2019 in the form of a 136-page presentation. It covers aspects such as research, talent, industry and geopolitical areas such as China and politics.

Read more about AI Machine Learning with Beacons

The Crux of Machine Learning is Realistic Expectations

VentureBeat has an article, based on IDC research, titled ‘For 1 in 4 companies, half of all AI projects fail’.

“Firms blamed the cost of AI solutions, a lack of qualified workers, and biased data as the principal blockers impeding AI adoption internally. Respondents identified skills shortages and unrealistic expectations as the top two reasons for failure, in fact, with a full quarter reporting up to 50% failure rate.”

We believe a key part of this is ‘unrealistic expectations’. Half of all AI projects failing for 1 in 4 companies isn’t unreasonable. AI and machine learning should be viewed as a research activity rather than a development activity, in that it’s often not known whether the goal is achievable until you try.

Another unrealistic expectation of machine learning is 100% accuracy. Quoting an accuracy percentage when assessing machine learning models focuses stakeholders’ minds too much on the perceived need for very high accuracy. In reality, human-performed, non-machine-learning processes such as medical diagnosis tend to have much less than 100% accuracy, and sometimes undetermined accuracy, yet are reasonably seen as acceptable.

In summary, there have to be realistic upfront expectations of both the possible outcome and its accuracy for projects to correctly determine whether AI activities are an unexpected failure.

Read about AI Machine Learning with Beacons

Using AI Machine Learning on Bluetooth RSSI to Obtain Location

In our previous post on iBeacon Microlocation Accuracy we explained how distance can be inferred from the received signal strength indicator (RSSI). We also explained how techniques such as trilateration, calibration and angle of arrival (AoA) can be used to improve location accuracy.
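For reference, the usual starting point is the log-distance path-loss model; the sketch below shows it in Python, with an assumed 1m calibration value and path-loss exponent that would both need to be measured for a real deployment.

```python
# Sketch: log-distance path-loss model for estimating distance from RSSI.
# tx_power_at_1m (calibrated RSSI at 1 metre) and the path-loss exponent
# are assumptions here and must be calibrated for the real environment.
def rssi_to_distance(rssi_dbm, tx_power_at_1m=-59.0, path_loss_exponent=2.0):
    """Return an estimated distance in metres for a given RSSI in dBm."""
    return 10 ** ((tx_power_at_1m - rssi_dbm) / (10 * path_loss_exponent))

print(round(rssi_to_distance(-75), 2))  # roughly 6.3 m with these assumed values
```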

There’s new research presented at The 17th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys ’19) by researchers from Nagoya University, Japan that looks into the use of AI machine learning to process Bluetooth RSSI to obtain location.

Their study was based on a large-scale exhibition where they placed scanning devices.

They implemented an LSTM neural network and experimented with the number of layers.

They obtained the best results with the simplest machine learning model, with only one LSTM layer.

As is often the case with machine learning, more complex models overfit to the training data such that they don’t work well with new, subsequent data. Simpler models are more general and work not just with the training data but in new scenarios.
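As a rough illustration of the kind of model the paper found worked best, here’s a minimal single-LSTM-layer sketch in Keras. The layer size, sequence length and number of scanners are assumptions, not the researchers’ settings, and the training data is placeholder.

```python
# Sketch: a single-LSTM-layer model mapping a sequence of RSSI readings from
# several scanners to an estimated (x, y) position. All sizes are assumptions.
import numpy as np
import tensorflow as tf

TIMESTEPS, N_SCANNERS = 10, 8   # assumed sequence length and scanner count

model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIMESTEPS, N_SCANNERS)),
    tf.keras.layers.LSTM(64),   # the single LSTM layer
    tf.keras.layers.Dense(2),   # predicted (x, y) in metres
])
model.compile(optimizer="adam", loss="mse")

# Placeholder training data: RSSI sequences and known positions.
X = np.random.uniform(-100, -40, size=(500, TIMESTEPS, N_SCANNERS))
y = np.random.uniform(0, 50, size=(500, 2))
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```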

The researchers achieved an accuracy of 2.44m at the 75th percentile, which we take to mean in 75% of cases. 2.44m is reasonable and compares well with accuracies of about 1.5m within a shorter-range confined space and 5m at longer distances achieved using conventional methods. As with all machine learning, further parameter tuning usually improves the accuracy but can take a long time and effort. In our experience, using other types of RNN in conjunction with LSTM can also improve accuracy.

If you want to view the research paper you need to download all the papers from the conference (zip) and extract p558-uranoA.pdf. Some of the other papers also make interesting, if not directly relevant, reading.

Read about AI Machine Learning with Beacons

How to Start Industry 4.0 and Digital Transformation

A lot is said about the advantages of Industry 4.0 or digital transformation and the associated new technologies, but it’s much harder to apply them in the context of a business that has legacy equipment and no real way of knowing where to start.

Our previous article on productivity explained how, historically, digital transformation has only been implemented in the top 5% ‘frontier’ companies. These have tended to be very large companies with large R&D budgets that have enabled customised digital solutions. More recently, the availability of less expensive sensors and software components has extended opportunities to SMEs. These companies are already realising gains in profitability, customer experience and operational efficiency. Unlike previous technologies, such as CRM, the newer technologies such as IoT and AI are more transformative. Companies that don’t update their processes risk being overtaken by their competition, with a greater possibility of going out of business. But where do you start?

The place to start is not technology but instead something you and your colleagues fortunately have lots of experience of: your company. Take an honest look at your processes and work out the key problems that, if solved, would achieve the greatest gains. You might have ignored problems or inefficiencies for years or decades because they were thought to be unsolvable. Technology might now be able to solve some of them. So what kind of problems? Think in terms of bottlenecks, costly workarounds, tasks limited by available human effort, stoppages, downtime, process delays, under-used equipment and even under-used people. Can you measure these things and react? Can you predict when they are about to happen? This is where sensing comes in.

The next stage is connectivity. You will almost certainly need to upgrade or expand your WiFi and/or Ethernet network. It can be impractical to put sensors on everything and everyone and connect them all by WiFi/Ethernet. Instead, consider Bluetooth LE and sensor beacons to provide a low-cost, low-power solution for the last 50 to 100m. Bluetooth mesh can provide site-wide connectivity.

Initially implement a few key improvements that offer a good payback for the effort (ROI). The improvements in efficiency, productivity, reduced costs and even customer experience should be enough to convince stakeholders to expand and better plan the digital transformation. This involves replacing inefficient equipment and inefficient processes using, for example, robotics and 3D printing. It also involves analysing higher-order information combined from multiple sources and using more advanced techniques such as AI machine learning to recognise patterns in order to detect, classify and predict. This addresses complexity beyond what can be handled by the human mind or by an algorithm hand-crafted by a programmer.

Get Help Determining Feasibility

Read about Beacons in Industry and the 4th Industrial Revolution (4IR)

Explore AI Machine Learning with Beacons

Free AI Paper

Microsoft has a new free (registration not required) paper, ‘Maximising the AI opportunity: How to harness the potential of AI effectively and ethically’ (pdf). While the data is UK-centric, the insights and actions are applicable to any country.

The message is that organisations should embrace AI’s potential or risk being left behind. As well as economic gains, changes should take into account social and safety issues.

“Organisations that are investing in establishing the right approach to AI now outperform those that don’t by 9%”

The paper explains AI and how many organisations are talking about AI but fewer are taking action. It gives perspectives on the use of AI in FinTech, Healthcare, Manufacturing and Retail.

Read about AI Machine Learning with Beacons

SensorCognition™ – Machine Learning Sensor Data at the Edge

The traditional IoT strategy of sending all data up to the cloud for analysis doesn’t work well for some sensing scenarios. The combination of lots of sensors and/or frequent updates leads to lots of data being sent to the server, sometimes needlessly. The server and onward systems usually only need to know about abnormal situations. The data burden manifests itself as lots of traffic, lots of stored data, lots of complex processing and significant, unnecessary costs.
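As a trivial illustration of pushing that decision towards the sensing, the sketch below only forwards readings that fall outside a normal range; the threshold values and the send_to_server() function are placeholders.

```python
# Sketch: only forward abnormal readings to the server; handle normal ones locally.
NORMAL_RANGE_C = (2.0, 8.0)  # assumed acceptable temperature band

def send_to_server(reading):
    print("ALERT forwarded:", reading)  # stand-in for an HTTP/MQTT call to the server

def process_reading(temperature_c):
    low, high = NORMAL_RANGE_C
    if not (low <= temperature_c <= high):
        send_to_server({"temperature_c": temperature_c})
    # otherwise the reading never leaves the edge device

for sample in [4.1, 5.0, 9.3, 3.8]:
    process_reading(sample)
```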

Processing data and creating ongoing alerts at a server can also introduce delays that are too long, or too unreliable, for some time-critical scenarios. The opposite, doing all or most of the processing near the sensing, is called ‘edge’ computing. Some people think that edge computing might one day become the norm as it’s realised that the cloud paradigm doesn’t scale technically or financially. We have been working with edge devices for a while and can now formally announce a new edge device with some unique features.

Another problem with IoT is that every scenario is different, with different inputs and outputs. Most organisations start by looking for a packaged, ready-made solution to their IoT problem that usually doesn’t exist. They tend to end up creating a custom-coded solution. Instead, with SensorCognition™ we use pre-created modules that we ‘wire’ together, using data, to create your solution. We configure rather than code. This speeds up solution creation, provides greater adaptability to requirements changes and ultimately allows us to spend more time on your solution and less time solving programming problems.

However, the main reason for creating SensorCognition™ has been to provide for easier machine learning of sensor data. Machine learning is a two-stage process. First, data is collected, cleaned and fed into the ‘learning’ stage to create models. Crudely speaking, these models represent patterns that have been detected in the data in order to detect, classify and predict. During the production or ‘inference’ stage, new data is fed through the models to gain real-time insights. It’s important to clean the new data in exactly the same way as was done in the learning stage, otherwise the models don’t work. The traditional method of data scientists manually cleaning data prior to creating models isn’t easily transferable to using those same models in production. SensorCognition™ provides a way of collecting sensor data for learning and inference with a common way of cleaning it, all without using a cloud server.
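The idea of keeping the cleaning and the model together can be illustrated with a scikit-learn pipeline, shown below. This is only an analogy for the approach, not SensorCognition™ internals; the features, labels and model choice are placeholders.

```python
# Sketch: bundle the cleaning/scaling step with the model so inference applies
# exactly the same preparation as the learning stage did.
import numpy as np
from joblib import dump, load
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(200, 6)             # placeholder cleaned sensor features
y = np.random.randint(0, 2, size=200)  # placeholder labels

pipeline = Pipeline([
    ("scale", StandardScaler()),          # the "cleaning" step
    ("model", RandomForestClassifier()),  # the learned model
])
pipeline.fit(X, y)
dump(pipeline, "model.joblib")            # output of the learning stage

# Inference stage: new data goes through the identical scaling automatically.
deployed = load("model.joblib")
print(deployed.predict(np.random.rand(1, 6)))
```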

Sensor data and machine learning isn’t much use unless your solution can communicate with the outside world. SensorCognition™ modules allow us to combine inputs such as MQTT, HTTP, WebSocket, TCP, UDP, Twitter, email, files and RSS. SensorCognition™ can also have a web user interface, accessible on the same local network, with buttons, charts, colour pickers, date pickers, dropdowns, forms, gauges, notifications, sliders, switches, labels (text), play audio or text to speech and use arbitrary HTML/Javascript to view data from other places. SensorCognition™ processes the above inputs and provides output to files, MQTT, HTTP(S), WebSocket, TCP, UDP, email, Twitter, FTP, Slack and Kafka. It can also run external processes and Javascript if needed.

With SensorCognition™ we have created a general purpose device that can process sensor data using machine learning to provide for business-changing Internet of Things (IoT) and ‘Industry 4.0’ machine learning applications. This technology is available as a component of BeaconZone Solutions.