The State of AI in 2019

Beacons provide a great way of generating new data for AI machine learning. They allow you to measure things that aren't currently being quantified, create new data that isn't siloed by protectionist staff or departments, and pre-process data in place, making it suitable for learning and inference.

There's a new free State of AI Report 2019 in the form of a 136-page presentation. It covers areas such as research, talent and industry, together with geopolitical topics including China and politics.

Read more about AI Machine Learning with Beacons

The Crux of Machine Learning is Realistic Expectations

Venturebeat has an article, based on IDC research, titled For 1 in 4 companies, half of all AI projects fail.

“Firms blamed the cost of AI solutions, a lack of qualified workers, and biased data as the principal blockers impeding AI adoption internally. Respondents identified skills shortages and unrealistic expectations as the top two reasons for failure, in fact, with a full quarter reporting up to 50% failure rate.”

We believe a key part of this is 'unrealistic expectations'. Half of all AI projects failing for 1 in 4 companies isn't unreasonable. AI and machine learning should be viewed as research rather than development activities, in that it's often not known whether the goal is achievable until you try.

Another common unrealistic expectation is 100% accuracy. Quoting an accuracy percentage when assessing machine learning models focuses stakeholders' minds too much on the perceived need for very high accuracy. In reality, human-assessed, non-machine-learning processes such as medical diagnosis tend to have much less than 100% accuracy, and sometimes undetermined accuracy, yet these are reasonably seen as acceptable.

In summary, there have to be realistic upfront expectations of both the possible outcome and the accuracy of that outcome before a project can correctly judge whether its AI activities have unexpectedly failed.

Read about AI Machine Learning with Beacons

Using AI Machine Learning on Bluetooth RSSI to Obtain Location

In our previous post on iBeacon Microlocation Accuracy we explained how distance can be inferred from the received signal strength indicator (RSSI). We also explained how techniques such as trilateration, calibration and angle of arrival (AoA) can be used to improve location accuracy.
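As a reminder of the basic idea, distance is commonly estimated from RSSI using the log-distance path loss model. The sketch below is illustrative only; the measured power at 1m and the environmental factor n are assumed calibration values that vary per beacon and per environment.

    def rssi_to_distance(rssi, measured_power=-59, n=2.0):
        """Estimate distance in metres from an RSSI reading (dBm) using the
        log-distance path loss model. measured_power is the expected RSSI at
        1 m and n is an environmental attenuation factor; both are assumed
        calibration values, not universal constants."""
        return 10 ** ((measured_power - rssi) / (10 * n))

    print(rssi_to_distance(-59))  # about 1.0 m
    print(rssi_to_distance(-75))  # about 6.3 m with these assumed values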

There’s new research presented at The 17th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys ’19) by researchers from Nagoya University, Japan that looks into the use of AI machine learning to process Bluetooth RSSI to obtain location.

Their study was based on a large-scale exhibition where they placed scanning devices.

They implemented an LSTM neural network and experimented with the number of layers.

They obtained the best results with the simplest machine learning model, containing only one LSTM layer.

As is often the case with machine learning, more complex models overfit to the training data such that they don't work well with new, subsequent data. Simpler models are more general and work not just with the training data but in new scenarios.
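For a sense of how small such a model can be, here is a minimal Keras sketch of a single-LSTM-layer network that maps a sequence of RSSI readings to an (x, y) position. This is not the researchers' implementation; the sequence length, number of scanners and layer sizes are all assumptions.

    import numpy as np
    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    SEQ_LEN = 10       # RSSI samples per input sequence (assumed)
    NUM_SCANNERS = 20  # number of fixed scanning devices (assumed)

    model = Sequential([
        LSTM(64, input_shape=(SEQ_LEN, NUM_SCANNERS)),  # the single LSTM layer
        Dense(2)                                         # predicted (x, y) in metres
    ])
    model.compile(optimizer="adam", loss="mse")

    # Dummy data purely to show the expected shapes of the inputs and targets.
    X = np.random.uniform(-100, -40, size=(1000, SEQ_LEN, NUM_SCANNERS))  # RSSI in dBm
    y = np.random.uniform(0, 50, size=(1000, 2))                          # positions in metres
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)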

The researchers achieved an accuracy of 2.44m at the 75th percentile, that is, the location error was 2.44m or less in 75% of cases. 2.44m is acceptable and compares well with accuracies of about 1.5m within a shorter-range, confined space and 5m at longer distances achieved using conventional methods. As with all machine learning, further parameter tuning usually improves the accuracy but can take a long time and effort. It's our experience that using other types of RNN in conjunction with LSTM can also improve accuracy.
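To make the percentile figure concrete, here's a tiny calculation using made-up error values (not the paper's data):

    import numpy as np

    # Hypothetical per-fix location errors in metres, purely for illustration.
    errors = np.array([0.8, 1.2, 1.9, 2.1, 2.4, 2.5, 3.0, 3.6, 4.2, 5.1])

    # The 75th percentile error: 75% of position fixes are at least this accurate.
    print(np.percentile(errors, 75))  # about 3.45 for this made-up sample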

If you want to view the research paper you need to download all the papers from the conference (zip) and extract p558-uranoA.pdf. Some of the other papers also make interesting, if not directly relevant, reading.

Read about AI Machine Learning with Beacons

Free AI Paper

Microsoft has a new free (registration not required) paper on Maximising the AI opportunity, How to harness the potential of AI effectively and ethically (pdf). While the data is UK-centric, the insights and actions are applicable to any country.

The message is that organisations should embrace AI’s potential or risk being left behind. As well as economic gains, changes should take into account social and safety issues.

“Organisations that are investing in establishing the right approach to AI now outperform those that don’t by 9%”

The paper explains AI and how many organisations are talking about AI but fewer are taking action. It gives perspectives on the use of AI in FinTech, Healthcare, Manufacturing and Retail.

Read about AI Machine Learning with Beacons

Machine Learning isn’t Magic

When working with machine learning on beacon sensor data, or indeed any data, it's important to realise AI machine learning isn't magic. It isn't foolproof and is ultimately only as good as the data passed in. Because it's called AI and machine learning, people often expect 100% accuracy when this usually isn't possible.

By way of a simple example, take a look at the recent tweet by Max Woolf where he shows a video depicting the results of the Google Cloud Vision API when asked to identify an ambiguous rotating image that looks like both a duck and a rabbit.

There are times when it thinks the image is a duck, other times a rabbit and other times when it doesn't identify either. Had the original learning data included only ducks and no rabbits, there would have been different results. Had there been different images of ducks, the results would have been different. Machine learning is only a complex form of pattern recognition. The accuracy of what you get out is related to a) the quality of the learning data and b) the quality of the data presented when you try identification.
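For illustration, this is roughly what asking an image classifier for labels looks like using the Google Cloud Vision Python client (the file name is a placeholder and credentials are assumed to be configured). The point is that the API returns confidence scores, not certainties:

    from google.cloud import vision

    # Label an image and print each label with its confidence score.
    client = vision.ImageAnnotatorClient()
    with open("ambiguous.jpg", "rb") as f:  # placeholder file name
        image = vision.Image(content=f.read())

    response = client.label_detection(image=image)
    for label in response.label_annotations:
        # Scores are confidences, not certainties: an ambiguous duck/rabbit
        # image can score similarly for both labels, or strongly for neither.
        print(f"{label.description}: {label.score:.2f}")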

If your application is safety-critical and needs 100% accuracy, then machine learning might not be right for you.

Read about AI Machine Learning with Beacons

Prognostics, Predictive Maintenance Using Sensor Beacons

A growing use of sensor beacons is in prognostics. Prognostics replaces human inspection with continuous automated monitoring. This cuts costs and potentially detects when things are about to fail rather than when they have failed. It makes processes proactive rather than reactive, providing for smoother process planning and reducing the knock-on effects of failures. It can also reduce the need for excessive and costly component replacement that's sometimes used to reduce in-process failures.

Prognostics is implemented by examining the time series data from sensors, such as those monitoring temperature or vibration, in order to detect anomalies and forecast the remaining useful life of components. The problem with analysing such data is that it's usually complex and noisy.
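As a simple illustration of one possible approach (not a recommendation for production use), a rolling z-score can flag readings that deviate sharply from recent behaviour; the window size and threshold below are assumptions:

    import numpy as np

    def rolling_zscore_anomalies(readings, window=60, threshold=3.0):
        """Flag indices whose reading is more than `threshold` standard
        deviations from the mean of the preceding `window` samples."""
        readings = np.asarray(readings, dtype=float)
        anomalies = []
        for i in range(window, len(readings)):
            past = readings[i - window:i]
            mean, std = past.mean(), past.std()
            if std > 0 and abs(readings[i] - mean) / std > threshold:
                anomalies.append(i)
        return anomalies

    # Example: a steady temperature signal with one injected spike.
    temps = np.concatenate([20 + 0.2 * np.random.randn(200), [27.0]])
    print(rolling_zscore_anomalies(temps))  # likely reports index 200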

Machine learning's capacity to analyse very large amounts of high-dimensional data can take prognostics to a new level. In some circumstances, adding additional data such as audio and images can enhance the capabilities and provide for continuously self-learning systems.

A downside of using machine learning is that it requires lots of data. This usually requires a gateway, smartphone, tablet or IoT edge device to collect the initial data. Once the data has been obtained, it needs to be categorised, filtered and converted into a form suitable for machine learning. The machine learning results in a 'model' that can be used in production systems to provide classification and prediction.
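A rough sketch of that pipeline, with a made-up window size, labels and file name, might slice the raw time series into fixed-length windows, train a classifier, then export the model for use in a production system:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    import joblib

    WINDOW = 50  # samples per window (assumed)

    def to_windows(readings, labels, window=WINDOW):
        """Slice a raw sensor time series into fixed-length windows, each
        labelled with a condition such as 'normal' or 'failing' (labels are
        assumed to have been provided during data categorisation)."""
        X, y = [], []
        for start in range(0, len(readings) - window, window):
            X.append(readings[start:start + window])
            y.append(labels[start + window - 1])
        return np.array(X), np.array(y)

    # Made-up vibration readings and per-sample condition labels.
    readings = np.random.randn(5000)
    labels = np.where(np.arange(5000) < 4000, "normal", "failing")

    X, y = to_windows(readings, labels)
    model = RandomForestClassifier().fit(X, y)

    # Export the trained model for use in a production system.
    joblib.dump(model, "prognostics_model.joblib")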