Hybrid vs Native Apps and Cross Platform Tools?

When creating apps to discover beacons, there’s often the temptation to use cross-platform tools to create both iOS and Android apps at the same time. Such tools are often based on web (WebView) technologies and JavaScript.

The first problem you will encounter is that few cross-platform tools support Bluetooth. Even those that do don’t support it to the degree required to fully use the latest iOS and Android Bluetooth APIs. This is one of the main problems with cross-platform development: functionality always trails the underlying native OS functionality.

Another problem is that there’s no one Android browser upon which WebViews are based. Niels Leenheer has an (old but still relevant) set of slides explaining how browsers vary across Android versions, devices and phone manufacturers. The consequence is that getting any non-trivial WebView-based app to work across many device types is very difficult.

The next problem is functionality. Not only does hybrid functionality lag the underlying OS in its use of APIs, but some features are absent altogether. This often requires some native coding, which turns the app into more of a Frankenstein creation with consequent unexpected complexities.

For the best performance and OS look and feel you have to use native development. It’s possible for hybrid apps to look and feel like Android and iOS, but it takes a lot of effort due to the previously mentioned browser fragmentation. It’s possible to get near-native app performance by replacing (bundling) a better JavaScript interpreter. However, these extra complexities are what you were trying to avoid by using the cross-platform framework in the first place.

If the above doesn’t persuade you, even Mark Zuckerberg regretted using web technologies in apps in 2015. This didn’t stop others trying. There are some detailed posts on Medium explaining how Airbnb is moving back to native development and how the difficulties are not just technical but also organisational.

If you are writing apps or getting apps written we recommend you save yourself some grief and write them using native code.

Read about our development services

What New Things Could Machine Learning Enable?

Benedict Evans of Andreessen Horowitz, a venture capital firm in Silicon Valley, has a thought-provoking blog post on Ways to Think About Machine Learning.

Benedict asks what new things machine learning (ML) could enable. What important problems might it actually be able to solve? There are (too) many examples of machine learning being used to analyse images, audio and text, usually using the same example data. However, the main question for organisations is how can they use ML? What should they look for in data? What can be done?

Much of the emphasis is currently on making use of existing captured data. However, such data is often trapped in siloed company departments and usually needs copious amounts of pre-processing to make it suitable for machine learning.

We believe some easier-to-exploit and more profound opportunities exist if you create new data from sensors attached to physical things. Data from physical things can provide deeper insights than existing company administrative data. The data can also be captured in more suitable formats and shared rather than hoarded by protectionist company departments.

For example, let’s take accelerometer xyz data, just one aspect of movement that can be detected by beacons. Machine learning allows the use of xyz motor vibration data to predict that a motor is about to fail. Human posture, recorded as xyz, allows detection that patients are overly wobbly and might be due for a fall. The same posture information can be used to classify sports moves and fine-tune player movement. xyz data from a vehicle can be used to classify how well a person is driving and hence allow insurers to provide behaviour-based insurance. xyz from human movement might even allow that movement to uniquely identify a person and be used as a form of identification. The possibilities and opportunities are extensive.
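To make the motor-vibration example concrete, here is a minimal sketch of how rising vibration could be flagged from accelerometer readings. The sample values, the baseline threshold and the function names are all made up for illustration; a real system would learn such thresholds from data rather than hard-code them.

```python
import math

# Illustrative only: flag a motor for maintenance when the RMS of recent
# accelerometer readings (gravity removed) exceeds a hypothetical baseline.

def rms(values):
    return math.sqrt(sum(v * v for v in values) / len(values))

def vibration_level(samples):
    """samples: list of (x, y, z) accelerometer readings in g.
    Returns the overall RMS vibration magnitude."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    return rms(mags)

healthy = [(0.01, 0.00, 0.02), (0.02, 0.01, 0.01), (0.00, 0.02, 0.01)]
worn    = [(0.30, 0.10, 0.25), (0.25, 0.20, 0.30), (0.35, 0.15, 0.20)]

BASELINE = 0.1  # hypothetical alert threshold
for name, samples in (("healthy", healthy), ("worn", worn)):
    level = vibration_level(samples)
    print(name, round(level, 3), "ALERT" if level > BASELINE else "ok")
```

In practice a machine learning model would replace the fixed threshold, learning what "about to fail" looks like from historical sensor data.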

As previously mentioned, the above examples cover just one aspect of movement. If you also consider movement between zones, movement from stationary and fall detection itself, more use cases become evident. Sensor beacons also allow measuring of temperature, humidity, air pressure, light, magnetism (hall effect), proximity and heart rate. There are so many possibilities it can seem difficult to know where to start.

One solution is to look at your business rather than technical solutions or even machine learning. Don’t expect or look for a ready-made solution or product, as the most appropriate machine learning solutions will usually need to be custom and proprietary to your company. Start by looking for aspects of your business that are currently very costly or very risky. How might more ‘intelligence’ be used to cut these costs or reduce these risks?

Practical examples are: How might we use less fuel? How might we need fewer people? How might we concentrate on the types of work that are least risky? How might we preempt costly or risky situations? How might we predict stoppages or over-runs?

Next, use your organisation’s domain experts to assess what data might be needed to measure these situations. Humans often have the insight that patterns in particular data types will help classify and predict situations. They just can’t work out the patterns themselves. That’s where machine learning excels.
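As a toy illustration of an algorithm finding a pattern that nobody hand-coded, the sketch below uses a nearest-centroid classifier on two made-up sensor features (say, vibration and temperature) to separate "normal" from "faulty" readings. The data and labels are invented for the example.

```python
# Toy nearest-centroid classifier on made-up (vibration, temperature) data.

def centroid(points):
    """Mean point of a list of equal-length tuples."""
    return tuple(sum(c) / len(points) for c in zip(*points))

def nearest(p, centroids):
    """Label of the centroid closest (squared distance) to point p."""
    return min(centroids, key=lambda label: sum(
        (a - b) ** 2 for a, b in zip(p, centroids[label])))

train = {
    "normal": [(0.10, 20.5), (0.20, 21.0), (0.15, 20.8)],
    "faulty": [(0.90, 35.2), (1.10, 34.0), (1.00, 36.1)],
}
centroids = {label: centroid(pts) for label, pts in train.items()}

print(nearest((0.95, 35.0), centroids))  # classified as faulty
print(nearest((0.12, 20.6), centroids))  # classified as normal
```

Real systems would use far richer models, but the principle is the same: label some example situations and let the algorithm work out the separating pattern.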

Read About AI Machine Learning with Beacons

Devices That Can See Beacons

When people think about beacons they often imagine them being detected in smartphone apps. This post explores other devices that can also see beacons allowing for different interaction possibilities and new scenarios.

Apps – Apps aren’t limited to just smartphones. You can also run apps on TV boxes that run Android. Just make sure they run Android 4.3 or later and have Bluetooth LE (Bluetooth 4.0+) hardware.

Gateways – Gateways are small single purpose devices that look for beacons and send the information on via MQTT or REST (HTTP) to any server. This allows web servers to see beacons.

Desktops and Laptops – PC/Mac devices with built-in Bluetooth or dongles can see beacons.

Walkie Talkies – Motorola manufacture the MOTOTRBO range of digital radios that can detect iBeacons and show their location on a map.

Raspberry Pi – This has Bluetooth and can be used to detect beacons.

Android Things – This special IoT version of Android can run apps that detect beacons and store and/or forward information to other devices.

Arduino – Arduino boards often have Bluetooth and can do things based on the presence of beacons.

Pixl.js – The manufacturer of the Puck.js also supplies a device with a screen that can detect and interact with beacons.

Single Board Computers (SBC) have an advantage over gateways in that data can be cached locally when there isn’t an Internet connection. They can also make decisions locally and send out alerts directly rather than having to rely on a server. This is so-called ‘IoT Edge’ computing.
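Whichever device sees the beacon, the raw advertising data usually ends up being decoded somewhere downstream. As a hedged sketch, here is how a server receiving forwarded adverts might parse the publicly documented iBeacon manufacturer data layout (the example UUID and values are invented):

```python
import struct
import uuid

def parse_ibeacon(mfr_data: bytes):
    """Parse Apple iBeacon manufacturer-specific data: 2-byte company ID
    (0x004C, little endian), type 0x02, length 0x15, 16-byte proximity
    UUID, big-endian major and minor, then a signed TX power byte."""
    if len(mfr_data) != 25 or mfr_data[:4] != b"\x4c\x00\x02\x15":
        return None  # not an iBeacon advert
    proximity_uuid = uuid.UUID(bytes=mfr_data[4:20])
    major, minor = struct.unpack(">HH", mfr_data[20:24])
    tx_power = struct.unpack("b", mfr_data[24:25])[0]
    return {"uuid": str(proximity_uuid), "major": major,
            "minor": minor, "tx_power": tx_power}

# Hypothetical advert, as a gateway might forward it over MQTT/REST.
raw = (b"\x4c\x00\x02\x15"
       + uuid.UUID("f7826da6-4fa2-4e98-8024-bc5b71e0893e").bytes
       + struct.pack(">HH", 1, 42)
       + struct.pack("b", -59))
print(parse_ibeacon(raw))
```

The TX power byte is the calibrated RSSI at 1m, which receivers use to estimate proximity.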

Bluetooth 5 in Smartphones

Last February we wrote about the progress of Bluetooth 5 in recent smartphones. A few months on and Nordic Semiconductor, the company that produces the System on a Chip (SoC) used in most beacons, has a new blog post on Bluetooth 5 in Smartphones and how we are about to experience a tipping point in support for Bluetooth 5.

The final observation from the article is:

Even if sticking to previous incarnations of Bluetooth may look like the right choice, the marketing power of Bluetooth 5, regardless of whether it’s needed or not, is likely to help companies differentiate products and increase sales.

This is true. Some companies currently claiming Bluetooth 5 support in products don’t actually use Bluetooth 5 yet but instead offer an upgrade path to Bluetooth 5.

Detecting Temperature With Beacons

Some sensor beacons can be used to monitor temperature. The first thing to consider when comparing temperature beacons is whether they have a dedicated hardware temperature sensor. Some beacons only have a temperature sensor inside the main chip (System on a Chip – SoC) that’s less accurate and has less precision. That sensor is mainly there to give an indication of the chip temperature, not the ambient (outside the beacon) temperature. Nevertheless, most beacons only transmit for of the order of 1ms every few seconds and enter a very low power state for the remainder of the time. This means they not only use little power but also don’t significantly heat the SoC, so the SoC roughly tracks the outside temperature.

In our sensor beacon listings, when we say a beacon has temperature sensing, it has a separate hardware sensor, usually the Sensirion SHT20, providing more accuracy and precision than the sensor in an SoC. Some of our beacons, such as the Minew i3 and i7, have an internal SoC temperature sensor that’s readable, but we don’t classify these as sensor beacons.

The next thing to consider is the casing. In order to quickly track ambient temperature, the casing needs to be open somewhere, usually via a hole. Beacons that claim to be waterproof and have temperature sensing won’t track ambient temperature well.

We have had customers use temperature sensing beacons in scientific situations where they need to periodically calibrate sensing equipment. How do you calibrate temperature sensor beacons? The SHT20 has a long-term drift of only <0.04°C/year (the humidity reading drifts by <0.5% RH/year) so it doesn’t need calibration in most situations. However, if you need better than this, or need to check calibration, you will need to periodically calibrate in the software of the device (usually an app) that receives the beacon sensor data.
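One common way to do such software-side calibration is a two-point linear correction against a trusted reference thermometer. The sketch below is an illustration only; the reference readings are invented and real calibration procedures depend on your accuracy requirements.

```python
# Hypothetical two-point linear calibration applied in the receiving
# software: map raw beacon readings onto trusted reference values.

def make_calibration(raw_lo, ref_lo, raw_hi, ref_hi):
    """Return a function converting raw sensor readings to corrected
    values, using a straight line through two calibration points."""
    scale = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    return lambda raw: ref_lo + (raw - raw_lo) * scale

# Say the beacon read 0.4°C at a true 0.0°C and 99.2°C at a true 100.0°C.
calibrate = make_calibration(0.4, 0.0, 99.2, 100.0)

print(round(calibrate(25.0), 2))  # corrected room-temperature reading
```

The correction lives entirely in the app or server, so it can be updated at each recalibration without touching the beacon.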

New Physical Web Association

Last April we asked if the Physical Web was dead and mentioned that a group of people, led by Agustin Musi from Switzerland, was contemplating creating PhysicalWeb2. The Physical Web Association (PHWA) has now been created as a non-profit association with the goal of driving the development, community, and adoption of the Physical Web. The PHWA is now accepting memberships.

A refreshed TestFlight version of the PhyWeb iOS app is available to members. This new app will be promoted via advertising and the press. In time, the PHWA aims to develop a native app kit to add the Physical Web to existing apps, develop brand-neutral apps for iOS and Android and host a metadata service as, presumably, a substitute for the Google Physical Web Proxy.

Detecting Movement With Beacons

There are various types of movement that can be detected by beacons:

Movement between zones – This is large scale movement between, for example, rooms. It relies on devices detecting the beacons and relaying the information to software that stores historical locations, plots positions and creates alerts. This is the basis for Real Time Locating Systems (RTLS).

Movement from stationary – This is when something goes from being stationary to moving. There are two ways to do this. You can look at the xyz from a beacon accelerometer to determine that it has started moving. Alternatively, some beacons, such as the iB003, have motion triggered advertising so you will only see the beacon when it moves.

Falling – Again, you can look at the xyz from a beacon accelerometer to determine that a beacon is falling. Alternatively, you can use a more intelligent beacon such as the iBS01G that does this for you and just gives indications of the start/during/end of a fall as values in the advertising data.

Vibration – The xyz can be used to determine the degree of the movement and hence vibration.

Posture detection – This is more advanced analysis of the xyz that works out, for example, if someone is walking, running, sitting or standing. Another use is the analysis of sports (e.g. golf, squash, tennis, badminton) swings to determine the type of movement and score the movement.
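As a hedged sketch of the "look at the xyz yourself" approach to fall detection mentioned above, the code below looks for a free-fall phase (overall acceleration dropping well below 1g) followed by an impact spike. The thresholds and sample values are hypothetical; intelligent beacons such as the iBS01G do this on-device with far more sophistication.

```python
import math

# Illustrative only: naive fall detection from raw xyz samples in g.
def detect_fall(samples, free_fall=0.3, impact=2.0):
    """During free fall the accelerometer registers near 0g overall;
    the subsequent impact shows as a large spike. Both thresholds
    here are made-up illustrative values."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, m in enumerate(mags):
        if m < free_fall:  # free-fall phase: gravity no longer registered
            if any(later > impact for later in mags[i + 1:]):
                return True  # impact spike after free fall
    return False

walking = [(0.1, 0.2, 1.0), (0.2, 0.1, 0.9), (0.1, 0.0, 1.1)]
falling = [(0.1, 0.1, 1.0), (0.05, 0.05, 0.1), (1.5, 1.8, 1.2)]

print(detect_fall(walking), detect_fall(falling))  # False True
```

The same raw xyz stream, with different analysis, serves the vibration and posture use cases listed above.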

There are also scenarios beyond the above. For example, we had a customer who wanted to know if their forklift truck hadn’t moved for 2 minutes, so as to make best use of it.

View our sensor beacons

Have Us Devise a Solution for Your Company

Beacons Within Existing Systems

There’s a trend for beacons to become parts of existing systems rather than being the main reason for having a system. The two-way radio admin system (from Motorola) was one of the early examples. Newer examples are smart desk/meeting room systems, BlindSquare for navigation and (Cisco Meraki) WiFi access points.

Middleware used to create systems is also increasingly including support for beacons. An example is IBM’s MobileFirst Foundation service that has recently provided for beacons via a MobileFirst Adapter. This allows you to easily use beacons within mobile apps with data being stored in the IBM Cloud.

BlindSquare and Beacons

BlindSquare is a popular accessible GPS application developed for the blind and visually impaired. It describes the environment and announces points of interest and street intersections. BlindSquare also works with iBeacons.

An example of the use of BlindSquare with beacons is at Melbourne Zoo, which allows people with visual impairments to get to parts of the zoo that are out of bounds to guide dogs.

Prognostics, Predictive Maintenance Using Sensor Beacons

A growing use of sensor beacons is in prognostics. Prognostics replaces human inspection with continuous automated monitoring. This cuts costs and potentially detects when things are about to fail rather than when they have failed. It makes processes proactive rather than reactive, providing for smoother process planning and reducing the knock-on effects of failures. It can also reduce the excessive and costly component replacement that’s sometimes used to reduce in-process failure.

Prognostics is implemented by examining the time series data from sensors, such as those monitoring temperature or vibration, in order to detect anomalies and make forecasts on the remaining useful life of components. The problems with analysing such data values are that they are usually complex and noisy.

Machine learning’s capacity to analyse very large amounts of high dimensional data can take prognostics to a new level. In some circumstances, adding in additional data such as audio and image data can enhance the capabilities and provide for continuously self-learning systems.

A downside of using machine learning is that it requires lots of data. This usually requires a gateway, smartphone, tablet or IoT Edge device to collect the initial data. Once the data has been obtained, it needs to be categorised, filtered and converted into a form suitable for machine learning. The machine learning results in a ‘model’ that can be used in production systems to provide classification and prediction.
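To show the shape of a remaining-useful-life forecast in its simplest possible form, the sketch below fits a straight line to a made-up vibration trend and estimates when it will cross a failure threshold. Real prognostics models are far more sophisticated; the readings, threshold and variable names here are all invented for illustration.

```python
# Crude remaining-useful-life estimate: least-squares line through a
# (made-up) vibration trend, extrapolated to a hypothetical threshold.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

days = [0, 1, 2, 3, 4, 5]
vibration = [0.20, 0.22, 0.25, 0.27, 0.31, 0.33]  # invented RMS readings
FAIL_AT = 0.60  # hypothetical failure threshold

slope, intercept = linear_fit(days, vibration)
days_to_failure = (FAIL_AT - intercept) / slope
print(round(days_to_failure, 1), "days until predicted failure")
```

A trained model replaces the straight line with something that copes with the noise and complexity mentioned above, but the output, a forecast that triggers maintenance before failure, is the same idea.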