The Bluetooth SIG has an infographic that depicts eight location-based use cases where Bluetooth can be used to create better visitor experiences and improve operational efficiency. It explains how smartphones, with their inherent Bluetooth compatibility, are helping to drive rapid adoption.
The use cases demonstrate some of the kinds of solutions we create at Beaconzone.
The app scans for advertising devices, optionally with a specific CoreBluetooth UUID, and displays them along with their RSSI (signal strength). It can connect to connectable devices and then browse their Bluetooth Services and Characteristics. For iBeacons, it’s also possible to observe region updates for specified beacons.
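As a sketch of what’s involved in recognising iBeacons among scanned advertisements, the following parses the publicly documented iBeacon manufacturer-data layout (Apple’s company ID 0x004C, a 0x02 0x15 header, then the proximity UUID, major, minor and calibrated TX power). This isn’t the app’s code; the sample frame bytes are made up for illustration:

```python
import struct
import uuid

APPLE_COMPANY_ID = 0x004C  # manufacturer ID at the start of the advertising data

def parse_ibeacon(mfg_data: bytes):
    """Parse iBeacon fields from raw manufacturer-specific data.

    Layout: 2-byte company ID (little-endian), 0x02 0x15 header,
    16-byte proximity UUID, 2-byte major and minor (big-endian),
    and a signed 1-byte calibrated TX power at 1 m.
    Returns a dict, or None if the data isn't an iBeacon frame.
    """
    if len(mfg_data) < 25:
        return None
    company_id = struct.unpack_from("<H", mfg_data, 0)[0]
    if company_id != APPLE_COMPANY_ID or mfg_data[2:4] != b"\x02\x15":
        return None
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor = struct.unpack_from(">HH", mfg_data, 20)
    tx_power = struct.unpack_from("b", mfg_data, 24)[0]
    return {"uuid": proximity_uuid, "major": major, "minor": minor,
            "tx_power": tx_power}

# Example frame: UUID of all 0xAA, major=1, minor=2, TX power -59 dBm
frame = (bytes([0x4C, 0x00, 0x02, 0x15]) + b"\xAA" * 16 +
         bytes([0x00, 0x01, 0x00, 0x02]) + struct.pack("b", -59))
print(parse_ibeacon(frame))
```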
The app monitors and graphs recent RSSI values. You can also set up your device to advertise iBeacon or custom services with custom Bluetooth Characteristics.
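Graphing raw RSSI directly gives a very jumpy plot, because readings fluctuate from sample to sample. One minimal approach (an assumption about how such smoothing can be done, not the app’s actual code) is a windowed moving average:

```python
from collections import deque

class RssiSmoother:
    """Keep the last n RSSI readings and expose a moving average.

    RSSI is noisy; a simple window average makes graphs and
    threshold decisions far more stable.
    """
    def __init__(self, window: int = 10):
        self.readings = deque(maxlen=window)

    def add(self, rssi: int) -> float:
        self.readings.append(rssi)
        return self.average()

    def average(self) -> float:
        return sum(self.readings) / len(self.readings)

smoother = RssiSmoother(window=5)
for rssi in [-70, -72, -68, -90, -71]:  # one outlier at -90
    smoother.add(rssi)
print(round(smoother.average(), 1))  # the window average damps the spike
```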
Researchers identified the behaviour and physiological state of dairy cattle using beacons and combined this with data from weather forecast stations.
Changes in the cows’ motor activity were recorded as 24-hour activity profiles and registered during periods of heat. Motorola smartphones were used as base stations to collect and process the data.
The researchers succeeded in collecting and processing data from beacon devices, providing an alternative to traditional pedometer-based solutions.
The system automatically registers attendance without disturbing the class. It uses an iBeacon in each classroom to determine location. It also uses a camera and deep learning analysis to prevent students from cheating the system by having someone else attend. The researchers say the system is better than biometric scanning and RFID, which require manual reading one student at a time.
The solution uses iBeacons, but it’s the Bluetooth MAC address that’s used for room identification. The scanner and camera interface uses a Raspberry Pi that sends data to a server.
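The room lookup could be as simple as a table keyed by beacon MAC address. The following is a hypothetical sketch, not the researchers’ code; the MAC addresses and room names are invented:

```python
# Hypothetical mapping from beacon MAC address to room. In the real
# system this would live on (or be synced to) the Raspberry Pi.
ROOM_BY_MAC = {
    "C3:00:00:1A:2B:3C": "Room 101",
    "C3:00:00:4D:5E:6F": "Room 102",
}

def room_for_scan(seen_macs):
    """Return the room of the strongest *known* beacon in a scan.

    seen_macs maps MAC address -> RSSI; unknown devices are ignored.
    """
    for mac, rssi in sorted(seen_macs.items(), key=lambda kv: kv[1],
                            reverse=True):
        if mac in ROOM_BY_MAC:
            return ROOM_BY_MAC[mac]
    return None

scan = {"C3:00:00:1A:2B:3C": -60, "AA:BB:CC:DD:EE:FF": -40}
print(room_for_scan(scan))  # the stronger unknown device is ignored
```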
No, it’s not an April Fools’ joke but instead another useful thing killed by Google. Apart from Search, Cloud, Gmail and perhaps Android, it’s risky to base your business on anything provided by Google. Unless it’s an offering on which Google itself depends for income, you can’t depend on it sticking around. Instead, businesses should look to create their own APIs.
This shows the easy route isn’t always the best route. Think about your project dependencies. Is it likely the platform you depend on will exist for the lifetime of your project? How is the platform funded? How is the company that provides the platform funded?
Tracker beacons are different from normal beacons in that they are designed to be connected to an app for the majority of the time. Non-tracker beacons just advertise and aren’t usually connected except for setup.
The F6 comes with iOS and Android SDKs that provide for: bonding/pairing with a password; listening for events such as connecting, connected and disconnected; getting the MAC address and RSSI; ringing the tracker; receiving a button press event; receiving a notification n seconds after disconnect; and alerting at a given distance (based on received signal power level, RSSI).
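Distance-based alerting of this kind is usually derived from RSSI via the log-distance path-loss model. A rough sketch, not the F6 SDK’s actual implementation; the calibrated TX power and path-loss exponent n are environment-dependent assumptions:

```python
def estimate_distance(rssi: int, tx_power: int = -59, n: float = 2.0) -> float:
    """Rough distance in metres from RSSI using the log-distance
    path-loss model: d = 10 ** ((tx_power - rssi) / (10 * n)).

    tx_power is the calibrated RSSI at 1 m (iBeacons advertise this);
    n is the path-loss exponent (~2 in free space, higher indoors).
    Only a rough estimate: multipath and body shadowing cause large errors.
    """
    return 10 ** ((tx_power - rssi) / (10 * n))

print(round(estimate_distance(-59), 2))  # 1.0 m at the calibrated power
print(round(estimate_distance(-79), 2))  # 10.0 m with n = 2.0
```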
Sensor beacons provide a quick and easy way to obtain data for AI machine learning, measuring physical processes to enable detection and prediction.
Beacons detect movement (accelerometer), movement (started/stopped moving), button press, temperature, humidity, air pressure, light level, open/closed (magnetic hall effect), proximity (PIR), proximity (cm range), fall detection, smoke and natural gas. The open/closed (magnetic hall effect) is particularly useful as it can be used on a multitude of physical things for scenarios that require digitising counts, presence and physical status.
The data is sent via Bluetooth rather than via cables which means there’s no soldering or physical construction. The Bluetooth data can be read by smartphones, gateways or any devices that have Bluetooth LE. From there it can be stored in files for reading into machine learning.
Such data is often complex and it’s difficult for a human to devise a conventional programming algorithm to extract insights. This is where AI machine learning excels. In simple terms, it reads in recorded data to find patterns in the data. The result of this learning is a model. The model is then used during inference to classify or predict situations based on new incoming data.
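In very simplified form, ‘learning a model’ and then using it for ‘inference’ might look like the following nearest-centroid classifier. The sensor readings and class names are made up for illustration; real pipelines use far richer models:

```python
import math

def centroid(rows):
    """Mean vector of a list of feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(labelled):
    """'Learning': reduce each class's examples to a centroid."""
    return {label: centroid(rows) for label, rows in labelled.items()}

def classify(model, sample):
    """'Inference': pick the class whose centroid is nearest."""
    return min(model, key=lambda label: math.dist(model[label], sample))

# Made-up temperature/humidity readings for two machine states
training_data = {
    "idle":    [[21.0, 40.0], [22.0, 42.0], [21.5, 41.0]],
    "running": [[35.0, 30.0], [37.0, 28.0], [36.0, 29.0]],
}
model = train(training_data)
print(classify(model, [36.5, 29.5]))  # a new reading classified as "running"
```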
The above shows some output from accelerometer data fed into one of our models. The numbers are distinct features found over the time series as opposed to a single x,y,z sample. For example ’54’ might be a peak and ’61’ a trough. More complex features are also detectable such as ‘120’ being the movement of the acceleration sensor in a circle. This is the basis for machine learning classification and detection.
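A toy version of such feature extraction might scan the acceleration magnitude series for local peaks and troughs. Illustrative only: the numeric codes mirror the examples above but are otherwise arbitrary, and real extractors are far more sophisticated:

```python
# Made-up feature codes mirroring the example output described above
PEAK, TROUGH = 54, 61

def extract_features(samples, threshold=0.5):
    """Scan a 1-D series (e.g. acceleration magnitude) and emit a
    feature code at each local peak or trough that stands out by
    more than `threshold` from both neighbours."""
    features = []
    for i in range(1, len(samples) - 1):
        prev, cur, nxt = samples[i - 1], samples[i], samples[i + 1]
        if cur - prev > threshold and cur - nxt > threshold:
            features.append(PEAK)
        elif prev - cur > threshold and nxt - cur > threshold:
            features.append(TROUGH)
    return features

magnitudes = [1.0, 1.1, 2.5, 1.0, 0.1, 1.2, 1.1]
print(extract_features(magnitudes))  # one peak (54) then one trough (61)
```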
It’s also possible to perform prediction. Performing additional machine learning (yes, machine learning on machine learning!) on the features to produce a new model tells us what usually happens after what. When we feed in new data to this model we can predict what is about to happen.
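One simple way to learn ‘what usually happens after what’ is a first-order transition count over the extracted feature stream. This is a sketch of the idea, not our actual method; the feature history is invented:

```python
from collections import Counter, defaultdict

def learn_transitions(feature_sequence):
    """Second-stage 'learning': count which feature usually follows
    which (a first-order Markov model over extracted features)."""
    counts = defaultdict(Counter)
    for a, b in zip(feature_sequence, feature_sequence[1:]):
        counts[a][b] += 1
    return counts

def predict_next(model, current):
    """Prediction: the most frequent successor of the current feature."""
    if current not in model:
        return None
    return model[current].most_common(1)[0][0]

# Hypothetical history: peaks (54) are usually followed by troughs (61)
history = [54, 61, 54, 61, 54, 120, 54, 61]
model = learn_transitions(history)
print(predict_next(model, 54))  # most likely next feature after a peak
```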
The problem with sensor data is there can be a lot of it. It’s inefficient and slow to detect events when this processing happens at the server. We create so-called Edge solutions that do this processing closer to the point of detection.
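An Edge approach can be as simple as thresholding on the device or gateway and forwarding only events rather than every raw sample. A minimal, hypothetical sketch; the threshold and event shape are invented:

```python
def edge_filter(readings, threshold=30.0):
    """Run detection at the edge: instead of forwarding every raw
    (time, value) reading to the server, emit an event only when the
    value first crosses the threshold."""
    events = []
    above = False
    for t, value in readings:
        if value > threshold and not above:
            events.append({"time": t, "event": "threshold_exceeded",
                           "value": value})
            above = True
        elif value <= threshold:
            above = False
    return events

# Ten raw samples reduce to a single event sent upstream
samples = list(enumerate([20, 22, 25, 31, 33, 35, 29, 24, 22, 21]))
print(edge_filter(samples))
```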