In 2018 there were 7 billion active IoT devices; in 2019 that number surpassed 26 billion. Some analysts estimate that every second, 127 new IoT devices are connected to the web, generating a wealth of data from the sensors attached to them.
It is mission critical for organisations that deploy IoT devices to harness the data those devices generate, using it to improve the products and services they provide to their customers and potentially migrating to a ‘Things as a Service’ model. To do that, they need a solution that can scale with them and adapt to both their customers’ needs and their own.
Traditionally, message broker systems are used to connect processes that need to interact regularly. When two processes need to exchange data, each connects directly to a broker, which acts essentially as a collection of buckets. When process A needs to send something to process B, it puts the message in the bucket on the broker assigned to process B. Process B then handles the message from its bucket and puts a response in process A’s bucket, where A receives the result. Solutions like this tend to be optimised for flexibility and configurable delivery guarantees.
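The bucket model described above can be sketched in a few lines of Python. This is a toy illustration only: the `Broker` class, bucket names, and request/reply convention are assumptions made for the example, not any real broker's API.

```python
import queue
import threading

class Broker:
    """Toy broker: one named queue (bucket) per process."""

    def __init__(self):
        self._buckets = {}

    def bucket(self, name):
        # Create the bucket lazily on first use.
        return self._buckets.setdefault(name, queue.Queue())

    def send(self, to, message):
        # Put the message in the recipient's bucket.
        self.bucket(to).put(message)

    def receive(self, name, timeout=5.0):
        # Block until a message arrives in our own bucket.
        return self.bucket(name).get(timeout=timeout)


broker = Broker()

def process_b():
    # Process B handles the message in its bucket and replies into A's bucket.
    request = broker.receive("B")
    broker.send("A", f"processed:{request}")

worker = threading.Thread(target=process_b)
worker.start()

# Process A puts a message in B's bucket, then waits for the reply in its own.
broker.send("B", "sensor-reading-42")
reply = broker.receive("A")
worker.join()
print(reply)  # processed:sensor-reading-42
```

Note that both processes interact only with the broker, never with each other directly; that indirection is what gives broker systems their flexibility.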
More recently, technologies such as Apache Kafka and Azure Event Hubs, commonly called ‘distributed commit log’ technologies, have come to play a similar role to traditional broker messaging systems. While they may on the surface seem remarkably similar to traditional brokers, they differ significantly in architecture and therefore have vastly different performance and behavioural characteristics. They are optimised for different use cases: instead of concentrating on flexibility and delivery guarantees, they concentrate on scalability and throughput.
In a distributed commit log architecture, the sending and receiving processes are more decoupled; in many ways the sending process does not care which processes consume its messages. Messages are persisted immediately to the distributed commit log. Delivery guarantees must be viewed in the context of retention: messages in the log are typically kept only for a configured period, after which older messages disappear regardless of whether they have been consumed. This usually fits use cases where the data can expire or will be processed within a known time window.
To ensure that Connected Operations can handle data from an ever-growing number of devices, it includes a component called IoT Bridge. IoT Bridge leverages Azure IoT Hub, a cloud-hosted solution from Microsoft, to act as the distributed commit log in this scenario.
Being cloud hosted, Azure IoT Hub is ideally placed to handle the scaling side of the process, while Connected Operations IoT Bridge defines the logic used to provide insight and workflow within the ServiceNow Platform. That logic comes from IoT rules, a no-code solution for defining when the data provided by IoT Hub requires action on the Now Platform. The action itself is carried out by process flows defined in the Flow Designer tool, meaning that multiple actions can be taken across multiple areas of the platform.
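Conceptually, an IoT rule pairs a condition on incoming telemetry with a flow to trigger when that condition matches. The sketch below is purely illustrative: the rule structure, field names, and `trigger_flow` stand-in are assumptions for this example; in the product, rules are configured without code and actions run as Flow Designer flows inside the Now Platform.

```python
def trigger_flow(flow_name, reading):
    # Stand-in for handing off to a Flow Designer process flow.
    return f"{flow_name} triggered for device {reading['device_id']}"

# Hypothetical rules: each pairs a telemetry condition with a flow name.
rules = [
    {"condition": lambda r: r["temperature"] > 80, "flow": "create_incident"},
    {"condition": lambda r: r["battery"] < 10, "flow": "schedule_maintenance"},
]

def evaluate(reading):
    # Fire every flow whose condition matches the incoming reading.
    return [trigger_flow(rule["flow"], reading)
            for rule in rules if rule["condition"](reading)]


reading = {"device_id": "pump-07", "temperature": 85, "battery": 40}
print(evaluate(reading))  # ['create_incident triggered for device pump-07']
```

Because each matching rule can trigger a different flow, one telemetry event can drive actions in several parts of the platform at once, as the text describes.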