The fundamental need for IoT
Our hunger to improve our understanding has led to an insatiable thirst for knowledge, information, and thus, data. On the factory floor, this demand has driven a steady evolution in how data is collected over the years.
The primary roadblock with this evolution is the word ‘output’: it is a methodology which captures only lagging data. While such data may help to plan and adapt subsequent stages, it does nothing to locate potential problems in the area from which it was collected.
This is the primary case for an IoT-enabled factory. The vision of IoT initiatives is typically to capture real-time data from the business area and use it to accelerate decision making for that area. Simply put, the earlier an anomaly in the process is identified, the sooner it can be fixed, and the cheaper it will be to correct any problems arising from the anomaly.
The standard architecture of an IoT implementation on the shop floor is layered: sensors on the devices, a communication channel, a central datastore, and the analytics built on top of it.
What makes for a successful implementation (or not)
An IoT implementation, like any other project, can either soar to success or crash to failure. Fortunately, what goes into a successful implementation (and what dooms a failed one) is quite well understood. These checks are listed below.
|Dimension to consider*|If done well|If done poorly|
|---|---|---|
|Define top use cases|There is a tight business case, with success criteria, clear KPIs, and RoI targets. The team is cognizant of vendor capabilities.|There is only a vague high-level objective. The implementation is expected to come in and be a miracle cure for all ills already in the organization.|
|List key technology assets|The hardware and software touch-points are clearly mapped, with inter-system hand-offs charted clearly against what each element is expected to do.|Only the scope of machines affected is listed, and a top-level list of software applications impacted is prepared.|
|Clarify ownership of IoT technology being developed|The ownership of the underlying technology, and the customisations built on top of it, are clearly demarcated between the company and the vendor. There is clarity on whether to go for an open-source platform or deploy a commercial platform.|There is limited discussion of, and clarity on, the kind of IoT platform to be chosen. Decisions are outsourced to consultants or partner vendors, with minimal engagement from the company.|
|Plan for evolution|Analysis has been done to establish that the chosen architecture can support all current use cases, as well as allow growth for potential scenarios which may come up in the future.|There is only a working understanding of the platform’s ability to meet the requirements. No foresight has been applied to project future scenarios which the platform may need to support.|
|Ensure end-to-end integration|All touchpoints between different systems are clearly mapped. Details are known on the nature of information flow at each touchpoint. There is a grounded understanding that the touchpoint can perform as expected, based on knowledge of the source and destination systems (protocols, APIs, data formats, etc.).|There may be a top-level understanding of the type of information flow between the major systems, but design-level details are not available. This can surface a potential deal-breaker issue during implementation.|
|Address security, performance, and compliance|Each element of the platform has been studied for risks, and corrective actions are implemented, or at least planned. It is verified that future growth of the platform will not compromise security.|There is only a top-level understanding of security implications. It is extremely dangerous to implicitly assume that the current security infrastructure will cover the new platform by default.|
|Consider private vs public cloud|As most IoT analysis happens centrally, there is a technical, commercial, and compliance-level evaluation of building a public vs a private cloud.|A top-level commercial analysis is carried out, and the cheaper option wins.|
|Test on pilot projects|Well-structured pilots are carried out which prove out as much of the platform as possible, while simultaneously identifying potential issues and fixing them before full-blown implementation.|A demonstration, or a general walkthrough of the system, is done. Issues specific to the company are not identified until the implementation has hit its stride.|
*Reference – An Enterprise IoT Implementation Checklist
When the standard speed of analytics isn’t enough
While setting up the IoT platform, bottlenecks can come up that have the potential to derail the implementation and raise questions about the validity of IoT for the organization. Some of these are:
- The business value of the IoT platform comes from real-time analytics which immediately translate into action on the floor. This, however, is not realized due to one or more of the following reasons:
  - The analytics is required on only a small subset of the data generated by the relevant device. However, the entire dataset must move to the cloud and be processed before the insight can be communicated back to the user. This lag reduces the utility of the analysis.
  - Even where the lag is acceptable, the complexity of the dataset means large raw datasets are transferred to the cloud, where, after processing, much of the data is ignored forever. This consumes network bandwidth as well as blocks storage space.
In situations such as these, a key concept comes into play. The IoT implementation can be designed with an ability to perform primary data analysis and deliver basic insights at the device level itself, without moving data to the cloud. This is formally known as Edge analytics, as the data analysis is done at the edge of the network, before the data is pushed into the central datastore via the communication channel.
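The filter-at-the-device idea can be sketched as follows. This is a minimal illustration, not a production pattern: `send_to_cloud`, the window size, and the 3-sigma cut-off are all invented for the example.

```python
# Sketch of edge-side filtering: analyse each reading locally and
# uplink only anomalies, instead of streaming all raw data to the cloud.
from collections import deque
from statistics import mean, stdev

WINDOW = 50          # readings kept on-device for the rolling baseline
SIGMA_LIMIT = 3.0    # readings beyond 3 sigma are treated as anomalies

window = deque(maxlen=WINDOW)
uplink = []  # stands in for the real cloud message queue

def send_to_cloud(payload):
    """Placeholder for the actual uplink (e.g. an MQTT publish)."""
    uplink.append(payload)

def on_reading(value):
    """Analyse each reading at the edge; only anomalies leave the device."""
    if len(window) >= 10:  # need a minimal baseline first
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(value - mu) > SIGMA_LIMIT * sigma:
            send_to_cloud({"value": value, "baseline": mu})
    window.append(value)

# 200 normal readings plus one spike: only the spike is uplinked.
for v in [20.0, 20.1, 19.9, 20.05] * 50 + [35.0]:
    on_reading(v)
```

Out of 201 readings, a single anomaly record crosses the network; the raw stream never leaves the device.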
What is Edge Analytics
Simply put, Edge analytics is the collection, processing, and analysis of data at the edge of a network, either at or close to a sensor, a network switch, or some other connected device. With the growing popularity of connected devices driven by the evolution of the Internet of Things (IoT), industries such as retail, manufacturing, transportation, and energy are generating vast amounts of data at the edge of the network. Edge analytics is data analytics performed in real time, on site, where the data is collected, and it can be descriptive, diagnostic, or predictive.
Refer: Edge Analytics
As discussed above, the primary objective is to provide immediate feedback based on real-time analysis of data received from a sensor. A secondary objective is to reduce the load on central data analytics resources.
In today’s world, computing power and storage are both considered cheap, so the need to control how much data is stored and analysed may seem anachronistic. What needs to be appreciated is that while data volume grows in step with the number of sources, the analytical load grows disproportionately faster. This can escalate into a serious resource crunch and impact the scalability of the solution. There is therefore a case for edge analytics to help manage additional resource requirements by decentralizing the computing load.
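One simple way to decentralize the load is to aggregate at the edge and ship only per-interval summaries. A minimal sketch, with invented sample data:

```python
# Collapse an interval of raw samples into one summary record at the
# edge; only the summary crosses the network, so uplink bandwidth and
# central storage no longer scale with the sensor's sample rate.
def summarise(samples):
    """Reduce raw samples from one interval to a fixed-size summary."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": sum(samples) / len(samples),
    }

raw = [20.0, 20.4, 19.8, 20.2, 21.0]   # e.g. one second of readings
summary = summarise(raw)               # five values shrink to four fields
```

The summary stays the same size whether the interval held five samples or five thousand; the trade-off, discussed later, is that the raw data is lost.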
Where Edge Analytics comes into its own
Oil rigs have been pioneers of edge analytics implementations. They face a double whammy: they need real-time analysis to manage critical, potentially life-threatening equipment, and they suffer from connectivity constraints that prevent a quick turnaround of feedback from the central data analytics platform.
Schneider Electric utilized edge analytics to deliver real-time insight using the infrastructure on the rig itself. Data was processed to prepare machine learning models, which were trained on powerful central resources. The trained models are comparatively light on computing resources when making inferences, and can therefore be deployed on devices on the rig itself. Schneider Electric was thus able to deliver predictive insights at the device layer, giving oil rig operators much more accurate data to work from.
Refer: Schneider Electric
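The split this illustrates, training centrally on powerful resources and inferring on lightweight edge devices, can be sketched generically. The toy least-squares model and all data below are invented for the example and are not Schneider Electric's actual system.

```python
# "Central" side: fit a simple least-squares line on historical data.
# In practice this is where the heavy training compute lives.
xs = [1.0, 2.0, 3.0, 4.0]          # e.g. a sensor reading
ys = [10.0, 20.0, 30.0, 40.0]      # e.g. a quantity to predict

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# "Edge" side: only the two fitted coefficients are shipped to the
# device; inference is a single multiply-add, cheap enough to run on
# the rig itself without any round trip to the central platform.
def edge_predict(reading, slope=slope, intercept=intercept):
    return slope * reading + intercept
```

The same pattern scales up: a model that is expensive to train is often orders of magnitude cheaper to evaluate, which is what makes on-device inference feasible.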
How to decide whether to go for a standard versus Edge analytics IoT architecture
Even though edge analytics is an exciting area, it should not be viewed as a potential replacement for central data analytics. Both can and will supplement each other in delivering data insights, and both models have their place in organizations. One compromise of edge analytics is that only a subset of data can be processed and analyzed at the edge, and only the results may be transmitted over the network back to central offices. This results in a ‘loss’ of raw data that may never be stored or processed. Edge analytics is therefore appropriate only when this ‘data loss’ is acceptable. On the other hand, when the latency of central decisions (and analytics) is not acceptable, as in in-flight operations or critical remote manufacturing/energy scenarios, edge analytics should be preferred.
Refer: Edge Analytics
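The trade-off above can be condensed into a toy decision helper. The two inputs and the recommendations are illustrative assumptions, not a formal methodology.

```python
# Toy decision helper for analytics placement. The two questions mirror
# the trade-off in the text: can you tolerate losing raw data, and can
# you tolerate central round-trip latency?
def recommend_architecture(raw_data_loss_ok: bool,
                           low_latency_required: bool) -> str:
    """Return a coarse recommendation for where analytics should run."""
    if low_latency_required and raw_data_loss_ok:
        return "edge"                # fast local decisions, raw data discarded
    if low_latency_required:
        return "edge + replicate raw data when bandwidth allows"
    return "central"                 # full raw data retained, latency tolerated
```

A real evaluation would weigh many more dimensions (compliance, cost, security, connectivity), but the two questions above are usually the first fork in the road.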