Last Homework (article): L2019E060111 Kalombo Nyembwe Serge, Version 0
Author: serge kalombo, 2019-12-19 05:20:06
PRESENT AND FUTURE OF FLOW MONITORING

Abstract. In this paper we review the major approaches to flow monitoring and its likely evolution. The aim is to let you compare your current system with the good practices generally observed, and to suggest directions for evolution.

Whether you are part of the IT operations, business or management teams, you need process management, administration and feedback to ensure that the expected service levels are achieved.

  1. INTRODUCTION


To produce more and faster, and to make your business ever more agile and responsive, you need a clear view of how your orchestrations work. You must be informed in real time of malfunctions, growing queues or changes in the means of production: in short, of any risk of missing the key indicators of your activity.

Several levels of tooling exist: the simple monitoring platform, which gives you a view of the platform's technical issues; Business Activity Monitoring, which gives you real transparency on the progress of your business activities; and Process Intelligence, which lets you optimize, simulate and anticipate changes in your organization.

Having powerful flow monitoring is critical: by integrating all the data flows, it can offer a synthetic view of the entire information system.

The purpose of this article is to allow you to compare your current system with the best practices generally observed and give you directions for evolution.

  2. FLOW MONITORING


A flow is a set of service calls and/or messages that together form a business service. In a complex information system, such a flow often crosses several applications and sometimes several technologies: for example, a user clicks on a screen, which triggers a REST call, which causes a SOAP invocation, whose processing sends a series of JMS messages.

Monitoring these flows therefore means implementing business activity monitoring: collecting data in all the various applications and correlating it. This provides an aggregated, cross-cutting view of the activity of your information system. At all times, this monitoring must provide the health status and performance of the important business functions (Key Performance Indicators). In the past, this information was often viewed separately for each technical layer.

Flow monitoring does not replace component monitoring; it complements it, just as integration testing complements unit testing. Each brick must be monitored in isolation, in a technical way, to identify the problems specific to it, while flow monitoring focuses on the transversal, business-level elements that require a global view of the system. There is some overlap between the two, but do not confuse them or use one to replace the other.

In the rest of the article, "business message" will refer interchangeably to the content of a service call or of a message sent.

  3. ACHIEVING GOOD FLOW MONITORING


3.1. Functionality

The essential functionality is the ability to tie atomic messages back to the business flow they belong to. Generally this involves a unique identifier (correlation identifier): all messages in the same flow must carry the same identifier. This means setting up a specific brick in the various applications, responsible for generating the identifier at the beginning of the chain and propagating it through the processing. This brick is also responsible for transmitting a copy of the messages so that they can be integrated into the monitoring.
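As a minimal sketch, such a correlation brick could look like the following Python fragment. The header name `X-Correlation-Id` and the `emit_copy` hook are illustrative assumptions, not part of any specific product:

```python
import uuid

CORRELATION_HEADER = "X-Correlation-Id"  # assumed header name

def ensure_correlation_id(headers):
    """At the beginning of the chain, generate a unique identifier;
    further downstream, reuse the one already present so that all
    messages of the flow share the same identifier."""
    headers = dict(headers)
    if CORRELATION_HEADER not in headers:
        headers[CORRELATION_HEADER] = str(uuid.uuid4())
    return headers

def forward(headers, payload, emit_copy):
    """Propagate the identifier to the next hop and transmit a copy
    of the message to the monitoring system (emit_copy is the hook
    that sends it to the dedicated monitoring middleware)."""
    headers = ensure_correlation_id(headers)
    emit_copy({"correlation_id": headers[CORRELATION_HEADER],
               "payload": payload})
    return headers, payload
```

The key property is that calling `forward` anywhere along the chain never changes an identifier that is already set, so the whole flow stays correlated.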


The system must be able to take heterogeneous events into account: while the messages sent by the various components share common elements (a timestamp, for example), they also include information specific to each service's business (name of the business service, identifiers of the business objects). Being able to integrate these different data easily makes it possible to build functional metrics that evolve at the same pace as the services.
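One simple way to handle this heterogeneity is a common envelope wrapping free-form business fields; a sketch, in which the envelope field names are assumptions:

```python
import time

def normalize_event(service, correlation_id, business_data, timestamp=None):
    """Wrap service-specific business data (business service name,
    object identifiers, ...) in a common envelope so heterogeneous
    events can be correlated and indexed together."""
    event = {
        "timestamp": timestamp if timestamp is not None else time.time(),
        "correlation_id": correlation_id,
        "service": service,
    }
    # Business fields stay dynamic: each service adds its own keys,
    # without being allowed to overwrite the common envelope.
    for key, value in business_data.items():
        if key not in event:
            event[key] = value
    return event
```

Because only the envelope is fixed, a new service can introduce new business fields without any change to the monitoring pipeline.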

To make the most of this data, you need a configurable dashboarding system: not only a set of pre-defined, fixed monitoring screens, but also the ability to use the data for studies or investigations. The monitoring systems usually used are often poorly suited to this kind of use: their dated ergonomics make data exploration difficult, and many are monolithic solutions that bundle monitoring, data collection and storage together.

The database should provide indexing with maximum coverage, ideally indexing all fields of the data. This simplifies investigations when an error occurs: for example, it becomes possible to identify all messages that concern a given account number. For volume reasons, indexing can be limited in time (48 hours at least), while keeping the possibility of re-indexing past messages.
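For illustration, such an investigation could be expressed as an Elasticsearch-style query restricted to the indexed time window; the field names (`account_number`, `timestamp`) are assumptions about the event schema:

```python
def account_query(account_number, hours=48):
    """Build an Elasticsearch-style query body matching all flow
    messages that reference a given account number within the
    indexed time window (field names are illustrative)."""
    return {
        "query": {
            "bool": {
                "must": [{"match": {"account_number": account_number}}],
                "filter": [
                    {"range": {"timestamp": {"gte": f"now-{hours}h"}}}
                ],
            }
        }
    }
```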


3.2. Constraints

  • Never interfere with the business


A failure in the monitoring brick must never have consequences for the business function, so the two must be technically isolated from each other.

Monitoring must also not cause a drop in performance: monitoring messages must be sent as asynchronous events (the wire-tap pattern) using message middleware. A dedicated messaging infrastructure is desirable, as it avoids any risk of overload.
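A minimal sketch of this wire-tap constraint, using an in-process bounded queue as a stand-in for the dedicated middleware (the function names are illustrative): the business path never blocks, and a full queue loses a monitoring event rather than slowing the business.

```python
import queue

# Bounded: backpressure from monitoring never reaches the business path.
monitoring_queue = queue.Queue(maxsize=10000)

def wiretap(event):
    """Fire-and-forget: never block or fail the business path."""
    try:
        monitoring_queue.put_nowait(event)
    except queue.Full:
        pass  # losing a monitoring event beats degrading the business

def process_order(order):
    """The business function: a monitoring failure must not
    affect its result."""
    wiretap({"type": "order_received", "order_id": order["id"]})
    return {"id": order["id"], "status": "accepted"}
```

In a real system, a background consumer would drain this queue into the dedicated message middleware, keeping the two technically isolated.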

  • Limit specific developments


Finally, we must limit business-specific developments in the monitoring bricks: while some configuration or a little specific development is inevitable, especially for the most precise metrics, we must avoid re-coding functional behavior. The result is often fragile and makes business evolutions more difficult.

  • Software bricks needed


To meet these criteria, we can identify the different bricks that are necessary for flow monitoring:

  • The information is transmitted as events over dedicated message middleware.

  • A message-processing system performs aggregation and event detection.

  • An indexed database stores the events.

  • A monitoring console exploits the stored events.
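The four bricks above can be wired together in a toy end-to-end sketch, with in-memory stand-ins for the middleware, processor, indexed store and console:

```python
from collections import defaultdict

events = []                  # 1. dedicated message middleware (stand-in)
index = defaultdict(list)    # 3. indexed database, keyed by correlation id
alerts = []                  # output of event detection

def publish(event):
    """Producers push monitoring events onto the middleware."""
    events.append(event)

def process():
    """2. message processing: index each event and detect errors."""
    while events:
        event = events.pop(0)
        index[event["correlation_id"]].append(event)
        if event.get("status") == "error":
            alerts.append(event)

def console(correlation_id):
    """4. monitoring console: retrieve the full flow for one id."""
    return index[correlation_id]
```

The point of the sketch is the separation of roles: each stand-in can be swapped for a real product without touching the others.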


3.3. Current practices

Your information system already has some of the technical bricks needed, such as a messaging system or a database. Unfortunately, the specific requirements of monitoring often prevent reusing the same tools:

Because of the large number of messages entering the system, conventional middleware does not offer sufficient processing capacity, so a communication system specialized for this type of volume is needed: AMQP, SNMP, rsyslog or ZeroMQ.

For message processing, a complex event processing (CEP) system such as Drools Fusion will manage the technical aspects. It keeps an in-memory state of the system, on which rules are defined that trigger processing or alerts.
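To give a rough flavour of what such a rule engine does (this is a hand-rolled sketch, not the Drools Fusion API), here is a sliding-window rule that keeps recent events in memory and fires when too many errors occur within the window:

```python
from collections import deque

class ErrorRateRule:
    """Keep a sliding window of recent error events in memory and
    fire an alert when more than `threshold` errors occur within
    `window_seconds`."""

    def __init__(self, window_seconds=60, threshold=5):
        self.window_seconds = window_seconds
        self.threshold = threshold
        self.errors = deque()

    def on_event(self, event):
        now = event["timestamp"]
        if event.get("status") == "error":
            self.errors.append(now)
        # Evict events that have left the time window.
        while self.errors and now - self.errors[0] > self.window_seconds:
            self.errors.popleft()
        return len(self.errors) > self.threshold  # True => raise an alert
```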

The database must store messages whose heterogeneous formats evolve with the applications. This generally points towards a NoSQL storage solution, with dynamic data schemas and with the partitioning and scalability needed to absorb the volume and rate of incoming data. The indexing features of Elasticsearch generally make it a good choice.

For dashboarding, Kibana is the reference product for visualizing data stored in Elasticsearch: it allows you to build rich screens in a flexible way.


3.4. The future

This type of architecture based on standard bricks is limited in two respects: the first technical, the second functional.

  3.4.1. ALWAYS MORE MESSAGES


The first limitation is the increase in the number of messages to process. In systems based on the new microservice architectures and integrating new uses (mobile applications, Internet of Things), the number of messages is multiplied. Since the monitoring system's objective is to continue absorbing all the messages, its capacity must grow in the same proportions.

This entails the implementation of several solutions:

  • Even with a fast broker, conventional message middleware peaks at a few thousand messages per second. We must then turn to "Fast Data" solutions, among which Kafka seems today to be becoming the reference.

  • The integration of messages into the storage system should go through an "Event Streaming" solution, such as Apache Storm or Apache Spark Streaming.

  • The storage of such a volume of data will be handled by a distributed storage system such as Apache Cassandra or Hadoop HDFS. Storage in Elasticsearch can be retained for fast queries on recent data.
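With Kafka, for example, keying each event by its correlation identifier keeps all messages of one flow on the same partition, which preserves their order. A sketch of that routing logic, using a simple hash (real Kafka clients apply their own partitioner, so this is only an illustration of the principle):

```python
import hashlib

def partition_for(correlation_id, num_partitions):
    """Deterministically map a correlation id to a partition so that
    every event of the same flow lands on the same partition, and
    therefore stays in order (illustrative hash, not Kafka's own)."""
    digest = hashlib.md5(correlation_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions
```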


This type of "big data" architecture makes it possible to combine several processing approaches while maintaining unified storage, often called a data lake.

  3.4.2. FURTHER ANALYSIS


After the classic rule engines, more advanced analysis solutions are starting to appear, to better measure what is happening but also to better predict it. With this in mind, the business component of monitoring is gaining more and more weight and the distinction from BI is disappearing; we are convinced that online machine learning solutions will soon enrich these systems.
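To give a flavour of what "online" learning means here, consider a toy streaming anomaly detector that updates its statistics one event at a time (Welford's algorithm) and flags outlying metric values; the warm-up count and z-score threshold are illustrative choices:

```python
import math

class OnlineAnomalyDetector:
    """Incrementally track the mean and variance of a metric
    (Welford's algorithm) and flag values that deviate by more
    than `z_threshold` standard deviations."""

    def __init__(self, z_threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0
        self.z_threshold = z_threshold

    def observe(self, value):
        # Update the running statistics, then score the new value.
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        if self.n < 10:  # not enough data to judge yet (warm-up)
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(value - self.mean) > self.z_threshold * std
```

Unlike a batch model, nothing is retrained offline: each event both updates the model and is scored by it, which is exactly the setting of monitoring streams.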


  4. CONCLUSION


We hope to have convinced you of the value of flow monitoring. Today it is essential for operational reasons; tomorrow it will also bring you business value. Implementing this type of solution in an information system is a structuring project, but one based on open and well-known components, so there is no reason not to adopt it.

The recommended solution is relevant for light flows, but in most cases there remain large flows carried by ETL platforms, which usually dump their monitoring indicators into an existing dashboarding device. These large flows are also business services that the business lines would like to follow. The solution should therefore be extended to monitor large flows and to integrate with existing dashboarding devices. It would be difficult to sell business departments a device that only monitors part of the business services (those corresponding to the light flows).
