How to log a complex synchronization process?

I am developing a complex distributed service that performs an iterative synchronization process. Every 10 seconds it synchronizes a business entity across different information systems. One iteration consists of a batch of third-party calls to retrieve the current state of the business objects (the number of customers, products, certain customer and product data, etc.), a query against the local database for its current state, computing the differences between the two, and then gradually synchronizing those differences.

There are different types of iterations: fast ones (only changes in the set of objects) and slow ones (full data validation). Fast iterations run every 10 seconds, slow ones once a day.

So how can I log these processes using NLog? I use SQLite to store the data, but I'm stuck on the database design for the logs.

So, I want to record the flow of each iteration:

1. Request the current state of the serviced objects from the third parties.
2. Query the local database for the current state of the objects.
3. Compute the list of differences.
4. Call an external service to transfer the missing data.
5. Update the local database with the received data.
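To make the shape of one iteration concrete, here is a minimal self-contained sketch; all names and the stubbed states are illustrative, not the real service code, and steps 4 and 5 are only indicated in comments:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class SyncIteration
{
    // Stub states; in the real service these would come from the
    // third-party calls and the local SQLite database.
    public static HashSet<string> FetchRemoteState() => new HashSet<string> { "A", "B", "C" };
    public static HashSet<string> LoadLocalState()   => new HashSet<string> { "A", "B" };

    // Runs one fast iteration; each step is a candidate log point,
    // all tagged with the same iterationId so they can be grouped later.
    public static List<string> RunOnce(string iterationId)
    {
        var remote = FetchRemoteState();               // step 1: third-party calls
        var local  = LoadLocalState();                 // step 2: local DB query
        var diff   = remote.Except(local).ToList();    // step 3: list of differences
        // Steps 4 and 5 would transfer `diff` to the external service
        // and update the local database accordingly.
        return diff;
    }
}
```

A single `iterationId` generated at the start of the run is what lets every log row of one pass be correlated afterwards.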

But there are so many kinds of information to log that I can't just dump everything into one TEXT field.

I am currently using this structure for logs:

    CREATE TABLE [Log] (
        [id] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
        [ts] TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        [iteration_id] VARCHAR,
        [request_response_pair] VARCHAR,
        [type] VARCHAR NOT NULL,
        [level] TEXT NOT NULL,
        [server_id] VARCHAR,
        [server_alias] VARCHAR,
        [description] TEXT,
        [error] TEXT
    );

Thus, each request and response of the service goes into description, and request_response_pair is the key that links each response to its request.
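With that scheme, the pairs can be read back with a self-join on the table above; this query is a sketch that assumes the request row of a pair is always inserted before the response row (so it has the smaller id):

```sql
-- Pair up request/response rows of one iteration (assumes request inserted first)
SELECT req.ts          AS request_ts,
       req.description AS request,
       resp.description AS response
FROM Log AS req
JOIN Log AS resp
  ON resp.request_response_pair = req.request_response_pair
 AND resp.id > req.id
WHERE req.iteration_id = :iteration_id
ORDER BY req.id;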

Here is my NLog configurator:

    <?xml version="1.0" encoding="utf-8" ?>
    <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          internalLogFile="D:\nlog.txt" internalLogLevel="Trace">
      <targets>
        <target name="Database" xsi:type="Database"
                keepConnection="false" useTransactions="false"
                dbProvider="System.Data.SQLite.SQLiteConnection, System.Data.SQLite, Version=1.0.82.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139"
                connectionString="Data Source=${basedir}\SyncLog.db;Version=3;"
                commandText="INSERT into Log(iteration_id, request_response_pair, type, level, server_id, server_alias, description, error) values(@Iteration_id, @Request_response_pair, @Type, @Loglevel, @server_id, @server_alias, @Description, @Error)">
          <parameter name="@Type" layout="${message}"/>
          <parameter name="@Loglevel" layout="${level:uppercase=true}"/>
          <parameter name="@Request_response_pair" layout="${event-context:item=request_response_pair}"/>
          <parameter name="@Iteration_id" layout="${event-context:item=iteration_id}"/>
          <parameter name="@server_id" layout="${event-context:item=server_id}"/>
          <parameter name="@server_alias" layout="${event-context:item=server_alias}"/>
          <parameter name="@Description" layout="${event-context:item=description}"/>
          <parameter name="@Error" layout="${event-context:item=error}"/>
        </target>
      </targets>
      <rules>
        <logger name="*" minlevel="Trace" writeTo="Database" />
      </rules>
    </nlog>

This is how I write to the log:

    namespace NLog
    {
        public static class LoggerExtensions
        {
            public static void InfoEx(this Logger l, string message, Dictionary<string, object> contextParams)
            {
                LogEventInfo eventInfo = new LogEventInfo(LogLevel.Info, "", message);
                foreach (KeyValuePair<string, object> kvp in contextParams)
                {
                    eventInfo.Properties.Add(kvp.Key, kvp.Value);
                }
                l.Log(eventInfo);
            }

            public static void InfoEx(this Logger l, string message, string server_id, string server_alias, Dictionary<string, object> contextParams = null)
            {
                Dictionary<string, object> p = new Dictionary<string, object>();
                p.Add("server_id", server_id);
                p.Add("server_alias", server_alias);
                if (contextParams != null)
                {
                    foreach (KeyValuePair<string, object> kvp in contextParams)
                    {
                        p.Add(kvp.Key, kvp.Value);
                    }
                }
                l.InfoEx(message, p);
            }
        }
    }
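For context, a call site for these extensions might look like the following sketch; the server id, alias, and description values are illustrative, and the property keys match the `${event-context:item=...}` layouts in the config above:

```csharp
using System;
using System.Collections.Generic;
using NLog;

class Example
{
    static void Main()
    {
        Logger log = LogManager.GetCurrentClassLogger();
        string iterationId = Guid.NewGuid().ToString();

        log.InfoEx("FETCH_REMOTE_STATE", "srv-01", "billing",
            new Dictionary<string, object>
            {
                { "iteration_id", iterationId },
                { "request_response_pair", Guid.NewGuid().ToString() },
                { "description", "GET /customers -> 200 OK, 152 items" }
            });
    }
}
```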

I know about logging levels, but I need all of this detail, so I log it at the Info level. I cannot find any tutorial on writing such complex, structured logs, only trivial flat ones.

1 answer

I assume you mean "logs" in the typical sense of "we need to be able to see what happened when we investigate our workflow (errors / performance)". I assume you do not mean logs in the accounting sense, as in "we need an audit trail; the journal is part of our domain data and participates in the workflow".

And from what I gather from your post, you are worried about the backend storage format of the logs, so that you can process them later and use them for that kind of diagnosis?


Then I would recommend that you make the logging code independent of the specifics of your domain.

Question: how will the logs you create be processed? Do you really need to access them from everywhere, so that you need a database to give you a structured view? How important is it how fast you can filter your logs? Or will they end up in one big log-analyzer application that is only fired up once in a while, when something bad has happened?

In my opinion, the biggest reasons you want to avoid any domain specifics in the logs are that "logs should work when something is wrong" and "logs should work after things have changed".

Logs should work when something is wrong

If your log table has columns for domain values such as request_response_pair and there happens to be no pair, then writing the log entry itself may fail (for example, if it is an indexed field). Of course, you can make sure your DB design has no NOT NULL columns and no constraints, but take a step back and ask: why do you want structure in your log database at all? Logs should work as reliably as possible, so any schema you force onto them may limit your use cases or may prevent you from logging important information.

Logs should work after things have changed

Especially if you need the logs to find and fix bugs or to improve performance, you will regularly compare logs from "before the change" with logs from "after the change". If you have to change the structure of your log database because your domain data changed, that will hurt exactly when you need to compare logs.

True, if you change your data structures you will probably have to update some tools anyway, such as a log analyzer, but usually most of the logging / analysis code is completely agnostic of the actual domain structure.


Many systems (including complex ones) get by with "just write one simple text line" and later write tools that split the line up again when they need to filter or process the logs.

Other systems write logs as simple string key/value pairs. The log function itself is not domain-specific; it simply takes a dictionary of strings and writes it (or, even simpler, a params string[] that must have an even number of elements, where every second element is used as a key, if that signature does not scare you :-D).
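A minimal sketch of such a domain-agnostic key/value line builder; the class and method names are hypothetical, and in a real system the resulting line would be appended to a file or handed to NLog:

```csharp
using System;
using System.Collections.Generic;

public static class FlatLog
{
    // Builds one flat "key=value key=value ..." log line from a params array
    // where every even-indexed element is a key and the next element its value.
    public static string BuildLine(params string[] keyValues)
    {
        if (keyValues.Length % 2 != 0)
            throw new ArgumentException("Expected an even number of parameters.");

        var parts = new List<string>();
        for (int i = 0; i < keyValues.Length; i += 2)
        {
            // Escape the separators so the line can be split again later.
            string value = keyValues[i + 1].Replace("\\", "\\\\").Replace(" ", "\\s");
            parts.Add(keyValues[i] + "=" + value);
        }
        return string.Join(" ", parts);
    }
}
```

Because the line format is fixed and escaped, the later "split the line up again" tooling stays trivial and completely domain-agnostic.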

Of course, you will probably end up writing another layer of tooling on top of the core log functions, one that does know about the domain-specific data structures, flattens them into a dictionary of strings, and passes that on. You do not want to copy that flattening code all over the place. But keep the core functions available everywhere you might need to log something. That is really useful when you hit "strange" situations (an exception handler, say) where some of the domain information is missing.


Source: https://habr.com/ru/post/946164/