Performance of in-process calls with frameworks such as CORBA (for example, TAO), Thrift, D-Bus, or Ice

We are building an application whose parts may, but do not have to, be distributed. For that we would like to use existing remote-call infrastructure. To avoid implementing everything twice, we would also like to use the same mechanism for calls on the same machine and even within the same process.

Does anyone know what performance/latency penalty we incur when using such a framework instead of a direct vtable call? Are any comparisons available?

The system must be portable across Windows and Linux.

Regards, Tobias

+4
source share
5 answers

What most of the communication frameworks I know of have in common is that data is always serialized, transmitted, and deserialized, which will always cost performance compared to passing references between threads and accessing the data directly (with or without a mutex). That hit does not have to be dramatic, though, if responsibilities are assigned wisely so that communication is minimized.
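To make the contrast concrete, here is a minimal, purely illustrative C++ sketch (the Quote type and the toy byte format are invented for this example): the direct path hands the callee a reference, while the framework-style path pays for an extra serialize, copy, and deserialize round trip even when caller and callee live in the same process.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical payload, used only for illustration.
struct Quote {
    std::string symbol;
    double      price = 0.0;
};

// In-process path: the callee receives a reference, no copy, no encoding.
double processDirect(const Quote& q) {
    return q.price * 2.0;
}

// Framework-style path (simplified): the same data is serialized to bytes,
// "transported", then deserialized before the callee can touch it.
std::vector<std::uint8_t> serialize(const Quote& q) {
    std::vector<std::uint8_t> buf(q.symbol.begin(), q.symbol.end());
    buf.push_back('\0');
    const auto* p = reinterpret_cast<const std::uint8_t*>(&q.price);
    buf.insert(buf.end(), p, p + sizeof(q.price));
    return buf;
}

Quote deserialize(const std::vector<std::uint8_t>& buf) {
    Quote q;
    std::size_t i = 0;
    while (buf[i] != '\0') q.symbol.push_back(static_cast<char>(buf[i++]));
    ++i;  // skip the terminator
    std::copy(buf.begin() + i, buf.begin() + i + sizeof(q.price),
              reinterpret_cast<std::uint8_t*>(&q.price));
    return q;
}

double processViaFramework(const Quote& q) {
    std::vector<std::uint8_t> wire = serialize(q);  // extra allocation + copy
    Quote copy = deserialize(wire);                 // another allocation + copy
    return processDirect(copy);
}

int main() {
    Quote q{"ACME", 42.0};
    std::cout << processDirect(q) << " " << processViaFramework(q) << "\n";
}
```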

Note that with these architectural options, performance is only one aspect to consider. Others include security, stability, flexibility, deployment, maintainability, licensing, and so on.

+1
source

omniORB has long had a collocation shortcut that makes direct calls, and since version 4 it has its own POA shortcut that bypasses even more of the mandated CORBA behaviour, making a local call almost as fast as a direct virtual call. See the omniORB wiki and search for "Hot Local Calls". Unfortunately this does not appear to be in the official docs, at least not anywhere I could find.

+3
source

From ZeroMQ / Learn the Basics:

In 2011, CERN (the European Organization for Nuclear Research) compared CORBA, Ice, Thrift, ZeroMQ, YAMI4, RTI, and Qpid (AMQP). Read their analysis and conclusions (PDF).

That may be exactly the comparison you were looking for. (Found thanks to a comment by Mattiu Ruge.)

I would also say that even though some ORBs let you skip marshalling, you still cannot avoid dynamic memory allocation, which matters for performance. (These days processors are insanely fast, memory access is slow, and asking the OS for a memory page is very slow.)

Where plain C++ would let you simply return a const string&, the CORBA C++ mapping forces you to dynamically allocate and free the string or data structure (whether it is a return value or an out parameter). That does not matter if the call crosses a process or network boundary anyway, but in-process it becomes quite significant compared with plain C++.
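As an illustration only (not tied to any particular ORB), here is a sketch of the two styles: in plain C++ a getter can hand back a const reference to data it already owns, whereas the CORBA C++ mapping makes operations return a caller-owned heap copy (typically managed via CORBA::string_dup and CORBA::String_var). The name_corba_style method below imitates that copy with plain new[]/delete[]; the Customer class is made up for the example.

```cpp
#include <cstring>
#include <iostream>
#include <string>

class Customer {
public:
    explicit Customer(std::string name) : name_(std::move(name)) {}

    // Plain C++ style: no allocation, the caller just looks at existing data.
    const std::string& name() const { return name_; }

    // CORBA-C++-mapping style (imitated here with new[]/delete[]):
    // every call hands the caller a fresh heap copy that it must free.
    // A real ORB would use CORBA::string_dup / CORBA::string_free instead.
    char* name_corba_style() const {
        char* copy = new char[name_.size() + 1];
        std::memcpy(copy, name_.c_str(), name_.size() + 1);
        return copy;
    }

private:
    std::string name_;
};

int main() {
    Customer c("Alice");

    // In-process, this costs nothing beyond the (virtual) call itself.
    std::cout << c.name() << "\n";

    // This costs an allocation, a copy and a deallocation per call,
    // irrelevant across a network, noticeable for a hot in-process call.
    char* n = c.name_corba_style();
    std::cout << n << "\n";
    delete[] n;
}
```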

Another gotcha that burned us is that you cannot define mutually recursive structures (i.e. structure "A" contains "B", which in turn contains "A" again). That meant we had to turn them into interfaces, which requires allocating a CORBA servant on the "server side" (in-process) for every structure, and that is quite heavyweight. I understand there are tricks to avoid creating the servants eagerly, but in the end we just want to get away from CORBA completely rather than dig even deeper into it.
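For reference, this is the kind of mutually recursive data that plain C++ can express with a forward declaration and a bit of indirection (all names here are invented for illustration); in the situation described above, the equivalent IDL structs were not allowed, so each one had to be promoted to a CORBA interface with its own servant.

```cpp
#include <memory>
#include <string>
#include <vector>

struct B;  // forward declaration breaks the cycle

struct A {
    std::string name;
    std::vector<std::shared_ptr<B>> parts;  // A refers to B...
};

struct B {
    std::string name;
    std::shared_ptr<A> owner;               // ...and B refers back to A
};

int main() {
    auto a = std::make_shared<A>();
    auto b = std::make_shared<B>();
    a->name = "A";
    b->name = "B";
    a->parts.push_back(b);
    b->owner = a;  // note: this creates a shared_ptr cycle; fine for a demo,
                   // a real design would use std::weak_ptr for the back link
}
```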

Especially in C++, that memory management is very fragile and hard to get right. (See The Rise and Fall of CORBA, the "complexity" section.) I attribute many person-years of extra effort to this technology choice.

I would be interested to hear how it goes for you and which approach you end up adopting.

+1
source

One of the several purposes behind the IBM System Object Model was CORBA: IBM SOM is the "local CORBA", and IBM DSOM is its distributed CORBA implementation.

You should probably evaluate somFree.

Another option is UNO (from OpenOffice.org). I cannot say that I like UNO (if anything it is worse), but it is more alive than the long-forgotten SOM. The UNO in-process ecosystem is partitioned by programming language, C++ and Java being the most common partitions. There is no serialization involved, but the default mechanism for interaction between partitions is late binding (Java proxy -> Java dispatch -> C++ dispatch -> C++ object), similar to IDispatch in OLE, although the partitions can also be bridged directly (Java proxy -> C++ object).

0
source

ZeroC's Ice definitely supports collocated calls, in which marshalling of the data is avoided. You can find detailed information in their documentation: http://doc.zeroc.com/display/Ice/Location+Transparency . A collocated call still has some overhead compared with a plain virtual call; unfortunately I do not have actual numbers, and it also depends on the circumstances, e.g. how many servants are registered with a particular adapter, and so on.
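For context, below is a rough sketch of what a collocated Ice call can look like in modern C++: a servant is added to an object adapter, and a proxy created from the same communicator is then invoked, which lets Ice take its collocation-optimized path. The Slice interface Printer with operation printString, the generated Demo::Printer/Demo::PrinterPrx types, and the adapter and identity names are assumptions made for this example; the calls follow the C++11 mapping and details may differ between Ice versions.

```cpp
// Assumes a Slice definition roughly like:
//   module Demo { interface Printer { void printString(string s); }; };
// compiled with slice2cpp to Printer.h / Printer.cpp (names are assumptions).
#include <Ice/Ice.h>
#include <Printer.h>
#include <iostream>

// Servant implementing the assumed Slice interface.
class PrinterI : public Demo::Printer {
public:
    void printString(std::string s, const Ice::Current&) override {
        std::cout << s << std::endl;
    }
};

int main(int argc, char* argv[]) {
    try {
        Ice::CommunicatorHolder ich(argc, argv);
        auto adapter = ich->createObjectAdapterWithEndpoints(
            "PrinterAdapter", "default -p 10000");
        adapter->add(std::make_shared<PrinterI>(),
                     Ice::stringToIdentity("printer"));
        adapter->activate();

        // Proxy obtained from the same communicator: since the target object
        // lives in a local adapter, Ice should use its collocation
        // optimization and skip marshalling of the request.
        auto printer = Ice::uncheckedCast<Demo::PrinterPrx>(
            adapter->createProxy(Ice::stringToIdentity("printer")));
        printer->printString("collocated hello");
    } catch (const std::exception& ex) {
        std::cerr << ex.what() << std::endl;
        return 1;
    }
    return 0;
}
```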

0
source

Source: https://habr.com/ru/post/1483296/

