Decimal byte array constructor error during BinaryFormatter serialization

I have run into a very unpleasant problem that I cannot pin down.
I run a very large ASP.NET business application containing many thousands of objects. It uses in-memory serialization/deserialization through a MemoryStream to clone the application state (insurance contracts) and pass it to other modules. It has worked perfectly for many years. Now, occasionally and not systematically, serialization throws an exception:

    Decimal byte array constructor requires an array of length four containing valid decimal bytes.

Running the same application with the same data, it works 3 times out of 5. I have enabled breaking on all CLR exceptions (Debugging - Exceptions - CLR Exceptions - Enabled), so I assumed that if an incorrect initialization or assignment of a decimal field occurred, the program would stop; that is not happening. I tried splitting the serialization into more elementary objects, but it is very hard to pin down which field causes the problem. Between the version working in production and this one, I moved from .NET 3.5 to .NET 4.0, and the subsequent changes were made in the user-interface part, not in the business part. I will patiently go through all the changes.

It looks like an old-fashioned C problem, where some char *p writes where it should not, and the damage only surfaces later, during serialization, when all the data is walked.

Is this even possible in a managed .NET environment? The application is huge, but I do not see abnormal memory usage. What would be a good way to debug and track down the problem?

Below is the relevant part of the stack trace:

    [ArgumentException: Decimal byte array constructor requires an array of length four containing valid decimal bytes.]
       System.Decimal.OnSerializing(StreamingContext ctx) +260

    [SerializationException: Value was either too large or too small for a Decimal.]
       System.Decimal.OnSerializing(StreamingContext ctx) +6108865
       System.Runtime.Serialization.SerializationEvents.InvokeOnSerializing(Object obj, StreamingContext context) +341
       System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitSerialize(Object obj, ISurrogateSelector surrogateSelector, StreamingContext context, SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, ObjectWriter objectWriter, SerializationBinder binder) +448
       System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Write(WriteObjectInfo objectInfo, NameInfo memberNameInfo, NameInfo typeNameInfo) +969
       System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Serialize(Object graph, Header[] inHeaders, __BinaryWriter serWriter, Boolean fCheck) +1016
       System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Serialize(Stream serializationStream, Object graph, Header[] headers, Boolean fCheck) +319
       System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Serialize(Stream serializationStream, Object graph) +17
       Allianz.Framework.Helpers.BinaryUtilities.SerializeCompressObject(Object obj) in D:\SVN\SUV\branches\SUVKendo\DotNet\Framework\Allianz.Framework.Helpers\BinaryUtilities.cs:98
       Allianz.Framework.Session.State.BusinessLayer.BLState.SaveNewState(State state) in 
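
For reference, SerializeCompressObject (see the trace) boils down to the standard BinaryFormatter round trip through a MemoryStream; here is a simplified sketch of the pattern, not the actual code, with the compression step omitted:

    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    public static class CloneHelper
    {
        // Illustrative deep clone via in-memory binary serialization.
        public static T DeepClone<T>(T source)
        {
            var formatter = new BinaryFormatter();
            using (var stream = new MemoryStream())
            {
                // The exception above is thrown in here, before any bytes for the
                // offending decimal field have actually been written.
                formatter.Serialize(stream, source);
                stream.Position = 0;
                return (T)formatter.Deserialize(stream);
            }
        }
    }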

Sorry for the long story and the vague question; I would really appreciate any help.

1 answer

Very interesting; it isn't actually reading or writing data at that point - it is invoking the before-serialization callback, aka [OnSerializing], which maps to decimal.OnSerializing. What that does is a bit of a sanity check - but it looks like there is simply a bug in the BCL here. Here's the implementation in 4.5 (cough "reflector" cough):

    [OnSerializing]
    private void OnSerializing(StreamingContext ctx)
    {
        try
        {
            this.SetBits(GetBits(this));
        }
        catch (ArgumentException exception)
        {
            throw new SerializationException(Environment.GetResourceString("Overflow_Decimal"), exception);
        }
    }

GetBits gets the lo/mid/hi/flags array, so we can be sure the array passed to SetBits is non-null and of the correct length. Therefore, for this to fail, the part that must be failing is inside SetBits, here:

    int num = bits[3];
    if (((num & 2130771967) == 0) && ((num & 16711680) <= 1835008))
    {
        this.lo = bits[0];
        this.mid = bits[1];
        this.hi = bits[2];
        this.flags = num;
        return;
    }

Basically, if the if test passes, we get in, assign the values, and return successfully; if the if test fails, it ends up throwing an exception. bits[3] is the flags chunk, which holds the sign and the scale, IIRC. So the question becomes: how did you end up with an invalid decimal whose flags chunk is broken?
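
To make that concrete, a quick stand-alone illustration: decimal.GetBits exposes the lo/mid/hi/flags layout, and the public decimal(int[]) constructor applies the same validation as SetBits, so it rejects a corrupted flags element with exactly the ArgumentException from your title:

    using System;

    class FlagsDemo
    {
        static void Main()
        {
            // 123.45m is stored as the integer 12345 with scale 2:
            // lo = 0x00003039, mid = 0, hi = 0, flags = 0x00020000
            int[] bits = decimal.GetBits(123.45m);
            Console.WriteLine(string.Join(" ", Array.ConvertAll(bits, b => b.ToString("X8"))));

            // Corrupt the flags element: set a bit in the "must be zero" lower word.
            bits[3] |= 0x00000001;
            try
            {
                var broken = new decimal(bits);   // performs the same check as SetBits
            }
            catch (ArgumentException ex)
            {
                // "Decimal byte array constructor requires an array of length four
                // containing valid decimal bytes."
                Console.WriteLine(ex.Message);
            }
        }
    }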

Note: 2130771967 is a mask:

 0111 1111 0000 0000 1111 1111 1111 1111 

16711680 is the mask:

 0000 0000 1111 1111 0000 0000 0000 0000 

and 1835008 is the mask:

 0000 0000 0001 1100 0000 0000 0000 0000 

(which is the decimal number 28 in the upper word - the maximum scale)

To quote from MSDN:

The fourth element of the returned array contains the scale factor and the sign. It consists of the following parts: bits 0 to 15, the lower word, are unused and must be zero; bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 used to divide the integer number; bits 24 to 30 are unused and must be zero; bit 31 contains the sign: 0 means positive, and 1 means negative.
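
In code, pulling the scale and the sign out of that fourth element looks roughly like this (a small inspection helper, purely for illustration):

    using System;

    static class DecimalInspector
    {
        // Decodes bits[3] according to the layout quoted above.
        public static void Describe(decimal value)
        {
            int flags = decimal.GetBits(value)[3];
            int scale = (flags >> 16) & 0xFF;               // bits 16-23: power of 10
            bool negative = (flags & int.MinValue) != 0;    // bit 31: sign
            Console.WriteLine($"{value}: scale={scale}, negative={negative}, flags=0x{flags:X8}");
        }
    }

    // DecimalInspector.Describe(-1.250m);  // prints: -1.250: scale=3, negative=True, flags=0x80030000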

So, for that test to fail, at least one of the following must hold:

  • the exponent (scale) is invalid (outside 0-28)
  • the lower word is non-zero
  • the high byte (excluding the MSB) is non-zero

Unfortunately, I have no magic way of finding which decimal is invalid...

The only things I can think of are:

  • scatter GetBits / new decimal(bits) checks throughout your code - perhaps as a void SanityCheck(this decimal) extension method (maybe with [Conditional("DEBUG")] or similar)
  • add [OnSerializing] methods to your main domain model that log somewhere (maybe to the console), so you can see which object was being serialized when it blew up; both ideas are sketched below
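
A rough sketch of both, assuming a hypothetical InsuranceContract domain type and Trace as a stand-in logging target (adapt the names to your own model):

    using System;
    using System.Diagnostics;
    using System.Runtime.Serialization;

    public static class DecimalSanity
    {
        // Compiled into DEBUG builds only; mirrors the SetBits validation shown above.
        [Conditional("DEBUG")]
        public static void SanityCheck(this decimal value, string name)
        {
            int flags = decimal.GetBits(value)[3];
            int scale = (flags >> 16) & 0xFF;
            if ((flags & 0x7F00FFFF) != 0 || scale > 28)
                throw new InvalidOperationException(
                    $"Corrupt decimal in {name}: flags=0x{flags:X8}");
        }
    }

    [Serializable]
    public class InsuranceContract   // hypothetical domain type
    {
        private decimal premium;

        [OnSerializing]
        private void LogOnSerializing(StreamingContext context)
        {
            // Record which object is being serialized so the failing instance
            // can be identified when BinaryFormatter blows up.
            Trace.WriteLine($"Serializing {GetType().Name}: premium flags=0x{decimal.GetBits(premium)[3]:X8}");
        }
    }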

Source: https://habr.com/ru/post/951379/

