Improving the serialization engine
Published on October 19, 2019 by Geert van Horrik

Optimizing performance and memory allocations is a continuous effort that we try to keep in mind while doing minor releases. For the upcoming release (5.12), we decided to take a closer look at the serialization engine in Catel. There are a few pain points we’d like to address:

  1. Performance
  2. Memory allocations
  3. Re-using objects using graph ids

Xml serialization

We’ll focus on the XmlSerializer first, but any code written in the serializer base will benefit all serializers. The current version of Catel uses XDocument (and internally sometimes falls back to XmlWriter / XmlReader and turns the result back into XElement). This is fairly heavyweight, and converting it to a forward-only, non-cached implementation using only XmlWriter and XmlReader has been on the backlog for a long time.
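To illustrate the idea (a simplified sketch, not the actual Catel implementation): a forward-only serializer writes nodes straight to the output stream while walking the object graph, instead of first materializing a complete XDocument tree in memory.

using System.IO;
using System.Xml;

public static class ForwardOnlyXmlSketch
{
    // Simplified sketch: writing a model forward-only with XmlWriter,
    // without building an XDocument / XElement tree first.
    public static void WritePerson(Stream stream, string name, int age)
    {
        var settings = new XmlWriterSettings { Indent = true };
        using (var writer = XmlWriter.Create(stream, settings))
        {
            writer.WriteStartElement("Person");
            writer.WriteElementString("Name", name);
            writer.WriteElementString("Age", age.ToString());
            writer.WriteEndElement();
            // Nothing is cached; each node goes straight to the stream.
        }
    }
}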

We currently have a feature branch with the improved XmlSerializer implementation and are seeing very promising results so far.

Performance

The serialization engine in Catel is built specifically for Catel, with ease of use and customization in mind. This means it’s not the best-performing serialization engine available. For most use cases that’s perfectly fine, and it’s not our goal to write the fastest serialization engine in the world.

We are currently evaluating the method call stacks during (de)serialization, trying to make the serialization engine more performant than it is at the moment. By replacing the XDocument serialization with direct XmlWriter / XmlReader usage, we were able to achieve a performance improvement of about 50%, measured in the Catel.Benchmarks project on a complex, 3-levels-deep serialization and deserialization of objects.
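As a reference for how such numbers can be measured, here is a minimal sketch of a BenchmarkDotNet setup (the model and helper names below are illustrative, not the actual Catel.Benchmarks code):

using System.IO;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Illustrative sketch only; the real Catel.Benchmarks project differs.
[MemoryDiagnoser] // records allocations in addition to timings
public class XmlSerializationBenchmarks
{
    private ComplexModel _model; // hypothetical 3-levels-deep test model

    [GlobalSetup]
    public void Setup()
    {
        _model = ComplexModel.CreateDeepGraph(levels: 3); // hypothetical factory
    }

    [Benchmark]
    public void Serialize()
    {
        using (var stream = new MemoryStream())
        {
            // hypothetical helper that resolves Catel's xml serializer
            SerializerHelper.GetXmlSerializer().Serialize(_model, stream);
        }
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<XmlSerializationBenchmarks>();
}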

Memory allocations

Because the property bag inside ModelBase is basically just a Dictionary<string, object>, every value type needs to be boxed when it is stored. We were already aware of this, but decided to take a closer look. During our research, we found out that every boxing operation results in a separate memory allocation (even when it represents the same value). In the upcoming release, we added generic methods to the ModelBase:

  • SetValue<TValue>(string, TValue)
  • GetValue<TValue>(string)

These methods use the BoxingCache to re-use the boxed values of the most used value types, resulting in less memory usage. In one of our bigger test models, we went from 9,000 (9K) boxed bool values after deserializing down to 300.
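The underlying problem is easy to demonstrate: boxing the same value twice produces two distinct heap objects. A boxing cache trades that allocation for a lookup that always returns the same box. Below is a minimal sketch of the idea; Catel’s actual BoxingCache is more elaborate.

public static class BoxingDemo
{
    public static void Demonstrate()
    {
        object a = true;
        object b = true; // a second, separate heap allocation
        // ReferenceEquals(a, b) == false: same value, two boxes
    }
}

// Minimal sketch of the caching idea: keep a single box per distinct
// value and always hand out the same reference, so repeated stores
// of common values allocate nothing.
public static class SimpleBoolBoxingCache
{
    private static readonly object BoxedTrue = true;
    private static readonly object BoxedFalse = false;

    public static object GetBoxedValue(bool value)
    {
        return value ? BoxedTrue : BoxedFalse;
    }
}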

Note that we also updated Catel.Fody to automatically pick the best SetValue<TValue> overload based on the version of Catel it’s used against.

This, combined with the memory improvements in the XmlSerializer, shows an amazing memory allocation decrease of 84%, measured in the Catel.Benchmarks project on a complex, 3-levels-deep serialization and deserialization of objects.

We will be continuously trying to improve the memory allocation behavior of Catel, even in minor releases.

Re-using objects using graph ids

Catel already supported re-using objects inside the serialization graph. Unfortunately, Catel used the default .NET DataContractSerializer for collections, which doesn’t participate in the graph id bookkeeping. In the upcoming release, we’ve added explicit support for collections based on List<T>, so these support graph ids as well. This way it’s possible to generate very large, circular graphs re-using the same data with Catel:

<People xmlns:arr="http://schemas.microsoft.com/2003/10/Serialization/Arrays" graphid="3056">
  <Person graphrefid="912" />
  <Person graphrefid="914" />
  <Person graphrefid="916" />
  <Person graphrefid="918" />
  <Person graphrefid="920" />
  <Person graphrefid="922" />
  <Person graphrefid="924" />
  <Person graphrefid="926" />
  <Person graphrefid="928" />
  <Person graphrefid="930" />
  <Person graphrefid="932" />
  <Person graphrefid="934" />
  <Person graphrefid="936" />
  <Person graphrefid="938" />
  <Person graphrefid="940" />
  <Person graphrefid="942" />
  <Person graphrefid="944" />
  <Person graphrefid="946" />
</People>
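To give an idea of how such a graph comes about, here is a sketch of building and serializing a circular graph. It assumes Catel.Fody is weaving the properties and uses SerializationFactory.GetXmlSerializer() to resolve the serializer; exact API details may vary per Catel version.

using System.Collections.Generic;
using System.IO;
using Catel.Data;
using Catel.Runtime.Serialization;

// With Catel.Fody, these auto-properties are woven into regular
// Catel property registrations on ModelBase.
public class Person : ModelBase
{
    public string Name { get; set; }
    public List<Person> Friends { get; set; }
}

public static class GraphExample
{
    public static void SerializeCircularGraph()
    {
        // Every person references every other person (including itself),
        // making the graph both large and circular.
        var people = new List<Person>();
        for (var i = 0; i < 18; i++)
        {
            people.Add(new Person { Name = "Person " + i });
        }

        foreach (var person in people)
        {
            person.Friends = people;
        }

        // Each Person instance is serialized once; every further occurrence
        // is written as a graphrefid reference, as in the xml above.
        var serializer = SerializationFactory.GetXmlSerializer();
        using (var stream = new MemoryStream())
        {
            serializer.Serialize(people[0], stream);
        }
    }
}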