Member Serialization

Serialization occasionally throws up curious situations that make us look beyond its normal usage. Here is another one that took some time to find a solution for. Let's examine the following code.

public class Test:BaseClass
{
  public string Name {get;set;}
}
[DataContract]
public class BaseClass
{
}

What would be the output if one were to serialize an instance of the Test class?

var instance = new Test{Name="abc"};
var result = JsonConvert.SerializeObject(instance);

Let’s examine the output

{

}

Not surprising, right? After all, the base class has been decorated with DataContractAttribute, which ensures that only the members (or members of derived classes) decorated with DataMemberAttribute are serialized.

As seen in the code above, while the base class doesn't have any properties, the child class has a single property (Name), which is NOT decorated with the mentioned attribute.
This is by design and works well in most cases. If one needs to ensure the child class members are serialized, one needs to decorate them with the DataMemberAttribute.

public class Test:BaseClass
{
  [DataMember]
  public string Name {get;set;}
}
[DataContract]
public class BaseClass
{
}

This would ensure the Name property is serialized.

{"Name":"abc"}

The other option we have is to remove the DataContractAttribute from the BaseClass, which would produce the same result.

But what if the developer has the following constraints?
* Cannot access/change the BaseClass
* Should not use the DataMemberAttribute

The second constraint is chiefly driven by the fact that there are many properties in the child class. Decorating each of them with the DataMemberAttribute is quite painful and, naturally, one would want to avoid it.

The solution lies in a lesser known property of the well known JsonObjectAttribute. The MemberSerialization.OptOut enumeration value ensures the following behavior.

All public members are serialized by default. Members can be excluded using JsonIgnoreAttribute or NonSerializedAttribute.

While this is the default member serialization mode, it gets overridden by the presence of DataContractAttribute in the parent class.
Let's modify our code again.

[JsonObject(MemberSerialization.OptOut)]
public class Test:BaseClass
{
  public string Name {get;set;}
}
[DataContract]
public class BaseClass
{
}

As seen in the code above, the only change required is to decorate the child class with JsonObjectAttribute, passing in MemberSerialization.OptOut as the member serialization mode.
This produces the desired output.

{"Name":"abc"}
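Incidentally, OptOut still allows individual members to be excluded. A minimal sketch, with a hypothetical Age property added purely for illustration:

[JsonObject(MemberSerialization.OptOut)]
public class Test:BaseClass
{
  public string Name {get;set;}

  // Hypothetical member added for illustration; JsonIgnore excludes it even though OptOut serializes public members by default
  [JsonIgnore]
  public int Age {get;set;}
}

Serializing the same instance would still produce {"Name":"abc"}.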

A second look at char.IsSymbol()

Let us begin by examining a rather simple-looking piece of code.

var input = "abc#ef";
var result = input.Any(char.IsSymbol);

What would the output of the above code be? Let's hit F5 and check.

False

Surprised? One shouldn't feel guilty about it. It is easy to never look behind the scenes to understand what exactly char.IsSymbol does; after all, it is one of the more underused methods.

So why this peculiar behavior? What exactly is a symbol according to the char.IsSymbol() method? The answer lies in the documentation of the method.

Valid symbols are members of the following categories in UnicodeCategory: MathSymbol, CurrencySymbol, ModifierSymbol, and OtherSymbol.

The character '#' naturally doesn't fall under any of the required categories. Now, with that understanding, let us examine a few other characters.

var charList = new[]{'!','@','$','*','+','%','-'};
foreach(var ch in charList)
{
  Console.WriteLine($"{ch} = IsSymbol:{char.IsSymbol(ch)}");
}

The output, again, has a few curious facts to verify. Let's check it first.

! = IsSymbol:False
@ = IsSymbol:False
$ = IsSymbol:True
* = IsSymbol:False
+ = IsSymbol:True
% = IsSymbol:False
- = IsSymbol:False

Some of the results are self-explanatory, but the interesting ones are the characters "*", "-", and "%". All three of them look like mathematical symbols, which might raise eyebrows as to why they weren't recognized as symbols.

The answer lies in the UnicodeCategory of each character. Let us change the code a bit to include the Unicode category of each character as well.

var charList = new[]{'!','@','$','*','+','%','-'};
foreach(var ch in charList)
{
  Console.WriteLine($"{ch} = IsSymbol:{char.IsSymbol(ch)}, "
    + $"UnicodeCategory:{char.GetUnicodeCategory(ch)}");
}

Before further discussion let us examine the output as well.

! = IsSymbol:False, UnicodeCategory:OtherPunctuation
@ = IsSymbol:False, UnicodeCategory:OtherPunctuation
$ = IsSymbol:True, UnicodeCategory:CurrencySymbol
* = IsSymbol:False, UnicodeCategory:OtherPunctuation
+ = IsSymbol:True, UnicodeCategory:MathSymbol
% = IsSymbol:False, UnicodeCategory:OtherPunctuation
- = IsSymbol:False, UnicodeCategory:DashPunctuation

The answer to the earlier question now stares at us: the characters '*', '%', and '-' fall under the OtherPunctuation and DashPunctuation categories.

That explains the behavior of char.IsSymbol(). In most cases, it would be better to use a Regex for validating passwords or other strings that need to be checked for special characters.
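For instance, here is a minimal sketch of such a check; the character class used below is an assumption, so adjust it to the set of special characters your validation actually requires.

using System.Text.RegularExpressions;

var input = "abc#ef";
// Treat anything outside letters and digits as a "special" character
var hasSpecialCharacter = Regex.IsMatch(input, @"[^a-zA-Z0-9]");
Console.WriteLine(hasSpecialCharacter);   // True, because of '#'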

Deserialize Json to Generic Type

One of the recent questions on Stack Overflow that I found interesting was about a JSON payload which needed to be deserialized to a generic class. What made the question interesting was that the generic property would have a different JSON property name depending on the type T. Consider the following JSON.

{
  status: false,
  employee:
  {
    firstName: "Test",
    lastName: "Test_Last"
  }
}

This needs to be deserialized to the following class structure.

public class Response<T>
{
  [JsonProperty(PropertyName = "status")]
  public bool Status {get;set;}

  public T Item {get;set;}
}

[JsonObject(Title = "employee")]
public class Employee
{
  [JsonProperty(PropertyName = "firstName")]
  public string FirstName {get; set;}

  [JsonProperty(PropertyName = "lastName")]
  public string LastName {get; set;}
}


However, Response<T>, being a generic class, would need to support additional types as well. For example, the JSON could also look like the following.

{
  status: false,
  company:
  {
    companyname: "company name",
    headquaters: "location"
  }
}

Where the company needs to be deserialized to

[JsonObject(Title = "company")]
public class Company
{
  [JsonProperty(PropertyName = "companyname")]
  public string CompanyName {get; set;}

  [JsonProperty(PropertyName = "headquaters")]
  public string HeadQuarters {get; set;}
}

The solution lies in writing a Custom Contract Resolver, which does the magic. Let’s go ahead and write the ContractResolver.

public class GenericContractResolver<T> : DefaultContractResolver
{
  protected override JsonProperty CreateProperty(MemberInfo member, MemberSerialization memberSerialization)
  {
    var property = base.CreateProperty(member, memberSerialization);
    if (property.UnderlyingName == nameof(Response<T>.Item))
    {
      // Use the Title of the JsonObjectAttribute on T as the JSON property name for Item
      foreach (var attribute in System.Attribute.GetCustomAttributes(typeof(T)))
      {
        if (attribute is JsonObjectAttribute jobject)
        {
          property.PropertyName = jobject.Title;
        }
      }
    }
    return property;
  }
}

The role of the ContractResolver is pretty simple: when it encounters the Item property of the generic type, it replaces the property name with the Title declared in the JsonObjectAttribute of T.

Now you can use the ContractResolver to deserialize the Json. For example

var result = JsonConvert.DeserializeObject<Response<Employee>>(json,
  new JsonSerializerSettings
  {
    ContractResolver = new GenericContractResolver<Employee>()
  });
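Since the resolver is generic, the same settings work for the company payload as well. A sketch, assuming the company JSON above is held in a companyJson string and deserializes to the Company class shown earlier:

var companyResult = JsonConvert.DeserializeObject<Response<Company>>(companyJson,
  new JsonSerializerSettings
  {
    ContractResolver = new GenericContractResolver<Company>()
  });
// companyResult.Item.CompanyName == "company name"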

Demo Samples could be found here in my C# Fiddles

String or Array Converter : Json

Imagine you have a method which returns a Json String of following format.

{Name:'Anu Viswan',Languages:'CSharp'}

In order to deserialize the JSON, you could define a class as the following.

public class Student
{
public string Name{get;set;}
public string Languages{get;set;}
}

This works flawlessly. But imagine a situation where your method could return either a single language, as seen in the example above, or a JSON that has multiple languages. Consider the following JSON.

{Name:'Anu Viswan',Languages:['CSharp','Python']}

This would break your deserialization using the Student class. If you want to continue using the Student class in both scenarios, you could make use of a custom converter which wraps a single string into a collection. For example, consider the following converter.

class SingleOrArrayConverter<T> : JsonConverter
{
  public override bool CanConvert(Type objectType)
  {
    return (objectType == typeof(List<T>));
  }

  public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
  {
    JToken token = JToken.Load(reader);
    if (token.Type == JTokenType.Array)
    {
      return token.ToObject<List<T>>();
    }
    return new List<T> { token.ToObject<T>() };
  }

  public override bool CanWrite
  {
    get { return false; }
  }

  public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
  {
    throw new NotImplementedException();
  }
}


Now, you could redefine your Student class as

public class Student
{
  public string Name {get;set;}

  [JsonConverter(typeof(SingleOrArrayConverter<string>))]
  public List<string> Languages {get;set;}
}

This would now work with both strings and arrays.
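A quick sanity check with both sample payloads (a sketch using the JSON strings shown above):

var single = JsonConvert.DeserializeObject<Student>("{Name:'Anu Viswan',Languages:'CSharp'}");
var multiple = JsonConvert.DeserializeObject<Student>("{Name:'Anu Viswan',Languages:['CSharp','Python']}");

Console.WriteLine(string.Join(",", single.Languages));    // CSharp
Console.WriteLine(string.Join(",", multiple.Languages));  // CSharp,Python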

Case 1 output and Case 2 output (screenshots).

Oxyplot : Selectable Point

While OxyPlot continues to be one of the most attractive plotting libraries for .NET developers, there are times when you find yourself longing for features that could make it even better.

One such feature is the ability to select/highlight a point in a series. While there is a Selectable property on LineSeries, it doesn't quite let you highlight the data point.

However, this can be easily achieved using another series which has a single point (the point which was selected). I guess it is easier to show the code than describe it, so let's go ahead and hit Visual Studio.

We begin by defining a couple of custom series. For the sake of example, I am considering LineSeries here. The first custom series is the one which displays the selected item.

public class SelectedLineSeries:LineSeries
{
}

As you can observe, it hardly does anything new. The purpose of the definition is to let us easily differentiate our special series from a regular LineSeries. This will become clearer when we introduce our second custom series, which denotes a series that can be selected.

public class SelectableLineSeries : LineSeries
{
  public bool IsDataPointSelectable { get; set; }

  public DataPoint CurrentSelection { get; set; }

  public OxyColor SelectedDataPointColor { get; set; } = OxyColors.Red;

  public double SelectedMarkerSize { get; set; }

  public SelectableLineSeries()
  {
    SelectedMarkerSize = MarkerSize;
    MouseDown += SelectableLineSeries_MouseDown;
  }

  private void SelectableLineSeries_MouseDown(object sender, OxyMouseDownEventArgs e)
  {
    if (IsDataPointSelectable)
    {
      var activeSeries = (sender as OxyPlot.Series.Series);
      var currentPlotModel = activeSeries.PlotModel;
      var nearestPoint = activeSeries.GetNearestPoint(e.Position, false);
      CurrentSelection = nearestPoint.DataPoint;

      // Remove any previous selection series before adding a new one
      currentPlotModel = ClearCurrentSelection(currentPlotModel);

      // A single-point series representing the selected data point
      var selectedSeries = new SelectedLineSeries
      {
        MarkerSize = MarkerSize + 2,
        MarkerFill = SelectedDataPointColor,
        MarkerType = MarkerType
      };

      selectedSeries.Points.Add(CurrentSelection);
      currentPlotModel.Series.Add(selectedSeries);
      currentPlotModel.InvalidatePlot(true);
    }
  }

  private PlotModel ClearCurrentSelection(PlotModel plotModel)
  {
    while (plotModel.Series.Any(x => x is SelectedLineSeries))
    {
      plotModel.Series.Remove(plotModel.Series.First(x => x is SelectedLineSeries));
    }
    return plotModel;
  }
}

The core functionality of the class lives in the series' MouseDown event. Whenever a point is selected, a new series containing a single data point (the currently selected one) is added to the parent PlotModel.

Well, that's all our definition is about. Now we can go ahead and define the PlotModel to be used along with the series.

var random = new Random();
var collection = Enumerable.Range(1, 5).Select(x => new DataPoint(x, random.Next(100, 400))).ToList();
var series = new SelectableLineSeries
{
  IsDataPointSelectable = true,
  MarkerFill = OxyColors.Blue,
  MarkerType = MarkerType.Square,
  LineStyle = LineStyle.Solid,
  Color = OxyColors.Blue,
  ItemsSource = collection,
  MarkerSize = 5
};

GraphModel = new PlotModel();

GraphModel.Axes.Add(new OxyPlot.Axes.LinearAxis
{
  Position = Axes.AxisPosition.Bottom
});

GraphModel.Axes.Add(new OxyPlot.Axes.LinearAxis
{
  Position = Axes.AxisPosition.Left
});

GraphModel.Series.Add(series);
GraphModel.InvalidatePlot(true);

That’s it. Hit F5 and test your Selectable Line Series. You can access the source code shown in the demo in my Github.

Oxyplot selectable point (demo animation).

Revisiting Threads – Overhead of explicit threads

Recently I had the good fortune to read some invaluable books such as CLR via C# by Jeffrey Richter, C# in Depth by Jon Skeet, and Writing High-Performance .NET Code by Ben Watson. They allowed me to revisit some of the basics of threads, and I thought I would write down my notes from the books. In this first part on asynchronous programming, we will begin by examining (or revisiting) the internals of a thread and thereby understand why creating explicit threads is such a bad idea.

The typical overhead of threads can be classified into two broad categories.
* Space, in terms of memory consumption
* Time, in terms of execution performance

Keeping the overheads in mind, let us look at what happens when a new thread is created.

Memory Allocation

For each new thread that is created, the operating system allocates the following data structures.

Thread Kernel Object

The Thread Kernel Object is a data structure/memory block allocated by the OS which can be accessed only by the kernel. Its key objective is to store information regarding the particular thread, including the thread context. The thread context includes the state of the CPU registers when the thread was last executed.

In addition, the Thread Kernel Object stores statistical information regarding the thread, such as the creation time, state, priority, number of context switches performed, kernel mode time and user mode time, among others.

Furthermore, the Thread Kernel Object contains a stack pointer pointing to the start of the stack frame of the function currently executing on the thread, and an instruction pointer to the current instruction being executed by the CPU. It also contains addresses referring to the TEB and the stacks (user mode and kernel mode).

Thread Environment Block (TEB)

The TEB, or Thread Environment Block, is a block of memory allocated in user mode (and hence accessible to the application) for each thread, typically consuming one page (4 KB on most common processors) of memory.

One of the key objectives of the TEB is to maintain a stack comprising the head of the exception-handling chain. The head node is removed each time the code exits a try block.

The TEB is also responsible for the thread's local storage and for data structures used by GDI/OpenGL.

User Mode Stack

The user mode stack stores the return address indicating what the thread needs to execute once the current method ends, which is removed when the method returns. It is also used for storing all the local variables and method parameters used in the method.

Windows by default reserves 1 MB per thread, but this can grow if the requirement arises.
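As a side note, if the default reservation is a concern, the Thread constructor accepts a maximum stack size. A minimal sketch (the size shown is just an example):

// Request a 256 KB maximum stack size instead of the default 1 MB reservation
var worker = new Thread(() => Console.WriteLine("Running on a dedicated thread"), 256 * 1024);
worker.Start();
worker.Join();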

Kernel Mode Stack

When a method accesses a kernel mode function, the arguments of the method are stored in a different data structure called the kernel mode stack. The application cannot directly access the kernel mode stack; this is done for security reasons, and during execution of kernel functions the OS copies the parameters from the user mode stack to the kernel mode stack.

For a 32-bit system, the kernel mode stack is typically 12 KB, and 24 KB on 64-bit machines.

Unmanaged DLLs

One of the policies the Windows operating system follows requires that, for every new thread that is created, all unmanaged DLLs in the process have their DllMain invoked with the DLL_THREAD_ATTACH flag. Similarly, DLL_THREAD_DETACH is passed when the thread dies. This is required by some DLLs for initialization and cleanup.

This, understandably, has a performance implication every time a thread is created.

Context Switching

Every processor can run only a single thread at a time. Each thread is allowed to run for a specified slice of time (known as the thread quantum), typically around 15-20 ms. When the thread quantum expires, the scheduler picks another thread, allowing it to use the processor.

The OS thread scheduler stores the thread kernel objects in different queues based on the state of each thread (Ready, Waiting and Exiting). When the thread quantum finishes for a thread, the scheduler checks the Ready queue and picks a new thread, causing a context switch.

Context switching is the process of storing and restoring the state of a thread so that it can later be resumed. This includes restoring the state of the CPU registers from the states stored in the Thread Kernel Object.

Every context switch requires the OS to
* save the state of the CPU registers for the current thread in the thread's kernel object,
* pick another thread, and
* load the state of the CPU registers for the new thread, which was previously stored in that thread's kernel object.

Additionally, when the context switch occurs, the executing thread's code and data reside in the CPU's cache, which lets the CPU avoid frequent access to RAM (considerably slower than its own cache). The CPU must now hit RAM again to repopulate the cache for the new thread.

This whole process has to repeat every 15-20 ms, which is a performance overhead. The obvious question that arises is: wouldn't that happen even with the thread pool?

The answer is yes; however, one of the critical decisions the thread pool makes is maintaining an optimal number of threads. We will go into the details of the thread pool later, but the point of interest here is how the thread pool ensures the number of threads remains optimal and doesn't go out of hand. Also, with fewer threads, there is a higher chance for your thread to get an opportunity to be scheduled.

Garbage Collection

When the garbage collector runs, the CLR suspends all the threads and walks through their stacks to find roots, marking the corresponding objects in the heap. The GC then walks the stacks again to update the roots once the objects have been moved.

This is another case where a smaller, optimal number of threads improves performance.

Summary

All the above factors highlight why it is a bad choice to create threads explicitly. While threads are highly useful for performing asynchronous operations in your application, one needs to strike the right balance in the number of threads that are alive at any moment. Considering the memory overhead required for allocating a thread, it would be highly useful if one could reuse threads. This is exactly what the thread pool does.
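For instance, instead of creating a new thread per operation, the work can be queued to the pool. A minimal sketch:

// Queue work to the pool; an existing pool thread is reused instead of creating a new one
ThreadPool.QueueUserWorkItem(_ => Console.WriteLine("Running on a thread pool thread"));

// The Task-based equivalent, which also runs on the thread pool
Task.Run(() => Console.WriteLine("Running on a thread pool thread via Task.Run"));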

Having said that, there are cases when creating threads explicitly is recommended (a short sketch follows the list).
* By default, all thread pool threads run at Normal priority. When you need to run a thread at a non-Normal priority, you have the option to create an explicit thread.
* You need to create a foreground thread; the threads in the thread pool are background threads.
* If you have an extremely long-running, compute-bound task and you want to avoid taxing the thread pool logic, you have a case for an explicit thread.
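A minimal sketch covering the first two cases (LongRunningComputation is a hypothetical placeholder for your own compute-bound work):

// LongRunningComputation is a hypothetical parameterless method representing the work to run
var dedicatedThread = new Thread(LongRunningComputation)
{
  Priority = ThreadPriority.AboveNormal, // pool threads always run at Normal priority
  IsBackground = false                   // a foreground thread keeps the process alive until it finishes
};
dedicatedThread.Start();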

In the next part, we will examine the thread pool and how it maintains an optimal thread count.

Partitioner and Parallel Loops

Two common traps when using parallel loops can be summarized as follows.
* The amount of work done in the loop is not significantly larger than the time spent synchronizing any shared state.
* The amount of work done is less than the cost of the delegate or method invocation.

Both problems result in significant performance penalties. However, both issues can be easily addressed using a Partitioner.

A Partitioner splits the range into a set of tuples, each describing a sub-range of the original collection to iterate over.
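To get a feel for what the partitioner produces, here is a small sketch that prints the generated ranges for a 0-100 range with an explicit range size of 25 (Partitioner.Create lives in System.Collections.Concurrent):

// Splits 0..100 into chunks of 25; each tuple is (fromInclusive, toExclusive)
var ranges = Partitioner.Create(0, 100, 25).GetDynamicPartitions();
foreach (var (rangeStart, rangeEnd) in ranges)
{
  Console.WriteLine($"{rangeStart} - {rangeEnd}");   // 0 - 25, 25 - 50, 50 - 75, 75 - 100
}

Now, let's write some code with and without the Partitioner and benchmark them.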

[Benchmark]
public void ParallelLoopWithoutPartioner()
{
  var maxValue = 100000;
  var sum = 0L;

  Parallel.For(0, maxValue, (value) =>
  {
    // Every single iteration touches the shared state
    Interlocked.Add(ref sum, value);
  });
}

[Benchmark]
public void ParallelLoopWithPartioner()
{
  var maxValue = 100000;
  var sum = 0L;
  var partioner = Partitioner.Create(0, maxValue);

  Parallel.ForEach(partioner, range =>
  {
    var (minValueInRange, maxValueInRange) = range;
    var subTotal = 0L;
    for (var value = minValueInRange; value < maxValueInRange; value++)
    {
      subTotal += value;
    }
    // The shared state is touched only once per range
    Interlocked.Add(ref sum, subTotal);
  });
}

Both methods calculate the sum of the first N numbers using parallel loops, accessing a shared variable sum.

The first approach creates a significant synchronization delay because every iteration contends on the shared variable. The second approach, which uses the Partitioner, splits the range into subsets and accesses the shared state far less frequently. The results of the benchmark are shown below.

Benchmark results (screenshot).