A second look at char.IsSymbol()

Let us begin by examining a rather simple-looking piece of code.

var input = "abc#ef";
var result = input.Any(char.IsSymbol);

What would be the output of the above code? Let's hit F5 and check.

False

Surprised? You shouldn't feel guilty if you are. Few of us ever stop to look behind the scenes and understand what exactly char.IsSymbol does; after all, it is one of the more rarely used methods.

So why this peculiar behavior? What exactly is a symbol according to the char.IsSymbol() method? The answer lies in the documentation of the method.

Valid symbols are members of the following categories in UnicodeCategory: MathSymbol, CurrencySymbol, ModifierSymbol, and OtherSymbol.

The character '#' doesn't fall under any of these categories. Now, with that understanding, let us examine a few other characters.

var charList = new[] { '!', '@', '$', '*', '+', '%', '-' };
foreach (var ch in charList)
{
    Console.WriteLine($"{ch} = IsSymbol:{char.IsSymbol(ch)}");
}

The output, again, holds a few curious facts. Let's check it first.

! = IsSymbol:False
@ = IsSymbol:False
$ = IsSymbol:True
* = IsSymbol:False
+ = IsSymbol:True
% = IsSymbol:False
- = IsSymbol:False

Some of the results are self-explanatory, but the interesting ones are the characters '*', '-', and '%'. All three of them would appear to be mathematical symbols, so it might raise eyebrows that they weren't recognized as symbols.

The answer lies in the UnicodeCategory of each character. Let us change the code a bit to print the Unicode category alongside each character.

var charList = new[] { '!', '@', '$', '*', '+', '%', '-' };
foreach (var ch in charList)
{
    Console.WriteLine($"{ch} = IsSymbol:{char.IsSymbol(ch)}, "
        + $"UnicodeCategory:{Char.GetUnicodeCategory(ch)}");
}

Before discussing further, let us examine the output.

! = IsSymbol:False, UnicodeCategory:OtherPunctuation
@ = IsSymbol:False, UnicodeCategory:OtherPunctuation
$ = IsSymbol:True, UnicodeCategory:CurrencySymbol
* = IsSymbol:False, UnicodeCategory:OtherPunctuation
+ = IsSymbol:True, UnicodeCategory:MathSymbol
% = IsSymbol:False, UnicodeCategory:OtherPunctuation
- = IsSymbol:False, UnicodeCategory:DashPunctuation

The answer to the previous question now stares back at us: the characters '*', '%', and '-' fall under the OtherPunctuation and DashPunctuation categories.

That explains the behavior of char.IsSymbol(). In most cases, it would be better to use a Regex for validating passwords or other strings that need to be checked for special characters.
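As a rough illustration (the character class below is an assumed set of special characters, not something prescribed by the framework, and should be adapted to your own validation rules), a Regex-based check could look like this:

using System;
using System.Text.RegularExpressions;

// Assumed set of "special" characters for this sketch; replace it with
// whatever your validation rules actually demand.
var specialCharacters = new Regex(@"[!@#$%^&*()_\-+=\[\]{};:'"",.<>/?\\|]");

Console.WriteLine(specialCharacters.IsMatch("abc#ef"));   // True
Console.WriteLine(specialCharacters.IsMatch("abcdef"));   // False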

Deserialize Json to Generic Type

One of the recent questions on Stack Overflow that I found interesting was about a Json that needs to be deserialized to a generic class. What makes the question interesting is that the generic property would have a different Json property name depending on the type T. Consider the following Json.

{
    status: false,
    employee:
    {
        firstName: "Test",
        lastName: "Test_Last"
    }
}

This needs to be deserialized to the following class structure.

public class Response<T>
{
    [JsonProperty(PropertyName = "status")]
    public bool Status {get;set;}

    public T Item {get;set;}
}

[JsonObject(Title = "employee")]
public class Employee
{
    [JsonProperty(PropertyName = "firstName")]
    public string FirstName {get; set;}

    [JsonProperty(PropertyName = "lastName")]
    public string LastName {get; set;}
}

However, Response<T>, being a generic class, would need to support additional types as well. For example, the Json could also look like the following.

{
    status: false,
    company:
    {
        companyname: "company name",
        headquarters: "location"
    }
}

Here, the company object needs to be deserialized to the following class.

[JsonObject(Title = "company")]
public class Company
{
    [JsonProperty(PropertyName = "companyname")]
    public string CompanyName {get; set;}

    [JsonProperty(PropertyName = "headquarters")]
    public string HeadQuarters {get; set;}
}

The solution lies in writing a Custom Contract Resolver, which does the magic. Let’s go ahead and write the ContractResolver.

public class GenericContractResolver<T> : DefaultContractResolver
{
    protected override JsonProperty CreateProperty(MemberInfo member, MemberSerialization memberSerialization)
    {
        var property = base.CreateProperty(member, memberSerialization);

        // For the generic Item property, pick up the Json property name
        // from the JsonObjectAttribute declared on the type T.
        if (property.UnderlyingName == nameof(Response<T>.Item))
        {
            foreach (var attribute in System.Attribute.GetCustomAttributes(typeof(T)))
            {
                if (attribute is JsonObjectAttribute jobject)
                {
                    property.PropertyName = jobject.Title;
                }
            }
        }

        return property;
    }
}

The role of the ContractResolver is pretty simple. When it encounters the generic Item property, it replaces the property name with the title declared in the JsonObjectAttribute of the type T.

Now you can use the ContractResolver to deserialize the Json. For example

var result = JsonConvert.DeserializeObject<Response<Employee>>(json,
    new JsonSerializerSettings
    {
        ContractResolver = new GenericContractResolver<Employee>()
    });

Demo samples can be found in my C# Fiddles.

String or Array Converter : Json

Imagine you have a method which returns a Json string of the following format.

{Name:'Anu Viswan',Languages:'CSharp'}

In order to deserialize the Json, you could define a class like the following.

public class Student
{
    public string Name {get;set;}
    public string Languages {get;set;}
}

This works flawlessly. But imagine a situation where your method could return either a single language, as seen in the example above, or a Json which has multiple languages. Consider the following Json.

{Name:'Anu Viswan',Languages:['CSharp','Python']}

This might break your deserialization using the Student class. If you want to continue using the Student class in both scenarios, you could make use of a custom converter which turns a single string into a collection. For example, consider the following converter.

class SingleOrArrayConverter<T> : JsonConverter
{
    public override bool CanConvert(Type objectType)
    {
        return (objectType == typeof(List<T>));
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        JToken token = JToken.Load(reader);
        if (token.Type == JTokenType.Array)
        {
            return token.ToObject<List<T>>();
        }
        return new List<T> { token.ToObject<T>() };
    }

    public override bool CanWrite
    {
        get { return false; }
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        throw new NotImplementedException();
    }
}

Now, you could redefine your Student class as

public class Student
{
    public string Name {get;set;}
    [JsonConverter(typeof(SingleOrArrayConverter<string>))]
    public List<string> Languages {get;set;}
}

This would now work with both strings and arrays.
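A quick sketch of how this might be exercised (the variable names here are just for illustration):

var json1 = "{Name:'Anu Viswan',Languages:'CSharp'}";
var json2 = "{Name:'Anu Viswan',Languages:['CSharp','Python']}";

// Both shapes now deserialize into the same Student class.
var case1 = JsonConvert.DeserializeObject<Student>(json1);
var case2 = JsonConvert.DeserializeObject<Student>(json2);

Console.WriteLine(string.Join(",", case1.Languages));   // CSharp
Console.WriteLine(string.Join(",", case2.Languages));   // CSharp,Python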


Revisiting Threads – Overhead of explicit threads

Recently I had the good fortune to read some invaluable books such as CLR via C# by Jeffrey Richter, C# in Depth by Jon Skeet and Writing High-Performance .NET Code by Ben Watson. It allowed me to revisit some of the basics of threads, and I thought I would write down my notes from the books. In this first part on asynchronous programming, we will begin by examining (or revisiting) the internals of a thread, and thereby understand why creating explicit threads is such a bad idea.

The typical overhead of threads can be classified into two broad categories.
* Space, in terms of memory consumption
* Time, in terms of execution performance

Keeping the overheads in mind, let us look at what happens when a new thread is created.

Memory Allocation

For each new thread that is created, the operating system allocates each of the following data structures.

Thread Kernel Object

The Thread Kernel Object is a data structure/memory block allocated by the OS which can be accessed only by the kernel. The key objective of the Thread Kernel Object is to store information regarding the particular thread, including the thread context. The thread context includes the state of the CPU registers when the thread last executed.

In addition, the Thread Kernel Object stores statistical information regarding the thread, such as the creation time, state, priority, number of context switches performed, kernel mode time and user mode time, among others.

Furthermore, the Thread Kernel Object contains a stack pointer pointing to the start of the stack frame of the function currently executing in the thread, and an instruction pointer to the current instruction being executed by the CPU. It also contains addresses referring to the TEB and the stacks (user mode and kernel mode).

Thread Environment Block (TEB)

The TEB, or Thread Environment Block, is a block of memory allocated in user mode (and hence accessible to the application) for each thread, and typically consumes one page (4 KB on most common processors) of memory.

One of the key objectives of the TEB is to maintain the head of the exception handling chain; a node is removed from the chain each time the code exits a try block.

The TEB is also responsible for the thread's local storage and for data structures used by GDI/OpenGL.

User Mode Stack

The user mode stack stores, for each method call, the return address indicating what the thread needs to execute once the method ends; this entry is removed when the method returns. It is also used for storing all the local variables and method parameters used in the method.

Windows by default reserves 1 MB per thread, but the stack can grow if the requirement arises.
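If the default does not suit a particular thread, the Thread constructor has an overload that accepts a maximum stack size. A minimal sketch (the 256 KB figure is just an arbitrary example):

using System;
using System.Threading;

// Create a thread with an explicit maximum stack size (here 256 KB)
// instead of the default 1 MB reservation.
var worker = new Thread(() => Console.WriteLine("Running on a smaller stack"),
    maxStackSize: 256 * 1024);
worker.Start();
worker.Join();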

Kernel Mode Stack

When a method calls a kernel mode function, the arguments of the method are stored in a different data structure called the kernel mode stack. The application cannot directly access the kernel mode stack; this is done for security reasons, and during the execution of kernel functions the OS copies the parameters from the user mode stack to the kernel mode stack.

For a 32-bit system, the kernel mode stack is typically 12 KB, and 24 KB in the case of 64-bit machines.

Unmanaged DLLs

One of the policies the Windows operating system follows requires that, for every new thread that is created, every unmanaged DLL in the process has its DllMain invoked with the DLL_THREAD_ATTACH flag. Similarly, DLL_THREAD_DETACH is passed when the thread dies. This is required by some DLLs for initialization and cleanup.

This, understandably, has a performance implication every time a thread is created.

Context Switching

Every processor can run only a single thread at a time. Each thread is allowed to run for a specified slice of time (known as the thread quantum), typically around 15-20 ms. When the thread quantum expires, the scheduler picks another thread from the ready queue, allowing it to use the processor.

The OS thread scheduler stores the thread kernel object in different queues based on the state of the thread (Ready, Waiting and Exiting). When the thread quantum finishes for a thread, the scheduler checks the Ready queue and picks a new thread, causing a context switch.

Context switching is the process of storing and restoring the state of a thread so that it can later be resumed. This includes restoring the CPU registers with the state stored in the Thread Kernel Object.

Every context switch requires the OS to
* save the state of the CPU registers for the current thread in its Thread Kernel Object,
* pick another thread, and
* load the state of the CPU registers for the new thread, which was previously stored in the new thread's kernel object.

Additionally, when a context switch occurs, the CPU has been processing a thread whose code and data reside in the CPU's cache (which exists to avoid frequent access to RAM, which is slow compared to the CPU's own cache). The CPU must now access RAM again to repopulate its cache for the new thread.

This whole process has to repeat every 15-20 ms, which is a performance overhead. The obvious question that arises is: wouldn't that happen even with the thread pool?

The answer is yes; however, one of the critical decisions the thread pool makes is maintaining an optimal number of threads. We will go into the details of the thread pool later, but the point of interest here is how the thread pool ensures the number of threads remains optimal and doesn't go out of hand. Also, with fewer threads, there is a higher chance of your thread getting an opportunity to be scheduled.
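As a small illustration of the pool managing the thread count on your behalf (the numbers printed will vary by machine and runtime), you can inspect the bounds it is currently configured with:

using System;
using System.Threading;

// The thread pool decides how many worker threads to keep alive;
// these calls only report the configured bounds on this machine.
ThreadPool.GetMinThreads(out var minWorkers, out var minIo);
ThreadPool.GetMaxThreads(out var maxWorkers, out var maxIo);
Console.WriteLine($"Worker threads: min {minWorkers}, max {maxWorkers}");
Console.WriteLine($"IO completion threads: min {minIo}, max {maxIo}");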

Garbage Collection

When the garbage collector runs, the CLR suspends all the threads and walks through each stack to find roots, in order to mark the objects in the heap. The GC then walks through the stacks again to update the roots once the objects have been moved.

This is another case where a smaller, optimal number of threads improves performance.

Summary

All the above factors highlight why it is a bad choice to create threads explicitly. While threads are highly useful for performing asynchronous operations in your application, one needs to strike the right balance in the number of threads that are alive at any moment. Considering the memory overhead required to allocate a thread, it would be highly useful if one could reuse threads. This is exactly what the thread pool does.

Having said that, there are cases when creating threads explicitly could be recommended (a sketch follows this list).
* By default, all thread pool threads run at Normal priority. When you need to run a thread at a non-Normal priority, you have the option to create an explicit thread.
* You need to create a foreground thread. The threads in the thread pool are background threads.
* If you have an extremely long-running, compute-bound task and you want to avoid taxing the thread pool logic, you have a case where you could depend on an explicit thread.
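A minimal sketch of such an explicit thread; the work itself is a placeholder standing in for your own long-running, compute-bound task:

using System;
using System.Threading;

// Placeholder delegate standing in for an extremely long, compute-bound task.
ThreadStart longRunningComputation = () => Console.WriteLine("Crunching numbers...");

var thread = new Thread(longRunningComputation)
{
    IsBackground = false,                     // explicit foreground thread
    Priority = ThreadPriority.AboveNormal     // non-Normal priority
};
thread.Start();
thread.Join();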

In the next part, we will examine the thread pool and how it maintains the optimal thread count.

Partitioner and Parallel Loops

Two common traps when using parallel loops can be summarized as follows.
* The amount of work done in the loop is not significantly larger than the time spent synchronizing any shared state.
* The amount of work done is less than the cost of the delegate or method invocation.

Both problems have significant performance implications. However, both issues can easily be solved using a Partitioner.

The Partitioner splits the range into a set of tuples, each describing a sub-range of the original collection to iterate over. Let's write some code with and without the Partitioner and benchmark them.

[Benchmark]
public void ParallelLoopWithoutPartioner()
{
    var maxValue = 100000;
    var sum = 0L;

    Parallel.For(0, maxValue, (value) =>
    {
        Interlocked.Add(ref sum, value);
    });
}

[Benchmark]
public void ParallelLoopWithPartioner()
{
    var maxValue = 100000;
    var sum = 0L;
    var partioner = Partitioner.Create(0, maxValue);

    Parallel.ForEach(partioner, range =>
    {
        var (minValueInRange, maxValueInRange) = range;
        var subTotal = 0L;
        for (int value = minValueInRange; value < maxValueInRange; value++)
        {
            subTotal += value;
        }
        Interlocked.Add(ref sum, subTotal);
    });
}

Both methods calculate the sum of the first N numbers using parallel loops, accessing a shared variable sum.

The first approach creates a significant synchronization delay, since every iteration touches the shared variable. The second approach, which uses the Partitioner, splits the range into subsets and accesses the shared state far less frequently. The results of the benchmark are shown below.

Benchmark results
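As a side note (this overload is mentioned here for completeness and was not part of the original benchmark), Partitioner.Create also lets you choose the range size explicitly instead of relying on the default:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

var sum = 0L;
// Explicit range size of 10,000 elements per chunk, instead of the
// default chosen by Partitioner.Create(from, to).
var partitioner = Partitioner.Create(0, 100000, 10000);

Parallel.ForEach(partitioner, range =>
{
    var (rangeStart, rangeEnd) = range;
    var subTotal = 0L;
    for (var value = rangeStart; value < rangeEnd; value++)
    {
        subTotal += value;
    }
    Interlocked.Add(ref sum, subTotal);
});
Console.WriteLine(sum);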

Conditional Serialization using NewtonSoft Json

One of the least explored features of Newtonsoft Json is the ability to serialize properties conditionally. Consider the hypothetical situation wherein you want to serialize a property of a class only if a condition is satisfied. For example,

public class User
{
    public string Name {get;set;}
    public string Department {get;set;}
    public bool IsActive {get;set;}
}

If the requirement is that the Department property should be serialized only if the user is active, then the easiest way to do it is to use the conditional serialization functionality of Json.Net. All you need to do is include a method that
a) returns a boolean indicating whether to serialize or not, and
b) is named after the property, prefixed with 'ShouldSerialize'.

For example, for the property Department, the method should be named 'ShouldSerializeDepartment':

public bool ShouldSerializeDepartment() => IsActive;

Complete Code

public class User
{
    public string Name {get;set;}
    public string Department {get;set;}
    public bool IsActive {get;set;}
    public bool ShouldSerializeDepartment() => IsActive;
}

Client Code

var user = new User { Name = "Anu Viswan", IsActive = false };
var result = JsonConvert.SerializeObject(user);

Output

{"Name":"Anu Viswan","IsActive":false}

Serializing/Deserializing Dictionaries with Tuple as Key

Sometimes you run into things that look trivial but just do not work as expected. One such example is when you attempt to serialize/deserialize a dictionary with tuples as the key. For example,

var dictionary = new Dictionary<(string, string), int>
{
    [("firstName1", "lastName1")] = 5,
    [("firstName2", "lastName2")] = 5
};

var json = JsonConvert.SerializeObject(dictionary);
var result = JsonConvert.DeserializeObject<Dictionary<(string, string), int>>(json);

The above code would throw a JsonSerializationException when deserializing. But the good part is, the exception tells you exactly what needs to be done: you need to use a TypeConverter here.

Let’s define our required TypeConverter

public class TupleConverter<T1, T2> : TypeConverter
{
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        return sourceType == typeof(string) || base.CanConvertFrom(context, sourceType);
    }

    public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
    {
        var elements = Convert.ToString(value).Trim('(').Trim(')').Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
        return (elements.First(), elements.Last());
    }
}

And now, you can alter the above code as

TypeDescriptor.AddAttributes(typeof((string, string)),
    new TypeConverterAttribute(typeof(TupleConverter<string, string>)));
var json = JsonConvert.SerializeObject(dictionary);
var result = JsonConvert.DeserializeObject<Dictionary<(string, string), int>>(json);

With the magic potion of the TypeConverter in place, your code will now work fine. Happy coding.
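For reference, the serialized Json for the dictionary above should look something like the following (the keys are produced from the tuple's string representation, which is what the ConvertFrom method above parses back):

{"(firstName1, lastName1)":5,"(firstName2, lastName2)":5}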