Deploy GitHub Sub-Directory To Azure

In this blog post, I will walk you through publishing a sub-directory of your GitHub repository to Azure.

The first step is to head over to your desired Resource Group in the Azure Portal and create a Web App resource.

For the sake of demonstration, we will be publishing a web portal built using VueJS. For the same reason, we are using a runtime of Node 12 LTS.

Once you have created the Web App resource, head over to the Deployment Center and choose GitHub under the Continuous Deployment section.

If you are doing this for the first time, you might be prompted to authenticate your GitHub account. Next, you need to choose a Build Provider. We will choose GitHub Actions here.

This leads you to the following screen, which helps you choose your repository.

The Add or overwrite workflow definition option generates the GitHub workflow for deployment. It looks something similar to the following.

name: Build and deploy Node.js app to Azure Web App

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: windows-latest

    steps:
    - uses: actions/checkout@master

    - name: Set up Node.js version
      uses: actions/setup-node@v1
      with:
        node-version: '12.13.0'

    - name: npm install, build, and test
      run: |
        npm install
        npm run build --if-present
        npm run test --if-present


    - name: 'Deploy to Azure Web App'
      uses: azure/webapps-deploy@v2
      with:
        app-name: 'yourappName'
        slot-name: 'production'
        publish-profile: ${{ secrets.YourSecretKey }}
        package: .

As you may have noticed, the workflow contains a secret, which is used for authenticating the publish action.

So far, this has been quite similar to how you would publish an entire GitHub repository to Azure. But as mentioned earlier, we are particularly interested in publishing a sub-directory of the GitHub repository. For this purpose, we will begin by ensuring the npm build actions are run within that particular sub-directory.

To do so, we modify the workflow with the following changes.

    - name: npm install, build, and test
      run: |
        npm install
        npm run build --if-present
        npm run test --if-present
      working-directory: portalDirectory/subDirectory


As you can observe, we have instructed the action to use a particular working directory while running the npm scripts. However, we are not done yet.

Just like we ensured the npm build runs against the desired folder, we also need to ensure that only the desired sub-folder gets published.

If you attempt to use working-directory with the ‘Deploy to Azure Web App’ step of the action, you will be prompted with an error that working-directory cannot be used with an action that contains a with statement.

The .deployment file comes to our rescue at this point. It needs to be created in the root of your repository, and we will add the following contents to the file.

[config]
project = portalDirectory/subDirectory/dist

That is all you need. The .deployment file instructs the CI/CD process to deploy the contents of the portalDirectory/subDirectory/dist directory to Azure.

I hope this helped you.

CRUD Operations with Azure Table Storage in an Azure Function – C#

In this series of byte-sized tutorials, we will create an Azure Function for CRUD operations on an Azure Storage Table. For the demonstration, we will stick to a basic web function, which will enable us to do the CRUD operations for a TODO table.

The reason to pick an Azure Storage Table is primarily that it is extremely cost-efficient, and you can also emulate the storage within your development environment. That's right: you do not even need an Azure subscription to try out Azure Storage, thanks to the Storage Emulator.

One of the key points to remember before we proceed is how an entity is uniquely identified in Azure Table Storage. Partitions allow the system to scale easily; whenever you store an item in the table, it is stored in a partition, which is scaled out across the system. The PartitionKey uniquely identifies the partition in which the data resides. The RowKey uniquely identifies the specific entity within the partition, and together with the PartitionKey it forms the composite key that is the unique identifier for your entity.

We will get back to this a bit later. But for now, we will define our entity, derived from TableEntity.

using Microsoft.Azure.Cosmos.Table;
public class TodoTableEntity : TableEntity
{
    public string Title { get; set; }
    public string Description { get; set; }
    public bool IsCompleted { get; set; }
}

We require 3 columns in our table, in addition to the PartitionKey, RowKey, and Timestamp.

We will begin by writing the basic skeleton code for our web function and go through its key components before implementing the insert operation.

[FunctionName("TodoAdd")]
public static async Task<IActionResult> Add(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    [Table("todos")] CloudTable todoTable,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var data = JsonConvert.DeserializeObject<TodoDto>(requestBody);

    // TODO

    return new OkObjectResult(0);
}

public class TodoDto
{
    public string Title { get; set; }
    public string Description { get; set; }
    public bool IsCompleted { get; set; }
}

The function defined above (TodoAdd) accepts an HttpTrigger (both GET and POST requests). The data to be inserted is passed via the request body. We use the HttpRequest.Body property to read the information and deserialize it, which is quite evident in the code above. What is of more interest at this point is the todoTable parameter.

The todoTable parameter, of type CloudTable, represents a table in the Microsoft Azure Table service and provides us all the methods required to access it. The binding specifies that the table is named todos.

Now that we have our deserialized data and the table, we can proceed to insert the data into the table.

var dataToInsert = new TodoTableEntity
{
    Title = data.Title,
    Description = data.Description,
    IsCompleted = data.IsCompleted,
    PartitionKey = data.Title[0].ToString(),
    RowKey = data.Title[0].ToString()
};


var addEntryOperation = TableOperation.Insert(dataToInsert);
todoTable.CreateIfNotExists();
await todoTable.ExecuteAsync(addEntryOperation);


As mentioned earlier, the CloudTable provides us with all the necessary ammunition to access the table. In this case, we use the CloudTable.ExecuteAsync method to execute a TableOperation that inserts the record.

However, this code has a serious flaw, which we will discuss in a moment. Consider the entity we are about to insert.

var dataToInsert = new TodoTableEntity
{
    Title = data.Title,
    Description = data.Description,
    IsCompleted = data.IsCompleted,
    PartitionKey = data.Title[0].ToString(),
    RowKey = data.Title[0].ToString() // This causes an error
};

After filling the Title, Description and IsCompleted fields from the data we received in the HTTP request, we also assign the PartitionKey and RowKey of the entity. We have decided, for the sake of example, to partition the table based on the first letter of the Title. This works fine – we would end up with multiple partitions. However, the RowKey would cause an issue. Consider the following two requests.

// First Request
{
    "title": "A Test",
    "description": "A Test Description",
    "isCompleted": false
}
// Second Request
{
    "title": "Another Test",
    "description": "A Test Description",
    "isCompleted": false
}

Both these requests would have the PartitionKey value “A” according to the code we wrote above. This is fine, as we want to group all the entities with a title starting with “A” in the same partition. However, the above code would also result in both RowKeys being the same. This leads to an error, as these need to be separate entities and cannot share the same combination of PartitionKey and RowKey.

For this reason, we need a unique value for the RowKey. In this example, we will use a simple technique in which we create another partition, named Key, containing a single row. This row holds a numerical value which we use as the identity value in the table. With each request, we also need to update this key.
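
The key row needs an entity type of its own, which is not shown so far. A minimal sketch, assuming the same Microsoft.Azure.Cosmos.Table package as the TodoTableEntity above, could look like the following.

// A minimal entity for the key-generator row. Key holds the next
// RowKey value to hand out; the row itself lives in the dedicated
// "Key" partition of the same todos table.
public class TodoKey : TableEntity
{
    public int Key { get; set; }
}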

So let us rewrite the code again to make use of the Key entity.

[FunctionName("TodoAdd")]
public static async Task<IActionResult> Add(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    [Table("todos","Key","Key",Take =1)] TodoKey keyGen,
    [Table("todos")] CloudTable todoTable,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var data = JsonConvert.DeserializeObject<TodoDto>(requestBody);

    if (keyGen == null)
    {
        keyGen = new TodoKey
        {
            Key = 1000,
            PartitionKey = "Key",
            RowKey = "Key"
        };

        var addKeyOperation = TableOperation.Insert(keyGen);
        await todoTable.ExecuteAsync(addKeyOperation);
    }

    var dataToInsert = new TodoTableEntity
    {
        Title = data.Title,
        Description = data.Description,
        IsCompleted = data.IsCompleted,
        PartitionKey = data.Title[0].ToString(),
        RowKey = keyGen.Key.ToString()
    };

    keyGen.Key += 1;
    var updateKeyOperation = TableOperation.Replace(keyGen);
    await todoTable.ExecuteAsync(updateKeyOperation);
    var addEntryOperation = TableOperation.Insert(dataToInsert);
    todoTable.CreateIfNotExists();
    await todoTable.ExecuteAsync(addEntryOperation);

    return new OkObjectResult(keyGen.Key);
}

As observed in the code above, we have introduced a new parameter, keyGen, which points to the key entity in the same todos table. We have used the binding to specify the PartitionKey and RowKey that fetch the entity for us.

We then increment the key and use it as the RowKey for the rest of our entities. The resulting table storage would look like the following.

In this example, we have created a simple Create operation for Azure Table Storage. We will explore Azure bindings and the rest of the CRUD operations further along in the series, but I hope this provides a good starting point for learning Azure Storage with Azure Functions.

Custom Traits in xUnit

One of the implicit key characteristics that define the readability of any suite of unit test cases is the ability to group tests by multiple factors.

NUnit uses the CategoryAttribute, while MSTest uses the TestCategoryAttribute for grouping tests. With xUnit, you can make use of the TraitAttribute to achieve this. However, this is not without problems of its own.

The most inconvenient part of using Traits is that they are basically name-value pairs of strings. This makes them vulnerable to typos, and a headache when you decide to change a name. Luckily, xUnit provides us an easy-to-use extensibility point.

ITraitAttribute and ITraitDiscoverer

You can create your own custom traits which can be used to decorate the test cases. For the sake of example, let us create two attributes – FeatureAttribute and BugAttribute – which will be used to categorize test cases for features and bugs.

[TraitDiscoverer(FeatureDiscoverer.TypeName, TraitDiscovererBase.AssemblyName)]
public class FeatureAttribute : Attribute, ITraitAttribute
{
    public string Id { get; set; }
    public FeatureAttribute(string id) => Id = id;
    public FeatureAttribute() { }
}

The attribute implements the ITraitAttribute interface and has a property to indicate the Id of the feature. What is more interesting is the TraitDiscoverer attribute. It isn't sufficient to have the attributes; they also need to be discovered by the Test Explorer. This is where the TraitDiscoverer comes into play.

The attribute accepts two parameters: the fully qualified name of the Discoverer associated with the FeatureAttribute, and the assembly which defines it. The Discoverer for Feature is defined as

public class FeatureDiscoverer : TraitDiscovererBase, ITraitDiscoverer
{
    public const string TypeName = TraitDiscovererBase.AssemblyName + ".Helpers.CustomTraits.FeatureDiscoverer";

    protected override string CategoryName => "Feature";
    public override IEnumerable<KeyValuePair<string, string>> GetTraits(IAttributeInfo traitAttribute)
    {
        yield return GetCategory();
        var id = traitAttribute.GetNamedArgument<string>("Id");
        if (!string.IsNullOrEmpty(id))
        {
            // Pairs the category name with the id, e.g. ("Feature", "1523")
            yield return new KeyValuePair<string, string>(CategoryName, id);
        }
    }
}

public class TraitDiscovererBase : ITraitDiscoverer
{
    public const string AssemblyName = "Nt.Infrastructure.Tests";
    protected const string Category = nameof(Category);
    protected virtual string CategoryName => nameof(CategoryName);

    protected KeyValuePair<string,string> GetCategory()
    {
        return new KeyValuePair<string, string>(Category, CategoryName);
    }
    public virtual IEnumerable<KeyValuePair<string, string>> GetTraits(IAttributeInfo traitAttribute)
    {
        return Enumerable.Empty<KeyValuePair<string,string>>();
    }
}

The Discoverer needs to implement ITraitDiscoverer, which has a single method, GetTraits, returning a collection of KeyValuePairs. That's all you need. Now you can decorate your test cases as follows.

[Theory]
[MemberData(nameof(CreateMovieTest_ResponseStatus_200_TestData))]
[Feature("1523")]
public async Task CreateMovieTest_ResponseStatus_200(CreateMovieRequest request, CreateMovieResponse expectedResult)
{
    // Test Case
}

The above is a whole lot cleaner than the following.

[Trait("Category","Feature")]
[Trait("Feature","1523")]

Roslyn Analyzer : Analyzing Comments

One of the things you might encounter quite early while writing a Roslyn code analyzer for comments is that it is slightly different from detecting syntax nodes. In fact, a comment is not a node at all.

Syntax nodes represent syntactic constructs, for example declarations, statements, clauses and expressions. A comment does not quite fall in the same category. It is rather syntax trivia. Syntax trivia are, as Microsoft states, largely non-significant components such as whitespace, comments and preprocessor directives. Due to their rather insignificant presence, they are not included as child nodes of the syntax tree. However, since they are still important in their own ways, they remain part of the syntax tree.

For the same reason, you need to register a syntax tree action using RegisterSyntaxTreeAction rather than a syntax node action using RegisterSyntaxNodeAction. For analyzing comments, our typical Initialize() method would look like the following.

public override void Initialize(AnalysisContext context)
{
    context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.Analyze | GeneratedCodeAnalysisFlags.None);
    context.EnableConcurrentExecution();

    context.RegisterSyntaxTreeAction(AnalyzeComment);
}

The next step involves picking the SingleLineCommentTrivia and MultiLineCommentTrivia trivia out of the syntax tree. You can achieve this with a simple LINQ query.

private void AnalyzeComment(SyntaxTreeAnalysisContext context)
{
    SyntaxNode root = context.Tree.GetCompilationUnitRoot(context.CancellationToken);
    var commentTrivias = root.DescendantTrivia()
                            .Where(x => x.IsKind(SyntaxKind.SingleLineCommentTrivia) || x.IsKind(SyntaxKind.MultiLineCommentTrivia));

    // Rest of the code
}

That's all you need. Didn't that turn out to be quite easy? I wish there were an easy way to parse the actual comment text out of the trivia; unfortunately, I haven't found one yet (not sure if one exists). At the moment, the most likely way is to use the ToString() method. This will, in fact, include the comment characters as well, which you can strip out using Regex or simple string transformations.
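
As a rough illustration of that last point, a small helper (my own sketch, not part of the Roslyn API) could strip the comment markers like this.

// Sketch: strip the comment markers from a comment trivia's text.
private static string GetCommentText(SyntaxTrivia trivia)
{
    var text = trivia.ToString().Trim();
    return trivia.IsKind(SyntaxKind.SingleLineCommentTrivia)
        ? text.TrimStart('/').Trim()                          // remove leading //
        : text.TrimStart('/', '*').TrimEnd('*', '/').Trim();  // remove /* and */
}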

WPF Validations using DataAnnotation and INotifyDataErrorInfo

DataAnnotations and INotifyDataErrorInfo provide an easy and clean way to implement validations in WPF ViewModels. Unlike IDataErrorInfo, INotifyDataErrorInfo allows you to raise multiple errors (there are many more features which make it more interesting than IDataErrorInfo, including making it possible to use asynchronous validations).

This being a hands-on demonstration, we will keep the theoretical part to a minimum and hit the code at the earliest. We will begin by writing our little Model class, which will be used for the demonstration.

public class Model
{
    [Required(ErrorMessage = "Name cannot be empty")]
    [StringLength(9, MinimumLength = 3, ErrorMessage = "Name should have min 3 characters and max 9")]
    public string Name { get; set; }
    [Required]
    [Range(0, 100, ErrorMessage = "Age should be between 0 and 100")]
    public int Age { get; set; }
}


Not too much to explain there. We have used DataAnnotations attributes to specify the requirements of each property. We will now define a View to display this model. We will get to the ViewModel last, because that's where the trick lies.

<Grid>
    <StackPanel>
        <Grid Margin="10">
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="3*"/>
                <ColumnDefinition Width="7*"/>
            </Grid.ColumnDefinitions>

            <Label Content="Name"/>
            <TextBox Grid.Column="1" Text="{Binding Name, UpdateSourceTrigger=PropertyChanged, ValidatesOnNotifyDataErrors=True}" >
                <Validation.ErrorTemplate>
                    <ControlTemplate>
                        <StackPanel>
                            <AdornedElementPlaceholder x:Name="textBox"/>
                            <TextBlock Text="{Binding [0].ErrorContent}" Foreground="Red"/>
                        </StackPanel>
                    </ControlTemplate>
                </Validation.ErrorTemplate>
            </TextBox>
        </Grid>

        <Grid Margin="10">
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="3*"/>
                <ColumnDefinition Width="7*"/>
            </Grid.ColumnDefinitions>

The most significant part here is of course the property ValidatesOnNotifyDataErrors being set to true. Additionally, I have used an ErrorTemplate to display the error, but that is more of an implementation detail for this particular demo; you could opt for the default template or choose another that suits your needs.

The next task, obviously, is to implement the INotifyDataErrorInfo interface in your ViewModel. INotifyDataErrorInfo is defined as follows.

 public interface INotifyDataErrorInfo
 {
       bool HasErrors { get; }
       event EventHandler<DataErrorsChangedEventArgs>? ErrorsChanged;
       IEnumerable GetErrors(string? propertyName);
 }


As you can see, the interface comprises a property, a method and an event.

  • HasErrors : Indicates whether the current entity has validation errors. This is a read-only property.
  • GetErrors : A method which retrieves all the validation errors for a given property, whose name is passed as a parameter.
  • ErrorsChanged : This event is raised each time the validation errors change for a particular property or for the entity as a whole.

Let us go ahead and implement our view model. We will use a dictionary to store the errors.

private IDictionary<string, List<string>> _errors = new Dictionary<string, List<string>>();

The next step is to create our Properties.

private Model _model = new ();

public string Title => "Data Annotation and INotifyDataErrorInfo";
public string Name
{
    get => _model.Name;
    set
    {
        if (Equals(_model.Name, value)) return;

        _model.Name = value;
        NotifyOfPropertyChange();
        Validate(value);
    }
}


public int Age
{
    get => _model.Age;
    set
    {
        if (_model.Age == value) return;

        _model.Age = value;
        NotifyOfPropertyChange();
        Validate(value);
    }
}

Notice that a method named Validate is invoked each time a property changes. This method is responsible for validating the property and maintaining the dictionary of errors.

The implementation of Validate looks like the following.

private void Validate(object val, [CallerMemberName] string propertyName = null)
{
    if (_errors.ContainsKey(propertyName)) _errors.Remove(propertyName);

    ValidationContext context = new ValidationContext(_model) { MemberName = propertyName };
    List<ValidationResult> results = new();

    if (!Validator.TryValidateProperty(val, context, results))
    {
        _errors[propertyName] = results.Select(x => x.ErrorMessage).ToList();
    }

    ErrorsChanged?.Invoke(this, new DataErrorsChangedEventArgs(propertyName));
}

What remains now is to implement our interface, which is pretty straightforward.

public bool HasErrors => _errors.Any();

public event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged;

public IEnumerable GetErrors(string propertyName)
{
    return _errors.ContainsKey(propertyName) ? _errors[propertyName] : null;
}

The complete ViewModel looks like the following.

public class DataAnnotionViewModel : ViewModelBase, INotifyDataErrorInfo
{
    private IDictionary<string, List<string>> _errors = new Dictionary<string, List<string>>();
    private Model _model = new ();

    public string Title => "Data Annotation and INotifyDataErrorInfo";
    public string Name
    {
        get => _model.Name;
        set
        {
            if (Equals(_model.Name, value)) return;

            _model.Name = value;
            NotifyOfPropertyChange();
            Validate(value);
        }
    }


    public int Age
    {
        get => _model.Age;
        set
        {
            if (_model.Age == value) return;

            _model.Age = value;
            NotifyOfPropertyChange();
            Validate(value);
        }
    }

    private void Validate(object val, [CallerMemberName] string propertyName = null)
    {
        if (_errors.ContainsKey(propertyName)) _errors.Remove(propertyName);

        ValidationContext context = new ValidationContext(_model) { MemberName = propertyName };
        List<ValidationResult> results = new();

        if (!Validator.TryValidateProperty(val, context, results))
        {
            _errors[propertyName] = results.Select(x => x.ErrorMessage).ToList();
        }

        ErrorsChanged?.Invoke(this, new DataErrorsChangedEventArgs(propertyName));
    }

    public bool HasErrors => _errors.Any();

    public event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged;

    public IEnumerable GetErrors(string propertyName)
    {
        return _errors.ContainsKey(propertyName) ? _errors[propertyName] : null;
    }
}

That's all you need for validating your ViewModels using DataAnnotations and INotifyDataErrorInfo. The complete code sample is available on my GitHub here.

Why Initialize Collection Size

In possibly every code base you have seen, the usage of collections would be a common sight. The introduction of generic collections in the early stages of the language's evolution has made them all the more a favourite of developers compared to the non-generic cousins that existed before. One of the most common patterns seen in any code base is as follows.

var persons = GetPersonList();
var studentList = new List<Student>();
foreach(var person in persons)
{
    studentList.Add(new Student
    {
        FirstName = person.FirstName,
        LastName = person.LastName
    });
}

At first glance, there is nothing wrong with the code. In fact, it works well too. Unlike arrays, one doesn't need to declare the size of a List beforehand; it sort of magically expands itself. However, there is a hidden cost in the optimized way the collection is implemented to magically expand.

Beneath the List declaration, there is still an array, which is resized according to the ever-growing requirements of the collection. This is done in the Add method, which is defined as

public void Add(T item) 
{
    if (_size == _items.Length) EnsureCapacity(_size + 1);
    _items[_size++] = item;
    _version++;
}

private void EnsureCapacity(int min) 
{
    if (_items.Length < min) {
        int newCapacity = _items.Length == 0? _defaultCapacity : _items.Length * 2;

        if ((uint)newCapacity > Array.MaxArrayLength) newCapacity = Array.MaxArrayLength;
        if (newCapacity < min) newCapacity = min;
        Capacity = newCapacity;
    }
}

The Add() method checks if the internal array is full, and if so, calls the EnsureCapacity method to double the Capacity.

The resizing of course happens in the Capacity Property.

public int Capacity 
{
    get {
        Contract.Ensures(Contract.Result<int>() >= 0);
        return _items.Length;
    }
    set {
        if (value < _size) {
            ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.value, ExceptionResource.ArgumentOutOfRange_SmallCapacity);
        }
        Contract.EndContractBlock();

        if (value != _items.Length) {
            if (value > 0) {
                T[] newItems = new T[value];
                if (_size > 0) {
                    Array.Copy(_items, 0, newItems, 0, _size);
                }
                _items = newItems;
            }
            else {
                _items = _emptyArray;
            }
        }
    }
}

The key takeaway from the Add/EnsureCapacity methods is that every time the size of the array is breached, the capacity of the array is doubled. This leads to a couple of undesired effects.

  • Expensive operation to resize the array

As the list grows, there would be numerous expensive resizing operations, which is quite undesirable.

  • A lot of memory may be left allocated but unused.

Consider what happens if the number of persons in the personList is 258. Since the capacity doubles each time it is exhausted (4, 8, 16, …, 256, 512, starting from the default capacity of 4), the capacity of the array would now be 512. That is a whole lot of unused allocated memory when you are working with a memory-intensive application.

// Default Behavior
Console.WriteLine($"StudentList(Count={studentList.Count},Capacity={studentList.Capacity})");
// Output
StudentList(Count=258,Capacity=512)

The ideal way to counter this is to use the overloaded constructor and initialize the size of the internal array.

var studentList = new List<Student>(persons.Count());

This would reduce the need for resizing of internal array.

// With Collection Size Initialized as above
Console.WriteLine($"StudentList(Count={studentList.Count},Capacity={studentList.Capacity})");
// Output
StudentList(Count=258,Capacity=258)

Of course, there will be places where you do not have the convenience of knowing the collection size beforehand. But still, if you are able to pick a good initial size, you can eliminate or reduce the need to resize the array.

MahApps HamburgerMenu and Caliburn Micro

MahApps is probably one of the most used UI libraries among WPF developers, with a galaxy of great controls. Despite that, I was recently surprised by the lack of a proper example for the Hamburger Menu control, particularly an MVVM-based one. I was also keen to know how to make the best use of the capabilities of Caliburn Micro along the way.

Here is how I ended up achieving my objectives.

XAML : Control, Content, ItemTemplate and HeaderTemplate

The first step is to define the control in XAML, along with the desired ContentTemplate, ItemTemplate and HeaderTemplate.

<mah:HamburgerMenu x:Name="Menu" DisplayMode="CompactOverlay" SelectedItem="{Binding ActiveItem,Converter={StaticResource SelectedItemConverter}}"
           ItemsSource="{Binding MenuItems,Converter={StaticResource MenuConverter}}"
           cal:Message.Attach="[Event ItemClick]=[Action MenuSelectionChanged($source,$eventArgs)]"
           HamburgerMenuHeaderTemplate="{StaticResource MenuHeaderTemplate}"
           ItemTemplate="{StaticResource MenuItemTemplate}">
    <mah:HamburgerMenu.Content>
       <ContentControl Grid.Column="0" Grid.Row="1" x:Name="ActiveItem" />
    </mah:HamburgerMenu.Content>
</mah:HamburgerMenu>

As you will have noticed, there are a couple of converters in here. We will get to them in a moment, but first let us define our MenuItemTemplate and MenuHeaderTemplate.

<!-- Menu Header Template -->
<DataTemplate x:Key="MenuHeaderTemplate">
    <TextBlock HorizontalAlignment="Center" VerticalAlignment="Center"  FontSize="16"  Foreground="White"  Text="Sample App" />
</DataTemplate>

<!-- Menu Item Template -->
<DataTemplate x:Key="MenuItemTemplate">
    <Grid x:Name="RootGrid" Height="48" Background="Transparent">
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="48" />
            <ColumnDefinition />
        </Grid.ColumnDefinitions>
        <ContentControl Grid.Column="0"
            HorizontalAlignment="Center"
            VerticalAlignment="Center"
            Content="{Binding Icon}"
            Focusable="False" />
        <TextBlock Grid.Column="1"
           VerticalAlignment="Center"
           FontSize="16"
           Text="{Binding Label}" />
    </Grid>
</DataTemplate>

At this point, we need to create our ViewModels for each of the menu items. As we notice from the templates above, there are a couple of fields required for each menu item – a Title and an Icon (the Icon is not quite mandatory, depending on the type of menu item you are creating).

Destination ViewModels

Let us go ahead and define our ViewModels for Menu Items.

public class PageViewModelBase : Screen
{
    public virtual string Title { get; }
    public virtual object Icon { get; }
}
public class HomePageViewModel : PageViewModelBase
{
    public override string Title => "Home";
    public override object Icon => new PackIconMaterial { Kind = PackIconMaterialKind.Home };
}
public class ProfilePageViewModel : PageViewModelBase
{
    public override string Title => "Profile";
    public override object Icon => new PackIconMaterial { Kind = PackIconMaterialKind.AccountStar };
}

In the above example, we are using MahApps.Metro.IconPacks for the icons, but that is only an implementation detail which can easily vary. For the sake of simplicity, I have omitted the Views here, but they can be found in the example solution linked at the end of this article.

The Glue : ValueConverters and ActiveItem

We now have our different ViewModels defined, and it is time to create the collection to bind to our HamburgerMenu control. So, in our ShellViewModel, we do the following.

public ShellViewModel()
{
    MenuItems = new List<PageViewModelBase>
    {
        new HomePageViewModel(),
        new ProfilePageViewModel()
    };
}

public IEnumerable<PageViewModelBase> MenuItems { get; }

We are almost there; now, as the final step, we need to write our converters. There are two converters involved:

  • PageToMenuItemConverter – Converts the ViewModel collection to HamburgerMenuIconItem instances.
  • SelectedItemConverter – Parses the ViewModel back out of the SelectedItem.

public class PageToMenuItemConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        if (value is IEnumerable<PageViewModelBase> nmItemCollection)
        {
            return nmItemCollection.Select(item => new HamburgerMenuIconItem
            {
                Tag = item,
                Icon = item.Icon,
                Label = item.Title
            });
        }
        return value;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}

public class SelectedItemConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return value;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return value is HamburgerMenuIconItem menuItem ? menuItem.Tag : value;
    }
}
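
One last piece the XAML above references is the MenuSelectionChanged action attached via cal:Message.Attach. A minimal sketch, assuming the ShellViewModel derives from Caliburn Micro's Conductor<PageViewModelBase> (which supplies the ActiveItem the content control binds to), could look like the following.

// Sketch: handles the HamburgerMenu ItemClick event wired up in the XAML.
// It unwraps the Tag we set in PageToMenuItemConverter and activates the page.
public void MenuSelectionChanged(object source, ItemClickEventArgs args)
{
    if (args.ClickedItem is HamburgerMenuIconItem menuItem &&
        menuItem.Tag is PageViewModelBase page)
    {
        // On older Caliburn Micro versions this would be ActivateItem(page)
        ActivateItemAsync(page);
    }
}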

That's all we need. Your menu should be up and running now. You can find the example code on my GitHub here.

Bootstrap 4 Theming – Changing Primary Color

Bootstrap allows you to customize its existing definitions greatly. With Bootstrap 4, the way we customize the theme has changed a bit, and that's exactly what I would like to explain here – step by step.

Step 1: Get the Bootstrap source

Of course, you need to get the Bootstrap source to customize it. You will then recompile it to get the required CSS.

We could use npm to get the source.

 npm install bootstrap --save

Step 2: Install Sass compiler

We will rely on Sass to customize the variables and maps for creating our custom Bootstrap. You can head over to the Sass website to read the installation instructions in detail, but I have tried to highlight the necessary steps here.

The npm package installs Dart Sass, so there is no need to install Ruby; the following command is all you need.

npm install -g sass

Step 3: Arrange your folder structure.

The last step before we actually start customizing is to arrange the folder structure as described on the Bootstrap website.

your-project/
├── scss
│   └── custom.scss
└── node_modules/
    └── bootstrap
        ├── js
        └── scss

Step 4: Customize your theme.

You customize your theme by modifying the variables and maps in the Bootstrap code. In your custom.scss, you import the whole Bootstrap source (you could alternatively pick and choose the parts you need, if you are sure).

@import "node_modules/bootstrap/scss/bootstrap";

The way team Bootstrap has written the code allows you to override a variable, provided it is defined before the import.

In this example, we will alter the primary color to a shade of purple.

$theme-colors: (
  "primary": #4b1076,
);
@import "node_modules/bootstrap/scss/bootstrap";


Step 5: Compile your scss and include it in your application

Now all that is left is to compile your custom.scss and create the CSS, which you can then use in your application. To compile the scss, use the following command.

sass custom.scss custom.css
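
While iterating on the theme, you may also find Sass's watch mode handy; it recompiles the CSS automatically whenever the source changes:

sass --watch custom.scss custom.css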

That’s all you need.

Axios Interceptors and Exception Handling

Handling exceptions globally and gracefully is an essential part of any application. It reduces scattered try-catch blocks and enables you to handle everything from one point.

In this article, we will explore how to implement global exception handling when making API calls in an SPA. We will use VueJS for the demonstration, along with the axios package to assist with the HTTP calls.

Axios Interceptors

Interceptors in axios provide a great opportunity to handle responses at a single point. Interceptors are methods which are invoked for every axios request and hence allow you to intercept and transform the response before it makes its way to the calling method.

To begin with, let us head over to our main.js and import axios

import axios from "axios";

Axios exposes two types of interceptors, one each for request and response.

Response Interceptor

For our purpose, we need the response interceptor (we will briefly speak about the request interceptor later). Axios invokes the response interceptor after the request has been sent and the response received. You can intercept this response using axios.interceptors.response.use and modify it to suit your needs before it is passed back to the calling method.

axios.interceptors.response.use(
  (response) => successHandler(response),
  (error) => exceptionHandler(error)
);

The method accepts two parameters (the second one is optional), one each for a successful and an error response; the corresponding handler is invoked depending on whether the request succeeded or failed.

Let us plug our interceptor into main.js to intercept the errors.

axios.interceptors.response.use(
  (response) => {
    return response;
  },
  (error) => {
    switch (error.response.status) {
      case 401:
        // Handle Unauthorized calls here
        // Display an alert
        break;
      case 500:
        // Handle 500 here
        // redirect
        break;
      // and so on..
    }

    return Promise.reject(error);
  }
);

I usually handle my BadRequest (400) responses within the component. I can hence return a special response from the switch case for error code 400, which does that for me.

Modifying the switch case,

switch (error.response.status) {
  case 400:
    return {
      data: null,
      hasError: true,
      error: [error.response.data],
    };
  case 401:
    // Handle Unauthorized calls here
    // Display an alert
    break;
  case 500:
    // Handle 500 here
    break;
  // and so on..
}

As seen, the interceptors provide us a global platform to handle exceptions.
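
To illustrate, a component method could then consume that 400 shape directly. This is only a sketch; the endpoint and field names here are hypothetical.

// Inside a Vue component's methods (hypothetical endpoint and fields):
async submitTodo() {
  const result = await axios.post("/api/todos", this.form);
  if (result.hasError) {
    this.errors = result.error; // surface the validation messages locally
    return;
  }
  // a successful call still resolves with the regular axios response
  this.todo = result.data;
}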

Request Interceptor

A word about the request interceptor before signing off – request interceptors are called each time, just before a request is made.

This provides a great opportunity to inject or modify headers to include authentication tokens. For example,

axios.interceptors.request.use((request) => {
  const header = getHttpHeader(); // get your custom http headers with auth token
  request.headers = header;
  return request;
});

Interceptors can also be used to show busy indicators while a request is being processed, as the request and response interceptors together provide us a window of opportunity to display an indicator before the request is sent out and hide it as soon as the response is received.
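
As a rough sketch of that idea (setBusy is a hypothetical stand-in for your own store mutation or UI state handler):

// Track in-flight requests and toggle a busy flag around them.
let pendingRequests = 0;

axios.interceptors.request.use((request) => {
  pendingRequests++;
  setBusy(true); // show the indicator before the request goes out
  return request;
});

axios.interceptors.response.use(
  (response) => {
    if (--pendingRequests === 0) setBusy(false); // hide once all requests settle
    return response;
  },
  (error) => {
    if (--pendingRequests === 0) setBusy(false);
    return Promise.reject(error);
  }
);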

Hope that was useful.

Evil Code #13 : Tuple Deconstruction Assignment

It is not often that I end up writing two back-to-back posts in the Evil Code series, and that too on the same topic. But like I said in the previous post, tuples are a really interesting topic (if you are reading Jon Skeet's book, then everything is interesting).

In this post, we will explore a certain behavior of tuples observed during deconstruction assignment.

Consider the following code.

var person = new Person{Name="Jia",Age=4};
var personCopy = person;
(person,person.Age) = (new Person{Name="Naina"},0.2);

Console.WriteLine($"Student => Name: {person.Name}, Age: {person.Age}");
Console.WriteLine($"StudentCopy => Name: {personCopy.Name}, Age: {personCopy.Age}");

The output might surprise you at first glance, but it seems reasonable once you understand what goes on behind the deconstruction assignment. Let us look at the output first, before we see what happens during the deconstruction.

Person => Name: Naina, Age: 0
PersonCopy => Name: Jia, Age: 0.2

This behavior is attributable to the multi-stage process involved in the deconstruction assignment.

Note that, unlike tuple deconstruction, user-defined type deconstruction requires the invocation of a Deconstruct method with the appropriate number of out parameters.
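
For reference, such a Deconstruct method (my own sketch; the Person class is not shown in the original post) might look like this.

// Sketch: the Deconstruct method that would be invoked if Person itself
// were being deconstructed; the tuple assignment above does not need it.
public class Person
{
    public string Name { get; set; }
    public double Age { get; set; }

    public void Deconstruct(out string name, out double age) =>
        (name, age) = (Name, Age);
}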

As Microsoft has put down in the feature description,

The evaluation order can be summarized as: (1) all the side-effects on the left-hand-side, (2) all the Deconstruct invocations (if not tuple), (3) conversions (if needed), and (4) assignments.

From the above, (2 & 3) aren’t part of our concern (it is a tuple and no conversion if required as the source/destination types are the same). In the first step, the side-effect of LHS is evaluated while only in the last step the actual assignments happen.

So in this case, each of the expressions on the left-hand side is evaluated one by one, and a storage location is reserved for it. After this, the RHS is evaluated, and any conversion to the destination type is done if required.

Finally, the conversion results or values from the RHS are assigned to the storage locations identified in the first step.

This would mean the above code gets roughly translated to

Person person = new Person
{
    Name = "Jia",
    Age = 4.0
};
Person personCopy = person;
// LHS is evaluated and storage for person.Age is reserved
Person person2 = person;
// RHS is evaluated and converted if required
Person obj = new Person
{
    Name = "Naina"
};
double num = 0.2;

// assignments
person = obj;
person2.Age = num;

Notice that during the evaluation of the LHS, evaluation and space reservation is done only for person.Age and not for person from (person, person.Age). This is because when assigning to a property of a variable, an extra step is required to evaluate the value of the variable. This step can be skipped when the target is a variable itself rather than a property of a variable.

That explains the output we received for the earlier code.

Note: If you are a C# developer, and haven’t read Jon Skeet’s C# in Depth, do it immediately. This book is a MUST read for any C# developer.