Deploy a GitHub Subdirectory To Azure

In this blog post, I will walk you through publishing a subdirectory of your GitHub repository to Azure.

The first step is to head over to your desired Resource Group in the Azure Portal and create a Web App resource.

For the sake of demonstration, we will be publishing a web portal built using Vue.js, so we will use the Node 12 LTS runtime.

Once you have created the Web App resource, head over to the Deployment Center and choose GitHub under the Continuous Deployment section.

If you are doing this for the first time, you might be prompted to authenticate your GitHub account. Next, you need to choose a Build Provider; we will choose GitHub Actions here.

This leads you to the following screen, which helps you choose your repository.

The Add or overwrite workflow definition option generates the GitHub workflow for deployment, which looks similar to the following.

name: Build and deploy Node.js app to Azure Web App

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: windows-latest

    steps:
    - uses: actions/checkout@master

    - name: Set up Node.js version
      uses: actions/setup-node@v1
      with:
        node-version: '12.13.0'

    - name: npm install, build, and test
      run: |
        npm install
        npm run build --if-present
        npm run test --if-present


    - name: 'Deploy to Azure Web App'
      uses: azure/webapps-deploy@v2
      with:
        app-name: 'yourappName'
        slot-name: 'production'
        publish-profile: ${{ secrets.YourSecretKey }}
        package: .

As you may have noticed, the workflow also references a secret, which holds the publish profile used to authenticate the publish action.

So far, this has been quite similar to how you would publish an entire GitHub repository to Azure. But as mentioned earlier, we are particularly interested in publishing a subdirectory of the repository. For this purpose, we will begin by ensuring the npm build steps run within that particular subdirectory.

To do so, we modify the workflow with the following changes.

    - name: npm install, build, and test
      run: |
        npm install
        npm run build --if-present
        npm run test --if-present
      working-directory: portalDirectory/subDirectory


As you can observe, we have instructed the action to use a particular working directory while running the npm scripts. However, we are not done yet.
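As an aside, if every run step in the job targets the same directory, GitHub Actions also lets you declare it once at the job level via a defaults block; a sketch using the same hypothetical paths:

jobs:
  build-and-deploy:
    runs-on: windows-latest
    defaults:
      run:
        working-directory: portalDirectory/subDirectory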

Just as we ensured the npm build runs against the desired folder, we also need to ensure that only the desired subdirectory gets published.

If you attempt to use working-directory with the ‘Deploy to Azure Web App’ step, the workflow fails validation with an error: working-directory applies only to run steps and cannot be used on a step that invokes an action via uses/with.

The .deployment file comes to our rescue at this point. The .deployment file needs to be created in the root of your repository. We will add the following contents to the file.

[config]
project = portalDirectory/subDirectory/dist

That would be all you need. The .deployment file instructs the CI/CD process to deploy the contents of the portalDirectory/subDirectory/dist directory to Azure.
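As a possible alternative (untested here), the azure/webapps-deploy action's package input also accepts a folder path, so you may be able to point it directly at the build output instead, using the same hypothetical names:

    - name: 'Deploy to Azure Web App'
      uses: azure/webapps-deploy@v2
      with:
        app-name: 'yourappName'
        slot-name: 'production'
        publish-profile: ${{ secrets.YourSecretKey }}
        package: portalDirectory/subDirectory/dist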

I hope this helped you.

CRUD Operations with Azure Table Storage in an Azure Function – C#

In this series of byte-sized tutorials, we will create an Azure Function for CRUD operations on an Azure Storage Table. For the demonstration, we will stick to a basic HTTP-triggered function, which will let us perform the CRUD operations for a TODO table.

The reason to pick Azure Table Storage is primarily that it is extremely cost efficient, and you can also emulate the storage within your development environment. That's right: you do not even need an Azure subscription to try out Azure Storage, thanks to the Storage Emulator.
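Pointing a local Functions project at the emulator is a one-line setting in local.settings.json; a minimal sketch, assuming the default C# Functions project template:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}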

One of the key points to remember before we proceed is how an entity is uniquely identified in Azure Table Storage. Partitions allow the system to scale easily: whenever you store an item in the table, it is stored in a partition, which is scaled out across the system. The PartitionKey uniquely identifies the partition in which the data resides. The RowKey uniquely identifies the specific entity within the partition, and together with the PartitionKey it forms the composite key that uniquely identifies your entity.

We will get back to this a bit later. But for now, we will define our entity, derived from TableEntity.

using Microsoft.Azure.Cosmos.Table;

// TableEntity supplies the PartitionKey, RowKey, Timestamp and ETag properties.
public class TodoTableEntity : TableEntity
{
    public string Title { get; set; }
    public string Description { get; set; }
    public bool IsCompleted { get; set; }
}

We require three columns in our table, in addition to the PartitionKey, RowKey, and Timestamp.
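To make the composite key concrete: fetching a single entity always requires both keys. Here is a minimal point-lookup sketch, assuming a CloudTable instance named todoTable (which we wire up below) and hypothetical key values:

// Retrieve the single entity whose PartitionKey is "A" and RowKey is "1000".
var retrieveOperation = TableOperation.Retrieve<TodoTableEntity>("A", "1000");
var result = await todoTable.ExecuteAsync(retrieveOperation);
var entity = result.Result as TodoTableEntity; // null when no entity matches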

We will begin by writing the basic skeleton of our function and walk through its key components before implementing the insert operation.

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Cosmos.Table;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

[FunctionName("TodoAdd")]
public static async Task<IActionResult> Add(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    [Table("todos")] CloudTable todoTable,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    // Read the request body and deserialize it into our DTO.
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var data = JsonConvert.DeserializeObject<TodoDto>(requestBody);

    // TODO: insert the entity (implemented below)

    return new OkObjectResult(0);
}

public class TodoDto
{
    public string Title { get; set; }
    public string Description { get; set; }
    public bool IsCompleted { get; set; }
}

The function defined above (TodoAdd) uses an HttpTrigger that accepts both GET and POST requests. The data to be inserted is passed via the request body; we use the HttpRequest.Body property to read the information and deserialize it, as is evident in the code above. What is of more interest at this point is the todoTable parameter.

The todoTable parameter, of type CloudTable, represents a table in the Microsoft Azure Table service and provides us all the methods required to access the table. The binding specifies that the table is named todos.
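The Table binding defaults to the storage account configured in AzureWebJobsStorage. If you want to be explicit, or target a different account, the attribute exposes a Connection property; a hypothetical variant:

// Equivalent binding with the connection made explicit.
[Table("todos", Connection = "AzureWebJobsStorage")] CloudTable todoTable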

Now that we have our deserialized data and the table, we can proceed to insert the data into the table.

var dataToInsert = new TodoTableEntity
{
    Title = data.Title,
    Description = data.Description,
    IsCompleted = data.IsCompleted,
    PartitionKey = data.Title[0].ToString(),
    RowKey = data.Title[0].ToString()
};

// Ensure the table exists before executing the insert.
todoTable.CreateIfNotExists();
var addEntryOperation = TableOperation.Insert(dataToInsert);
await todoTable.ExecuteAsync(addEntryOperation);


As mentioned earlier, the CloudTable provides us with all the necessary ammunition to access the table. In this case, we use the CloudTable.ExecuteAsync method to execute a TableOperation that inserts the record.
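As an aside, the remaining CRUD verbs we will meet later in this series follow the same TableOperation factory pattern (a glimpse; entity here stands for any ITableEntity instance):

var replaceOp = TableOperation.Replace(entity);        // requires a valid ETag
var upsertOp  = TableOperation.InsertOrReplace(entity);
var deleteOp  = TableOperation.Delete(entity);         // requires a valid ETag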

However, the following code has a serious flaw, which we will discuss in a moment. Consider the entity we are about to insert.

var dataToInsert = new TodoTableEntity
{
    Title = data.Title,
    Description = data.Description,
    IsCompleted = data.IsCompleted,
    PartitionKey = data.Title[0].ToString(),
    RowKey = data.Title[0].ToString() // This causes an error
};

After filling the Title, Description and IsCompleted fields from the data we received in the HTTP request, we also assign the PartitionKey and RowKey of the entity. We have decided, for the sake of example, to partition the table based on the first letter of the Title. This works fine; we would end up with multiple partitions. However, the RowKey causes an issue. Consider the following two requests.

// First Request
{
    "title": "A Test",
    "description": "A Test Description",
    "isCompleted": false
}

// Second Request
{
    "title": "Another Test",
    "description": "A Test Description",
    "isCompleted": false
}

Both of these requests would have “A” as the PartitionKey value according to the code we wrote above. This is fine, as we want to group all entities whose title starts with “A” in the same partition. However, the code above would also produce the same RowKey for both. This leads to an error, as these need to be separate entities and cannot share the same combination of PartitionKey and RowKey.

For this reason, we need a unique value for the RowKey. In this example, we will use a simple technique: we create another partition, named Key, which contains a single row. This row holds a numerical value that we use as the identity value for the table. With each request, we also need to update this key.

So let us rewrite the code again to make use of the Key entity.

[FunctionName("TodoAdd")]
public static async Task<IActionResult> Add(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    [Table("todos", "Key", "Key", Take = 1)] TodoKey keyGen,
    [Table("todos")] CloudTable todoTable,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var data = JsonConvert.DeserializeObject<TodoDto>(requestBody);

    // Ensure the table exists before we touch it.
    todoTable.CreateIfNotExists();

    // On the very first request, seed the Key entity.
    if (keyGen == null)
    {
        keyGen = new TodoKey
        {
            Key = 1000,
            PartitionKey = "Key",
            RowKey = "Key"
        };

        var addKeyOperation = TableOperation.Insert(keyGen);
        await todoTable.ExecuteAsync(addKeyOperation);
    }

    var rowKey = keyGen.Key;

    var dataToInsert = new TodoTableEntity
    {
        Title = data.Title,
        Description = data.Description,
        IsCompleted = data.IsCompleted,
        PartitionKey = data.Title[0].ToString(),
        RowKey = rowKey.ToString()
    };

    // Bump the key so the next request gets a fresh RowKey.
    keyGen.Key += 1;
    var updateKeyOperation = TableOperation.Replace(keyGen);
    await todoTable.ExecuteAsync(updateKeyOperation);

    var addEntryOperation = TableOperation.Insert(dataToInsert);
    await todoTable.ExecuteAsync(addEntryOperation);

    return new OkObjectResult(rowKey);
}

As observed in the code above, we have introduced a new parameter, keyGen, which binds to the Key entity in the same todos table. We use the binding to specify the PartitionKey and RowKey that fetch the entity for us.
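One piece the snippets above do not show is the TodoKey entity itself. The original code omits it, but a minimal definition consistent with the binding and the usage would be:

public class TodoKey : TableEntity
{
    // The next RowKey value to hand out.
    public int Key { get; set; }
}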

We use the current key as the RowKey for the new entity, and then increment it for the next request. (A caveat worth noting: this simple counter is not safe under concurrent requests, though the ETag check performed by the Replace operation offers some protection.) The resulting table storage would look like the following.

In this example, we have created a simple Create operation for Azure Table Storage. We will explore Azure bindings and the rest of the CRUD operations further along in this series, but I hope this provides a good starting point for learning Azure Storage with Azure Functions.

Custom Traits in xUnit

One of the implicit characteristics that define the readability of any set of unit test cases is the ability to group them by multiple factors.

NUnit uses the CategoryAttribute, while MSTest uses the TestCategoryAttribute for grouping tests. With xUnit, you can make use of the TraitAttribute to achieve this. However, this approach is not short of problems of its own.
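For a quick side-by-side, the equivalent grouping attribute in each framework looks like this:

[Category("Feature")]          // NUnit
[TestCategory("Feature")]      // MSTest
[Trait("Category", "Feature")] // xUnit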

The most inconvenient part of using Traits is that they are basically name-value pairs of strings. This makes them vulnerable to typos, and a headache when you decide to change a name. Luckily, xUnit provides us with an easy-to-use extensibility point.

ITraitAttribute and ITraitDiscoverer

You can create your own custom traits to decorate your test cases with. For the sake of example, let us create two attributes, FeatureAttribute and BugAttribute, which will be used to categorize test cases for features and bugs.

[TraitDiscoverer(FeatureDiscoverer.TypeName, TraitDiscovererBase.AssemblyName)]
public class FeatureAttribute : Attribute, ITraitAttribute
{
    public string Id { get; set; }
    public FeatureAttribute(string id) => Id = id;
    public FeatureAttribute() { }
}

The attribute implements the ITraitAttribute interface and has a property to indicate the Id of the feature. What is more interesting is the TraitDiscoverer attribute. It isn't sufficient to have the attribute; it also needs to be discovered by the Test Explorer, and this is where the TraitDiscoverer comes into play.

The attribute accepts two parameters: the fully qualified name of the discoverer associated with the FeatureAttribute, and the assembly which defines it. The discoverer for Feature is defined as follows.

public class FeatureDiscoverer : TraitDiscovererBase, ITraitDiscoverer
{
    public const string TypeName = TraitDiscovererBase.AssemblyName + ".Helpers.CustomTraits.FeatureDiscoverer";

    protected override string CategoryName => "Feature";

    public override IEnumerable<KeyValuePair<string, string>> GetTraits(IAttributeInfo traitAttribute)
    {
        yield return GetCategory();
        var id = traitAttribute.GetNamedArgument<string>("Id");
        if (!string.IsNullOrEmpty(id))
        {
            // Yield the ("Feature", id) pair, matching the plain Trait usage shown below.
            yield return new KeyValuePair<string, string>(CategoryName, id);
        }
    }
}

public class TraitDiscovererBase : ITraitDiscoverer
{
    public const string AssemblyName = "Nt.Infrastructure.Tests";
    protected const string Category = nameof(Category);
    protected virtual string CategoryName => nameof(CategoryName);

    protected KeyValuePair<string, string> GetCategory()
    {
        return new KeyValuePair<string, string>(Category, CategoryName);
    }

    public virtual IEnumerable<KeyValuePair<string, string>> GetTraits(IAttributeInfo traitAttribute)
    {
        return Enumerable.Empty<KeyValuePair<string, string>>();
    }
}

The discoverer needs to implement ITraitDiscoverer, which has a single method, GetTraits, returning a collection of key-value pairs. That's all you need. Now you can decorate your test cases as follows.

[Theory]
[MemberData(nameof(CreateMovieTest_ResponseStatus_200_TestData))]
[Feature("1523")]
public async Task CreateMovieTest_ResponseStatus_200(CreateMovieRequest request, CreateMovieResponse expectedResult)
{
    // Test Case
}

The above is a whole lot cleaner than the following.

[Trait("Category","Feature")]
[Trait("Feature","1523")]

Roslyn Analyzer: Analyzing Comments

One of the things you might encounter quite early while writing a Roslyn code analyzer for comments is that it is slightly different from detecting syntax nodes. In fact, a comment is not a node at all.

Syntax nodes represent syntactic constructs, for example declarations, statements, clauses, and expressions. A comment does not quite fall in the same category; it is rather a syntax trivia. Syntax trivia are, as Microsoft states, largely insignificant components such as whitespace, comments, and preprocessor directives. Because of this, they are not included as child nodes of the syntax tree; instead, they are attached to tokens. Since they can still be important in their own ways, they remain part of the syntax tree.

For the same reason, you need to register a syntax tree action using RegisterSyntaxTreeAction, rather than a syntax node action using RegisterSyntaxNodeAction. For analyzing comments, a typical Initialize() method would look like the following.

public override void Initialize(AnalysisContext context)
{
    // Analyze generated code too; use GeneratedCodeAnalysisFlags.None instead to skip it.
    context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.Analyze);
    context.EnableConcurrentExecution();

    context.RegisterSyntaxTreeAction(AnalyzeComment);
}

The next step involves picking the SingleLineCommentTrivia and MultiLineCommentTrivia trivia out of the syntax tree. You can achieve this with a simple LINQ query.

private void AnalyzeComment(SyntaxTreeAnalysisContext context)
{
    SyntaxNode root = context.Tree.GetCompilationUnitRoot(context.CancellationToken);
    var commentTrivias = root.DescendantTrivia()
                            .Where(x => x.IsKind(SyntaxKind.SingleLineCommentTrivia) || x.IsKind(SyntaxKind.MultiLineCommentTrivia));

    // Rest of the code
}

That's all you need. Didn't that turn out to be quite easy? I wish there were an easy way to parse the actual comment text out of the trivia; unfortunately, I haven't found one yet (not sure if one exists). At the moment, the most likely way is to use the ToString() method. This would, in fact, include the comment characters as well, which you can strip out using a Regex or simple string transformations.
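For instance, a rough sketch of the string-transformation approach (the trimming here is naive and purely illustrative):

foreach (var trivia in commentTrivias)
{
    // ToString() yields the raw trivia text, comment markers included.
    var raw = trivia.ToString();
    var text = trivia.IsKind(SyntaxKind.SingleLineCommentTrivia)
        ? raw.TrimStart('/').Trim()             // strip the leading "//"
        : raw.Trim('/', '*', ' ', '\r', '\n');  // strip the "/*" and "*/"
    // text now holds (roughly) the bare comment body.
}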