Azure WebJob with Azure Queue


A cron job is an essential part of complex systems, executing a certain script or program at a specific time interval. Traditionally, a developer or system administrator creates a Windows Scheduled Task to execute the scheduled job within the operating system.

In one project, I used to have multiple .exe programs scheduled to update the production database during midnight for various use cases, such as expiring user credit. This gets the job done within the application context, but it is not the cleanest approach when my system administrator needs to take care of 20 other cron jobs coming from different machines and different operating systems.

The next thing I implemented was to expose a cron job through a WCF API endpoint. For example, I opened a WCF API endpoint to trigger the functionality of sending email notifications. This endpoint maps users’ saved criteria against business inventory on a daily basis. (Yes, this is the annoying 9.00AM email spam notification you get every day. Sorry!) The WCF API endpoint does not do anything if no one hits it. It is a simple HTTP endpoint waiting for something to tell it to get up and work.

The reason to expose the cron job as a WCF API endpoint is to allow my system administrator to have a centralized system to trigger and monitor all the cron jobs in one place, rather than logging into multiple servers (operating systems) to monitor and troubleshoot. This works alright, except that now I have my cron job stuck in a WCF project instead of a simple script or a lightweight .exe program.

Azure WebJob

The next option is Azure WebJob. Azure WebJob enables me to run programs or scripts in a web app context as background processes. It runs and scales as part of Azure Web Apps. With Azure WebJob, I can now write my cron job as a simple script or a lightweight .exe rather than a WCF service. My system administrator also gets a centralized interface, the Azure Portal, to monitor and configure all the cron jobs. In fact, it’s pretty cool that I can trigger a .exe program through a public HTTP URL using the Web Hook property in WebJob.

Azure WebJob goes beyond the traditional (timer-based) cron job definition. Azure WebJobs can run continuously, on demand, or on a schedule.

The following file types are accepted:

  • .cmd, .bat, .exe (using Windows cmd)
  • .ps1 (using PowerShell)
  • .sh (using Bash)
  • .php (using PHP)
  • .py (using Python)
  • .js (using Node)
  • .jar (using Java)

I will use C#.NET to create a few .exe programs to demonstrate how Azure WebJob works.

Prerequisites (nice to have)

Preferably, you should have the Microsoft Azure SDK installed on your machine. At the time of writing, I’m running Visual Studio 2015 Update 3, so I have the SDK for VS2015 installed using the Web Platform Installer.

Note that this is NOT a must-have. You can still write your WebJob in any of the above-mentioned languages and upload your job manually in the Azure Portal. I recommend installing the SDK because it makes your development and deployment much easier.

Working with Visual Studio

If you have the Microsoft Azure SDK installed for Visual Studio, you will see the Azure WebJob template. We are going to start with a simple Hello World program.

Your WebJob project will be pre-loaded with some sample code. As usual, you will still need to build it once for NuGet to resolve the packages.

The following packages are part of your packages.config if you started the project with the Azure WebJobs template. No big deal if you didn’t; you can install them manually, although it’s a little tedious.

For now, we will ignore the fancy built-in SDK support for various Azure services. We will create a simple Hello World program and deploy it to Azure to get an end-to-end experience.

Remove everything else so that you are left with a Program.cs writing a simple message:
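The original listing was a screenshot; here is a minimal sketch of what that Program.cs might look like (the namespace and message are illustrative):

```csharp
using System;

namespace HelloWebJob
{
    public class Program
    {
        public static void Main()
        {
            // Anything written to stdout shows up in the WebJob log in the Azure Portal.
            Console.WriteLine("Hello World from Azure WebJob at {0:u}", DateTime.UtcNow);
        }
    }
}
```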

Go to your Azure Portal. Click on Get publish profile to download your publishing profile. You will need to import it into Visual Studio later when publishing your WebJob.

Import your publish profile into your project in Visual Studio. The import dialog will appear the first time you publish your project as a WebJob.

Right-click on your project and select “Publish as Azure WebJob”; you will see the following dialog to set up your publishing profile.

Import the publishing profile settings file you downloaded earlier into your WebJob project in Visual Studio.

Validate your connection.

Click on “Publish” to ship your application to Azure.

Upon successful publishing, you should see the message “Web App was published successfully…”

Go to your Azure portal and verify that your WebJob is indeed listed.

Select your WebJob and click on the Logs button on top to see the following page.

The impressive part about using WebJob in Azure is the following WebJobs monitoring page. You can use this page to monitor the status of multiple WebJobs and drill down into the respective logs. No extra cost, coding, or configuration; it all works out of the box!

Now we have our first Hello World application running in Azure. We have deployed our WebJob to run continuously, which means it will get triggered automatically every 60 seconds. Once a round is completed, the status changes to PendingRestart while it waits for the next 60 seconds to kick in.

The WebJob SDK sample project on GitHub comprehensively demonstrates how you can work with WebJob through Azure Queue, Azure Blob, Azure Table, and Azure Service Bus. In this article, we will do a little bit more coding by using WebJob to interact with Azure Queue.

Azure WebJob with Azure Queue Storage

The Microsoft.Azure.WebJobs namespace provides QueueTriggerAttribute. We will use it to trigger a method in our WebJob.

It works like this: whenever a new message is added into the queue, the WebJob is triggered to pick up the message from the queue.

Before we continue with our code, we first need to create an Azure Storage account to host the queue. Here, I have a storage account named “danielfoo”.

We will use the Microsoft Azure Storage Explorer to get a visual on our storage account. It’s a handy tool to visualize your data. If you do not have it, no worries; just picture the queue messages in your mind 🙂

Let’s add a new console application project to our solution to put some messages in our queue.

There are two packages that you’ll need to install into your queue project:

We will write the following simple code to initialize a queue and put a message into it.
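The original listing was a screenshot. Here is a sketch of what the generator might look like, assuming the classic WindowsAzure.Storage client and the queue name “danielqueue” used later in this article (the message text and config key are illustrative):

```csharp
using System;
using System.Configuration;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

namespace MessageGenerator
{
    public class Program
    {
        public static void Main()
        {
            // StorageConnectionString is read from app.config (key name is an assumption;
            // use whatever key you configured).
            CloudStorageAccount account = CloudStorageAccount.Parse(
                ConfigurationManager.AppSettings["StorageConnectionString"]);

            CloudQueueClient client = account.CreateCloudQueueClient();
            CloudQueue queue = client.GetQueueReference("danielqueue");

            queue.CreateIfNotExists();   // create the queue on first run
            queue.AddMessage(new CloudQueueMessage("Hello from MessageGenerator!"));

            Console.WriteLine("Message dropped into danielqueue.");
        }
    }
}
```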

Of course, you will have to configure your StorageConnectionString in app.config for the code to pick up the connection string.

You can get your account name and key from Azure Portal.

Let’s execute our console application to test whether our queue is created and whether a message is placed into the queue properly.

After execution, look at Storage Explorer to verify that the message is already in the queue.

Now we will dequeue this message so that it will not interfere with the actual QueueTrigger in our exercise later.

Next, we will create a new WebJob project that gets triggered whenever a message is added into the queue, using QueueTriggerAttribute from the Microsoft.Azure.WebJobs namespace.

This time we neither remove Functions.cs nor modify Program.cs.

Make sure that the method parameter in Functions.cs uses the same queue name as the one you defined earlier in your Queue.MessageGenerator project. In this example, we are using the name “danielqueue”.
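For reference, a Functions.cs along the lines of the WebJobs SDK template might look like this (the queue name is the one used in this article):

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Triggered whenever a new message appears on "danielqueue".
    // The queue name in the attribute must match the one used by
    // the message generator project.
    public static void ProcessQueueMessage(
        [QueueTrigger("danielqueue")] string message, TextWriter log)
    {
        log.WriteLine(message);
    }
}
```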


Remember to fill in the following connection strings in your App.config. This is how the WebJob knows which storage account to monitor.
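The original screenshot is not available; the WebJobs SDK conventionally expects two connection strings along these lines (account name and key are placeholders):

```xml
<connectionStrings>
  <!-- AzureWebJobsDashboard: where run/invocation logs are written.
       AzureWebJobsStorage:   the storage account whose queues are monitored. -->
  <add name="AzureWebJobsDashboard"
       connectionString="DefaultEndpointsProtocol=https;AccountName=[account];AccountKey=[key]" />
  <add name="AzureWebJobsStorage"
       connectionString="DefaultEndpointsProtocol=https;AccountName=[account];AccountKey=[key]" />
</connectionStrings>
```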

Now, let’s start the WebJob.QueueTrigger project as a new instance and let it wait for a new message to be added into “danielqueue”.

Then, we start the Queue.MessageGenerator project as a new instance to drop a message into the queue for WebJob.QueueTrigger to pick up.

Yes! Our local debug session has detected that a new message was added into “danielqueue” and hence hit the ProcessQueueMessage function.

Let’s publish WebJob.QueueTrigger to Azure to see it processing the queue message in the Azure context instead of on the local machine. After successful publishing, we now have 2 WebJobs.

Select QueueTrigger (the WebJob we just published) and click on the Logs button on top. You will see the following log of queue message processing.

If you drill down into a particular message, you will be redirected to the Invocation Details page.

We have just set up our WebJob to work with Azure Queue!

That wraps up everything I want to show you in working with Azure WebJob and Azure Queue.

Obviously, in reality you will write something more complex than simply outputting a log entry in your WebJob. You may write logic to perform a certain task, or even use it to trigger another, more complex job sitting in another service.

In the queue, you obviously wouldn’t write a plain “message” like I did either. You will probably create one queue for a very specific purpose. For example, you might create a queue to store a list of IDs, where each ID is required for another type of process such as indexing. The consumer then indexes the entities (represented by the IDs) in batches (let’s say 4 messages at a time) instead of creating a large surge of load in a short period of time.

Few more thoughts…

  1. By default, JobHostConfiguration.QueuesConfiguration.BatchSize handles 16 queue messages concurrently. I recommend overriding the default with a smaller value (say, 4) to ensure the other end, which does the heavier processing (for example indexing a document in Solr or Azure Search), is able to handle the load. The maximum value for BatchSize is 32. If having the WebJob handle 32 messages at a go is not sufficient for you, you can further tweak performance by setting a short JobHostConfiguration.QueuesConfiguration.MaxPollingInterval so that you do not accumulate too many messages before processing kicks in.
  2. If for whatever reason you have maxed out the built-in configuration (such as BatchSize and MaxPollingInterval) and it is still not good enough, a quick win is to scale up your Web App. Note that you cannot scale your WebJob alone, because a WebJob sits within the context of a Web App. If scaling up a Web App for the sake of a WebJob sounds inefficient, consider migrating your jobs to a Worker Role.
  3. WebJobs are good for lightweight processing: tasks that only need to run periodically, on a schedule, or on a trigger. They are cheap and easy to set up and run. Worker Roles are good for more resource-intensive workloads, or when you need to modify the environment they run in (for example, the .NET Framework version). Worker Roles are more expensive and slightly more difficult to set up and run, but they offer significantly more power when you need to scale. There is a pretty comprehensive blog post by Kloud comparing WebJobs and Worker Roles.
  4. Azure Storage Queue gives no guarantee on message ordering. In other words, a message placed into the queue first does not necessarily get processed first. Delivery for Azure Storage Queue is At-Least-Once but not At-Most-Once, meaning a message can potentially be processed more than once, so your application code needs to handle duplicate processing. If this troubles you, consider Service Bus Queue instead: ordering is First-In-First-Out (FIFO) and delivery is both At-Least-Once and At-Most-Once. If you are wondering why people still use Azure Storage Queue, it is because Storage Queue is designed for super-large-scale queuing. For example, the maximum queue size for Storage Queue is 200 TB while Service Bus Queue is 1 GB to 80 GB; the maximum number of queues for Storage Queue is unlimited while Service Bus Queue allows 10,000. For a complete comparison, please refer to the Microsoft docs.
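The BatchSize and MaxPollingInterval tuning mentioned in point 1 can be sketched as follows (WebJobs SDK 2.x-style API; the values are illustrative):

```csharp
using System;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();

        // Handle 4 queue messages concurrently instead of the default 16,
        // so the heavier downstream processing can keep up.
        config.Queues.BatchSize = 4;

        // Poll more frequently so messages do not accumulate between polls.
        config.Queues.MaxPollingInterval = TimeSpan.FromSeconds(10);

        new JobHost(config).RunAndBlock();
    }
}
```

RunAndBlock() keeps the host alive, so this belongs in the Main() of a continuous WebJob.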

I hope you have enjoyed reading this article. If you find it useful, please share it with your friends who might benefit from this. Cheers!

Working with Azure Search in C#.NET


What is Azure Search

Azure Search is a service in the Microsoft Azure Cloud Platform providing indexing and querying capabilities. It is intended to give developers complex search capabilities for mobile and web development while hiding infrastructure requirements and search algorithm complexities.

I like the idea of having an index service provider as part of my solution, as it allows developers to perform searches quickly and effortlessly without going through the pain of writing essay-length SQL queries.

The key capabilities of Azure Search include scalable full-text search over multiple languages, geo-spatial search, filtering and faceted navigation, type-ahead queries, hit highlighting, and custom analyzers.


Why Azure Search (or index provider in general)

  1. It’s Brilliant – Having an index provider sitting between the application and the database is a great way to shield the database from unnecessary or inefficient read requests. This reserves database I/O and computing power for operations that truly matter.
  2. It’s Modern – The traditional search approach requires developers to write essay-length SQL queries to retrieve simple aggregated data. An index provider such as Azure Search allows developers to issue a search request without writing complicated search algorithms (for example geo-spatial search) and without complex SQL queries (for example faceted navigation).
  3. It Scales – The solutions that benefit the most from an index provider are relatively large enterprise systems, which often require scaling to a certain extent. Scaling Azure Search in the Microsoft Azure Cloud Platform is a few clicks away, compared to running an on-premise provider such as Solr.
  4. It’s FREE – Microsoft Azure provides a free tier for Azure Search. Developers can create and delete Azure Search services for development and self-learning purposes.

Use Case

A car classified site is a good example of a system that can make use of an index service provider such as Azure Search. I will use a local car classified site to illustrate several features the system can potentially tap into.

Type-ahead queries allow developers to implement auto-suggestions as the user types. In the following example, as I was typing “merce”, the site returned a list of Mercedes-Benz car models that I might be interested in.


Facets allow developers to retrieve aggregated data (such as car make, model, type, location, price range, etc.) without writing complex SQL queries, which also means saving read load on the database. In the following example, the site returns a list of Mercedes-Benz models with a count in brackets indicating how many classified ads are available for each model.


Filters allow developers to retrieve documents that fit the search criteria without writing complex SQL queries full of endless INNER JOIN and WHERE clauses. In the following example, I specified that I want all used Mercedes-Benz cars, model E-Class E200, variant Avantgarde, from Kuala Lumpur, in the price range of RM100,000 to RM250,000. You can imagine the kind of INNER JOIN and WHERE clauses the poor developer would have to construct dynamically if this were retrieved from a database directly.


Another feature the car classified site could tap into is geo-spatial search, although it does not appear to be implemented. For example, if I were to search for a specific car at a dealership, the portal could suggest similar cars from other dealerships near the one I’m looking at. That way, when I make a trip to visit a dealership, I can also visit other nearby dealerships that have similar cars.

Using C#.NET to work with Azure Search

Let’s roll up our sleeves and hack some code. I will be using a C#.NET console application to illustrate how we can design and create an index, upload documents into the index, and perform several types of searches on it. This solution simulates some of the code a car classified portal would potentially need.

First, we create a solution named AzureSearchSandbox.

We will need the “Microsoft.Azure.Search” NuGet package. Open your Package Manager Console and run `Install-Package Microsoft.Azure.Search`.

Upon successful installation, you will see several NuGet packages added to the packages.config file in your solution.

Note that you only need to install “Microsoft.Azure.Search”; the other packages are dependencies and are resolved automatically.

In your solution, add a new class, Car.cs.

This Car object will be used to represent your Azure Search document.
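The original Car.cs was a screenshot. Here is a sketch of what such a document class might look like, using the attribute-based field definitions from the Microsoft.Azure.Search SDK (the field names are illustrative assumptions):

```csharp
using System.ComponentModel.DataAnnotations;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// Serialize property names as camelCase to match Azure Search conventions.
[SerializePropertyNamesAsCamelCase]
public class Car
{
    [Key]
    public string Id { get; set; }

    [IsSearchable]
    public string Make { get; set; }

    [IsSearchable]
    public string Model { get; set; }

    // Filterable and facetable so it can drive faceted navigation.
    [IsFilterable, IsFacetable]
    public string Category { get; set; }

    [IsFilterable, IsSortable]
    public int Price { get; set; }
}
```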

Next, we create a static Helper.cs class to take care of initializing the index in Azure Search.

Initialize() is the main method to kick-start our Azure Search index. Unlike on-premise index services such as Solr that require a certain amount of setup, it doesn’t take long to have our index up and running. For Solr, I had to install Solr using NuGet, install the right Java version, set the environment variable, and finally create a core in the Solr admin portal. With Azure Search, no upfront setup is required.

The index is created in the CreateIndex() method, where we tell the Azure Search client SDK that we want an index with the fields we define in our Index object.

To ensure this code runs against a fresh index, the DeleteIfIndexExist() method removes any previous index. We call it right before the CreateIndex() method.
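The original Helper.cs was a screenshot. A sketch of the Initialize() / CreateIndex() / DeleteIfIndexExist() trio described above might look like this (the index name is an assumption; the API calls are from the Microsoft.Azure.Search SDK):

```csharp
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

public static class Helper
{
    private const string IndexName = "car-index";   // illustrative name

    public static ISearchIndexClient Initialize(string serviceName, string apiKey)
    {
        var serviceClient = new SearchServiceClient(serviceName, new SearchCredentials(apiKey));
        DeleteIfIndexExist(serviceClient);
        CreateIndex(serviceClient);

        // The index client is what we use for uploading and searching documents.
        return serviceClient.Indexes.GetClient(IndexName);
    }

    private static void DeleteIfIndexExist(SearchServiceClient serviceClient)
    {
        // Start from a fresh index on every run.
        if (serviceClient.Indexes.Exists(IndexName))
            serviceClient.Indexes.Delete(IndexName);
    }

    private static void CreateIndex(SearchServiceClient serviceClient)
    {
        // FieldBuilder derives the index fields from the attributes on Car.
        var definition = new Index
        {
            Name = IndexName,
            Fields = FieldBuilder.BuildForType<Car>()
        };
        serviceClient.Indexes.Create(definition);
    }
}
```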

Next, we add a new class, Uploader.cs, to deal with the documents we are about to upload into our Azure Search index.

PrepareDocuments() is a simple method that constructs a list of dummy Car objects for our searches later on.

The Upload() method gets the dummy Car objects from PrepareDocuments() and passes them to the Azure Search client SDK to upload the documents into the index in a batch. Note that we added a 2000-millisecond sleep to allow the service to upload and process the car documents properly before moving on to the next part of the code, which is search. In practice, however, we would not add sleep time to our upload code; instead, the component that takes care of searching should expect that the index is not available immediately. We also catch IndexBatchException to handle the case where the batch upload fails. In this example, we merely output the failed keys; in practice, we should implement a retry or at least log the failed index operations.
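The original Uploader.cs was a screenshot. A sketch of the flow described above, assuming Car has Id, Make, Model, Category and Price properties (the dummy data is made up):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

public static class Uploader
{
    public static List<Car> PrepareDocuments()
    {
        // Dummy data for the searches later on.
        return new List<Car>
        {
            new Car { Id = "1", Make = "Perodua", Model = "Myvi", Category = "Hatchback", Price = 45000 },
            new Car { Id = "2", Make = "Perodua", Model = "Bezza", Category = "Sedan", Price = 40000 },
            new Car { Id = "3", Make = "Mercedes-Benz", Model = "E200", Category = "Sedan", Price = 250000 }
        };
    }

    public static void Upload(ISearchIndexClient indexClient)
    {
        var batch = IndexBatch.Upload(PrepareDocuments());
        try
        {
            indexClient.Documents.Index(batch);
        }
        catch (IndexBatchException e)
        {
            // In production: retry or log instead of merely printing the keys.
            Console.WriteLine("Failed to index: {0}",
                string.Join(", ", e.IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key)));
        }

        // Give the service a moment to make the documents searchable.
        // (In real code, do not Sleep; let the search side tolerate the lag.)
        Thread.Sleep(2000);
    }
}
```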

Once the index upload operation is completed, we add another class, Searcher.cs, to take care of the search capability.

The SearchDocuments() method handles searching on the index we created earlier. No fancy algorithms, just passing specific instructions to the Azure Search client SDK on what we are looking for and displaying the results. In this method, we cover simple text search, filters, and facets. The Azure Search client SDK provides many more capabilities; feel free to explore the SearchParameters and response objects on your own.
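The original Searcher.cs was a screenshot. A sketch of a SearchDocuments() covering text search, filters and facets (the method signature is an assumption; the parameter shapes follow the Microsoft.Azure.Search SDK):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

public static class Searcher
{
    public static void SearchDocuments(ISearchIndexClient indexClient,
        string searchText, string filter = null, IList<string> facets = null)
    {
        var parameters = new SearchParameters
        {
            Filter = filter,   // e.g. "category eq 'Hatchback'"
            Facets = facets    // e.g. new[] { "category" }
        };

        DocumentSearchResult<Car> results =
            indexClient.Documents.Search<Car>(searchText, parameters);

        foreach (var result in results.Results)
            Console.WriteLine("{0} {1} - {2}",
                result.Document.Make, result.Document.Model, result.Document.Price);
    }
}
```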

Putting them all together in Program.cs

First, we define the index service name and API key to create a search index client instance. The instance is returned by Helper.Initialize(). We will make use of this instance for both upload and search later.

After initializing the index, we call the Upload() method to upload some dummy car documents to the index.

Next, we perform the following searches:

  1. Simple text search. We will search for the text “Perodua” in the documents.

The result is as follows: the Azure Search index returns 2 documents which contain the keyword “Perodua”.


2. Using a filter: a more targeted and efficient way to look for documents. In the following example, we first look for documents whose Category field equals ‘Hatchback’; second, we look for documents whose Price field is greater than 100,000 and whose category is ‘Sedan’. More details on the filter expression syntax are available in the Azure Search documentation.

The result is as follows: a car in the Hatchback category, and a car that costs more than 100,000 and is a Sedan.


3. Searching facets. With facets, developers no longer need to write long queries combining Count() and endless GROUP BY clauses.

If we had a traditional database table representing the dummy car data, this would be equivalent to “SELECT Category, Count(*) FROM Car GROUP BY Category”.

The result is as follows:


This example might not look like a big deal if you have a small data set or a simple data structure. Facets are super handy and fast when you have a large amount of data and your queries get more complex. The ability to define which facets are required in C# code makes the “query” much cleaner and easier to maintain.

One last thought…

You can clone the above source code from my GitHub repo. I will be happy to accept a Pull Request if you want to demonstrate how to make use of other capabilities of the Azure Search client SDK. Remember to change the API key and service name before compiling. Have fun hacking Azure Search!

Entity Framework vs Stored Procedure


When a developer queries SQL Server for data, Entity Framework (EF) and Stored Procedures (SP) are two of the most common options. Individual preference is often a matter of debate.

In an existing system I am working on, there are about 40+ stored procedures implemented.

Stored procedures add complexity for maintenance. For example, you need to ensure a certain version of an SP is compatible with a certain version of the code. During deployment, you need to deploy code + SPs instead of just code; imagine rolling back changes if your deployment fails. Another factor is debugging: debugging code is obviously more pleasant than debugging an SP.

I prefer to use my database purely as a medium of storage. I like to keep the database as simple and clean as possible and offload other work to external components or services that are best at what they are designed for. For example, if I need fast searching, I prefer an index service such as Solr or Azure Search with well-designed facets, rather than building up big indexes that add storage overhead to the database. If I need to process business logic, I do it in the application, because it is much easier to scale an application than a database.

Having said that, SPs have their advantages: being pre-compiled with a cached execution plan, they have a performance edge. The reusability factor of SPs is debatable, because if the code is structured in a reusable manner, the code holding the database logic can be reused just like an SP.

Since the system I’m working on has a bunch of SPs (and the main argument for them is performance), out of curiosity I did a performance profiling comparing EF vs SP on an actual database.

Some initialization code…

Code to query the database using Entity Framework (with LINQ), which I re-implemented based on an existing stored procedure:

Code to query the database using a stored procedure:

And this is the query inside the stored procedure:
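The original listings were screenshots. As a rough sketch of the two call styles being profiled (the entity, context and stored procedure names are all hypothetical):

```csharp
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;

// Hypothetical entity and context; the real schema and SP differ.
public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public decimal Amount { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class OrderQueries
{
    private readonly ShopContext _context = new ShopContext();

    // Entity Framework + LINQ: the query is composed in C# and
    // translated into SQL by EF at runtime.
    public decimal TotalSpentViaEf(int customerId)
    {
        return _context.Orders
            .Where(o => o.CustomerId == customerId)
            .Sum(o => o.Amount);
    }

    // Stored procedure: the SQL lives in the database and EF only
    // materializes the scalar result.
    public decimal TotalSpentViaSp(int customerId)
    {
        return _context.Database
            .SqlQuery<decimal>("EXEC dbo.GetTotalSpent @CustomerId",
                new SqlParameter("@CustomerId", customerId))
            .Single();
    }
}
```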

Here are the results after 5 rounds of execution:

  1. Stored Procedure won
  2. Stored Procedure won
  3. Entity Framework won
  4. Entity Framework won
  5. Stored Procedure won

The overall winner is the Stored Procedure, which won 3 times while Entity Framework won 2 times.

A few interesting insights from the profiling:

  1. The Stored Procedure performed marginally better overall.
  2. Entity Framework is marginally slower, but not slow enough to make the Stored Procedure a clear winner.
  3. The first call for Entity Framework is significantly more expensive compared to the subsequent calls. If we were to exclude the initial call, EF would be a clear winner.

Choosing between Entity Framework and Stored Procedures might not be straightforward, especially if you have an existing application with a massive implementation in either approach. However, here are some guidelines to make your decision less painful.

  1. If you have an existing system with a massive implementation of either approach, stick with the existing approach.
  2. If your existing development team members have a strong background in stored procedures or Entity Framework respectively, continue to tap into their strengths.
  3. If you are starting a new project, go for Entity Framework for ease of development and good maintainability, although the cost is a slight performance downgrade.

Until next time!

“Attach to process…” tricks


When you are working on a large system, where your code base (project or solution) is only a subset of the entire system, you will often use “Attach to process…” to tap into the execution of the application pool for debugging.

“Attach to process…” might not be the most convenient approach for debugging; however, at times it can be more efficient than launching the whole application from your code base. There is still value in using “Attach to process…”, although this approach can be painful occasionally.

Here are a few tricks to make the debugging process less painful.

Trick No.1 – Identify your AppPool

When you are working with a more complex application, where your application has dependencies on other applications, you might have multiple w3wp.exe processes running, like the following:

Which process should you attach to?

All of them? Of course that would work, but obviously that’s not the kindest thing you can do to the poor machine’s memory and CPU…

The following command helps you identify the AppPool name and process ID, so that Visual Studio only attaches to the relevant w3wp.exe:

C:\Windows\System32\inetsrv>appcmd list wp

Based on the AppPool name, you now know the process ID you should attach to in Visual Studio.

Trick No.2 – ReAttach (Visual Studio extension)

Imagine you have to use “Attach to process…” for debugging in your project. You modify your code a little to change some logic or add some validation, and want to continue debugging. It is annoying to keep launching the “Attach to Process” dialog to select the w3wp.exe.

ReAttach is a Visual Studio extension that helps you attach to the process ID that you attached to a moment ago.

Download and install it, and you will have a new option in your Visual Studio menu, “Reattach to Process…”, alongside “Attach to Process…”.

The extension basically re-attaches your debugger to the process you attached to earlier. It continues to work as long as your process ID has not changed or disappeared.

If for whatever reason you need to run iisreset and your process ID changes, “Reattach to process…” helps by filtering out all the irrelevant processes and only showing you the available w3wp.exe processes.

Hope these 2 tricks help your debugging by attaching to your AppPool. Until next time!

Technical Interview Part 1


A technical interview is both an exciting and a stressful moment. It is exciting because there is a potential career opportunity ahead of you. It is stressful because you are subconsciously aware that the people in the room are there to judge you.

I have been on both sides of the table, as an interviewer and as an interviewee. It is stressful to be an interviewee for obvious reasons. We need to try hard to sell ourselves, and “sales” is not a skill that comes naturally to technical people like us. Furthermore, you never know what kind of psychopath you might meet, asking you to find a bug in his rocket-science algorithm. It is equally stressful to be an interviewer. Your shoulders carry the responsibility of evaluating whether a candidate will be the right fit for the organization in the long term. Be too lenient and you might let a bad apple into an existing harmonious team; be too strict and you might lose a black horse who just needs a little polishing.

As an interviewee

Let’s deal with the stress problem for the interviewee first. Throughout my experience, I have noticed I perform best when I’m not feeling nervous. The key to not feeling nervous is not to feel desperate for a job. Always look for a new job when you least need it. When you don’t “need” the job, you go into the interview room as an equal: low need, high power, and vice versa. Notice that the term “interview” basically means “viewing each other”. You go in as an equal to evaluate the company as much as the company is evaluating you. Having this mentality allows you to feel more confident. Again, from my personal experience, when I go into a technical interview with this mindset, I often have a pleasant technical discussion with the interviewer.

As an interviewer

Now for the interviewer. I’m not sure how many interviewers find it stressful. I did not feel that being an interviewer was stressful until my third year of conducting interviews. Interviewees are usually polite and humble, and most of the time they do their best not to offend or make things difficult for the interviewer. I always felt I had the upper hand while conducting interviews, hence I never thought there was a problem. It was only when I pulled myself out of the technical interviewer’s role and took a more holistic view from the organization’s perspective that I realized there are many other aspects I need to consider while conducting a technical interview.

For example, during one interview I found out that talking to me was the 7th round of interviews the candidate had gone through. He had taken an online technical test and other technical interviews prior to talking to me. My final feedback on the candidate was a clear ‘No’. However, that got me thinking: how and why was the candidate able to pass the previous 6 rounds of interviews but not my technical interview? Is there something wrong with the way I ask technical questions? Or does it simply mean the previous 6 rounds of interviews were not done effectively?

Another example: the organization has an expansion plan to grow by another 100 headcount in 1.5 years. That is approximately 6 new hires a month. Aggressive? Definitely! However, based on the current hiring rate, we will not hit that number. What needs to be done differently? Should I lower my technical benchmark? Should I say we can’t meet the number simply because we cannot find the talent? How big (or small) is the impact on the projects if we do not meet the numbers? Most importantly, where should I find the balance?

The software development skill set has both breadth and depth. Ideally, it would be perfect to pair an interviewer and an interviewee with identical technical domain experience. In reality, due to today’s technology breadth, developers often focus on very different vertical skill sets. For example, the interviewer might be an expert in Azure WebJob, Azure Storage, and MVC, while the interviewee has been working on Angular, Web API, and SQL Server. Both are experts in their respective full-stack domains, but there is very little common ground.

Let’s face it: neither the interviewer nor the interviewee can know every topic in great depth, even just within the Microsoft stack of technologies. How can a technical interview be conducted fruitfully given this breadth-and-depth nature? Do I dismiss candidates just because they don’t share a similar background with me, even though they are talented, passionate, and willing to learn?

What is the solution? Should the interviewer ask something more generic and academic, like object-oriented concepts? Something more holistic yet sophisticated, like design patterns? Or something more brainy, like algorithms?

Popular topics interviewers use

Object-oriented concepts

In my previous job, my technical interview was the 1st round of interviews after the candidate had passed a Codility test. The online test assessed the candidate’s basic programming skill (fixing a logical operator in a small function) and ability to write a basic SQL query with some joins. I thought it was necessary to cover the basics of object-oriented concepts for a C#.NET developer, so I ended up asking questions like:

  • Explain to me what are method overloading and method overriding?
  • What are the differences between interface and abstract class?

I was under the assumption that these questions were alright, until one day a candidate answered me so fluently it was as if he were reading out of a book – except he didn’t have a book in front of him. This suggested the candidate had rehearsed these answers a thousand times before talking to me.

The reality is, at first I thought the candidate was such a bright developer to know these concepts so well. I decided to give him a slightly more challenging question to see how far he could go. The question was based on what he had explained earlier: an abstract class can contain both methods with empty implementations and methods with concrete implementations, while an interface can only contain method signatures without implementation. Great!

My next question was: if an abstract class can hold both methods with empty implementations and methods with concrete implementations, why do we still need interfaces? I was expecting him to explain something along the lines of a child class only being able to inherit from one abstract class but implement multiple interfaces. I would have been happy to accept the answer and move on to the next topic even if he gave me a one-liner. To my surprise, he kept quiet and could not provide any explanation.
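The answer I was fishing for is easy to demonstrate in a few lines of C#. This is purely an illustrative sketch; the type names are made up for the example:

```csharp
public abstract class PaymentBase
{
    // Concrete implementation shared by all child classes.
    protected decimal Round(decimal amount) => decimal.Round(amount, 2);

    // "Empty" method that every child class must implement.
    public abstract decimal CalculateFee(decimal amount);
}

public interface ILoggable { void Log(string message); }
public interface INotifiable { void Notify(); }

// A class can inherit from only ONE abstract class,
// but it can implement ANY NUMBER of interfaces at the same time.
public class CardPayment : PaymentBase, ILoggable, INotifiable
{
    public override decimal CalculateFee(decimal amount) => Round(amount * 0.03m);
    public void Log(string message) { /* write to a log */ }
    public void Notify() { /* push a notification */ }
}

// public class Invalid : PaymentBase, AnotherAbstractClass { } // would not compile
```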

From there, I realized there are candidates who put a lot of effort into preparing for technical interviews, such as rehearsing answers to those “top 50 interview questions” from Google results. Ahem… the truth was, I was too lazy to craft any original interview questions back then, so I ended up using questions from those top 50 lists, which candidates can easily prepare for. The problem was that I ended up evaluating how much preparation a candidate had done rather than how strong his technical capability was. It was a bad idea to use the top 50 interview questions.

The top 50 interview questions

When you use those top 50 interview questions, not only can you not accurately assess the candidate, you will also push away the developers who really know their stuff. Remember, an interview is about the interviewer and the interviewee viewing each other. Under normal circumstances, a company will put up one of its best guys to conduct the interview. If the best guy in the company can only conduct an interview based on the top 50 interview questions, it would really make me think twice about joining when the company offers me a job.

In fact, I encountered this once. I was talking to an interviewer at an MNC who had prepared a long list of technical questions. We covered those questions in approximately 30 minutes instead of his usual 60 minutes. At one point, after he asked question A, I knew he would follow up with question B, so I explained the answer to question B along with the answer to question A. At the end of the interview, his feedback was that it was as if I already had the list of questions he was holding. The truth was, I had gone through those questions 5,629 times when I was preparing interview questions for my own candidates.

Eventually, I did not take up the offer from the MNC. Many factors influenced that decision. One of them was knowing that the best technical guy in the team could only do what I was doing 2 years ago; it wasn’t very motivating.

I have stopped using those top 50 interview questions. They are for amateurs 🙂

Design pattern

Design patterns have been a favorite topic for discussion during technical interviews in the past few years. The topic got so popular that even recruiters without a computer science background started asking candidates to explain design patterns. It took me by surprise when two HR-looking ladies (they were recruiters) asked me to explain the design patterns I have worked with. I got the feeling they did not understand 9 out of 10 sentences that came out of my mouth, because they never asked any follow-up questions based on what I explained. They probably just wanted to see how clearly I could articulate my ideas.

A design pattern is something you implement once and it becomes second nature in your project. Developers do not apply 7 patterns in one go, revisit them every 3 weeks to evaluate whether they are still appropriate, and, if they are not, revamp them and apply another 5 new patterns. This simply does not happen in any software with a real delivery timeline. Most developers will be working with 1-2 patterns on a daily basis. Again, this is a breadth and depth issue. The interviewer might be an expert in Adapter and Abstract Factory while the interviewee is an expert in Observer and Singleton. It is not always possible to have an in-depth discussion of all design patterns.

Shouldn’t a good developer know a few more patterns, at least on a theoretical level? Yes, I think that’s a valid point. However, there will still be a gap between the interviewer’s and interviewee’s levels of understanding. For example, the interviewer has been working with Adapter for the last 3 years while the interviewee has only read 3 articles on the Adapter pattern (or vice versa). The discussion between them on the Adapter pattern is going to be shallow.

The bad news is, some interviewers do not seem to recognize this breadth and depth gap, and insist on discussing rigid details of a specific design pattern. It ends up being an unpleasant experience for both parties: the interviewee feels inferior for not being able to provide an answer, while the interviewer feels dissatisfied because he cannot have a meaningful technical discussion to assess the interviewee’s technical skill.

The good news is, when design-pattern-based questions are done right, they give both the interviewer and the interviewee a good discussion, exploring areas neither of them might have thought of before as individuals.


Algorithm

This is a very safe approach to use during technical interviews, because all programmers are expected to have solid logical thinking. Algorithm questions are all about combining programming techniques and logical thinking to solve a specific problem. They are a very suitable way to assess an interviewee’s ability to solve a problem using code.

Interview questions based on algorithms can be as simple as writing a function to print a few asterisks (*) on the screen, to detect whether an input is an odd or even number, to sort a series of numbers, or to print a calendar. Usually, a company that uses algorithm-based questions will have 3 levels of questions: easy, medium and hard. If you want to secure the job, you should at least get the easy and medium ones right. The hard question is there for the interviewer to tell a grandmaster coder from a senior coder.

The ironic part about algorithm-based questions is that a lot of candidates tend to shy away from them.

Example 1: About 8 years back, it was still pretty common to have the candidate write down the solution on paper. The question was about a simple string manipulation function. Unfortunately, the candidate, who appeared to be an experienced developer, handed me an empty paper and left, saying in an apologetic tone that this job might not be right for him.

Example 2: One company I know of asks the candidate to code a function that detects whether an integer input is an odd or even number and displays an appropriate message, using a provided laptop with Visual Studio on it. The answer is surprisingly simple: use the modulus operator (%) and put an if check on the remainder. However, it took a candidate applying for a senior developer position 20 minutes of typing a few keystrokes and a few backspaces, then a few more keystrokes and a few more backspaces.
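For the record, the whole exercise boils down to something like this in C# (the exact wording of the company’s question may differ):

```csharp
using System;

class Program
{
    static string OddOrEven(int input)
    {
        // The modulus operator returns the remainder of a division;
        // a remainder of 0 when dividing by 2 means the number is even.
        return input % 2 == 0 ? "even" : "odd";
    }

    static void Main()
    {
        Console.WriteLine($"7 is {OddOrEven(7)}");   // prints "7 is odd"
        Console.WriteLine($"12 is {OddOrEven(12)}"); // prints "12 is even"
    }
}
```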

Example 3: Codility has been a handy tool for conducting programming tests online to save everyone’s time. I recently found out about a friend who applied for a Tech Lead position. He was asked in Codility to write a function working with a zero-based index. To my surprise, he could not understand the question. He did not even attempt to write the solution and closed the browser.

It appears that interviewees feel very stressed when the technical interview involves writing algorithms. In the above examples, the questions were not complicated and the answers were not complex. I believe all 3 candidates could have done reasonably well had they not been in “technical interview” mode.

In the next article, I will discuss how I conduct technical interviews instead…

Facebook Feed Widget in Sitecore Habitat


In the previous post we had an introductory view of what Sitecore Habitat is. In this post we will get our hands dirty creating a Facebook Feed Widget in Sitecore Habitat.

Out of the box, Sitecore Habitat contains a number of Features. One of these Features is “Social”, and we will use the Sitecore.Feature.Social project in the Habitat solution for the Facebook Feed Widget. Before we get started, let’s examine what is available in the Sitecore.Feature.Social project.


From here we can see that Social comes with an implementation for a Twitter feed. If we visit our Habitat instance, we can see a Twitter feed showing on the following page:


From here, we will create a Facebook feed to replace the Twitter feed.

First, we create a new data template to cater for the fields of the Facebook feed.



Then, we create an item in the Content Editor named FacebookDetails to set the actual properties for the Facebook feed. In this example, we are using the Sitecore Facebook page for demonstration purposes.



The complete properties are as follows:

Type: timeline
Width: 350
Height: 600
Hide Facebook Cover: [unchecked]
Hide Face Pile: [checked]
Title: Sitecore Facebook Feed

Type is a Droplist. The Facebook feed supports multiple types, namely timeline, events and messages. By having Type as a Droplist, the user can select which kind of feed to fetch from Facebook. For simplicity’s sake, you can also configure Type as a Single-Line Text; doing that, you will have to type the value into the Type field yourself each time you change the feed type. Not a show stopper, but the Droplist is something nice to have 🙂

Next, we will move to Visual Studio to tie the fields to a FacebookFeed View that we are about to create.

Look for Template.cs under the Sitecore.Feature.Social project and add the following struct into the class. Do note that the GUIDs in your instance will be different from the GUIDs displayed here if you are creating the fields yourself; hence, remember to replace the GUIDs in these properties. If you install the Sitecore package provided at the end of this post, the GUIDs will remain the same.

This is the code that “ties” to the fields we created earlier in the Content Editor. The next thing we will do is create a view that consumes these properties’ GUIDs.
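The struct in the screenshot is not reproduced here, but it follows the same pattern as the existing Twitter struct in Template.cs. As a sketch, with placeholder GUIDs that you must replace with the field IDs from your own instance:

```csharp
using Sitecore.Data;

public struct FacebookFeed
{
    public struct Fields
    {
        // Placeholder GUIDs only - replace with the IDs of the fields in your instance.
        public static readonly ID Type = new ID("{00000000-0000-0000-0000-000000000001}");
        public static readonly ID Width = new ID("{00000000-0000-0000-0000-000000000002}");
        public static readonly ID Height = new ID("{00000000-0000-0000-0000-000000000003}");
        public static readonly ID HideCover = new ID("{00000000-0000-0000-0000-000000000004}");
        public static readonly ID HideFacePile = new ID("{00000000-0000-0000-0000-000000000005}");
        public static readonly ID Title = new ID("{00000000-0000-0000-0000-000000000006}");
    }
}
```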

Add a new FacebookFeed View in the Sitecore.Feature.Social project under the Social folder.


Add the following code to construct the Facebook Feed widget.

Note that all the HTML attributes correspond to the fields we created earlier in the data template, and their values are the ones we filled into the FacebookDetails item in the Content Editor.
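The view code itself is shown in the screenshot; as a rough sketch (the exact Habitat helper calls may differ), it maps the template fields onto Facebook’s Page Plugin markup:

```cshtml
@model Sitecore.Mvc.Presentation.RenderingModel

<div class="fb-page"
     data-href="https://www.facebook.com/Sitecore"
     data-tabs="@Model.Item[FacebookFeed.Fields.Type]"
     data-width="@Model.Item[FacebookFeed.Fields.Width]"
     data-height="@Model.Item[FacebookFeed.Fields.Height]"
     data-hide-cover="@Model.Item[FacebookFeed.Fields.HideCover]"
     data-show-facepile="@Model.Item[FacebookFeed.Fields.HideFacePile]">
</div>
@* The real view must also load Facebook's JavaScript SDK, and map the Sitecore
   checkbox values ("1"/"") to the "true"/"false" strings the plugin expects. *@
```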

Now, compile the solution (or project). Ensure that FacebookFeed.cshtml and Sitecore.Feature.Social.dll are copied over to your Sitecore instance. Again, if you are installing the package provided at the end of this article, you will not need to copy the files over.

At this point, we have successfully created a Facebook Feed Widget. However, the widget has not been configured to display on any page of our instance yet. Now let’s create and configure the necessary items in the Content Editor for our Facebook Feed Widget.

Create a view rendering item to represent the View (.cshtml) we created earlier.



Add the FacebookFeed view rendering into the layout where we want to display the View. For simplicity’s sake, we use “col-wide-1”, as this layout is readily available on the Social page.



Next, we move to the Experience Editor to replace the Twitter Feed with the Facebook Feed Widget we have just created.

Navigate to Social page using Breadcrumb and remove the existing Twitter Feed.


Add the new Facebook Feed into the page. Check the “Open the Properties dialog box immediately” option:


Click on Browse under Data Source:


Select FacebookDetails and click OK.


On your browser, navigate to http://habitat.local/en/Modules/Feature/Social and the Facebook Feed Widget will appear on the page:


Obviously, this is the most perfect Facebook Feed Widget we can build (just kidding!). You are welcome to modify any of the steps to cater for your own Sitecore instance.

The objective of this article is to guide you through the steps of creating a Facebook Feed Widget in Sitecore Habitat. If you would like to have a Facebook Feed Widget in Sitecore Habitat without going through (most of) the steps, feel free to download and install the ready-made package.

Download Sitecore-Habitat-Facebook-Feed-Widget package.

Json Data in SQL Server


The rise of NoSQL databases such as mongoDB is largely due to the agility of storing data in a non-structured format. A fixed schema is not required, unlike in traditional relational databases such as SQL Server.

However, a NoSQL database such as mongoDB is not a full-fledged database system; it is designed for very specific use cases. If you don’t know why you need NoSQL in your system, chances are you don’t need it. Those who do find it essential often use a NoSQL database only for certain portions of their system, and another RDBMS for the remaining parts that have more traditional business use cases.

Wouldn’t it be nice if an RDBMS were able to support a similar data structure – the ability to store flexible data formats without altering database tables?

Yes, it is possible. For years, software developers have been storing various JSON data in a single table column. Developers then use a library such as Newtonsoft.Json within the application (data access layer) to deserialize the data and make sense of it.

Reading / Deserializing JSON

This works. However, the “JsonConvert.DeserializeObject” method has to work extremely hard, deserializing the whole JSON payload only to retrieve a simple field such as Name.

Imagine a requirement to search for certain Genres in a table that has 1 million rows of records: the application code will have to read all 1 million rows and then perform the filtering on the application side. Bad for performance. Now imagine a more complex data structure than the example above…
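To make the problem concrete, the application-side approach looks roughly like this with Newtonsoft.Json (the Movie shape with Name and Genres fields is an assumption for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;
using Newtonsoft.Json;

public class Movie
{
    public string Name { get; set; }
    public List<string> Genres { get; set; }
}

public static class MovieSearch
{
    // "rows" is every JSON string read back from the table column.
    // Every row's entire payload is deserialized just to filter on one field.
    public static IEnumerable<string> FindNamesByGenre(IEnumerable<string> rows, string genre)
    {
        return rows
            .Select(json => JsonConvert.DeserializeObject<Movie>(json))
            .Where(movie => movie.Genres != null && movie.Genres.Contains(genre))
            .Select(movie => movie.Name);
    }
}
```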

The search would be much more efficient if developers could pass a query (SQL statement) to the database to handle the filtering. Unfortunately, SQL Server does not support querying JSON data out of the box.

It was impossible to directly query JSON data in SQL Server until the introduction of a library known as JSON Select. JSON Select allows you to write SQL statements to query JSON data directly in SQL Server.

How JSON Select Works

First, you need to download the installer from their website. When you run the installer, you need to specify the database you wish to install the library into:


What the installer essentially does is create 10 functions in the database you targeted. You can see the functions at:

SSMS > Databases > [YourTargetedDatabase] > Programmability > Functions > Scalar-valued Functions


Next, you can start pumping some JSON data into your table to test it out.

I created a Student table with the following structure for my experiment:


In my StudentData column, I entered multiple rows of records in the following structure:
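The structure from the screenshot is not reproduced here; judging from the queries later in this post, each StudentData value would be along these lines (the field names are my assumption):

```json
{
  "Name": "Jane Tan",
  "Age": 21,
  "City": "Kuala Lumpur"
}
```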

To demonstrate the queries, I entered multiple rows as follows:


If you want to write a simple statement to read the list of student names from the JSON data, you can simply write:

You will get the following result in SSMS:


How about a more complex query? Does it work with aggregate functions?

If you want to find out how many students come from each city and what their average age is, you can write your SQL statement as follows:

You will get the following result in SSMS:


It appears the library allows you to query any JSON data in your table column using normal T-SQL syntax. The only difference is that you need to wrap the values you want to retrieve in the predefined scalar-valued functions.

A Few Last Thoughts…

  1. The good thing about this library is that it allows developers to have a hybrid storage model (NoSQL and relational) under one roof – minus the deserialization code at the application layer. Developers can continue using the classical RDBMS for typical business use cases and leverage the functions provided by the library to deal with JSON data.
  2. The bad thing about this library is that it lacks a proven track record and commercial use cases to demonstrate its robustness and stability.
  3. Although the library is not free, the license cost is relatively affordable at AU$50. The library is also free for evaluation.
  4. SQL Server 2016 provides native support for JSON data. This library is mainly useful for SQL Server 2005 to 2014, where upgrading to 2016 is not a feasible option.

Dependency Injection for Web Service


Dependency Injection allows developers to cleanly inject a portion of concrete code implementation into the bigger scope of the system based on certain logical conditions. It has become a preferred way of implementing code; in fact, ASP.NET 5 (Visual Studio 2015) supports Dependency Injection as a first-class citizen. Dependency Injection removes the need for tons of if-else statements in the code and keeps the implementation clean. Some developers implement Dependency Injection as a standard practice even when there isn’t a need for it yet, because it might be needed in the future. Personally, I prefer not to implement Dependency Injection unless there is a “good reason” for it.

One of the “good reasons” is designing a web service. A web service often serves multiple clients with various needs based on certain logical conditions, and it is the web service’s responsibility to handle the various logic implementations. For example, a web service serving clients from various countries on the same endpoint might need to execute different code to produce a localized result, depending on who is triggering the endpoint.

Consider this simple scenario:

We need to design a WCF web service that serves multiple countries on an eCommerce platform. There is an endpoint that accepts a Product Id and returns the formatted local price. The local price calculation depends on various factors such as tax, shipping, marketing promotions, and other business considerations that lower or raise the price for each country through a custom discount mechanism.


This is a good scenario for creating a WCF web service that implements Dependency Injection to separate the different calculation formulas for the respective countries.

In the sample code, we are using WCF (C#.NET) with Autofac as the Dependency Injection library.

Create a BaseService class. In the BaseService class, we define a static container to register and store a list of logic classes. In this example, IProduct is an interface that gets registered with different logic classes (UsProduct, UkProduct, MyProduct) depending on the logical condition.

We create a BaseService class for this purpose so that all the services in WCF can inherit BaseService and access the BuildContainer method, which is common to all services. In the following example, every public API method is required to build the container to initialize the logic classes.

Base service class
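The class in the screenshot is not reproduced here; as a textual sketch of the idea (the country codes and registration details are my assumptions, following the article’s description), BaseService looks something like:

```csharp
using Autofac;

public abstract class BaseService
{
    // Static container shared by all services inheriting BaseService.
    protected static IContainer Container { get; private set; }

    // Registers the concrete IProduct logic class for the requesting country.
    protected void BuildContainer(string country)
    {
        var builder = new ContainerBuilder();

        switch (country.ToUpperInvariant())
        {
            case "UK":
                builder.RegisterType<UkProduct>().As<IProduct>();
                break;
            case "MY":
                builder.RegisterType<MyProduct>().As<IProduct>();
                break;
            default: // "US" and anything else falls back to the US logic
                builder.RegisterType<UsProduct>().As<IProduct>();
                break;
        }

        Container = builder.Build();
    }
}
```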

Catalog service class

  1. Create a new Catalog service in WCF.
  2. Add a method GetProductPrice. Its UriTemplate is defined such that Country and ProductId are parameters constructed into the URL endpoint.
  3. It inherits BaseService and implements IProduct.
  4. BuildContainer is the method defined in the parent class.
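Put together, the service skeleton would look roughly like this (the UriTemplate string and contract attributes are assumptions based on the steps above):

```csharp
using Autofac;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public class CatalogService : BaseService
{
    // Country and ProductId are both part of the URL,
    // e.g. /CatalogService.svc/Us/GetProductPrice/1001
    [OperationContract]
    [WebGet(UriTemplate = "{country}/GetProductPrice/{productId}",
            ResponseFormat = WebMessageFormat.Json)]
    public string GetProductPrice(string country, string productId)
    {
        BuildContainer(country);                     // defined in BaseService
        var product = Container.Resolve<IProduct>(); // country-specific logic
        return product.GetProductPrice(productId);
    }
}
```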

Define the respective logic class for each country…

United State product logic class

UsProduct is also the parent class for the rest of the country classes. For example, the method GetProductBasePriceById is applicable to all country implementations, hence this method stays in the parent class (UsProduct) so that it can be accessed by the child classes (UkProduct and MyProduct).

United Kingdom product logic class

UkProduct inherits UsProduct and implements IProduct. GetProductBasePriceById can be accessed by UkProduct as UsProduct is its parent class.

Malaysia product logic class

MyProduct inherits UsProduct and implements IProduct. GetProductBasePriceById can be accessed by MyProduct as UsProduct is its parent class.
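The three logic classes from the screenshots can be sketched as follows; the tax multipliers and base price lookup are made-up stand-ins for the real calculations:

```csharp
public interface IProduct
{
    string GetProductPrice(string productId);
}

// United States logic; also the parent class holding the shared logic.
public class UsProduct : IProduct
{
    // Shared by all countries, so it lives in the parent class.
    protected decimal GetProductBasePriceById(string productId)
    {
        return 100m; // stand-in for the real database lookup
    }

    public virtual string GetProductPrice(string productId)
    {
        var price = GetProductBasePriceById(productId) * 1.07m; // made-up US tax
        return $"USD {price:0.00}";
    }
}

public class UkProduct : UsProduct, IProduct
{
    public override string GetProductPrice(string productId)
    {
        var price = GetProductBasePriceById(productId) * 1.20m; // made-up UK VAT
        return $"GBP {price:0.00}";
    }
}

public class MyProduct : UsProduct, IProduct
{
    public override string GetProductPrice(string productId)
    {
        var price = GetProductBasePriceById(productId) * 1.06m; // made-up MY GST
        return $"MYR {price:0.00}";
    }
}
```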

All the product logic classes implement the IProduct interface so that they can all be registered into the Autofac container builder. (The following code is part of the BaseService class shown earlier.)

By registering the interface with the appropriate concrete logic class (based on the country), Autofac builds a static container that lets the whole application know which concrete logic implementation to call.

If we put this to the test using Postman, we get the following results.

Product price for United State

Country is specified by replacing the {Country} parameter with “Us”. The result is returned based on the UsProduct logic class implementation.


Product price for United Kingdom

Country is specified by replacing the {Country} parameter with “Uk”. The result is returned based on the UkProduct logic class implementation.


Product price for Malaysia

Country is specified by replacing the {Country} parameter with “My”. The result is returned based on the MyProduct logic class implementation.


Some considerations…

  1. Performance of building a container and injecting dependencies at runtime instead of directly initializing concrete classes: from the above 3 examples, we can see each request completed within 15-16 ms. I ran a few more tests switching between countries, and most requests completed in less than 20 ms, which shows there isn’t any major overhead in building the container.
  2. What is another reason to implement Dependency Injection? Unit testing. With Dependency Injection in place, the code is much more testable. (If you currently have code that is not testable, consider using Shims under Microsoft.Fakes.) However, this reason is arguable, as we do not necessarily need Dependency Injection to test our code; all we need are appropriate interfaces in the code.

One last thought…

Dependency Injection is a clean way of separating code implementations that conform to a standard set of interfaces. With Dependency Injection, developers no longer need to select which code to execute using complex if-else statements. It makes the code base much cleaner and easier to run different code based on logical conditions. The drawback is that Dependency Injection makes debugging more complicated, as the developer first needs to figure out which class was injected at runtime. It is a recommended approach if you have a standard set of interfaces but need different code to be executed based on logical conditions.