Working with Azure Search in C#.NET


What is Azure Search

Azure Search is a service on the Microsoft Azure cloud platform that provides indexing and querying capabilities. Azure Search is intended to give developers sophisticated search capabilities for mobile and web development while hiding infrastructure requirements and search algorithm complexities.

I like the idea of having an index service provider as part of my solution, as it allows developers to perform searches quickly and effortlessly without going through the pain of writing essay-length SQL queries.

The key capabilities of Azure Search include scalable full-text search over multiple languages, geo-spatial search, filtering and faceted navigation, type-ahead queries, hit highlighting, and custom analyzers.


Why Azure Search (or an index provider in general)

  1. It’s Brilliant – Having an index provider sitting between the application and the database is a great way to shield the database from unnecessary or inefficient read requests. This reserves database I/O and computing power for the operations that truly matter.
  2. It’s Modern – The traditional search approach requires developers to write essay-length SQL queries to retrieve simple aggregated data. An index provider such as Azure Search lets developers issue a search request without writing a complicated search algorithm (for example, geo-spatial search) and without complex SQL queries (for example, faceted navigation).
  3. It Scales – The solutions that benefit most from an index provider are relatively large enterprise systems, which often need to scale to some extent. Scaling Azure Search on the Microsoft Azure cloud platform is several clicks away, compared to running an on-premises provider such as Solr.
  4. It’s FREE – Microsoft Azure provides a free tier for Azure Search. Developers can create and delete Azure Search services for development and self-learning purposes.

Use Case

A car classifieds site is a good example of a system that can make use of an index service provider such as Azure Search. I will use a local car classifieds site, Carlist.my, to illustrate several features such a system could potentially tap into.

Type-ahead Queries allow developers to implement auto-suggestions as the user types. In the following example, as I typed “merce”, the site returned a list of Mercedes-Benz car models that I might be interested in.


Facets allow developers to retrieve aggregated data (such as car make, model, type, location, price range, and so on) without writing complex SQL queries, which also saves read load on the database. In the following example, the site returns a list of Mercedes-Benz models with a count in brackets indicating how many classified ads are available for each model.


Filters allow developers to retrieve documents that fit the search criteria without writing complex SQL queries full of endless INNER JOIN and WHERE clauses. In the following example, I specified that I want all used Mercedes-Benz cars, model E-Class E200, variant Avantgarde, from Kuala Lumpur, with a price range of RM100,000 to RM250,000. You can imagine the kind of INNER JOIN and WHERE clauses the poor developer would have to construct dynamically if this were retrieved from a database directly.


Another feature the car classifieds site could potentially tap into is Geo-spatial Search, although it does not appear to be implemented. For example, if I search for a specific car at a dealership, the portal could suggest similar cars from other dealerships near the one I am looking at. That way, when I make a trip to visit a dealership, I can also visit nearby dealerships that have similar cars.

Using C#.NET to work with Azure Search

Let’s roll up our sleeves and write some code. I will use a C#.NET console application to illustrate how to design and create an index, upload documents into the index, and perform several types of searches on it. This solution simulates some of the code a car classifieds portal might need.

First, we create a solution named AzureSearchSandbox.

We will need the “Microsoft.Azure.Search” NuGet package from NuGet.org. Open your Package Manager Console and run the following command:
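
```
Install-Package Microsoft.Azure.Search
```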

Upon successful installation, you will see several NuGet packages added to the packages.config file in your solution.

Note that you only need to install “Microsoft.Azure.Search”; the other packages are dependencies and are resolved automatically.

In your solution, add a new class, Car.cs.

This Car object represents a document in your Azure Search index.
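
Here is a minimal sketch of what Car.cs could look like, assuming the attribute-based FieldBuilder approach from the Microsoft.Azure.Search SDK; the exact fields (Id, Make, Model, Category, Price) are assumptions based on the searches we perform later:

```csharp
using System.ComponentModel.DataAnnotations;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// A sketch of the Car document. The field names are assumptions based on
// the searches performed later: text search on Make/Model, filters on
// Category and Price, and facets on Category.
[SerializePropertyNamesAsCamelCase]
public class Car
{
    [Key]
    public string Id { get; set; }

    [IsSearchable]
    public string Make { get; set; }

    [IsSearchable]
    public string Model { get; set; }

    [IsFilterable, IsFacetable]
    public string Category { get; set; }

    [IsFilterable, IsSortable]
    public int Price { get; set; }
}
```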

Next, we create a static class, Helper.cs, to take care of initializing the index in Azure Search.

Initialize() is the main method that kick-starts our Azure Search index. Unlike an on-premises index service such as Solr, which requires a certain amount of setup, it does not take long to get our index up and running. With Solr, I had to install Solr using NuGet, install the right Java version, set the environment variables, and finally create a core in the Solr admin portal. With Azure Search, no upfront setup is required.

The index is created in the CreateIndex() method, where we tell the Azure Search client SDK that we want an index with the fields we define in our Index object.

To ensure this code runs against a fresh index, the DeleteIfIndexExist() method removes any previous index. We call it right before the CreateIndex() method.
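
A sketch of what Helper.cs could look like under those assumptions; the index name “car” is a placeholder, and the calls follow the Microsoft.Azure.Search SDK:

```csharp
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// A sketch of Helper.cs. The service name and API key are supplied by
// the caller; the index name is a placeholder.
public static class Helper
{
    private const string IndexName = "car";

    public static ISearchIndexClient Initialize(string serviceName, string apiKey)
    {
        var serviceClient = new SearchServiceClient(serviceName, new SearchCredentials(apiKey));

        DeleteIfIndexExist(serviceClient);
        CreateIndex(serviceClient);

        // Return a client scoped to the index, for uploads and searches.
        return serviceClient.Indexes.GetClient(IndexName);
    }

    private static void DeleteIfIndexExist(SearchServiceClient serviceClient)
    {
        if (serviceClient.Indexes.Exists(IndexName))
            serviceClient.Indexes.Delete(IndexName);
    }

    private static void CreateIndex(SearchServiceClient serviceClient)
    {
        var definition = new Index
        {
            Name = IndexName,
            // FieldBuilder derives the index fields from the attributes on Car.
            Fields = FieldBuilder.BuildForType<Car>()
        };
        serviceClient.Indexes.Create(definition);
    }
}
```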

Next, we add a new class, Uploader.cs, to deal with the documents we are about to upload into our Azure Search index.

PrepareDocuments() is a simple method that constructs a list of dummy Car objects for our searches later on.

The Upload() method gets the dummy Car objects from PrepareDocuments() and passes them to the Azure Search client SDK to upload into the index as a batch. Note that we added a 2,000 millisecond sleep to give the service time to process the car documents before moving on to the next part of the code, which is search. In practice, however, we would not want sleep calls in our upload code; instead, the component responsible for searching should expect that the index is not available immediately. We also catch IndexBatchException to handle documents whose batch upload failed. In this example, we merely output the keys of the failed documents; in practice, we should retry or at least log them.
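
A sketch of what Uploader.cs could look like; the dummy data below is illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// A sketch of Uploader.cs. The cars are made-up placeholders.
public static class Uploader
{
    public static List<Car> PrepareDocuments()
    {
        return new List<Car>
        {
            new Car { Id = "1", Make = "Perodua", Model = "Myvi", Category = "Hatchback", Price = 55000 },
            new Car { Id = "2", Make = "Perodua", Model = "Bezza", Category = "Sedan", Price = 49000 },
            new Car { Id = "3", Make = "Mercedes-Benz", Model = "E200", Category = "Sedan", Price = 250000 },
            new Car { Id = "4", Make = "Volkswagen", Model = "Golf", Category = "Hatchback", Price = 155000 }
        };
    }

    public static void Upload(ISearchIndexClient indexClient)
    {
        var batch = IndexBatch.Upload(PrepareDocuments());
        try
        {
            indexClient.Documents.Index(batch);
        }
        catch (IndexBatchException ex)
        {
            // In practice, retry or log the failed documents instead of
            // merely printing their keys.
            Console.WriteLine("Failed to index: {0}",
                string.Join(", ", ex.IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key)));
        }

        // Give the service a moment to process the batch before searching.
        Thread.Sleep(2000);
    }
}
```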

Once the index upload operation is complete, we add another class, Searcher.cs, to take care of the search capability.

The SearchDocuments() method handles searching the index we created earlier. There is no fancy algorithm, only specific instructions passed to the Azure Search client SDK on what we are looking for, plus code to display the results. In this method we cover simple text search, filters, and facets. The Azure Search client SDK provides much more; feel free to explore the SearchParameters and response objects on your own.
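
A sketch of what Searcher.cs could look like; it prints whatever documents and facets the service returns:

```csharp
using System;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// A sketch of Searcher.cs covering text search, filters, and facets.
public static class Searcher
{
    public static void SearchDocuments(ISearchIndexClient indexClient, string searchText, SearchParameters parameters)
    {
        DocumentSearchResult<Car> results = indexClient.Documents.Search<Car>(searchText, parameters);

        // Print each matching document.
        foreach (SearchResult<Car> result in results.Results)
        {
            Console.WriteLine("{0} {1} ({2}): {3}",
                result.Document.Make, result.Document.Model,
                result.Document.Category, result.Document.Price);
        }

        // Facet results come back as value/count pairs per faceted field.
        if (results.Facets != null)
        {
            foreach (var facet in results.Facets)
            {
                foreach (var value in facet.Value)
                    Console.WriteLine("{0}: {1} ({2})", facet.Key, value.Value, value.Count);
            }
        }
    }
}
```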

Putting them all together in Program.cs

First, we define the search service name and API key to create a search index client instance. The instance is returned by Helper.Initialize(). We will use this instance for both uploading and searching later.

After initializing the index, we call the Upload() method to upload some dummy car documents to the index.
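
A sketch of how Program.cs could wire everything together; the service name and API key are placeholders you must replace with your own:

```csharp
using Microsoft.Azure.Search;

public class Program
{
    public static void Main(string[] args)
    {
        // Replace with your own Azure Search service name and admin API key.
        ISearchIndexClient indexClient = Helper.Initialize("your-service-name", "your-api-key");

        // Upload the dummy car documents into the freshly created index.
        Uploader.Upload(indexClient);

        // The searches below extend this Main method step by step.
    }
}
```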

Next, we perform the following searches:

1. Simple text search. We will search for the text “Perodua” in the documents.
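
A sketch of the call, reusing the Searcher class above inside Main:

```csharp
// 1. Simple text search: look for "Perodua" in the searchable fields.
Searcher.SearchDocuments(indexClient, searchText: "Perodua", parameters: new SearchParameters());
```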

The result is as follows: the Azure Search index returns two documents containing the keyword “Perodua”.


2. Using a filter. This is a more targeted and efficient way to look for documents. In the following example, we look for documents whose Category field equals ‘Hatchback’, together with documents whose Price field is greater than 100,000 and whose category is ‘Sedan’. See the Azure Search documentation for details on the filter expression syntax.
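
A sketch of the filter, using the OData expression syntax Azure Search expects:

```csharp
// 2. Filter: all Hatchbacks, plus Sedans priced above 100,000.
Searcher.SearchDocuments(indexClient, searchText: "*", parameters: new SearchParameters
{
    Filter = "Category eq 'Hatchback' or (Price gt 100000 and Category eq 'Sedan')"
});
```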

The result is as follows: cars in the Hatchback category, plus cars costing more than 100,000 in the Sedan category.


3. Searching facets. With facets, developers no longer need to write long queries that combine Count() and endless GROUP BY clauses.

If we had a traditional database table representing the dummy car data, this would be equivalent to “SELECT Category, Count(*) FROM Car GROUP BY Category”.
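
A sketch of the facet request; the Searcher class above prints each facet value with its count:

```csharp
// 3. Facets: aggregate the documents by Category.
// (Requires using System.Collections.Generic.)
Searcher.SearchDocuments(indexClient, searchText: "*", parameters: new SearchParameters
{
    Facets = new List<string> { "Category" }
});
```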

The result is as follows:


This example might not look like a big deal if you have a small data set or a simple data structure. Facets become super handy and fast when you have a large volume of data and your queries get more complex. The ability to define which facets are required in C# code makes the “query” much cleaner and easier to maintain.

One last thought…

You can clone the above source code from my GitHub repo. I will be happy to accept Pull Requests if you want to demonstrate other capabilities of the Azure Search client SDK. Remember to change the API key and service name before compiling. Have fun hacking Azure Search!

Microservices


I was first introduced to Microservices architecture in 2014. At that point in time, I had no idea that what I was doing was known as Microservices. We designed our system that way simply because it made practical sense. We started off with one PHP web service, one PHP frontend application, one .NET web service, two .NET frontend applications, and a CRM. The number of Microservices grew along with the business needs. Since then, I have learned that managing Microservices is as interesting and fun as building them.

Microservices are relatively small applications that interact with each other to achieve specific business requirements. Each Microservice is designed to do one thing and to do it really well. Sometimes they are useful on their own, but they often work together to accomplish more complex business requirements. The opposite of Microservices is a monolithic system, the kind of system where you have 5,000,000 lines of code in a single code base.

Why do we use Microservices?

Technology Heterogeneity – This means diversity of technology within a solution. Your Microservices have independent technology stacks, so you can choose the most suitable stack for each problem you are solving. For example, a photo album printing business might have a PHP frontend application (because they want to tap into WordPress as a CMS), .NET backend business rules exposed as a Web API (because there is legacy logic and a SQL Server database), a Java image processing engine (because there are proprietary image processing libraries written in Java), and an R application to crunch big data on customer sentiment. In a Microservices architecture, different technology stacks work together seamlessly, interacting through sets of APIs exposed to each other.


This reason also aligns with Scrum. Each Scrum team potentially owns one Microservice, and there will be multiple Scrum teams based on technological domains. When a Scrum team gets too big, that is an indicator that it is time to break the Microservice into smaller ones. Ideally, you do not want to wait until a Microservice is too big before you break it up; be careful not to let your Microservice bloat in the first place. Kick-start another Microservice whenever you can logically scope a context boundary into a separate service.

Scaling – Scaling boils down to two approaches: vertical and horizontal. Vertical scaling is quick and easy but can get very expensive, especially at the top tiers of resources. Horizontal scaling is cheaper but can be difficult to implement if the solution is not designed to scale horizontally, such as a stateful monolithic system. As a general rule of thumb, always design your solution to scale horizontally. To put this into perspective, one large virtual machine could be substantially more costly than three small virtual machines that provide the same amount of processing power, depending on which cloud provider you are working with.

Building a solution as Microservices provides the foundation to scale horizontally. Using the earlier photo album printing system: say many users submit photos in bulk for processing between 9.00 AM and 12.00 PM. The DevOps guy only needs to scale up the Java image processing engine service.

Scaling for 9.00 AM-12.00 PM

From 6.00 PM to 11.00 PM, say many visitors come to the website to browse photos. The DevOps guy only needs to scale up the PHP frontend application.

Scaling for 6.00 PM-11.00 PM

If we have a gigantic monolithic system, we have to scale the entire system regardless of which component is being utilized most. To put this into perspective, imagine keeping your car engine running just because you want the air conditioning. Heads will not roll; it is just not the most efficient way to use your technology.

Ease of Deployment – If you have ever woken up at 2.00 AM for a “major deployment”, or been through a 20-hour deployment, you will probably agree that clean and quick deployments are important. I can vividly remember how nervous my CIO got whenever we had a “major deployment”. Sometimes he would come in early in the morning to give us moral support, supplying us with coffee and McDonald’s. Despite the heart-warming breakfast, such deployments were really stressful for everyone. Long story short, we improved our process to the point of making four production deployments within a day with zero downtime. That would not have been possible (or would have been significantly more difficult) if we had not built our code on a Microservices architecture.

Deploying a Microservice is definitely easier than deploying a gigantic monolithic system. The database a Microservice uses is simpler, which makes schema changes less painful. There is less code, which indirectly means less configuration to deal with during deployment. The scope of what a Microservice is designed for is smaller, which makes post-deployment testing (both automated and manual) faster. In the worst-case scenario, rolling back a small service is significantly more straightforward than rolling back a monolithic system with 25 other dependencies, some of which need to be rolled back together.

Scaling Microservices

The secret to scaling Microservices is: start small, think big. You might start your Microservice as a small service coded by a solo developer in two weeks. Although a service may be small, you need to think about how to deal with an audience that grows ten times larger. As we discussed earlier, scaling vertically is easy but gets fairly expensive near the top tiers. You want to design your Microservice to scale horizontally from day one.

How do you build a horizontal-scale-friendly Microservice? The most common reason a service cannot scale horizontally efficiently is session state. When session state is stuck in your service’s memory, a client must always return to the same instance, or you will see all kinds of weird behavior. Of course, you can work around this by enabling stickiness in your load balancer, or by keeping sessions in an additional SQL Server database (the SQLServer session-state mode, rather than InProc). But why get yourself into this situation in the first place? If in-service session state will make your scaling effort more challenging, avoid relying on it from day one so that you can scale horizontally, effortlessly. Building your service based on RESTful principles is a good starting point.

If your Microservice really must use session state, keep it in a dedicated session service such as Redis instead of in service memory.
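
For illustration, here is a minimal sketch of keeping session data in Redis, assuming the StackExchange.Redis client; the host and key naming are placeholders:

```csharp
using System;
using StackExchange.Redis;

// A sketch of a session store backed by Redis rather than service memory.
// Any instance behind the load balancer can read the same session.
public class RedisSessionStore
{
    private static readonly ConnectionMultiplexer Connection =
        ConnectionMultiplexer.Connect("your-redis-host:6379");

    public void Save(string sessionId, string payload)
    {
        IDatabase db = Connection.GetDatabase();
        // Expire the session entry after 20 minutes.
        db.StringSet("session:" + sessionId, payload, TimeSpan.FromMinutes(20));
    }

    public string Load(string sessionId)
    {
        IDatabase db = Connection.GetDatabase();
        return db.StringGet("session:" + sessionId);
    }
}
```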

Front your service with a load balancer. Having your service instances sit behind a load balancer is the foundation for horizontal scaling. Configure auto-scaling in whichever cloud provider you are using; when additional load kicks in, auto-scaling will automatically boot up additional hosts to serve it.


Another advantage of having your Microservice instances sit behind a load balancer is avoiding a single point of failure. You should consider having at least two hosts. For example, perhaps your Microservice only needs the computing power of one medium-sized virtual machine, while two small virtual machines provide the same computing power (you have to work out the maths yourself). Having two small virtual machines behind a load balancer, rather than one medium virtual machine connected directly, is a good remedy against a single point of failure.

Scaling Databases

As the number of your Microservice instances grows, it usually means more load on your database. Database I/O is the most common bottleneck in software performance. Unless you have explicitly designed to protect your database from getting hit, you will often end up scaling your database vertically. You might want to keep that option as your last card, however.

Out of the box, SQL Server gives you the option of Transactional Replication, Merge Replication, and Snapshot Replication. You have to determine which mode is optimal for your system. If you have no idea what these are about, Transactional Replication is your safest bet; but if you are adventurous, you can mix and match approaches.

Transactional Replication works on a Publisher and Subscriber model. All writes happen on the Publisher, while all reads happen on the Subscribers. In typical services, reads far outnumber writes. In this setup, you distribute the read load across multiple Subscriber hosts, and you can continue to add Subscriber hosts as you see fit.


The drawback is that you need to code your Microservice to perform all data manipulation and insertion on the Publisher and all reading on the Subscriber(s), which requires conscious effort from developers. Another drawback of adopting replication is the skill set required to set it up and maintain it. Replaying the transaction log is pretty fragile in my experience; you need someone who understands the mechanism behind replication to troubleshoot failures effectively.
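
To illustrate that first drawback, here is a minimal sketch of a repository that routes writes to the Publisher and reads to a Subscriber; the connection strings and Car table are placeholders:

```csharp
using System.Data.SqlClient;

// A sketch of the conscious read/write split that replication demands:
// all writes go to the Publisher, all reads go to a Subscriber.
public class CarRepository
{
    private const string PublisherConnection =
        "Server=publisher;Database=Cars;Integrated Security=true";
    private const string SubscriberConnection =
        "Server=subscriber;Database=Cars;Integrated Security=true";

    public void InsertCar(string make, string model)
    {
        using (var conn = new SqlConnection(PublisherConnection))
        using (var cmd = new SqlCommand(
            "INSERT INTO Car (Make, Model) VALUES (@make, @model)", conn))
        {
            cmd.Parameters.AddWithValue("@make", make);
            cmd.Parameters.AddWithValue("@model", model);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    public int CountCars()
    {
        using (var conn = new SqlConnection(SubscriberConnection))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Car", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}
```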

I highly recommend some forward thinking on how to avoid your database getting hit unnecessarily in the first place. Tap into search engines such as Solr and Elasticsearch where suitable. Identify up front how your data will be used. Keep on-the-fly data aggregation to a minimum. Design your search indexes accordingly. At the very minimum, make use of caching.

The key to scaling your data is eventual consistency. It is all right for your data to be out of sync for a short period, especially in non-mission-critical systems. As long as your data becomes consistent eventually, you are heading in the right direction.

Scaling a database can be tricky. If you need something done by 9.00 AM tomorrow, the easiest option is to scale vertically: no code changes and no SQL Server expert involved. Everyone will be happy… probably except the CFO.

Keep Failure in Mind

Failure is inevitable in software. In a Microservices architecture, the chances of software failing are even greater, because your service no longer depends on itself alone. Every Microservice that your Microservice depends on could go wrong at any given time. Your Microservice needs to expect other Microservices to fail and handle failure gracefully.

Bake in your failure handling. Your Microservice depends on other Microservices at one point or another. Can your Microservice’s core features operate as usual when other Microservices start failing? For example, suppose a CMS depends on a Comment Service. On an article page, if the Comment Service is not responding, how does that affect the CMS’s ability to display the article? Does the article page simply crash when a visitor arrives? Or does the CMS handle the Comment Service failure gracefully by not showing the comments while the rest of the article loads as usual?

Here is another example. I was using Redis to keep a user token after every successful login. At one point, Redis decided not to keep tokens for me anymore and actively rejected new connections. My users could not log in even though they had entered the correct username and password, simply because a non-critical part of the authentication process had failed. We discovered the root cause later. However, to avoid such an embarrassing moment from happening again, at the code level we changed the interaction with Redis to an asynchronous call, because creating a token in Redis is not a critical step in authentication. With the Redis call asynchronous, users can continue to use the core functionality even though the minor features that rely on the Redis token will not work.
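
A minimal sketch of that kind of change, assuming StackExchange.Redis; the fire-and-forget call keeps login working even when Redis is down:

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

// A sketch of treating token creation in Redis as non-critical.
// If Redis rejects the connection, login still succeeds.
public class TokenWriter
{
    private readonly IDatabase _redis;

    public TokenWriter(IDatabase redis)
    {
        _redis = redis;
    }

    public void SaveTokenInBackground(string userId, string token)
    {
        // Fire-and-forget: do not block or fail the login flow on Redis.
        Task.Run(async () =>
        {
            try
            {
                await _redis.StringSetAsync("token:" + userId, token, TimeSpan.FromHours(1));
            }
            catch (Exception ex)
            {
                // Log and move on; features relying on the token degrade gracefully.
                Console.WriteLine("Redis token write failed: " + ex.Message);
            }
        });
    }
}
```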

It is fine if you do not have a sophisticated failure-handling mechanism, but at the very minimum, code defensively. Expect failure at every interaction point with other Microservices. Ideally, we do not want any Microservice to fail; but when they do fail (and they will), defensive coding keeps your Microservice minimally affected. It is better to have 70% of your core functionality working than the whole service crashing down.

The Backends for Frontends

This is another concept I discovered by accident while hacking on some code in Android Studio in 2013. The Backends for Frontends design is especially practical for mobile applications, although you can apply the concept to any Microservice.

In essence, the Backends for Frontends design backs your frontend application with a dedicated backend service whose primary objective is to serve that frontend. This is a very good choice for mobile applications for several reasons.


First, mobile applications are known for connectivity limitations. Instead of having your mobile app connect to seven different Microservices to request various pieces of information and do the processing on the client (mobile) side, it makes more sense for the Backend for Frontend service to make the necessary server-to-server calls, process the data, and then send only the necessary data back to the mobile client.
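
A minimal sketch of such an aggregation endpoint, assuming ASP.NET Web API; the downstream service URLs and the response shape are assumptions:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

// A sketch of a Backend for Frontend endpoint that aggregates two
// internal Microservices server-to-server, so the mobile client
// makes a single call over its flaky connection.
public class ProfileController : ApiController
{
    private static readonly HttpClient Http = new HttpClient();

    [HttpGet]
    public async Task<IHttpActionResult> Get(string userId)
    {
        // Two server-to-server calls made in parallel inside the private network.
        Task<string> user = Http.GetStringAsync("http://user-service/api/users/" + userId);
        Task<string> orders = Http.GetStringAsync("http://order-service/api/orders?userId=" + userId);
        await Task.WhenAll(user, orders);

        // Return only what the mobile screen actually needs.
        return Ok(new { user = user.Result, orders = orders.Result });
    }
}
```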

Second, the Backend for Frontend service also serves as a security gateway. Obviously, you do not want to expose all your backend core services (for example, your CRM) to the public. Design your network so that the backend core services sit in a private network, then grant your public-facing Backend for Frontend service permission to access that private network. This way, your backend core services are protected from public access, yet explicit permission is granted to a specific Backend for Frontend service. You can implement whichever security model you see fit in the Backend for Frontend service, with which your client application can and must comply.


Third, a mobile application sits on the client side, which makes updating it more challenging. You want to minimize the logic on the client side. The Backend for Frontend service plays the perfect role for handling business logic, because you can update logic much more easily there than in the client application. In other words, your frontend application stays lightweight and is only responsible for UI presentation.

One Last Thought…

Microservices is a huge topic by itself. This article serves as a trigger point for getting to know Microservices without going through a 400-page book. If you would like to learn more, there are many books available; I recommend Building Microservices by Sam Newman. I hope you have discovered something new in this article. Until next time!