Scaling SQL Server

Scaling a database refers to its ability to serve significantly more read and write requests without compromising performance. In many enterprise applications the performance bottleneck is the database, so scaling the database is a critical part of improving overall system performance. In the last article on Microservices, we discussed scaling a database horizontally and vertically at a high level. In this article we will go more in-depth into:

  1. The three types of replication in SQL Server
  2. Distributing database load using Log Shipping
  3. Tools to shield your database from being hit

3 Types of Replication

In typical enterprise applications, read requests significantly outnumber write requests. By implementing replication, you effectively offload the bulk of the read load to the Subscribers while reserving the Publisher for writes.

Transactional Replication

Transactional Replication is the simplest form of replication to understand and to implement. A Publisher publishes the changes, and one or more Subscribers replay the transaction log. Data changes and schema modifications made at the Publisher are delivered to the Subscriber(s) as they occur, in near real time. In this way, transactional consistency is guaranteed.

The incremental changes at the Publisher are propagated to the Subscribers as they occur. If a row changes three times at the Publisher, it also changes three times at the Subscriber; it is not just the net data change that gets propagated over.

For example, if the price of a row in the Product table changes three times, from $1.00 to $1.10 to $1.20 and finally to $1.30, transactional replication allows an application to respond to each change, perhaps sending a notification to the user when the price hits $1.20, rather than only seeing the net change from $1.00 to $1.30. This is ideal for applications that require access to intermediate data states, for example a stock market alert application that tracks near real-time price changes to send price alerts to users.

Is it possible to scale your Publisher horizontally? Yes, Bidirectional Transactional Replication and Peer-to-Peer Transactional Replication will help you achieve it. However, Microsoft strongly recommends that write operations for each row be performed at only one node, for two reasons. First, if a row is modified at more than one node, it can cause a conflict or even a lost update when the change is propagated to other nodes. Second, there is always some latency involved when changes are replicated. For applications that require the latest change to be seen immediately, dynamically load balancing the application across multiple nodes can be problematic.

From experience, the optimal approach is to scale your Subscribers horizontally by adding more nodes. Keep your Publisher on a single node and scale it vertically only when you really have to.
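If you prefer scripting the setup over clicking through the SSMS wizard, a transactional publication boils down to a handful of system stored procedure calls. Below is a minimal, hedged sketch against a hypothetical ShopDB database with a dbo.Product table (all names are placeholders; a real setup also needs the Distributor configured and the Snapshot and Log Reader Agents running):

```sql
-- On the Publisher: enable the database for transactional publishing.
USE ShopDB;
EXEC sp_replicationdboption
    @dbname = N'ShopDB',
    @optname = N'publish',
    @value = N'true';

-- Create a transactional publication that streams changes continuously.
EXEC sp_addpublication
    @publication = N'ShopDB_Transactional',
    @repl_freq = N'continuous',
    @status = N'active';

-- Publish the Product table as an article.
EXEC sp_addarticle
    @publication = N'ShopDB_Transactional',
    @article = N'Product',
    @source_owner = N'dbo',
    @source_object = N'Product';

-- Add a push subscription for a read-only reporting node (hypothetical server name).
EXEC sp_addsubscription
    @publication = N'ShopDB_Transactional',
    @subscriber = N'REPORTSRV01',
    @destination_db = N'ShopDB_Replica',
    @subscription_type = N'Push';
```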

[Figure: Transactional Replication]

Merge Replication

In Merge Replication, a Subscriber synchronizes with the Publisher when connected to the network and exchanges all rows that have changed between the Publisher and the Subscriber since the last synchronization occurred. You can think of Merge Replication as a batch update from Subscriber to Publisher that propagates only the net data changes. For example, if a row changes five times at a Subscriber before it synchronizes with the Publisher, the row will only change once at the Publisher to reflect the net data change (the fifth value). The unified changes at the Publisher are then propagated back to the other Subscribers.

Merge Replication is suitable for situations where Subscribers need to receive data, make changes offline, and later synchronize those changes with the Publisher and other Subscribers. Consider a nationwide POS (point of sale) system where retail branches are spread across multiple physical locations. Each branch first initializes a snapshot from the Publisher database, then makes local offline changes (for example, through sales) to its Subscriber database. The sales do not need to be propagated back to the Publisher immediately. The other retail branches also do not need an immediate update on changes happening at another branch, although a more recent update would be useful, for example to know whether a nearby branch still has stock that the local branch has run out of, so staff can redirect customers accordingly. Once a day, or multiple times a day depending on business need, the sales numbers at the retail branches (Subscribers) are propagated back to HQ (Publisher), and the executives at the HQ office can view the daily sales report.
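The server-side setup mirrors transactional replication but uses the merge-specific procedures. Here is a heavily simplified sketch against a hypothetical PosDB database with a dbo.Sales table (names are placeholders; a real deployment also needs a Snapshot Agent schedule, Merge Agents, and one subscription per branch):

```sql
-- On the Publisher (HQ): enable merge publishing for the database.
EXEC sp_replicationdboption
    @dbname = N'PosDB',
    @optname = N'merge publish',
    @value = N'true';

-- Create the merge publication.
EXEC sp_addmergepublication
    @publication = N'PosDB_Merge',
    @description = N'Merge publication for retail branches';

-- Publish the Sales table as a merge article.
EXEC sp_addmergearticle
    @publication = N'PosDB_Merge',
    @article = N'Sales',
    @source_owner = N'dbo',
    @source_object = N'Sales';

-- Register one branch as a push subscriber (hypothetical names).
EXEC sp_addmergesubscription
    @publication = N'PosDB_Merge',
    @subscriber = N'BRANCH01',
    @subscriber_db = N'PosDB_Branch',
    @subscription_type = N'Push';
```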

[Figure: Merge Replication]

Conflicts can and will happen in Merge Replication. The good news is that conflicts are resolved without the need for user intervention, because SQL Server has a built-in mechanism to resolve conflicts on data changes. However, if you have unique use cases where you want to ensure SQL Server is doing exactly what you intended when resolving a conflict, you can inspect the conflict in the Microsoft Replication Conflict Viewer and modify the outcome of the resolution.

Snapshot Replication

Snapshot Replication propagates data exactly as it appears at a specific moment in time and does not monitor for updates to the data. When synchronization occurs, the entire snapshot is generated and sent to the Subscribers. In simpler terms, Snapshot Replication takes a snapshot of the Publisher's data state and overwrites it at the Subscribers. No conflict will happen because this replication simply overwrites the whole data set. Snapshot Replication is also used in both Transactional Replication and Merge Replication to initialize the database at the Subscribers.

[Figure: Snapshot Replication]

Snapshot Replication is suitable for systems where

  • Subscribers deal with data that does not change frequently.
  • Subscribers do not require the most recent set of data for a long time.
  • The data set is small.

I was working on a small project with an advertising agency where the requirement was to analyze the trend of discussion and sentiment around Telcos in a community forum. I quickly hacked together some web scraping code to scrape what I needed and populate it into my database. The result was 730 discussion topics and a total database size of about 5MB. From there I needed to write more algorithms to work out the trend and sentiment of the discussions. During development I did not need the most up-to-date dataset reflecting what was happening in the live forum, so I happily worked (read) on a Subscriber node to develop and test my algorithms. A few weeks later, when the forum had accumulated many more discussion topics, I simply replicated the changes from the Publisher to my Subscriber, which completed in a fairly short amount of time.

In production, knowing that the analytical code reads the database extensively, I pointed it at the Subscriber node. The Publisher node gets no hits apart from the necessary insert operations from my web scraping service. Since my users do not need to know what is being discussed on a daily or real-time basis (accurate trend and sentiment require months and months worth of data), I configured my Snapshot Replication to happen on a monthly basis, and the replication completes in a fairly short amount of time. Every month the users get a fresh copy of the trend and sentiment report based on at least a year's worth of backdated forum discussion data, without any performance degradation. By implementing Snapshot Replication, my web scraping service can write to the Publisher without worrying whether anyone needs the database for report generation at that particular time. Through the Subscriber, I have also laid the foundation for much more sophisticated reports without degrading performance, simply by spinning up more Subscriber nodes when and if I need to.
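For reference, that monthly cadence is just a snapshot publication plus a Snapshot Agent schedule. A rough sketch with hypothetical names follows; the schedule codes follow the SQL Server Agent conventions (frequency_type 16 = monthly), but treat the exact parameters as approximate and verify against the script SSMS generates for you:

```sql
-- Create a snapshot publication: the whole data set is regenerated each run.
EXEC sp_addpublication
    @publication = N'ForumDB_Snapshot',
    @repl_freq = N'snapshot',
    @status = N'active';

-- Schedule the Snapshot Agent to run monthly
-- (frequency_type 16 = monthly, frequency_interval 1 = day 1 of the month).
EXEC sp_addpublication_snapshot
    @publication = N'ForumDB_Snapshot',
    @frequency_type = 16,
    @frequency_interval = 1;
```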

Log Shipping

Log Shipping is used to automatically send transaction log backups from a primary database to one or more secondary databases, which should sit on different nodes. The transaction log backups are applied to each of the secondary databases. Log Shipping is often used as part of a disaster recovery strategy, but creative database administrators often use Log Shipping for various other purposes *wink*.

In one of the projects I worked on, the SQL Server database stored 130k registered users and their related activities such as payment history, credit spending history, Account & Contact relationships, products, login audits, and so on. The company was at a rapid expansion stage and the CFO decided it was time to bring in a Business Intelligence guy to churn out reports giving a sense of how the business was doing on a daily basis. The obvious thing to do was to replicate the database for the BI colleague to run his heavy queries against, because running the reporting queries on the production database would kill the poor database. The most suitable type of replication would have been Transactional Replication. However, the challenge was that Replication required SQL Server Enterprise Edition and we were running SQL Server Standard Edition, and the company did not have a lot of spare cash lying around, so we had to find an alternative. After getting the green light from the CTO, I implemented Log Shipping for BI reporting. In essence, I was “scaling” SQL Server using a disaster recovery technique: offloading the reporting query load to a secondary database by replaying the transaction log on an interval, simulating Transactional Replication. It was the most practical option we had to satisfy the various stakeholders while keeping the cost low.

[Figure: Log Shipping]
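If you script the setup instead of using the SSMS wizard, the primary side boils down to a call like the one below. This is a trimmed sketch of what the wizard generates, with placeholder paths, names, and retention values; the secondary server needs matching calls to sp_add_log_shipping_secondary_primary and sp_add_log_shipping_secondary_database, so check the full generated script rather than copying this verbatim:

```sql
-- On the primary server: register the database for log shipping and
-- create the transaction log backup job.
DECLARE @backup_job_id uniqueidentifier;
DECLARE @primary_id uniqueidentifier;

EXEC master.dbo.sp_add_log_shipping_primary_database
    @database = N'SalesDB',
    @backup_directory = N'D:\LogShipping',        -- local folder for log backups
    @backup_share = N'\\PRIMARYSRV\LogShipping',  -- share the secondary copies from
    @backup_job_name = N'LSBackup_SalesDB',
    @backup_retention_period = 4320,              -- minutes to keep old backups
    @backup_threshold = 60,                       -- alert if no backup within 60 minutes
    @backup_job_id = @backup_job_id OUTPUT,
    @primary_id = @primary_id OUTPUT,
    @overwrite = 1;
```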

You can use this trick as long as your client does not require real-time data. Take note of the following practical issues while “scaling” your SQL Server using Log Shipping.

  1. Understand that Log Shipping is a three-step process. First, the primary server instance backs up the transaction log of the primary database. Second, the transaction log file is copied to the secondary server instance. Third, the log backup is restored on the secondary server instance. From experience, the third step, Restore, is the most fragile and breaks most often. To find out why a Restore fails, go to SQL Server Agent and view the job history for the detailed error message.
  2. Understand your transaction log backup cycle. If another agent or service is also taking transaction log backups, your Log Shipping will stop working fairly quickly (note: not immediately). Log Shipping works by backing up the transaction log and clearing it, and SQL Server must be able to match the log backup sequence. If another transaction log backup happens somewhere else, the tail of your newest transaction log will not match the head of the last one, and the restore will fail. If you need to use Log Shipping, disable or stop all other transaction log backups.
  3. Move your agent job intervals up gradually. The Backup, Copy, and Restore jobs are run by SQL Server Agent on an interval. When you set up Log Shipping, the transaction log is ready for action once the initial full database backup has been restored at the secondary server. When the Backup, Copy, and Restore jobs kick in depends on the interval you set while configuring Log Shipping. I recommend setting a very short interval in the beginning so that you can spot failures in your setup quickly. You do not want to wait 6 hours for the Backup, Copy, and Restore jobs to kick in only to find out they failed, make some changes, and then wait another 6 hours. I always start with 1 minute, then 5 minutes, then move to the actual time frame required by the business. In my case, it was 2 hours.
  4. Enable Standby mode so that your clients can read the data, and I highly recommend checking “Disconnect users in the database when restoring backups”. The SQL Server Agent Restore job handles the transaction log intelligently by replaying any log that was missed previously. However, as mentioned earlier, the Restore job is the most fragile step, and sometimes when it breaks you have to set up the whole log shipping mechanism again. With a database of 300GB (the actual size I was working with), it is pretty painful to wait for the whole process to complete. Hence, to protect the integrity of the transaction log sequence, I would rather terminate all open connections and ensure the Restore step executes successfully.
  5. Monitor your Log Shipping continuity. Again, Log Shipping is pretty fragile, especially if this is your first time doing it. You need some mechanism to monitor your Log Shipping and ensure it is still running as expected 6 months down the road. You can open SQL Server Agent and check the job history to make sure everything is green, or configure an alert job to fire when a job does not complete successfully. Personally, though, nothing is more reassuring than knowing the data actually changed in the secondary database during a specific time frame. What I do is monitor a table column that is supposed to change after the transaction log is replayed (see the sketch after this list). In my case, I monitor the latest user login time, because I know that table is updated frequently and the probability that no one logged in during the last 2 hours is close to zero, so I make sure the value in that column changes every 2 hours. If you do not have a user login audit, use any table whose data will almost certainly change, for example the CreatedOn column of your busiest transaction table.
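A minimal sketch of that check, assuming a hypothetical dbo.UserLoginAudit table with a LastLoginTime column on the secondary database (both names are placeholders; wire the query into whatever alerting you already have, for example a SQL Server Agent job that emails on failure):

```sql
-- Run against the secondary (standby) database.
-- Raises an error if no login has been replayed in the last 2 hours,
-- which usually means the Backup/Copy/Restore chain has broken.
DECLARE @latest_login datetime;

SELECT @latest_login = MAX(LastLoginTime)
FROM dbo.UserLoginAudit;

IF @latest_login IS NULL OR @latest_login < DATEADD(HOUR, -2, GETDATE())
    RAISERROR('Log shipping may be broken: no new logins replayed in the last 2 hours.', 16, 1);
```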

Shielding your database

Shielding your database is a great way to keep your database load low so that you can serve more requests. But if you shield the database, where does the data come from? The data has to come from somewhere, and that somewhere is an index provider. I am not referring to building more indexes within SQL Server, because indexes served from SQL Server still add load to SQL Server, not to mention additional storage on disk. The index providers I am referring to are fast-search services such as Solr, Elasticsearch, Azure Search, or even Redis.

The idea is to drastically offload reads at the database level to another service that is built for very fast searching. Not only is the response time much faster, you will also save the poor programmers from writing essay-length queries to retrieve data.

Example 1: You have a clean third normal form database. With SQL Profiler you observe that clients have been issuing long queries, such as a query with 15 JOINs and 5 GROUP BYs on very large tables. Not only does that query come back slowly, it also drags down server performance, and other queries are badly affected. You have discussed it with your best SQL query guru, reviewed the queries, studied the execution plans, and revisited the data structure, and the conclusion is that this really is what it takes to get the desired result. So, what do you do now?

An index provider comes to the rescue when you design your index schema around your query. The joins are no longer required because the index has flattened the relevant fields during indexing, and the grouping is no longer required because that information comes from facets. Instead of the JOIN and GROUP BY happening at the SQL Server level, you get the data from the index lightning fast, as a simple read, provided you have designed your index schema properly.
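The SQL Server side of that work is usually a flattening query that produces one denormalized row per index document. Here is a hedged sketch against hypothetical Order, Customer, and Product tables (the real query is whatever your 15-JOIN monster already looks like; the point is that it now runs once per indexing cycle instead of on every client request):

```sql
-- Flatten the normalized tables into the shape of the index schema.
-- Each output row becomes one document in Solr / Elasticsearch / Azure Search.
SELECT
    o.OrderId,
    c.CustomerName,
    c.Country,
    p.ProductName,
    p.Category,
    SUM(o.Quantity * o.UnitPrice) AS LineTotal
FROM dbo.[Order] AS o
JOIN dbo.Customer AS c ON c.CustomerId = o.CustomerId
JOIN dbo.Product  AS p ON p.ProductId  = o.ProductId
GROUP BY o.OrderId, c.CustomerName, c.Country, p.ProductName, p.Category;
```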

[Figure: Index provider example 1]

Example 2: You are working with two microservices, each with its own database keeping different domain data. Your application requires data from both services, and both of them respond slowly. Again, after reviewing the requests with the respective microservice owners, you all come to the conclusion that both the code and the SQL statements are already in their most optimal form. What do you do now?

An index provider comes to the rescue when you index the data from both microservices into one index provider and query the index provider instead. Not only do you bypass the two slow microservices, you are also querying one index instead of multiple databases. Of course, this is easier said than done, because it now requires both microservices to index the required data, plus additional effort to maintain the index provider.

[Figure: Index provider example 2]

Here are some practical tips to consider while implementing an index provider to shield your database from getting hit:

  1. Initial indexing will take a lot of time, especially when the data comes from various sources. This is one of the reasons some deployments take a long time. A trick to overcome this is to not rely on the on-the-fly indexing API call provided by the index provider. Export your data as .csv and import it for indexing. Do it manually if your data is large enough, and automate the process if full reindexing happens on a recurring basis.
  2. Minimize your index schema changes. Every schema change requires you to reindex, which usually takes time and leads to longer downtime. The key, when designing your schema, is to think holistically about the current and potential use cases instead of designing a schema for just one client and one use case. At the same time, you do not want an overly generic schema, or it becomes inefficient. The art is in finding this balance, and the way to do it is to understand your domain and context well before designing your schema.
  3. Ensure your stakeholders are aware that the index is a reflection of your database. Your database remains the single source of truth. Expect delay in your index: eventual consistency is what you are aiming for. Often it is acceptable to have a few seconds or even minutes of delay, depending on how critical the data is. Make sure you get acknowledgement from your stakeholders.
  4. Queue your indexing. A simple phone number update in the database could result in hundreds of reindexing requests. Have a queueing mechanism to protect your index provider from sudden surges of reindexing requests (see the sketch after this list).
  5. Take note of strongly-typed fields. An index provider such as Solr can treat everything as a string, while an index provider such as Azure Search has strongly-typed fields (Edm.Boolean, Edm.String, Edm.Int32, etc.). If you plan to switch index providers in the future, take care of the data types from day one, or you will end up with an additional mapping layer to deal with the data types later on.
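One simple way to implement that queue on the SQL Server side is an outbox-style table that the application (or a trigger) writes to, and that a background worker drains in batches towards the index provider. A minimal sketch with hypothetical table and column names:

```sql
-- Outbox-style queue of pending reindex requests.
CREATE TABLE dbo.IndexingQueue
(
    QueueId     bigint IDENTITY(1,1) PRIMARY KEY,
    EntityType  nvarchar(100) NOT NULL,  -- e.g. 'Customer'
    EntityId    bigint        NOT NULL,  -- key of the row to reindex
    EnqueuedOn  datetime2     NOT NULL DEFAULT SYSUTCDATETIME(),
    ProcessedOn datetime2     NULL
);

-- The background worker claims a small batch at a time, so a burst of
-- updates never floods the index provider.
;WITH batch AS
(
    SELECT TOP (100) *
    FROM dbo.IndexingQueue WITH (ROWLOCK, READPAST, UPDLOCK)
    WHERE ProcessedOn IS NULL
    ORDER BY QueueId
)
UPDATE batch
SET ProcessedOn = SYSUTCDATETIME()
OUTPUT inserted.QueueId, inserted.EntityType, inserted.EntityId;
-- The worker pushes the OUTPUT rows to Solr / Elasticsearch / Azure Search.
```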

Hope these help you in your journey to scale your SQL Server. Have fun!