Microservices


I was first introduced to Microservices Architecture in 2014. At that point in time, I had no idea that what I was doing was known as Microservices. We designed our system that way simply because it made practical sense. We started off with 1 PHP web service, 1 PHP frontend application, 1 .NET web service, 2 .NET frontend applications and a CRM. The number of Microservices grew along with the business needs. Since then, I have learned that managing Microservices is as interesting and fun as building them.

Microservices are relatively small applications that interact with each other to achieve a specific business requirement. Each Microservice is designed to do one thing and to do it really well. Sometimes they are independent on their own, but they often work together to accomplish more complex business requirements. The opposite of Microservices is a monolithic system, the kind of system where you have 5,000,000 lines of code in a single code base.

Why do we use Microservices?

Technology Heterogeneity – This means the diversity of technology within a solution. Your Microservices have independent technology stacks, so you can choose the most suitable stack for the problem you are solving. For example, a Photo Album Printing business might have a PHP frontend application (because they want to tap into WordPress as a CMS), .NET backend business rules exposed as a Web API (because there are legacy logic and a SQL Server database), a Java image processing engine (because there are proprietary image processing libraries written in Java), and an R application to crunch big data on customer sentiment. In a Microservices architecture, you can have different stacks of technologies that work together seamlessly. They interact through sets of APIs exposed to each other.

[Figure: Technology heterogeneity]

This reason also aligns with Scrum. Each Scrum team potentially owns one Microservice, and there will be multiple Scrum teams based on the technological domains. When a Scrum team gets too big, that serves as an indicator that it is time to break the Microservice into smaller ones. Ideally, you do not want to wait for a Microservice to grow too big before you break it up. Be alert and avoid stuffing your Microservice until it bloats in the first place. Kick-start another Microservice whenever you can logically scope a context boundary into a separate service.

Scaling – Scaling fundamentally boils down to 2 approaches: vertical and horizontal. Vertical scaling is quick and easy but can get very expensive, especially when hitting the top tiers of resources. Horizontal scaling is cheaper but can be difficult to implement if the solution is not designed to scale horizontally – for example, a stateful monolithic system. As a general rule of thumb, always design your solution to scale horizontally. To put this into perspective, one large virtual machine could be substantially more costly than three small virtual machines that provide the same amount of processing power, depending on which cloud provider you are working with.

Building a solution as Microservices provides the foundation to scale horizontally. Using the earlier Photo Album Printing system, say many users submit photos in bulk for processing between 9.00 AM and 12.00 PM. The DevOps guy only needs to scale out the Java image processing engine service.

[Figure: Scaling for 9.00 AM-12.00 PM]

From 6.00 PM to 11.00 PM, say many visitors come to the website to browse the photos. The DevOps guy only needs to scale out the PHP frontend application.

[Figure: Scaling for 6.00 PM-11.00 PM]

If we have a gigantic monolithic system, we have to scale the entire system regardless of which component is being utilized most. To put this into perspective, imagine keeping your car engine running just because you want the air conditioning. Heads will not roll; it is just not the most efficient way to use your technologies.

Ease of Deployment – If you have tried waking up at 2.00 AM for a “major deployment”, or been through a 20-hour deployment, you will probably agree it is important to have clean and quick deployments. I can vividly remember how nervous my CIO got whenever we had a “major deployment”. Sometimes he would come in early in the morning to give us moral support by supplying us with coffee and McDonald's. Despite the heart-warming breakfast, it was really stressful for everyone to go through such deployments. Long story short, we improved our deployment process to the point of making 4 production deployments within a day with zero downtime. That would not have been possible (or would have been significantly more difficult) had we not built our code on a Microservices architecture.

Deployment for Microservices is definitely easier compared to a gigantic monolithic system. The database that a Microservice uses is simpler, which makes altering the database schema less painful. There is less code, which indirectly means less configuration to deal with during deployment. The scope each Microservice is designed for is smaller, which makes post-deployment testing (both automated and manual) faster. In the worst-case scenario, rolling back a small service is significantly more straightforward than rolling back a monolithic system with 25 other dependencies, some of which need to be rolled back together.

Scaling Microservices

The secret to scaling Microservices is: start small, think big. You might start your Microservice as a small service coded by a solo developer in 2 weeks. Although a service could be small, you need to think about how to deal with it when your audience grows 10 times larger. As we discussed earlier, scaling vertically is easy but can be fairly expensive when you get nearer to the top tiers. You want to design your Microservice to scale horizontally from day one.

How do you build a horizontal-scale-friendly Microservice? The most common reason some services cannot scale horizontally efficiently is session state. When session state is stuck in your service's memory, a client always has to go back to the same instance, or you will discover all kinds of weird behavior. Of course, you can overcome this problem by enabling stickiness in your load balancer, or by keeping all the sessions in an additional SQL Server database (SQLServer session-state mode instead of InProc). However, why would you want to get yourself into this situation in the first place? If having session state within the service is going to make your scaling effort more challenging, avoid relying on sessions from day one so that you can scale horizontally, effortlessly. Building your service based on RESTful principles is a good starting point.

If your Microservice really has to make use of session state, use an external session service such as Redis instead of keeping the session in service memory.
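As an illustration (my own sketch, not tied to any particular system), ASP.NET Core session state can be backed by Redis with the Microsoft.Extensions.Caching.StackExchangeRedis package; the Redis endpoint below is an assumption:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Back the distributed cache with Redis so that any service instance
// behind the load balancer can read any user's session.
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379"; // assumed Redis endpoint
});
builder.Services.AddSession();

var app = builder.Build();
app.UseSession();

app.MapGet("/", (HttpContext context) =>
{
    // The session entry lives in Redis, not in this instance's memory.
    context.Session.SetString("LastVisit", DateTime.UtcNow.ToString("O"));
    return "Session stored externally.";
});

app.Run();
```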

Front your service with a load balancer. Having your service instances sit behind a load balancer acts as the foundation for horizontal scaling. Configure auto-scaling in whichever cloud provider you are using. When the additional load kicks in, auto-scaling will automatically boot up additional hosts to serve it.
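For illustration, the same idea in its simplest form as an nginx configuration (host names and ports are assumptions of mine):

```nginx
# Two interchangeable service instances behind one entry point.
upstream photo_service {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;

    location / {
        # Round-robin by default; no stickiness needed because the
        # service instances are stateless.
        proxy_pass http://photo_service;
    }
}
```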

[Figure: Service instances behind a load balancer]

Another advantage of having your Microservice instances sit behind a load balancer is avoiding a single point of failure. You should consider having at least 2 hosts for this reason. For example, perhaps your Microservice only needs the computing power of 1 medium-sized virtual machine, while 2 small virtual machines provide roughly the same computing power (you have to work out the maths yourself). Having 2 small virtual machines sitting behind a load balancer, rather than 1 medium virtual machine connected directly, is a good remedy against a single point of failure.

Scaling Databases

As the number of your Microservice instances grows, it usually means more load on your database. Database IO is the most common bottleneck in software performance. Unless you have explicitly designed protection against your database getting hammered, you will often end up scaling your database vertically. However, you might want to keep this option as your last card.

Out of the box, SQL Server provides the options of Transactional Replication, Merge Replication, and Snapshot Replication. You have to determine which mode is optimal for your system. If you have no idea what these are about, Transactional Replication is your safest bet. But if you are adventurous, you can mix and match different approaches.

Transactional Replication works on a Publisher and Subscriber model. All writes happen on the Publisher while all reads happen on the Subscribers. In a typical service, reads far outnumber writes. In this setup, you distribute the read load across multiple Subscriber hosts, and you can continue to add more Subscriber hosts as you see fit.

[Figure: Publisher database replicating to multiple Subscribers]

The drawback is that you need to code your Microservice to perform all data manipulation and insertion on the Publisher and all reading on the Subscriber(s), which requires conscious effort from developers. Another drawback of adopting replication is the skill set required to set up and maintain it. Replaying the transaction log is pretty fragile in my experience. You need someone who understands the mechanism behind replication to troubleshoot failures effectively.
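A minimal sketch of that conscious read/write split (the connection strings and table are hypothetical):

```csharp
using Microsoft.Data.SqlClient;

public class OrderRepository
{
    // Hypothetical connection strings: writes go to the Publisher,
    // reads are served by a Subscriber.
    private const string PublisherCnx =
        "Server=publisher;Database=Shop;Integrated Security=true";
    private const string SubscriberCnx =
        "Server=subscriber1;Database=Shop;Integrated Security=true";

    public void CreateOrder(string customer)
    {
        using var cnx = new SqlConnection(PublisherCnx);
        using var cmd = new SqlCommand(
            "INSERT INTO Orders (Customer) VALUES (@customer)", cnx);
        cmd.Parameters.AddWithValue("@customer", customer);
        cnx.Open();
        cmd.ExecuteNonQuery();
    }

    public int CountOrders()
    {
        using var cnx = new SqlConnection(SubscriberCnx);
        using var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", cnx);
        cnx.Open();
        return (int)cmd.ExecuteScalar();
    }
}
```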

I highly recommend some forward thinking on how to avoid your database getting hit unnecessarily in the first place. Tap into search engines such as Solr and Elasticsearch where suitable. Identify how your data will be used up front. Keep on-the-fly data aggregation to a minimum. Design your search indexes accordingly. At the very minimum, make use of caching.

The key to scaling your data is achieving eventual consistency. It is alright to have your data out of sync for a short period of time, especially in non-mission-critical systems. As long as your data will eventually be consistent, you are heading in the right direction.

Scaling a database can be tricky. If you need something done by 9.00 AM tomorrow, the easiest option is to scale vertically. No code change and no SQL Server expert involved. Everyone will be happy… probably except the CFO.

Keep Failure in Mind

Failure is inevitable in software. In a Microservices architecture, the chance of failure is even greater because your service no longer depends on itself alone. Every Microservice that your Microservice depends on could go wrong at any given time. Your Microservice needs to expect other Microservices to fail and handle those failures gracefully.

Bake in your failure handling. Your Microservice depends on other Microservices at one point or another. Can your Microservice's core features operate as usual when other Microservices start failing? For example, a CMS depends on a Comment Service. On an article page, if the Comment Service is not responding, how will that affect the CMS's ability to display the article? Will the article page simply crash when a visitor arrives? Or will the CMS handle the Comment Service failure gracefully by not showing the comments while the rest of the article loads as usual?
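A minimal sketch of that graceful handling (the Comment Service endpoint and types are hypothetical):

```csharp
using System.Net.Http.Json;

public record Comment(string Author, string Body);

public class CommentClient
{
    private static readonly HttpClient Http = new()
    {
        // Fail fast rather than letting the article page hang.
        Timeout = TimeSpan.FromSeconds(2)
    };

    public async Task<IReadOnlyList<Comment>> GetCommentsAsync(int articleId)
    {
        try
        {
            return await Http.GetFromJsonAsync<List<Comment>>(
                       $"http://comment-service/articles/{articleId}/comments")
                   ?? new List<Comment>();
        }
        catch (Exception)
        {
            // Comment Service is down or slow: show the article
            // without comments instead of crashing the page.
            return Array.Empty<Comment>();
        }
    }
}
```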

Another example: I was using Redis to keep user tokens after every successful login. At one point, Redis decided not to keep tokens for me anymore and actively rejected new connections. My users could not log in although they had entered the correct username and password – simply because a non-critical part of the authentication process had failed. We discovered the root cause later. However, to avoid such an embarrassing moment from happening again, at the code level we changed the interaction with Redis to an asynchronous call, because creating a token in Redis is not the main criterion in authentication. With the Redis call asynchronous, users can continue to use the core functionality even though the minor portion of features relying on the Redis token will not work.

It is fine if you do not have a sophisticated failure-handling mechanism. At the very minimum, code defensively. Expect failure at every interaction point with other Microservices. Ideally we do not want any of the Microservices to fail, but when they do (and they will), defensive coding helps keep your Microservice minimally affected. It is better to have 70% of your core functionality working than the whole service crashing down.

The Backends for Frontends

This is another concept I discovered by accident when I was hacking some code in Android Studio in 2013. The Backends for Frontends design is especially practical for mobile applications, although you can apply the concept to any Microservice.

In essence, the Backends for Frontends design backs your frontend application with a backend service whose primary objective is to serve that frontend application. This is a very good choice for mobile applications for several reasons.

[Figure: Backends for Frontends]

First, mobile applications are known for connectivity limitations. Instead of asking your mobile app to connect to 7 different Microservices to request various pieces of information and do the processing on the client (mobile) side, it makes more sense to have the Backend for Frontend service make the necessary server-to-server calls, process the data, and send only the necessary data back to the mobile client.

Second, the Backend for Frontend service also serves as a security gateway. Obviously you do not want to expose all your backend core services (for example, your CRM) to the public. You need to design your network so that your backend core services sit in a private network, then grant permission for your public-facing Backend for Frontend service to access that private network. By doing this, your backend core services are protected from public access, yet explicit permission is granted to the specific Backend for Frontend service. You can implement whichever security model you see fit in the Backend for Frontend service, which your client application must (and can) comply with.

[Figure: Backend for Frontend as a security gateway]

Third, a mobile application sits on the client side, which makes updating it more challenging. You want to minimize the logic on the client side. The Backend for Frontend service plays the perfect role for handling business logic: you can update the logic much more easily there than in the client application. In other words, your frontend application stays lightweight and is only responsible for UI presentation.

One Last Thought…

Microservices is a huge topic by itself. This article serves as a trigger point for you to get to know Microservices without going through a 400-page book. If you would like to learn more, there are many books available; I recommend Building Microservices by Sam Newman. I hope you have discovered something new in this article. Until next time!

PowerShell PathTooLongException


PathTooLongException can be triggered on various occasions when the path you are dealing with exceeds 260 characters, for example while copying an innocent file to a destination with a super long path. I hit PathTooLongException when I needed to clean up a build in my PowerShell build script.

For simplicity's sake, I extracted the script that triggers PathTooLongException as follows:
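Something along these lines, with an illustrative path:

```powershell
# Recursively delete the build output. This throws PathTooLongException
# as soon as an item's full path exceeds the 260-character limit.
$buildPath = "C:\inetpub\wwwroot\sc82rev160610\Data\packages"
Remove-Item -Path $buildPath -Recurse -Force
```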

Here is our friendly error message:

[Screenshot: PathTooLongException error message]

Here is an example of what a (really) long path looks like:

C:\inetpub\wwwroot\sc82rev160610\Data\packages\MyApplication.Experience.Profile.8.2.0.60r160607\serialization\core\MyApplication\client\Applications\ExperienceProfile\Contact\Main\PageSettings\UserTabs\Activity\ActivitySubtabs\Minor\Automations\AutomationsPanel\EngagementPlans\Item\EngagementPlan.item

We will not argue whether it is right or wrong to have such a long path. Often, we are given a piece of software to work on. We have little choice but to find the most efficient solution to deal with the situation.

PathTooLongException has been known since 2006. It was discussed extensively by the Microsoft BCL Team in 2007 here. At the point of writing, Microsoft has no plan to do anything about it. The general advice you will find is: do not use such long paths. (Wow! Really helpful!)

I dug around the web for a solution, but with little luck. Well, the truth is I wasn't happy with many of the solutions, or they were just unnecessarily complicated. I eventually wrote my own “hack” after some discussion with Alistair Deneys.

The trick is to set up a short symbolic link to represent the root of the working directory and pass the short symbolic link as the target directory to Remove-Item. We will use the mklink command to set up the short symbolic link.

The bad news is, mklink is a Command Prompt built-in. It is not recognized in PowerShell, so I would have to open Command Prompt to set up the symbolic link before proceeding with my PowerShell build script. As my build script is supposed to be fully automated without manual intervention, manually setting up the symbolic link using Command Prompt is obviously an ugly option.

The following is the command we need to execute to set up the short symbolic link using Command Prompt:
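Assuming we want a short link C:\s pointing at the long root shown earlier, it looks like this:

```bat
mklink /D C:\s C:\inetpub\wwwroot\sc82rev160610
```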

The good news is we can call Command Prompt from Powershell to set up the symbolic link fairly easily. How easy?
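This easy – one line does it (same illustrative link and target as above):

```powershell
# Invoke cmd.exe from PowerShell to create the directory symbolic link.
cmd /c mklink /D C:\s C:\inetpub\wwwroot\sc82rev160610
```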

Of course, we will need to ensure there isn't a folder or another symbolic link already named the same as our target symbolic link. (Otherwise you will get another exception.)

Once we create the symbolic link by executing the above command, a “shortcut” icon appears.

[Screenshot: symbolic link shown with a shortcut icon]

Prior to deleting the directory with a really long path, we set up the symbolic link on the fly to “cheat” PowerShell into thinking the path isn't really that long 😉

Obviously we don't want to leave any trace for another person to guess what this “shortcut” is about, especially since the symbolic link is no longer required after our operation. Hence, once we are done doing what we need to do (in this case, deleting some files), we need to clean up the symbolic link.

There isn't a dedicated command to remove a symbolic link, so we will use the bare-bones approach to remove the symbolic link / shortcut created earlier.

The interesting thing about removing a symbolic link / shortcut is that the first removal removes it as a shortcut, but the shortcut then becomes a real directory (What?!). Hence we need to remove it one more time! I don't know the justification for this behaviour and I don't have the curiosity to find out for now. What I ended up doing is calling Remove-Item twice.

Then, things got a little stickier, because Remove-Item throws ItemNotFoundException if the symbolic link or the actual directory is not there.

[Screenshot: ItemNotFoundException error message]

Theoretically this should not happen, because all we need to do is create a symbolic link and delete it 2 times and we are clear. However, reality is not always as friendly as theory 🙂 So we need to script our Remove-Item defensively. I ended up creating a function to handle the removal really carefully:
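A version along these lines does the job (the function name here is illustrative):

```powershell
# Only attempt removal when the target actually exists, so a missing
# item never raises ItemNotFoundException.
function Remove-ItemSafely {
    param([string]$Path)

    if (Test-Path $Path) {
        Remove-Item $Path -Recurse -Force
    }
}
```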

The full script looks as follows:
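Putting it all together, under the same illustrative names and paths:

```powershell
$longRoot = "C:\inetpub\wwwroot\sc82rev160610"
$link     = "C:\s"

function Remove-ItemSafely {
    param([string]$Path)
    if (Test-Path $Path) {
        Remove-Item $Path -Recurse -Force
    }
}

# Make sure nothing already occupies the link's name.
Remove-ItemSafely $link

# Create the short symbolic link via cmd.exe.
cmd /c mklink /D $link $longRoot

# Delete the long-path content through the short link.
Remove-ItemSafely (Join-Path $link "Data\packages")

# Clean up the link; it may need removing twice (first as a shortcut,
# then as the real directory it turns into).
Remove-ItemSafely $link
Remove-ItemSafely $link
```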

Here, my PowerShell happily helped me remove the files whose paths exceed 260 characters, and I have a fairly simple “hack” that I can embed into my PowerShell build script.

One Last Thought…

During my “hacking” time, I also tried Subst and New-PSDrive. You might be able to do the same using them. However, in my case, some of the dependencies didn't work well with them. For example, AttachDatabase could not recognize the path set by Subst. That's why I settled on mklink.

Obviously, this isn't a bulletproof solution to handle PathTooLongException. Imagine if the “saving” you gain by shortening the root is still not sufficient – as in, the path starting from the symbolic link still exceeds 260 characters. However, I have yet to encounter such a situation, and this “hack” has worked for me every time so far. If you have a simpler way to get this to work, feel free to drop me a comment. Enjoy hacking your PowerShell!

Continuous Integration and Continuous Delivery with NuGet


Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository. Each commit is then verified by an automated build and sometimes by automated tests.

Why is Continuous Integration important? If you have been programming in a team, you have probably encountered a situation where one developer commits code that breaks every developer's code base. It can be extremely painful to isolate the code that broke it. Continuous Integration serves as a preventive measure, building the latest code base to verify whether there are any breaking changes. If there are, it raises an alert, perhaps by emailing the developer who last committed, notifying the whole development team, or even rejecting the commit. If there are no breaking changes, CI proceeds to run a set of unit tests to ensure the last commit has not modified any logic in an unexpected manner. This process is sometimes known as Gated CI, and it guarantees the sanity of the code base within a relatively short period of time (usually a few minutes).

[Figure: Continuous Integration flow]

The idea of Continuous Integration goes beyond validating the code base a team of developers is working on. If the code base utilizes other development teams' components, it is also about continuously pulling the latest components to build against the current code base. If the code base utilizes other micro-services, then it is about continuously connecting to the latest version of those micro-services. On the other hand, if the code base output is being utilized by other development teams, it is also about continuously delivering the output so that other teams can pull the latest to integrate with. If the code base output is a micro-service, then it is about continuously exposing the latest micro-service so that other micro-services can connect and integrate with the latest version. The process of delivering the output for other teams to utilize leads us to another concept known as Continuous Delivery.

Continuous Delivery (CD) is a development practice where the development team builds software in such a manner that the latest version can be released to production at any time. The delivery could mean the software is delivered to a staging or pre-production server, or simply to a private development NuGet feed.

Why is Continuous Delivery important? In today's fast-changing software development, stakeholders and customers want all the features yesterday. Product Managers do not want to wait 1 week for the team to “get ready” to release. The business expectation is that as soon as the code is written and the functionality is tested, the software should be READY to ship. Development teams must establish an efficient delivery process where delivering software is as simple as pushing a button. A good benchmark is that the delivery can be accomplished by anyone in the team – perhaps by a QA after he has verified the quality of the deliverable, or by the Product Manager when he thinks the time is right. In complex enterprise systems, it is not always possible to ship code to production quickly. Therefore, complex enterprise systems are often broken into smaller components or micro-services. In this case, the components or micro-services must be ready to be pushed to a shared platform so that others can consume the deliverable as soon as it is available. This delivery process must be in a READY state at all times. Whether to deliver the whole system or a smaller component should be purely a business decision.

Note that Continuous Delivery does not necessarily mean Continuous Deployment. Continuous Deployment is where every change goes through the pipeline and automatically gets pushed into production. This can lead to several production deployments every day, which is not always desirable. Continuous Delivery allows the development team to deploy frequently but does not require it. In today's standard .NET development, a NuGet package is commonly used to deliver either a component or a whole application.

NuGet is the package manager for the Microsoft development platform. A NuGet package is a set of well-managed libraries and relevant files. NuGet packages can be installed and added to a .NET solution from a GUI or the command line. Instead of referencing individual libraries in the form of .dlls, developers can reference a NuGet package, which provides much better management of dependencies and assembly versions. In a more holistic view, a NuGet package can even be an application deliverable by itself.
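As an illustration (the package id, paths and feed URL are placeholders of mine, not taken from the examples below), a component can be described by a .nuspec file:

```xml
<?xml version="1.0"?>
<!-- MyCompany.ImageProcessing.nuspec (hypothetical component) -->
<package>
  <metadata>
    <id>MyCompany.ImageProcessing</id>
    <version>1.2.0</version>
    <authors>Image Team</authors>
    <description>Image processing component for the platform.</description>
  </metadata>
  <files>
    <file src="bin\Release\MyCompany.ImageProcessing.dll" target="lib\net45" />
  </files>
</package>
```

Packing it and delivering it to a feed is then a matter of:

```
nuget pack MyCompany.ImageProcessing.nuspec
nuget push MyCompany.ImageProcessing.1.2.0.nupkg -Source https://nuget.example.local/feed -ApiKey <key>
```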

Real life use cases

Example 1: Micro-services

In a cloud-based (software-as-a-service) solution, domains are encapsulated in their respective micro-services. Every development team is responsible for its own micro-services.

[Figure: Micro-services, one per domain and team]

Throughout the Sprint, developers commit code into TFS. After every commit, TFS builds the latest code base. Once the build completes, unit tests are executed to ensure existing logic is still intact. Several NuGet packages are then generated to represent the micro-services (WCF, web application, etc.). These services are deployed by a deployment tool known as Octopus Deploy to a Staging environment (hosted in AWS EC2) for QA to perform testing. This process continues until the last User Story is completed by the developers.

In a matter of clicks, the earlier NuGet packages can also be deployed to a Pre-production environment (hosted in AWS EC2) for other types of testing. Lastly, with the blessing of the Product Manager, the DevOps team uses the same deployment tool to promote the same NuGet packages that QA tested earlier into Production. Throughout this process, it is very important that there is no manual intervention by hand (such as copying a .dll or changing a configuration), to preserve the integrity of the NuGet packages and the deployment process. The entire delivery process must be pre-configured or pre-scripted to ensure it is consistent, repeatable, and robust.

Example 2: Components

In a complex enterprise application, functionality is split into components. Each component is a set of binaries (.dlls) and other relevant files. A component is not a stand-alone application; it has no practical use until it sits on the larger platform. Development teams are responsible for their respective components.

Throughout the Sprint, developers commit code into a Git repository. The repository is monitored by Team City (the build server). Team City pulls the latest changes and executes a set of PowerShell scripts. From the PowerShell scripts, an instance of the platform is set up, the latest code base is built, and the output is placed on top of the platform. Various tests are executed on the platform to ensure the component's functionality is intact. Then a set of NuGet packages is generated from the PowerShell script and published as the build artifacts. These artifacts are used by QA to run other forms of tests. This process continues until the last User Story is completed by the developers.
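A sketch of the kind of script Team City might invoke (the solution, test and nuspec names are placeholders; BUILD_NUMBER is supplied by Team City, and msbuild/vstest/nuget are assumed to be on the PATH):

```powershell
# Version the package from the Team City build counter.
$version = "1.0.$env:BUILD_NUMBER"

# Build the component against the platform and run the tests.
msbuild .\MyComponent.sln /p:Configuration=Release
vstest.console.exe .\bin\Release\MyComponent.Tests.dll

# Produce the NuGet package that Team City publishes as an artifact.
nuget pack .\MyComponent.nuspec -Version $version -OutputDirectory .\artifacts
```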

When QA gives the green light, and with the blessing of the Product Manager, the NuGet packages are promoted to ProGet (an internal NuGet feed). This promotion process happens in a matter of clicks. No manual intervention (modifying dependencies, versions, etc.) should happen, to ensure the integrity of the NuGet packages.

Once the NuGet package is promoted / pushed into ProGet, other components pull this latest component into their own builds. In Scaled Agile, a release train is planned on a frequent and consistent time frame; internal releases happen on a weekly basis. This weekly build always pulls all of the latest components from ProGet to generate a platform installer.

Summary

From the examples, we can tell that Continuous Integration and Continuous Delivery are fairly simple concepts. There is neither black magic nor rocket science in either use case. The choice of tools and approaches largely depends on the nature of the software we are building. While designing software, it is always a good idea to keep Continuous Integration and Continuous Delivery in mind to maximize team productivity and achieve quick, robust delivery.

Facebook Feed Widget in Sitecore Habitat


In the previous post we had an introductory view of what Sitecore Habitat is. In this post we will get our hands dirty creating a Facebook Feed Widget in Sitecore Habitat.

Out of the box, Sitecore Habitat contains a number of Features. One of them is “Social”, and we will use the Sitecore.Feature.Social project in the Habitat solution for the Facebook Feed Widget. Before we get started, let's examine what is available in the Sitecore.Feature.Social project.

[Screenshot: Sitecore.Feature.Social project structure]

From here we can see that Social comes with an implementation for a Twitter feed. If we visit our Habitat instance, we can see a Twitter feed displayed on the following page:

[Screenshot: Twitter feed on the Habitat Social page]

From here, we will create a Facebook feed to replace the Twitter feed.

As a first step, we create a new data template to cater for the fields of the Facebook feed.

[Screenshot: Facebook feed data template]

Then, we create an item in the Content Editor named FacebookDetails to hold the actual properties for the Facebook feed. In this example, we are using the Sitecore Facebook page for demonstration purposes.

[Screenshot: FacebookDetails item in the Content Editor]

The complete properties are as follows:

FacebookUrl: https://www.facebook.com/Sitecore
Type: timeline
Width: 350
Height: 600
Hide Facebook Cover: [unchecked]
Hide Face Pile: [checked]
Title: Sitecore Facebook Feed

Type is a Droplist. The Facebook feed supports multiple types, namely timeline, events and messages. By having Type as a Droplist, the user can select which kind of feed to fetch from Facebook. However, for simplicity's sake, you can also configure Type as a Single-Line Text; doing that, you will have to type in the Type field yourself each time you change the feed type. Not a showstopper, but something nice to have 🙂

Next, we will move to Visual Studio to tie the fields to a FacebookFeed View that we are about to create.

Look for Template.cs under the Sitecore.Feature.Social project. Add the following struct into the class. Do note that the GUIDs in your instance will be different from the GUIDs displayed here if you are creating the fields yourself; hence, remember to replace the GUIDs in these properties. If you install the Sitecore package provided at the end of this post, the GUIDs will remain the same.
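A sketch following the Habitat template-constant convention (all GUIDs below are placeholders to be replaced with the ones from your own data template):

```csharp
using Sitecore.Data;

public struct FacebookFeed
{
    // Placeholder GUID: replace with the ID of your FacebookFeed template.
    public static readonly ID ID = new ID("{00000000-0000-0000-0000-000000000001}");

    public struct Fields
    {
        // Placeholder GUIDs: replace each with the ID of your own field.
        public static readonly ID FacebookUrl = new ID("{00000000-0000-0000-0000-000000000002}");
        public static readonly ID Type = new ID("{00000000-0000-0000-0000-000000000003}");
        public static readonly ID Width = new ID("{00000000-0000-0000-0000-000000000004}");
        public static readonly ID Height = new ID("{00000000-0000-0000-0000-000000000005}");
        public static readonly ID HideFacebookCover = new ID("{00000000-0000-0000-0000-000000000006}");
        public static readonly ID HideFacePile = new ID("{00000000-0000-0000-0000-000000000007}");
        public static readonly ID Title = new ID("{00000000-0000-0000-0000-000000000008}");
    }
}
```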

This is the code that “ties” to the fields we created earlier in the Content Editor. The next thing we will do is create a view that consumes these properties' GUIDs.

Add a new FacebookFeed View in the Sitecore.Feature.Social project under the Social folder.

[Screenshot: FacebookFeed View in the Social folder]

Add in the following code to construct a Facebook Feed widget.
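A rough sketch using Facebook's Page Plugin markup, reading the fields by name for readability:

```cshtml
@model Sitecore.Mvc.Presentation.RenderingModel

<div id="fb-root"></div>
<script async defer src="https://connect.facebook.net/en_US/sdk.js#xfbml=1"></script>

@* Each data-* attribute maps to a field on the FacebookDetails item. *@
<div class="fb-page"
     data-href="@(Model.Item["FacebookUrl"])"
     data-tabs="@(Model.Item["Type"])"
     data-width="@(Model.Item["Width"])"
     data-height="@(Model.Item["Height"])"
     data-hide-cover="@(Model.Item["Hide Facebook Cover"] == "1" ? "true" : "false")"
     data-show-facepile="@(Model.Item["Hide Face Pile"] == "1" ? "false" : "true")">
</div>
```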

Note that the HTML attributes correspond to the fields we created earlier in the data template, and the values come from the FacebookDetails item we filled in the Content Editor.

Now, compile the solution (or project). Ensure that FacebookFeed.cshtml and Sitecore.Feature.Social.dll are copied over to your Sitecore instance. Again, if you are installing the package provided at the end of this article, you will not need to copy the files over.

At this point, we have successfully created a Facebook Feed Widget. However, the widget has not been configured to display on any page of our instance. Now let's create and configure the necessary items in the Content Editor for our Facebook Feed Widget.

Create a view rendering item to represent the View (.cshtml) we have created earlier.

[Screenshot: view rendering item for FacebookFeed]

Add the FacebookFeed view rendering to the layout we want to display the View in. For simplicity's sake, we use “col-wide-1”, as this layout is readily available on the Social page.

[Screenshot: col-wide-1 layout]

Next, we move to the Experience Editor to replace the Twitter Feed with the Facebook Feed Widget we have just created.

Navigate to the Social page using the breadcrumb and remove the existing Twitter Feed.

[Screenshot: removing the Twitter Feed]

Add the new Facebook Feed into the page. Check the “Open the Properties dialog box immediately” option:

[Screenshot: Add Rendering dialog]

Click on Browse under Data Source:

[Screenshot: Browse under Data Source]

Select FacebookDetails and click OK

[Screenshot: selecting the FacebookDetails item]

In your browser, navigate to http://habitat.local/en/Modules/Feature/Social and the Facebook Feed Widget will appear on the page:

[Screenshot: Facebook Feed Widget on the page]

Obviously, this is the most perfect Facebook Feed Widget we can build (just kidding!). You are welcome to modify any steps to cater for your own Sitecore instance.

The objective of this article is to guide you through the steps of creating a Facebook Feed Widget in Sitecore Habitat. If you would like to have a Facebook Feed Widget in Sitecore Habitat without going through (most of) the steps, feel free to download and install the ready-made package.

Download Sitecore-Habitat-Facebook-Feed-Widget package.

Sitecore Habitat


Sitecore released Habitat on 18th December 2015. Habitat is a Sitecore solution example built on a modular architecture. The Habitat architecture and methodology focus on a few main concepts: Simplicity – a simple, consistent and discoverable architecture; Flexibility – the ability to change and add quickly and without worry; and Extensibility – the ability to simply add new features without a steep learning curve.

Sitecore Habitat is a new “demo” site provided by Sitecore. Prior to it there was LaunchSitecore.net, another Sitecore “demo” site where the public can evaluate what Sitecore is capable of. The major difference between LaunchSitecore and Habitat is that LaunchSitecore is designed to let the public explore Sitecore's core functionality, while Habitat is a modular architectural solution for building features.

To get a quick look and feel of what Habitat is about, you can download the Sitecore Habitat package and install it on your local Sitecore instance. Once installed, your Sitecore instance's home page will look like this:

[Screenshot: Sitecore Habitat home page]

Now, let's dive a little deeper into the Habitat code base. You can clone a new repository of Habitat from GitHub.

Once cloned, you will see in Visual Studio that there are 37 projects in the solution. The “magic” of Habitat sits in the Feature folder. 14 features are provided out of the box (at the point of writing): Accounts, Demo, Identity, Language, Media, Metadata, Multisite, Navigation, News, PageContent, Person, Search, Social and Teasers.

[Screenshot: Habitat Feature projects in Visual Studio]

The idea of the Habitat architecture is that you can develop or use each feature independently. If one feature is broken, you can simply not use it, and it will not affect the rest of the features. Each feature has a consistent architecture, which makes development practices more predictable.

If you are into developing a feature in Habitat, I highly recommend the videos prepared by Thomas Eldblom:

Sitecore Habitat Architecture – Introduction

This video explains how Sitecore Habitat is a solution framework that focuses on streamlining the development process and optimizing productivity. It also explains how productivity increases when developers simplify coupling.

Sitecore Habitat Architecture – Modules

This video explains the closure principles in Sitecore. It digs deeper into the Habitat code base using Navigation as an example, explaining the Views, Controllers and Data Model, how the items are structured, and how they get built.

Sitecore Habitat Architecture – Layers

This video focuses on the layered architecture and how modules are coupled to build the entire solution. It also explains how the different layers fit together as a solution and how different modules and layers interact with each other.

This is a quick introduction to Sitecore Habitat. If you would like to find out more, feel free to visit the Habitat Wiki page.

In the next post, we will dive a little deeper into the code base to build a Facebook widget in Sitecore Habitat.

Scrum Part 3 – The Team


A Scrum team consists of 3 main roles: Product Owner, Scrum Master and The Team. The roles are closely inter-related. It is recommended that Scrum team members physically sit together in the same location whenever possible.

Product Owner

The Product Owner (PO) is the person who is ultimately responsible for the product, which could be a system, a small piece of software or even a component. He often understands the historical evolution of the product, is well versed in the product's current functionality, and defines the vision of what the product will eventually become.

As the ideology of Scrum is to build a potentially shippable product at the end of every Sprint, the PO is the person who ultimately decides whether or not to ship the product. Whatever decision the PO makes, he is answerable to the business at large. The decisions made by the Product Owner are often influenced by other stakeholders (or subject matter experts) and the business owner.

Ideally, the PO is the person closest to the customers (both internal and external). He understands the business challenges the customers are facing, and he communicates these challenges to The Team by writing user stories so that The Team can build the necessary features.

A user story follows the following format:

As a <type of business user>,

I want <to perform certain task(s)>

so that I can <achieve some goals / benefits / values>

For example:

As a content editor

I want to be able to update my company website content without changing the source code

so that I can frequently let visitors know about the latest promotions

During the Sprint Planning and Sprint Grooming sessions, the PO provides The Team with as much detail as possible about the stories. By this point, the PO would have finished writing each story and identified its priority, so that The Team can determine which story to pick first in Sprint Planning.

Some Product Owners who have a strong technical background might choose to write code together with The Team. However, they might not be able to commit to complicated stories or a large number of stories due to their other obligations within the business. There are also Product Owners who do not code but participate in testing, working very closely with The Team.

In some organization structures, a business analyst plays a portion of the Product Owner's role because the Product Owner is not always accessible. Issues such as time zone differences or other business commitments might not allow the Product Owner to be highly available to The Team. The business analyst steps in to clarify The Team's questions; whenever the business analyst is in doubt, he clarifies the matter with the Product Owner.

Scrum Master

The Scrum Master is responsible for resolving any problems for the team members throughout a Sprint. The Scrum Master is expected to understand the different Scrum ceremonies better than the rest of the team members so that he can make the right call when conflicts arise. The Scrum Master is also the person who enforces the prescriptive guidelines in Scrum to ensure the Scrum team reaps the most value from adopting Scrum.

Having said that, the Scrum Master does not have to be the most senior or most technically competent person in the Scrum team. In fact, the Scrum Master might not be technical at all. The Scrum Master's main responsibility is to clear obstacles for The Team and enforce the Scrum guidelines throughout the Sprint.

Now, let's walk through the Scrum Master's specific responsibilities throughout the Sprint…

Sprint Grooming

Prior to Sprint Grooming, the Scrum Master has to ensure the Product Owner has finished writing every story and that a priority has been assigned to each one.

The Scrum Master has to call for Sprint Grooming a few days prior to Sprint Planning. A few days of buffer should be allocated because Sprint Grooming sometimes cannot be completed in a single session. During Sprint Grooming, the Scrum Master needs to ensure The Team fully understands the stories written by the Product Owner and has all the necessary resources to kick-start the Sprint.

After Sprint Grooming, the Scrum Master has to call for another session for The Team to estimate the story points for each story. Often in a Scrum team there are 1 or 2 team members who are more senior or have deeper domain knowledge of the product. They will indirectly influence the story point estimations given by other team members, as the less senior members tend to look up to them. The Scrum Master has to spot this early and encourage the junior team members to provide story point estimations independently and to justify why a story should have a higher (or lower) story point. The Scrum Master is responsible for ensuring all team members work in harmony rather than being dominated by senior team members.

Sprint Planning

During Sprint Planning, the Scrum Master facilitates the Scrum team in finalizing the stories to be committed to in the coming Sprint.

From the Scrum perspective, stories are always committed to in accordance with the priority defined by the Product Owner. Among all the prioritized stories, together with their story points, The Team commits to the stories with the highest priority and makes a cut-off at the point of hitting The Team's capacity (velocity). Occasionally, 1 or 2 stories are reshuffled for the best match of priority and story points to fit the team capacity within one Sprint. This has to be done with the acknowledgement of the Product Owner.

The Scrum Master's responsibility during Sprint Planning is to ensure the above guidelines are followed. The Scrum Master has no say over whether to commit or not to commit to a certain story; the decision depends entirely on the discussion between the Product Owner and The Team. Again, the Scrum Master's role is to facilitate this process and to conduct a smooth planning session.

Daily Stand Up

Daily Stand Up is a quick 15-minute catch-up for The Team. The Scrum Master has to ensure the following questions are answered by each team member:

  1. What did I accomplish yesterday?
  2. What will I do today?
  3. What impediment am I facing?

On Question 1: Under healthy progress, every day each team member would have completed some task contributing to the completion of the stories. However, if the Scrum Master notices a team member has not been able to accomplish anything in the last working day, it is a sign that the team member is stuck on a task. The Scrum Master has the responsibility to find out the cause and assist the team member in resolving it.

On Question 2: Under healthy progress, every day each team member picks up a new task or continues working on the remaining tasks from yesterday. However, if the Scrum Master notices a team member does not have a next task to move to, it is a sign that the team has committed to fewer stories than The Team's velocity allows. If such a situation persists, the Scrum Master has to consider increasing the team velocity in the next Sprint and taking on more stories. On some occasions, a team member dares not pick up a certain task because it is too difficult, he lacks the domain knowledge, or he is unsure of the solution; the Scrum Master has to identify the appropriate resources and make the necessary arrangements to help the team member proceed with the story.

On Question 3: Under most circumstances, team members will report that there is no impediment. When a team member raises an impediment, the Scrum Master should not dismiss it lightly and should pay serious attention to resolving it. It is the Scrum Master's responsibility to use his resourcefulness to make the necessary arrangements to get the impediment resolved as soon as possible, allowing The Team to continue their progress.

The Scrum Master should not dominate the Daily Stand Up. A good Scrum Master maintains his silence during the Daily Stand Up but is also quick to speak up when team members divert from its objective.

Sprint Review

Sprint Review is a session driven by The Team to demonstrate the deliverable (completed stories) to the Product Owner. The Scrum Master's involvement during Sprint Review is often minimal.

Sprint Retrospective

Sprint Retrospective is the session to reflect on what happened in the last Sprint. The Scrum Master needs to get The Team to cover the following topics:

  1. What went well?
  2. What went wrong?
  3. What can be improved?

During the Sprint Retrospective, the Scrum Master has to maintain a safe and harmonious environment for team members to speak up.

The Scrum Master also has to ensure the team members understand that the purpose of the Sprint Retrospective is to provide continuous improvement for the coming Sprints rather than to find fault for what happened in the last Sprint. Therefore, finger-pointing should be avoided. The Scrum Master has to ensure The Team carries out the session constructively, identifying the root cause of “what went wrong” and openly identifying “what can be improved” in future Sprints.

The Scrum Master has to take note of the items discussed during the Sprint Retrospective. Sometimes The Team treats the Sprint Retrospective as a good chat but forgets what was agreed to be done by the next Sprint. In this situation, the Scrum Master has to step in to remind The Team of the items that were agreed upon, and The Team should honor the agreement made during the Sprint Retrospective.

In a newly established Scrum team, The Team might be uncomfortable speaking up during the Sprint Retrospective. A workaround is for the Scrum Master to ask every team member, in turn, to give at least 1 point on each of the 3 topics. This is also useful for a mature Scrum team, as team members often have many points to bring up; having team members take turns to speak acts as information flood control.

Across all the ceremonies in Scrum, a Scrum Master should NEVER dominate any session. In fact, the Scrum Master is known as the servant leader: the person who serves the team to ensure all team members are able to carry out their tasks efficiently.

In a mature Scrum team, the Scrum Master often plays a passive role, monitoring that The Team is following the Scrum guidelines. However, in a newly established Scrum team (or with team members who are less familiar with Scrum), the Scrum Master has to play an active role to ensure the team members follow the appropriate Scrum guidelines. If team members divert from the guidelines, the Scrum Master has to be quick to speak up to enforce them and explain their purpose.

The Team

The Team is the main driver in a Sprint. The Team typically consists of Developers, Quality Assurance, Architects, Business Analysts, and sometimes even Graphic Designers and Database Administrators, working together to deliver the potentially shippable product at the end of the Sprint.

The Team is cross-functional: it is able to step into different functional areas to ensure a user story is fully implemented. For example, if The Team is handling the Accounting module in a system and a user story involves the Sales module, The Team will be able to step into the Sales module and do the necessary work.

In an ideal Scrum team, everyone in The Team is equal. There is no difference in title, role or seniority. Scrum does not recognize whether a member is a Developer, Quality Assurance, Architect, Business Analyst or Designer. Everyone in The Team is equally qualified to pick up any story, and everyone should be able to perform design work, coding, testing, writing documentation, etc. The idea is to take on the full stack of tasks in a user story. However, this is often impractical in many of today's organizations.

In many of today's organizations, the hiring process is role-based. For example, the hiring manager might put up a job vacancy looking for a “Developer with 1-3 years of ASP.NET & C#.NET experience” or a “Quality Assurance with 1 year of experience building automation using Selenium”. Many development teams are not ready to embrace the ideal Scrum ideology where every member can do the full stack of tasks in a story.

With that, a less drastic and more practical approach to embracing the Scrum value of “everyone is equal” is to actively encourage team members to step into different roles within The Team.

The idea is to allow team members to step into different roles and do tasks they are not “officially” responsible for. For example, an Architect is encouraged to actively code and implement user stories; a Developer is encouraged to help out in testing; a Business Analyst is encouraged to help out in testing; a Quality Assurance is encouraged to help out in documentation; and so on.

When team members step into different roles within the team, they learn to see development tasks from different perspectives and to communicate from the perspective of different roles. Not only will this keep team members' jobs interesting, it also enhances their career growth through a wider scope of development experience.

Let's walk through the main responsibilities of The Team during the different phases:

Sprint Grooming – All members of the team should gain a good comprehension of every story. All questions related to user story implementation should be clarified by team members. Junior members tend to rely on the senior members to “figure out” the story; this should be avoided. Again, every team member is equal in a Scrum team. Every team member should understand every story as best they can, because any team member is expected to be capable of picking up the story later to implement it.

Estimation – All team members have an equal right to decide what story points a story should have. Junior members tend to look up to senior members and follow their estimations; this should be avoided. Every team member is encouraged to provide an independent estimation. When estimations vary significantly, the team member is encouraged to justify why the given story point is significantly higher or lower. A junior team member does not necessarily know less about the specific scope of a story; the purpose of such explanations is to allow the rest of the team to identify aspects they might have overlooked or been unaware of.

Sprint Planning – Assuming Sprint Grooming was fruitfully conducted and estimations were agreed earlier, Sprint Planning will be a smooth process in which The Team formally commits to a set of user stories up to The Team's capacity. The Product Owner might shuffle a few stories around, and The Team will have to provide their insight into whether such an arrangement is feasible.

Daily Stand Up – Once the Sprint starts, team members pick up stories and implement them full stack. On a daily basis, team members gather at the same time to report to the rest of the team on the following questions:

  1. What did I accomplish yesterday?
  2. What will I do today?
  3. What impediment am I facing?

The purpose of the Daily Stand Up is for team members to be aware of what the others are working on, to avoid redundant effort. It is also a platform for The Team to expose any problem early enough for the whole Scrum team to act on.

Sprint Review – is the time for team members to proudly demonstrate what they have accomplished in the Sprint. As the full stack of tasks for a particular story is implemented by 1 or 2 team members, those team members naturally have strong ownership of the story, so it is best for them to demonstrate the completed story to the Product Owner during the Sprint Review. The Product Owner provides feedback on whether he is satisfied with the deliverable, and team members take the necessary actions based on that feedback.

Sprint Retrospective – is the time for team members to reflect on what happened throughout the last Sprint by answering:

  1. What went well?
  2. What went wrong?
  3. What can be improved?

Team members are encouraged to acknowledge other team members who have done well. This is not only for senior members to recognize junior members' efforts; junior members are also encouraged to acknowledge team members who did a good job in the last Sprint.

When addressing what went wrong and what can be improved, team members are encouraged to be courageous enough to speak up yet compassionate while speaking. A Scrum team is very much like a working “family”. If there are problems that need to be addressed in a team, it is best for the issues to be resolved as soon as possible, in a humane manner. Improvements such as processes, resource availability and better approaches to dealing with issues are to be discussed during the Sprint Retrospective.

This concludes the series of Scrum articles. If you find the information useful, please share it. If you would like to discuss a specific aspect of Scrum, or have a scenario you would like my advice on, please leave me a comment or use the Contact Daniel form to send me a personal email.

JSON Data in SQL Server


The rise of NoSQL databases such as MongoDB is largely due to the agility of storing data in a non-structured format: a fixed schema is not required, unlike in traditional relational databases such as SQL Server.

However, a NoSQL database such as MongoDB is not a full-fledged database system; it is designed for very specific use cases. If you don't know why you need NoSQL in your system, chances are you don't need it. Those who find it essential often use a NoSQL database only for a certain portion of their system and use an RDBMS for the remaining parts, which have more traditional business use cases.

Wouldn't it be nice if an RDBMS could support a similar data structure – the ability to store flexible data formats without altering database tables?

Yes, it is possible. For years, software developers have been storing various JSON data in a single table column, then using a library such as Newtonsoft.Json within the application (data access layer) to deserialize the data and make sense of it.

Reading / Deserializing JSON
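The code in question looks something like this (a minimal sketch with a hypothetical Book type; the inline JSON stands in for a value read from the table column):

```csharp
using Newtonsoft.Json;

public class Book
{
    public string Name { get; set; }
    public string[] Genres { get; set; }
}

public class Program
{
    public static void Main()
    {
        // In reality this JSON comes out of a table column.
        string json = @"{ ""Name"": ""Design Patterns"", ""Genres"": [""Software"", ""Reference""] }";

        // The whole document is deserialized just to read one field.
        Book book = JsonConvert.DeserializeObject<Book>(json);
        Console.WriteLine(book.Name);
    }
}
```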

This works. However, the JsonConvert.DeserializeObject method is working extremely hard, deserializing the whole JSON document only to retrieve a simple field such as Name.

Imagine a requirement to search for certain Genres in a table with 1 million rows of records: the application code would have to read all 1 million rows and then perform the filtering on the application side. Bad for performance. Now imagine a more complex data structure than the example above…

The search would be much more efficient if developers could pass a query (SQL statement) for the database to handle the filtering. Unfortunately, SQL Server does not support querying JSON data out of the box.

It was impossible to directly query JSON data in SQL Server until the introduction of a library known as JSON Select. JSON Select allows you to write SQL statements to query JSON data directly from SQL Server.

How JSON Select Works

First you need to download an installer from their website. When you run the installer, you need to specify the database into which you wish to install the library:

[Screenshot: JSON Select installer targeting a database]

What this installer essentially does is create 10 functions in the targeted database. You can see the functions at:

SSMS > Databases > [YourTargetedDatabase] > Programmability > Functions > Scalar-valued Functions

[Screenshot: Scalar-valued Functions created by JSON Select]

Next, you can start pumping some JSON data into your table to test it out.

I created a Student table with the following structure for my experiment:

[Screenshot: Student table structure]

In my StudentData column, I entered multiple rows of records with the following structure:
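Judging from the queries below, each row holds a document along these lines (the values are illustrative):

```json
{
    "Name": "Alice",
    "Age": 21,
    "Address": {
        "City": "Kuala Lumpur"
    }
}
```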

For demonstration purposes, I entered multiple rows as follows:

[Screenshot: sample Student rows]

If you want to write a simple statement to read the list of student names from the JSON data, you can simply write:
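Something like the following, where dbo.JsonNVarChar450 stands in for whichever string-returning scalar function the installer created (check the exact names under Scalar-valued Functions in your database):

```sql
-- Stand-in function name; substitute the actual scalar-valued
-- function installed by JSON Select.
SELECT dbo.JsonNVarChar450(StudentData, 'Name') AS [Name]
FROM dbo.Student;
```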

You will get the following result in SSMS:

[Screenshot: query result showing student names]

How about a more complex query? Does it work with aggregate functions?

If you want to find out how many students come from each city and what their average age is, you can write your SQL statement as follows:
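Again, the original statement is a screenshot; with the same hypothetical function names as above, the query follows this shape:

    -- Hypothetical function names; the library's real functions wrap
    -- JSON values so they can be used in normal T-SQL.
    SELECT dbo.JsonNVarChar(StudentData, 'City') AS City,
           COUNT(*) AS StudentCount,
           AVG(dbo.JsonInt(StudentData, 'Age')) AS AverageAge
    FROM Student
    GROUP BY dbo.JsonNVarChar(StudentData, 'City');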

You will get the following result in SSMS:

complex-query

It appears the library allows you to query any JSON data in your table column using normal T-SQL syntax. The only difference is that you need to wrap the values you want to retrieve in the predefined scalar-valued functions.

A Few Last Thoughts…

  1. The good thing about this library is that it allows developers to have a hybrid storage model (NoSQL and relational) under one roof – minus the deserialization code at the application layer. Developers can continue using the classical RDBMS for typical business use cases and leverage the functions provided by the library to deal with JSON data.
  2. The bad thing about this library is that it lacks a proven track record and commercial use cases to demonstrate its robustness and stability.
  3. Although the library is not free, the license cost is relatively affordable at AU$50. The library is free for evaluation.
  4. SQL Server 2016 provides native support for JSON data (see the sketch after this list). This library is only useful for SQL Server 2005 to 2014, where upgrading to 2016 is not a feasible option.
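For comparison, on SQL Server 2016 and later the same aggregate query can be written with the built-in JSON_VALUE function – a minimal sketch against the same assumed Student table:

    -- Native JSON support in SQL Server 2016+; no third-party library needed.
    -- JSON_VALUE returns nvarchar, hence the CAST before averaging.
    SELECT JSON_VALUE(StudentData, '$.City') AS City,
           COUNT(*) AS StudentCount,
           AVG(CAST(JSON_VALUE(StudentData, '$.Age') AS INT)) AS AverageAge
    FROM Student
    GROUP BY JSON_VALUE(StudentData, '$.City');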

Scrum Part 2 – Ceremonies

Standard

scrum-overview

In Scrum Development – Part 1, we discussed the fundamentals of Agile development, covering the 4 values of the Agile Manifesto. A lacking understanding of the Agile Manifesto often leads team members to practice Scrum ceremonies for the sake of practicing – without realizing the purpose of each ceremony. If your intention is to run a successful Scrum team, I highly recommend grasping a solid understanding of the 4 Agile values before moving on to the prescriptive ceremonies in Scrum.

Guidelines for Scrum Team

  1. Each Sprint typically runs for 2 weeks. The duration of a Sprint can range from 1 week to 4 weeks. However, through experience, a 1-week Sprint is too short and leads to rushing and too much overhead, while a 3-week or 4-week Sprint is too long and leads to unnecessary idle time and big-bang implementation and deployment, which makes the Sprint harder to manage. A 2-week Sprint is ideal for most projects.
  2. An ideal Scrum team consists of 1 Product Owner, 1 Scrum Master and 7 (+/- 2) team members. A team with fewer than 5 members is too small to build anything significant, while a team with more than 9 members is a little too large to manage. Depending on the scope of the project and resource availability in the organization, 7 (+/- 2) team members is not a stone-carved law but rather a best-practice guideline.

Prescriptive Ceremonies

Sprint Grooming

grooming

Sprint Grooming is a pre-Sprint activity that takes place before the Sprint starts. During the grooming session, team members clarify the stories written by the Product Owner. The stories (or at least the majority of each story) must have been completed by the Product Owner beforehand, so that team members can clarify implementation issues (functional, technical, or both).

The following is the format of how a story should be written:

As a <type of business user>,

I want <to perform certain task(s)>

so that I can <achieve some goals / benefits / values>

During this stage, the team might need some time to carry out research to evaluate whether a story can be implemented and how much effort is required. On the other hand, the Product Owner may be confronted with edge scenarios raised by the team, and may need time to check with the respective stakeholders to finalize a decision. Although such situations are expected, a matured Scrum team and a seasoned Product Owner who understands the business use cases well would keep such get-back-to-you-later discussions to a minimum. Sprint Grooming can take place over multiple sessions, depending on how productive each session goes.

Estimation

scrum-poker

Before Sprint Planning begins, team members should have all questions in the stories answered and the story points finalized for each story. Note that estimation should NOT be an hour estimation. Estimation done in hours defeats the purpose of Scrum; Scrum estimation is done using relative sizing. Tools such as Poker cards or T-shirt sizes are good options for measuring the complexity of a story. For relative estimation to work, team members must have common ground on the complexity of the base point of 1. For example, if 1 point is equivalent to X amount of complexity, a story that carries 5 points is 5 times more complex than the 1-point story. 5 times more complex does not necessarily mean it takes 5 times the effort to complete. Again, the base point of 1 is vitally important for forming a shared understanding among team members. Without defining the base point of 1, story point estimation would be meaningless.

If you have a smartphone, you can download a Scrum Poker Cards app from the Google Play Store or iTunes.

Sprint Planning

sprint-planning

During Sprint Planning, team members and the Product Owner finalize which stories are committed to the Sprint. Based on the team velocity, total story points and the priority of each story, the Product Owner (with the feedback and influence of team members) selects the best combination of stories to commit to the Sprint. Note that requirement clarification should be kept to a minimum during Sprint Planning, as it should have been completed during the earlier Sprint Grooming sessions. The more requirements are clarified during Sprint Planning, the less accurate the story point estimation becomes.

Daily Stand Up Meeting

daily-standup

The purpose of the daily stand up is to provide a platform for the team to expose potential problems early enough. During the stand up meeting, 3 questions are covered by each team member:

  1. What did I accomplish yesterday?
  2. What will I do today?
  3. What impediment / obstacle am I facing?

Question #1 is designed to ensure the team member accomplished what he committed to the rest of the team on the previous working day. Having the team member proactively report his progress is a much more motivating approach than having a traditional project manager ask “Have you completed Task X?”. It gives the team member a sense of pride to accomplish tasks and announce them to the rest of the team.

Question #2 is designed to ensure team members keep moving forward every day throughout the Sprint. By having team members commit to stories of their own will, they naturally have a higher sense of ownership and accountability for the stories they have committed to.

If everyone is moving forward together, then success takes care of itself. -Henry Ford

The purpose of Questions #1 and #2 is to ensure the team members are being productive every day. When the Scrum Master notices a team member has been stuck on the same task for longer than expected, it is a potential signal that the team member is experiencing difficulty. A seasoned Scrum Master would spot this early and resolve it quickly, rather than getting unpleasant surprises at the end of the Sprint.

Question #3 is designed for team members to raise any obstacles they are facing. Often, in a less matured Scrum team, all members say “No impediment” during the stand up meeting. A “No impediment” response does not necessarily mean there is no problem; team members are sometimes too shy to raise obstacles during the stand up. This is a common observation in newly established Scrum teams, or when a new team member joins the team. In a more matured Scrum team, team members proactively raise obstacles during stand up and quickly resolve them.

An important note to remember during the daily stand up is the “people” element. The focus should be placed on the team members, to allow them to keep moving forward. A poorly run Scrum team shifts the focus away from the team members to the status of task completion. This happens because some traditional project managers are not familiar with the Scrum approach and feel more comfortable getting task status updates. Through experience, focusing too much on task status updates is a less efficient way to manage a Scrum team, and often leads to 11th-hour additional tasks because team members are “trained” to close a task rather than to complete a task. A more efficient approach is to focus on the team members’ ability to progress. Highly motivated team members go all the way to help each other and do whatever it takes to complete all committed stories. At times, the team would even resolve obstacles before the Scrum Master is aware of them.

One of the most important roles of the Scrum Master is to create a supportive and safe environment for team members to speak up during the daily stand up. Blaming a particular team member or finger-pointing must be avoided at all costs. However, that does not mean being lenient when dealing with challenges; the guideline is to be critical on issues and kind to people. The Scrum Master should not dominate the daily stand up discussion. The Scrum Master plays the role of a “referee”, ensuring the team members get the most value out of the Scrum ceremonies. A Scrum Master in a matured Scrum team would merely serve as an observer and only speak up when help is needed. A Scrum Master is also known as a servant leader. (More on the Scrum Master’s role will be covered in Part 3 of the Scrum article series.)

15 minutes

The daily stand up is designed to run for 15 minutes or less – that’s why team members should physically stand up during the meeting. Sitting comfortably encourages team members to drag the meeting out longer than required. The act of standing up encourages team members to finish the meeting in 15 minutes or less (because it would be too painful to stand for too long).

Often, team members are tempted to discuss a specific issue with another team member during stand up, which can be a waste of the other team members’ time. Story-specific discussions should be taken offline among the team members working on the story.

Although having all team members physically stand up together might not be feasible when remote team members are involved, the fundamental objectives of the daily stand up should not be diverted by physical limitations.

Sprint Review

Sprint Review

Sprint Review is the activity where the whole Scrum team demonstrates the completed stories to the Product Owner. The team member who implemented a story is responsible for demonstrating it. Stakeholders from various departments often join the review / demo session to understand the newly implemented stories.

The objective of the Sprint Review is to present incremental product changes to the Product Owner and stakeholders, so that there are no big-bang surprises. Imagine a non-Scrum approach to development (say, Waterfall): a team works on a module for 6 months, and when they finally demonstrate the working module to the customers, the customers respond, “No, this is not what we want”. 6 months of development effort would go down the drain. The Sprint Review serves as a milestone checkpoint allowing the Product Owner and stakeholders to confirm that the team is moving in the right direction and that they are happy with each milestone.

At the end of the Sprint Review, the Product Owner provides feedback on whether he is happy with the work and decides whether to ship the product. Remember, in Scrum, at the end of every Sprint there must be a potentially shippable product. However, whether to ship the product or not is entirely up to the Product Owner.

Sprint Retrospective

SprintRetrospective

Sprint Retrospective is the meeting where team members reflect on how well the Sprint went and discuss whether there is any room for improvement.

The retrospective includes 3 main questions for discussion:

  1. What went well?
  2. What went wrong?
  3. What can be improved?

Question #1 is designed to acknowledge the good things that happened during the Sprint. A team member who went the extra mile to help another team member should be acknowledged. An unforeseen obstacle that was proactively resolved by the team should be acknowledged. The idea is to recognize the team members who did an outstanding job during the Sprint and to encourage the team to continue the good teamwork.

Question #2 is designed to identify areas for improvement. No problem should ever be swept under the carpet. An unpleasant incident caused by process inefficiency should be highlighted. An external interruption that reduced team productivity should be highlighted. The idea is to identify the pain points of the Scrum team, so that the team can optimize and increase its velocity.

Question #3 is a continuation of Question #2. The issues identified earlier should be discussed and a solution identified by the team, so that the same problems do not recur in the coming Sprints. The idea behind Questions #2 and #3 is to strive for continuous improvement in the Scrum team. No matter how good a Scrum team is, there is always room for improvement. This concept is also known as Kaizen (改善), which means change for the better.

Often, in a less matured Scrum team, team members are less likely to speak their minds. The Scrum Master could consider having every team member take turns answering all three questions above, to get team members more comfortable speaking during the Sprint Retrospective. Even a team member saying something like “Same as the other guy” is a good start. In more matured Scrum teams, team members proactively prepare a list of topics (along the lines of the 3 questions) to bring into the Sprint Retrospective for discussion.

One Last Thought…

While Scrum ceremonies are the activities that make a Scrum team, a team that practices Scrum ceremonies for the sake of practicing gains little value from running the project in Scrum. It is important to remember the underlying objective of each ceremony in order to get the most out of these carefully designed ceremonies. In Part 3, we will discuss The Tools and The Team (people) in Scrum.

Scrum Part 1 – Agile Manifesto

Standard

agile-manifesto

Scrum is an iterative and incremental Agile software development methodology for managing product (software) development. We cannot discuss Scrum without understanding Agile. In fact, many people cannot run Scrum successfully because they do not have a clear fundamental understanding of the Agile Manifesto. When we understand the ideas behind Agile, all the prescriptive activities / ceremonies in Scrum make better sense. First, let’s look at the 4 values of the Agile Manifesto:

  1. Individuals and interactions over processes and tools
  2. Working software over comprehensive documentation
  3. Customer collaboration over contract negotiation
  4. Responding to change over following a plan

Individuals and interactions over processes and tools

Individuals and interactions over processes and tools

In a development team, Agile encourages team members to interact with each other – talking face to face, instant messaging, or email – instead of depending heavily on project management tools such as Trello, Asana, JIRA, etc. For example, if a team member notices a defect in the code written by another team member, he is encouraged to talk to the other team member directly (perhaps to discuss the root cause and possible solutions) instead of creating a ticket, assigning it to him in JIRA and calling it a day.

joke

When individual interaction takes precedence, team members build the bond to cover each other’s backs. When processes and tools take precedence, team members only look at their own plates and teamwork rarely happens. Having said that, it does not mean processes are not important. They are important to facilitate the operation, but processes and tools should not be the center of the development team. It is the team members who build the product (software); the processes and tools are merely there to help them get the job done.


Working software over comprehensive documentation

working-software-over-comprehensive-documentation

Logically speaking, would you rather have working software, or a pile of comprehensive documentation? Theoretically, it is best to have both. Unfortunately, projects often lack the most precious commodity – time. If the team has to choose one, it should always be working software (duh?). First, documentation is often outdated, and team members are unconsciously trained not to rely on it because of the perception that documentation is inaccurate anyway. Second, it is much easier to navigate working software than a 200-page functional specification document. Documentation is often the poor fella sitting on the shared drive that no one looks at. Always choose building working software over writing comprehensive documentation when facing the challenge of limited time. After all, what is the purpose of documentation if the software is not working?

Customer collaboration over contract negotiation

Customer collaboration over contract negotiation

This might not be directly relevant to the team members who are actively building the product (software); it is a mindset the business decision makers should have. When the business negotiates a contract with a customer (internal or external), a set of objectives or functional requirements is often agreed upon. However, with today’s speed of change in the market, circumstances tend to change faster than the business can anticipate. It is impractical for the business to re-negotiate a new contract whenever a change takes place. Both the development team and the business must understand that collaboration is what makes great software. The customer has the understanding of the market to describe the requirements, while the development team has the skill to transform requirements into features.

Customer-collaboration-over-contract-negotiation-joke

Collaboration with customers is one of the fastest ways to make customers happy. There are some customers from hell, but those are exceptions. Generally, customers find greater satisfaction when they are aware that the development team puts the customers’ business success first, ahead of making money from them through a contract.

Responding to change over following a plan

Responding to change over following a plan

If Plan A doesn’t work, don’t worry – the alphabet has 25 more letters. The ability to respond to change is the core of Agile. Thanks to today’s speed of change, the only constant in the market is change. If the internal speed of change is slower than the external one, the business is doomed to fail. So if things change so often, why bother having a plan? A plan is made based on the best knowledge and known facts at that point in time. It provides a guideline to keep team members on track. Team members might gain higher quality knowledge and identify invalid facts along the way. The team then makes use of the new knowledge and more accurate facts to deal with the situation more efficiently.

Responding to change over following a plan.joke

Managers should be the leaders heading the change response, but ironically it is also managers who are often very reluctant to change, claiming “We have a plan, we need to follow our plan”. This can happen for various reasons, and the justification may be valid or otherwise. Not all changes happen for a better outcome; however, if the team identifies that a change is happening (or has happened) and an appropriate response is required, the team should respond by introducing a new plan. There is little value in protecting egos and refusing to change. Responding to change might be painful in the beginning, but it always yields a better return for the project in the long term.

This is a quick introduction to Agile. In Part 2, we will discuss the prescriptive activities in Scrum.

Dependency Injection for Web Service

Standard

Dependency Injection allows developers to cleanly inject a portion of concrete code implementation into the bigger scope of the system, based on certain logical conditions. Dependency Injection has become a preferred way of implementing code. In fact, ASP.NET 5 (Visual Studio 2015) supports Dependency Injection as a 1st class citizen. Dependency Injection removes the need for tons of if-else statements in the code and keeps the code clean. Some developers implement Dependency Injection as a standard practice even when there is no need for it at the moment, because it might be needed in the future. Personally, I prefer not to implement Dependency Injection unless there is a “good reason” for it.

One of the “good reasons” is when designing a web service. A web service often serves multiple clients with various needs based on certain logical conditions, and it is the web service’s responsibility to handle the various logic implementations. For example, a web service serving clients from various countries on the same endpoint might need to execute different code to produce localized results, depending on who is calling the endpoint.

Consider this simple scenario:

We need to design a WCF web service that serves multiple countries on an eCommerce platform. There is an endpoint that accepts a Product Id and returns the formatted local price. The local price calculation depends on various factors such as tax, shipping, marketing promotions, and other business considerations that lower or raise the price for each country through a custom discount mechanism.

requirement

This is a good scenario for creating a WCF web service that implements Dependency Injection to separate the different calculation formulas for the respective countries.

In the sample code, we are building a WCF (C#.NET) service and using Autofac as the Dependency Injection library.

Create a BaseService class. In the BaseService class, we define a static container to register and store a list of logic classes. In this example, IProduct is an interface that gets registered with different logic classes (UsProduct, UkProduct, MyProduct) depending on the logical condition.

We create a BaseService class for this purpose so that all the services in WCF can inherit it and access the BuildContainer method, which is common among all services. In the following example, every public API method is required to build the container in order to initialize the logic classes.

Base service class
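The original class is shown as a screenshot above. As a rough sketch of the idea – the exact member names, other than BuildContainer and IProduct, are assumptions – it could look like this:

    using Autofac;

    public abstract class BaseService
    {
        // Static container shared by all services; holds the registration
        // of the concrete logic class chosen for the current request.
        protected static IContainer Container { get; private set; }

        protected void BuildContainer(string country)
        {
            var builder = new ContainerBuilder();

            // Register the appropriate concrete logic class for IProduct
            // based on the country condition.
            switch (country)
            {
                case "Uk": builder.RegisterType<UkProduct>().As<IProduct>(); break;
                case "My": builder.RegisterType<MyProduct>().As<IProduct>(); break;
                default:   builder.RegisterType<UsProduct>().As<IProduct>(); break;
            }

            Container = builder.Build();
        }
    }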

Catalog service class

  1. Create a new Catalog service in WCF.
  2. Add a method GetProductPrice. It has a UriTemplate as follows, which means Country and ProductId are parameters constructed in the URL endpoint (see the sketch after this list).
  3. It inherits BaseService and implements IProduct.
  4. BuildContainer is a method defined in the parent class.
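The original service is shown as a screenshot. A simplified sketch of its shape – the method body and the GetFormattedPrice member of IProduct are assumptions – might look like this:

    using System.ServiceModel;
    using System.ServiceModel.Web;
    using Autofac;

    [ServiceContract]
    public class CatalogService : BaseService
    {
        // Country and ProductId are bound from the URL,
        // e.g. .../CatalogService.svc/GetProductPrice/Us/1
        [OperationContract]
        [WebGet(UriTemplate = "GetProductPrice/{Country}/{ProductId}")]
        public string GetProductPrice(string country, string productId)
        {
            BuildContainer(country); // defined in BaseService

            using (var scope = Container.BeginLifetimeScope())
            {
                var product = scope.Resolve<IProduct>();
                return product.GetFormattedPrice(int.Parse(productId));
            }
        }
    }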

Define the respective logic classes for each country…

United States product logic class

The UsProduct class is also the parent class for the rest of the country classes. For example, the method GetProductBasePriceById is applicable to all country implementations, hence this method stays in the parent class (UsProduct) so that it can be accessed by the child classes (UkProduct & MyProduct).
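As a rough sketch – GetFormattedPrice and the price figures below are assumptions; only GetProductBasePriceById is named in the original text:

    public interface IProduct
    {
        string GetFormattedPrice(int productId); // hypothetical member
    }

    public class UsProduct : IProduct
    {
        // Common to all countries, hence defined on the parent class.
        protected decimal GetProductBasePriceById(int productId)
        {
            return 100m; // simplified stand-in for the real price lookup
        }

        public virtual string GetFormattedPrice(int productId)
        {
            var basePrice = GetProductBasePriceById(productId);
            return string.Format("USD {0:N2}", basePrice); // assumed US formatting
        }
    }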

United Kingdom product logic class

The UkProduct class inherits UsProduct and implements IProduct. GetProductBasePriceById can be accessed by UkProduct, as UsProduct is its parent class.

Malaysia product logic class

The MyProduct class inherits UsProduct and implements IProduct. GetProductBasePriceById can be accessed by MyProduct, as UsProduct is its parent class.
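A sketch of the two child classes, with assumed conversion figures:

    public class UkProduct : UsProduct, IProduct
    {
        public override string GetFormattedPrice(int productId)
        {
            // GetProductBasePriceById is inherited from UsProduct.
            var basePrice = GetProductBasePriceById(productId);
            return string.Format("GBP {0:N2}", basePrice * 0.8m); // assumed UK rate
        }
    }

    public class MyProduct : UsProduct, IProduct
    {
        public override string GetFormattedPrice(int productId)
        {
            var basePrice = GetProductBasePriceById(productId);
            return string.Format("MYR {0:N2}", basePrice * 4.2m); // assumed MY rate
        }
    }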

All the product logic classes implement the IProduct interface so that they can all be registered into the Autofac container builder. (The following code is part of the BaseService class shown earlier.)

By registering the interface with the appropriate concrete logic class (based on the logical condition of country), Autofac builds a static container that lets the whole application know which concrete logic implementation to call.

If we put this to the test using Postman, we get the following results.

Product price for the United States

Country is specified by replacing the {Country} parameter with “Us”. The result is returned based on the UsProduct logic class implementation.

Us-test

Product price for the United Kingdom

Country is specified by replacing the {Country} parameter with “Uk”. The result is returned based on the UkProduct logic class implementation.

Uk-test

Product price for Malaysia

Country is specified by replacing the {Country} parameter with “My”. The result is returned based on the MyProduct logic class implementation.

My-test

Some considerations…

  1. Performance of building a container and injecting dependencies at runtime, versus direct initialization of the concrete class: from the 3 examples above, we can see each request completed within 15-16 ms. I ran a few more tests switching between countries, and most requests completed in less than 20 ms, which shows there isn’t any major overhead in building the container.
  2. What are some other reasons to implement Dependency Injection? One answer is unit testing. With Dependency Injection in place, the code is much more testable (see the sketch after this list). If you currently have code that is not testable, consider using Shim under Microsoft.Fakes. However, this reason is arguable, as we do not necessarily need Dependency Injection to test our code – all we need is appropriate interfaces in the code.
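A minimal sketch of the testing point – FakeProduct is a hypothetical stand-in written for a test:

    // With DI, a fake implementation can be registered in place of a real
    // country class, so the service logic can be exercised in isolation
    // from the real pricing implementations.
    public class FakeProduct : IProduct
    {
        public string GetFormattedPrice(int productId)
        {
            return "TEST 0.00";
        }
    }

    // In the test setup:
    //   var builder = new ContainerBuilder();
    //   builder.RegisterType<FakeProduct>().As<IProduct>();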

One last thought…

Dependency Injection is a clean way of separating code implementations that conform to a standard set of interfaces. With Dependency Injection, developers no longer need to select which code to execute using complex if-else statements. It makes the code base much cleaner and easier to implement different code paths based on logical conditions. The drawback is that Dependency Injection makes debugging more complicated, as the developer first needs to figure out which class was injected at runtime. It is a recommended approach if you have a standard set of interfaces but require different code to be executed based on logical conditions.