“Attach to process…” tricks


When you are working on a large system where your code base (project or solution) is only a subset of the entire system, you will often use “Attach to process…” to tap into the execution of the application pool for debugging.

“Attach to process…” might not be the most convenient approach for debugging, but at times it can be more efficient than launching the whole application from your code base. There is still value in using “Attach to process…”, although this approach can be painful occasionally.

Here are a few tricks to make the debugging process less painful.

Trick No.1 – Identify your AppPool

When you are working with a slightly more complex application, where your application has dependencies on other applications, you might have multiple w3wp.exe processes running, like the following:

Which process should you attach to?

All of them? Of course that would work, but obviously that’s not the kindest thing you can do to the poor machine’s memory and CPU…

The following command helps you identify the AppPool name and the process ID, so that Visual Studio only attaches to the relevant w3wp.exe:

C:\Windows\System32\inetsrv>appcmd list wp
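
The output looks roughly like this (the process IDs and AppPool names here are made up for illustration):

WP "4532" (applicationPool:MyApplication.Web)
WP "7768" (applicationPool:MyApplication.Services)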

Based on the AppPool name, you now know the process ID you should attach to in Visual Studio.

Trick No.2 – ReAttach (Visual Studio extension)

Imagine you have to use “Attach to process…” for debugging purposes in your project. You modify your code a little to change the logic or add some validation, and want to continue debugging. It is annoying to keep launching the “Attach to Process” dialog just to select the same w3wp.exe.

ReAttach is a Visual Studio extension that helps you attach to the process ID that you attached to a moment ago.

Download and install it, and you will have a new option in your Visual Studio menu, “Reattach to Process…”, in addition to the usual “Attach to Process…”.

The extension basically reattaches your debugger to the process you attached to earlier. It will continue to work as long as your process ID has not changed or disappeared.

If for whatever reason you need to run iisreset and your process ID changes, “Reattach to process…” will filter out all the irrelevant processes and only show you the available w3wp.exe instances.

Hope these two tricks make debugging by attaching to your AppPool less painful. Until next time!

Powershell PathTooLongException


PathTooLongException can be triggered on various occasions when the path you are dealing with exceeds 260 characters, for example while copying an innocent file to a destination with a super long path. I hit PathTooLongException when I needed to clean up a build in my Powershell build script.

For simplicity’s sake, I extracted the script that triggers PathTooLongException, as follows:
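
(What follows is a minimal sketch rather than the original snippet; the build root below is the illustrative path used later in this post.)

$buildRoot = "C:\inetpub\wwwroot\sc82rev160610"
# Clean up the previous build output; Remove-Item throws PathTooLongException
# as soon as it hits a file whose full path exceeds 260 characters.
Remove-Item -Path "$buildRoot\Data\packages" -Recurse -Force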

Here is our friendly error message:

PathTooLongException

Here is an example of what a (really) long path looks like:

C:\inetpub\wwwroot\sc82rev160610\Data\packages\MyApplication.Experience.Profile.8.2.0.60r160607\serialization\core\MyApplication\client\Applications\ExperienceProfile\Contact\Main\PageSettings\UserTabs\Activity\ActivitySubtabs\Minor\Automations\AutomationsPanel\EngagementPlans\Item\EngagementPlan.item

We will not argue whether it is right or wrong to have such a long path. Often, we are given a piece of software to work on. We have little choice but to find the most efficient solution to deal with the situation.

PathTooLongException has been known since 2006. It was discussed extensively by the Microsoft BCL Team in 2007 here. At the point of writing this, Microsoft has no plan to do anything about it. The general advice you will find is: do not use such long paths. (Wow! Really helpful!)

I dug around the web for a solution but unfortunately had little luck. Well, the truth is I wasn’t happy with many of the solutions, or they were just unnecessarily complicated. I eventually wrote my own “hack” after some discussion with Alistair Deneys.

The trick is to set up a short symbolic link to represent the root of the working directory, and pass the short symbolic link as the target directory to Remove-Item. We will use the mklink command to set up the short symbolic link.

The bad news is that mklink is a Command Prompt (cmd.exe) built-in command; it is not recognized in Powershell. I would have to open a Command Prompt to set up the symbolic link before proceeding with my Powershell build script. As my build script is supposed to be fully automated without manual intervention, manually setting up the symbolic link using Command Prompt is obviously an ugly option.

The following is the command we need to execute to set up the short symbolic link using Command Prompt:
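
(Assuming a short link named C:\d and the long root from the example above; both names are purely illustrative.)

mklink /D C:\d C:\inetpub\wwwroot\sc82rev160610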

The good news is we can call Command Prompt from Powershell to set up the symbolic link fairly easily. How easy?
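
This easy: a couple of lines that shell out to cmd.exe from Powershell (again, the link name and target are the illustrative ones from above):

$shortLink = "C:\d"
$longRoot  = "C:\inetpub\wwwroot\sc82rev160610"
# Ask cmd.exe to create the directory symbolic link on our behalf.
cmd /c mklink /D $shortLink $longRoot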

Of course, we will need to ensure there isn’t a folder or another symbolic link already named the same as our target symbolic link. (Otherwise you will get another exception.)
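
A simple guard along these lines (using the same illustrative link name) avoids that:

# Stop before mklink complains if C:\d already exists as a folder or leftover link.
if (Test-Path "C:\d") { throw "C:\d already exists. Remove it or pick another link name." }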

Once we have created the symbolic link by executing the above command, it shows up with a “shortcut” icon.


Prior to deleting the directory with a really long path, we will set up the symbolic link on the fly to “cheat” Powershell into thinking the path isn’t really that long 😉

Obviously we don’t want to leave any trace for another person to have to guess what this “shortcut” is about, especially since the symbolic link is no longer required after our operation. Hence, once we are done doing what we need to do (in this case, deleting some files), we will need to clean up the symbolic link.

There isn’t any dedicated command to remove a symbolic link, so we will use the bare-bones approach to remove the symbolic link / shortcut created earlier.

The interesting thing about removing a symbolic link / shortcut is that the first removal only removes it as a shortcut, and the shortcut then becomes a real directory (What?!). Hence we will need to remove it one more time! I don’t know what the justification for this behaviour is, and I don’t have the curiosity to find out for now. What I ended up doing is calling Remove-Item twice.
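
In other words, the naive clean-up (again with the illustrative link name) ends up looking like this:

Remove-Item "C:\d" -Recurse -Force   # removes the symbolic link / "shortcut"
Remove-Item "C:\d" -Recurse -Force   # removes the real directory that is left behind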

Then, things got a little stickier, because Remove-Item will throw ItemNotFoundException if the symbolic link or the actual directory is not there.


Theoretically this should not happen, because all we need to do is create a symbolic link, delete it twice, and we are clear. However, reality is not always as friendly as theory 🙂 So we need to script our Remove-Item defensively. I ended up creating a function to handle the removal really carefully:
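
(A minimal sketch of such a function; the name Remove-ItemSafely and its single parameter are my own naming, not necessarily the original’s.)

function Remove-ItemSafely {
    param([string]$Path)
    # Only call Remove-Item when the path actually exists,
    # so we never trip over ItemNotFoundException.
    if (Test-Path -Path $Path) {
        Remove-Item -Path $Path -Recurse -Force
    }
}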

The full script looks like the following:
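
(Again a sketch of how the pieces fit together; the paths, the link name and the helper function name are all illustrative.)

$longRoot  = "C:\inetpub\wwwroot\sc82rev160610"
$shortLink = "C:\d"

function Remove-ItemSafely {
    param([string]$Path)
    # Only delete when the path exists, so we never hit ItemNotFoundException.
    if (Test-Path -Path $Path) {
        Remove-Item -Path $Path -Recurse -Force
    }
}

# Make sure nothing is already occupying the link name, otherwise mklink will fail.
if (Test-Path $shortLink) { throw "$shortLink already exists. Remove it or pick another name." }

# Set up the short symbolic link pointing at the long root.
cmd /c mklink /D $shortLink $longRoot

# Delete the offending files through the short link instead of the long path.
Remove-ItemSafely -Path (Join-Path $shortLink "Data\packages")

# Clean up the symbolic link; as noted above, it may need removing twice.
Remove-ItemSafely -Path $shortLink
Remove-ItemSafely -Path $shortLink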

Here, my Powershell happily helped me remove the files whose paths exceed 260 characters, and I have a fairly simple “hack” that I can embed into my Powershell build script.

One Last Thought…

During my “hacking” time, I also tried Subst and New-PSDrive. You might be able to do the same using them. However, in my case some of the dependencies didn’t work well with them; for example, AttachDatabase could not recognize the path set by Subst. That’s why I settled on mklink.

Obviously this isn’t a bullet-proof solution to handle PathTooLongException. Imagine if the “saving” you gain from the short link is still not sufficient, as in the path starting from the symbolic link still exceeds 260 characters. However, I have yet to encounter such a situation, and this “hack” has worked for me every time so far. If you have a simpler way to get this to work, feel free to drop me a comment. Enjoy hacking your Powershell!

Continuous Integration and Continuous Delivery with NuGet


Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository. Each commit is then verified by an automated build and sometimes by automated tests.

Why is Continuous Integration important? If you have been programming in a team, you have probably encountered a situation where one developer committed code that broke every other developer’s code base. It can be extremely painful to isolate the code that broke the code base. Continuous Integration serves as a preventive measure by building the latest code base to verify whether there are any breaking changes. If there are, it raises an alert, perhaps by sending an email to the developer who last committed the code, perhaps by notifying the whole development team, or even by rejecting the commit. If there are no breaking changes, CI proceeds to run a set of unit tests to ensure the last commit has not modified any logic in an unexpected manner. This process is sometimes also known as Gated CI, which guarantees the sanity of the code base within a relatively short period of time (usually a few minutes).


The idea of Continuous Integration goes beyond validating the code base a team of developers is working on. If the code base utilizes other development teams’ components, it is also about continuously pulling the latest components to build against the current code base. If the code base utilizes other micro-services, then it is about continuously connecting to the latest version of those micro-services. On the other hand, if the code base’s output is utilized by other development teams, it is also about continuously delivering the output so that other development teams can pull the latest version to integrate with. If the code base’s output is a micro-service, then it is about continuously exposing the latest micro-service so that other micro-services can connect and integrate with the latest version. The process of delivering the output for other teams to utilize leads us to another concept known as Continuous Delivery.

Continuous Delivery (CD) is a development practice where the development team builds software in such a way that the latest version can be released to production at any time. The delivery could mean the software being delivered to a staging or pre-production server, or simply to a private development NuGet feed.

Why is Continuous Delivery important? In today’s fast-changing software development, stakeholders and customers want all the features yesterday. Product Managers do not want to wait a week for the team to “get ready” to release. The business expectation is that as soon as the code is written and the functionality is tested, the software should be READY to ship. Development teams must establish an efficient delivery process where delivering software is as simple as pushing a button. A good benchmark is that the delivery can be performed by anyone in the team, perhaps by a QA after he has verified the quality of the deliverable, or by the Product Manager when he thinks the time is right. In a complex enterprise system, it is not always possible to ship code to production quickly. Therefore, a complex enterprise system is often broken into smaller components or micro-services. In this case, the components or micro-services must be ready to be pushed to a shared platform so that other components or micro-services can consume the deliverable as soon as it is available. This delivery process must be in a READY state at all times. Whether to deliver the whole system or just a smaller component should be a matter of business decision.

Note that Continuous Delivery does not necessarily mean Continuous Deployment. Continuous Deployment is where every change goes through the pipeline and automatically gets pushed into production. This could lead to several production deployments every day, which is not always desirable. Continuous Delivery allows the development team to do frequent deployments, but they may choose not to. In today’s standard .NET development, a NuGet package is commonly used for delivering either a component or a whole application.

NuGet is the package manager for the Microsoft development platform. A NuGet package is a set of well-managed libraries and related files. NuGet packages can be installed and added to a .NET solution from the GUI or the command line. Instead of referencing individual libraries in the form of .dll files, developers can reference a NuGet package, which provides much better management of dependencies and assembly versions. In a more holistic view, a NuGet package can even be an application deliverable by itself.
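
As a concrete illustration, packing and publishing a component with nuget.exe from Powershell might look like the following (the package name, version and feed URL are made up):

# Produce MyComponent.1.2.3.nupkg from the nuspec in the current folder.
nuget pack .\MyComponent.nuspec -Version 1.2.3

# Push the package to a private feed so that other teams can consume it.
nuget push .\MyComponent.1.2.3.nupkg -Source https://proget.example.com/nuget/internal -ApiKey $env:NUGET_API_KEY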

Real life use cases

Example 1: Micro-services

In a cloud-based (software as a service) solution, each domain is encapsulated in its respective micro-service. Every development team is responsible for their own micro-services.


Throughout the Sprint, developers commit code into TFS. After every commit, TFS builds the latest code base. Once the build is completed, unit tests are executed to ensure existing logic is still intact. Several NuGet packages are then generated, representing the micro-services (WCF services, web applications, etc.). These services are deployed by a deployment tool, Octopus Deploy, to a Staging environment (hosted in AWS EC2) for QA to perform testing. This process continues until the last User Story is completed by the developers.

In a matter of clicks, the NuGet packages built earlier can also be deployed to the Pre-production environment (hosted in AWS EC2) for other types of testing. Lastly, with the blessing of the Product Manager, the DevOps team uses the same deployment tool to promote the same NuGet packages that were tested by QA earlier into Production. Throughout this process, it is very important that there is no manual intervention by hand (such as copying a dll, changing a configuration, etc.) to ensure the integrity of the NuGet packages and the deployment process. The entire delivery process must be pre-configured or pre-scripted to ensure it is consistent, repeatable, and robust.

Example 2: Components

In a complex enterprise application, functionalities are split into components. Each component is a set of binaries (dlls) and other relevant files. A component is not a stand-alone application; it has no practical usage until it sits on the larger platform. Development teams are responsible for their respective components.

Throughout the Sprint, developers commit code into a Git repository. The repository is monitored by Team City (the build server). Team City pulls the latest changes and executes a set of Powershell scripts. From the Powershell scripts, an instance of the platform is set up, the latest code base is built, and the output is placed on top of the platform. Various tests are executed on the platform to ensure the component’s functionality is intact. Then a set of NuGet packages is generated from the Powershell scripts and published as build artifacts. These artifacts are used by QA to run other forms of tests. This process continues until the last User Story is completed by the developers.
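
A heavily simplified sketch of what such a Powershell step could look like (the solution path, test script and package ID are hypothetical, and the real scripts also provision the platform instance):

# Build the component in Release configuration (assumes msbuild is on the PATH).
& msbuild .\src\MyComponent.sln /p:Configuration=Release

# Run the component tests on top of the platform instance.
& .\tools\Run-Tests.ps1 -Configuration Release

# Generate the NuGet packages that Team City publishes as build artifacts.
nuget pack .\src\MyComponent\MyComponent.nuspec -OutputDirectory .\artifacts -Version "1.0.$env:BUILD_NUMBER"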

When QA gives the green light, and with the blessing of the Product Manager, the NuGet packages are promoted to ProGet (an internal NuGet feed). This promotion process happens in a matter of clicks. No manual intervention (modifying the dependencies, version, etc.) should happen, to ensure the integrity of the NuGet packages.

Once a NuGet package is promoted / pushed into ProGet, other components can update to this latest version. In Scaled Agile, a release train is planned on a frequent and consistent time frame. Internal releases happen on a weekly basis, and this weekly build always pulls all of the latest components from ProGet to generate a platform installer.

Summary

From the examples, we can tell that Continuous Integration and Continuous Delivery are fairly simple concepts. There is neither black magic nor rocket science in either use case. The choice of tools and approaches largely depends on the nature of the software we are building. While designing software, it is always a good idea to keep Continuous Integration and Continuous Delivery in mind to maximize team productivity and to achieve quick and robust delivery.