Richard Campbell from the .NET Rocks! podcast often says that DevOps isn't a title; it's something that you do. DevOps is the intersection between operations, testing, and development, and rather than having one person specializing in that intersection, it's better to have people in each area gain some familiarity with what DevOps means to them.

I've written quite often about Team Foundation Server (TFS): the various components it offers, the ways in which it makes development and deployment easier, and even some of its pain points. Recently, the School of Medicine spun up a new machine on which to build and deploy our new medical education testing software. We had already decided to go all-in on TFS for project management and source control, had flirted with deployment, and had used some of the build functionality to test integration builds, but this was the first time we would be able to map out a build and deployment solution from the ground up.

There are a lot of technologies being used in the new medical education testing solution: Paket, Bower, Grunt, Less, TypeScript, ASP.NET MVC 5, and ASP.NET Web API 2, to name a few. Some of these represent modern tooling that helps with development but isn't necessarily the final output: Less transpiles to CSS, TypeScript transpiles to JavaScript, and so on. One of the primary pain points of such tooling is that the development environment differs from the final production environment, so what do you put in source control? What do you deploy to the servers?

When it comes to Bower, Less, and TypeScript, some people are tempted to put everything in source control, including the generated files. This seems redundant: each developer machine will transpile and replace the checked-in version, causing those files to be eternally checked out and merged. As for Bower, most of those components are used in combination with Grunt to combine or minify output, so although you can check in the Bower components, doing so can mean a lot of unnecessary files.

There's also the problem of unwanted files on the server. Less and TypeScript files are useless unless you're debugging with source maps on a testing server; there should be no need for them in production. The same holds for Bower components. On top of that, have you ever had a Bower-intensive project and needed to push it to production? All those files take time to push.

Continuous integration (CI) solutions such as Jenkins and deployment solutions such as Octopus Deploy certainly exist, and are very successful, but Microsoft has invested heavily in Team Foundation Server over the last few updates to produce a CI and deployment solution that rivals anything currently on the market, and we were able to take full advantage of it in our situation. In fact, TFS Update 2 finally allows for modern extensions to on-premises TFS (for those extensions that offer it), so the extensibility of TFS is now through the roof, and many of our build tasks take advantage of this.

When you create a build definition in TFS, you have many options for when to build, what source to take, how long to keep the build, and so on. We set up a CI build definition that builds the code any time someone checks code into source control. In the build definition, you select steps or tasks to perform in succession. The first step checks the code out onto the build server, so every subsequent step occurs within the context of that code set.


Paket Restore

NuGet is often a source of never-ending frustration for many developers. It was causing enough pain in our development that we switched to Paket, a package management solution that uses NuGet under the covers but corrects many of the long-standing issues with Microsoft's baked-in solution. TFS offers NuGet restore by default, but luckily, a Paket restore task is available in the F# extension in the TFS marketplace.
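For reference, Paket reads a paket.dependencies file at the root of the repository. A minimal sketch might look like the following; the source URL and package names are illustrative, not from our actual project:

```
source https://nuget.org/api/v2

nuget Newtonsoft.Json
nuget FSharp.Core
```

The restore task then pulls down the pinned versions recorded in the paket.lock file, which is one of the long-standing NuGet issues Paket corrects.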

Bower Install

The next step in our build definition downloads the appropriate Bower components. We do not check these files into source control, since they won't be the finalized version of our source files. These components will be operated on later by the Grunt task runner.

Something to bear in mind with Bower, Grunt, and the like is that they need to be installed on the build server. This isn't a big deal, but remember that installing them "globally" actually installs them only for the current user. If your builds run under a different user account on that server, you'll get build errors telling you that Bower, Grunt, or npm can't be found. You may need to install these items locally and then include them in the system-wide environment path.
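On our build server, the setup amounted to something like the following commands (run under whichever account does the installing):

```
npm install -g bower grunt-cli
npm config get prefix
```

npm config get prefix prints the directory where global packages land; adding that directory to the system-wide PATH is what lets the build service account find bower and grunt, even though they were installed under a different user.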

The Bower build task lets you specify the bower.json file containing the packages you need installed. If you've been using bower install PACKAGE_NAME --save, then your bower.json file should already list everything you need. The build step executes the install command by default.
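As a sketch, a bower.json for a project like this might look like the following; the package names and versions are illustrative, not our actual dependency list:

```json
{
  "name": "medical-education-testing",
  "private": true,
  "dependencies": {
    "jquery": "~2.1.4",
    "bootstrap": "~3.3.5"
  }
}
```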

npm Install

Once Bower is done downloading the various components needed, the next step we have set up is for the Node package manager, npm. This means that Node.js needs to be installed on the system and npm must be globally accessible from the user account that performs the builds.

With this build step, npm is essentially downloading the local Grunt package, as well as the various Grunt task packages needed for the task runner. Much like Bower, npm is looking for a file, only this one is called package.json. By default, the task will run the install command.
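A package.json covering this step might look roughly like the following; the package names and versions here are illustrative, though grunt-contrib-less and grunt-contrib-uglify are the usual suspects for the Less and minification work described below:

```json
{
  "name": "medical-education-testing",
  "version": "1.0.0",
  "private": true,
  "devDependencies": {
    "grunt": "~0.4.5",
    "grunt-contrib-less": "~1.0.0",
    "grunt-contrib-uglify": "~0.9.0"
  }
}
```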

Grunt

With the node packages installed, Grunt can now run. We have a gruntfile.js in the root of our application, and the Grunt task runner will perform the tasks defined inside. For testing purposes, this currently transpiles the Less files into CSS. For a production environment, Grunt will be configured to combine the various JavaScript and CSS files for the project, minify their contents, and rewrite the references in the ASP.NET *.cshtml files so that the individual JS and CSS asset calls point to the combined and minified versions.

MSBuild (Visual Studio Build)

With all the assets generated, MSBuild can run to build the solution and all related files. This works identically to building within Visual Studio, and will report back any errors or warnings. It has the added benefit of uncovering environmental issues: does the build work locally, but fail on the build server? If so, it'll probably fail in production too.
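Under the covers, the build step is roughly equivalent to invoking MSBuild against the solution yourself; the solution name here is hypothetical:

```
msbuild MedicalEducation.sln /p:Configuration=Release /verbosity:minimal
```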

Unit Tests

Although unit tests do a fine job at evaluating small units of code, they generally become less useful with most data-in/data-out processes because you end up testing mock data rather than real data. Still, some unit tests are better than none. Running these after the build process gives us an added layer of protection before deployment.

Publish Build Artifacts

Now it's time to put the built code in a safe place. TFS release management can deploy through a machine copy, and if you publish the built code as an artifact, you can point to it as the files to push. We have our administrative code in one project and our student-facing testing system in another--both under the same solution--so we have two tasks for publishing artifacts: one for each project.

PowerShell Cleanup

If this were a production build definition, you would want to clean up any unnecessary files, such as the Less and TypeScript files or the Bower components. PowerShell is your friend. TFS makes it simple to create a custom PowerShell script that you can pass arguments to, and then execute. This is the easiest way to finalize your custom build definitions. You'll probably want to run this prior to building out the artifacts.
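As a sketch, a cleanup script along these lines could run just before the artifact steps. The paths are illustrative; BUILD_SOURCESDIRECTORY is the environment variable TFS sets to the folder the build checked out:

```powershell
# Remove development-only assets before the artifacts are published.
param([string]$sourcesDir = $env:BUILD_SOURCESDIRECTORY)

# Delete the Less and TypeScript sources; the transpiled CSS/JS stays behind.
Get-ChildItem -Path $sourcesDir -Recurse -Include *.less, *.ts |
    Remove-Item -Force

# Drop the raw Bower components; Grunt has already combined and minified them.
Remove-Item -Path (Join-Path $sourcesDir 'bower_components') -Recurse -Force
```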

Release Management

Since this is our continuous integration build, we can build on it by using the release management tools in TFS to push each build onto the server for testing. When you set up a release, you can point to the specific artifacts to push, decide how often to push, and choose whether the release needs approval (attaching the appropriate people who must approve it).

Machine File Copy

The machine file copy release management step uses RoboCopy behind the scenes to push the artifacts from the build directory to a directory on the deployment machine. Once complete, the application is up-to-date and ready to use. We currently use this machine file copy task, but there are extensions for IIS publishing that might be a better option for some people.
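Conceptually, that step boils down to something like the following RoboCopy invocation; the paths and server name are made up for illustration, and /MIR mirrors the source directory to the destination, deleting anything no longer present in the source:

```
robocopy "C:\agent\_work\1\a\drop" "\\WEBSERVER01\wwwroot\MedEdTesting" /MIR
```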

Continuous Integration

Currently, our builds and releases are set up on a CI schedule, so any code that's checked in triggers a build, and if that build succeeds, it triggers a release. If the build fails for any reason, TFS can automatically add a bug to the project management portal, assigned to the person who triggered the build. With this setup, we've been able to program, build, release, and test at a faster pace than with any standard development setup, increasing efficiency as well as communication among the various teams.