(Some) VSTS Release Management concepts

VSTS Release Management is a large system with a lot of concepts. It’s wonderful to work with, but there’s a lot to understand. While trying to clarify my thinking on the topic, I generated this diagram. Maybe it will be of use to someone.

[Diagram: OnPremReleaseManagementConcepts.png]


Systems and dependencies

Note that this is mostly just stream-of-consciousness thinking…there’s nothing immediately valuable/learnable here.


I’m currently transitioning from a pure software-development position to a DevOps position. From the research/practice I’ve been doing, a big part of DevOps is defining your infrastructure as code. So rather than buying a physical server, putting in a Windows Server USB stick, clicking through the installer, and then manually installing services/applications, you just write down the stuff that you want in a text file. Then a program analyzes that file and “makes it so”.

As a result, you can easily create any number of machines with the same configuration. A typical system has several dependencies (SQL Server, IIS, etc.). By making those dependencies explicit in a file, a whole new range of capabilities opens up: no more click-click-wait-five-minutes to construct the system. It’s all automatic.


In software development, dependency injection is a really useful technique. It helps on the path to making a software system automatically testable, and it allows the application to be configured in one place. If you combine dependency injection with coding against interfaces, it’s easy to swap different components in and out of your system, such as mock objects. Ultimately, this makes the system much easier for a third party to test. Injecting dependencies throughout the application exposes several “test points” that can be used to modify components of the system without having to rewrite it.
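
Here’s a tiny C# sketch of that idea (the names are made up for illustration): the class depends on an interface, and a single composition root decides whether it gets the real implementation or a mock.

    // The "test point": code depends on this interface, not a concrete class.
    public interface IEmailSender
    {
        void Send(string to, string body);
    }

    public class SmtpEmailSender : IEmailSender
    {
        public void Send(string to, string body) { /* real SMTP call here */ }
    }

    // A mock that just records what happened, for tests.
    public class FakeEmailSender : IEmailSender
    {
        public int SentCount;
        public void Send(string to, string body) => SentCount++;
    }

    public class SignupService
    {
        private readonly IEmailSender _email;
        public SignupService(IEmailSender email) => _email = email;  // injected

        public void Register(string address) => _email.Send(address, "Welcome!");
    }

    // Composition root: the one place the application is configured.
    // Production: new SignupService(new SmtpEmailSender());
    // Tests:      new SignupService(new FakeEmailSender());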

Project management?

I’ve never worked in project management, but projects do have dependencies. “For task X to be complete, task Y has to be completed first.” What would centralized management of a project’s dependencies look like?
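
I don’t know, but here’s a toy C# sketch (the task names are hypothetical) of one way to picture it: the dependencies live in one data structure, and a valid work order can be computed from them.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class ProjectPlan
    {
        // Each task maps to the tasks that must be finished before it can start.
        static readonly Dictionary<string, string[]> DependsOn = new Dictionary<string, string[]>
        {
            ["Write code"]       = new string[0],
            ["Provision server"] = new string[0],
            ["Run tests"]        = new[] { "Write code", "Provision server" },
            ["Deploy"]           = new[] { "Run tests" },
        };

        static void Main()
        {
            // Repeatedly emit every task whose prerequisites are already done.
            var done = new HashSet<string>();
            while (done.Count < DependsOn.Count)
            {
                var ready = DependsOn.Keys
                    .Where(t => !done.Contains(t) && DependsOn[t].All(done.Contains))
                    .ToList();
                if (ready.Count == 0)
                    throw new InvalidOperationException("Circular dependency!");
                foreach (var task in ready)
                {
                    Console.WriteLine(task);
                    done.Add(task);
                }
            }
        }
    }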


So all this brings to mind a few thoughts/questions…is there any kind of “dependency theory” out there? Clearly dependencies are important when producing things. If there existed a general theory of dependencies, could we create tools that help us manage dependencies across all levels of a project, rather than keeping infrastructure dependencies in one place, project dependencies in another, and code dependencies in a third? The pattern I’m observing so far is that (at least across DevOps and software development) it’s a Good Thing to centralize your dependencies in a single location. Doing so makes your application/server much more easily configurable.

I don’t have any answers…interesting to think on, though. Maybe I’ll write a followup later on after some more time stewing on the topic.

Visual Studio tip: multi-monitor window setup

For the last couple of years, I’ve been using my second monitor as basically a second place to throw code files in Visual Studio, for example when I want to view files side by side.

However, over the last few months, I’ve been adopting a different workflow which offers some nice advantages. Basically, the idea is to throw all tool/property windows on the right monitor, and a full-screen code window on the left monitor.

Benefits of this layout:

  • Less window-juggling. Greatly reduces the need to resize tool windows in order to make more space for code, or resize code to make more space for tool windows.
  • No more guessing where your code is. With two monitors, code’s always on the left monitor, and options are always on the right monitor. With three monitors, tools are on the right monitor, and the left two monitors are used for code.
  • More space for code on-screen. This isn’t a huge deal, but having 10 or 12 extra lines of code on-screen is handy.
  • It’s a system. With free-floating tool windows, there’s a lot of ad-hoc moving stuff around, reshuffling windows, etc. There’s no ambiguity with this setup – really simple.

Old layout:


Left monitor – Crowded, icky


Right monitor – Nice full-screen code window

New layout:


Left monitor – Nice full-screen code window


Right monitor – *All* of my options…not just a random collection of some options, etc.


Testing multiple application types with the Repository pattern and dependency injection

I’ve been doing a good amount of reading and learning recently about some new (to me) programming techniques: dependency injection and the Repository pattern.

A thought I had today is that these techniques could be combined to create a great way of testing different versions of a database-backed application. The idea is this:

  1. Dependency injection makes code externally configurable. Meaning, if DI is used throughout a code base, then there is only one spot in the code base where dependencies are defined. For example: Amazon needs a ProductList in order to display products on its site. If that ProductList is injected into various points in the application from the root level, then the ProductList can be swapped out for any number of other ProductLists.
  2. The Repository pattern lets us abstract away the notion of interacting with a database. Instead, with a Repository, the application works directly with instances of objects – where those objects came from doesn’t matter.
  3. Combining these two, it would be possible to inject Repositories throughout an entire application, so that the data sources the application uses can be configured in a single place.

The cool thing about this, IMO, is that a huge number of Repositories could be created for testing. For Amazon, an InternationalProductRepository could be created, or an AustralianProductRepository, or an ExpensiveProductRepository, or an AutomotiveProductRepository. So a tester could:

  1. Define a series of Repositories which exhibit characteristics that they care about.
  2. Mix and match different Repositories in different combinations.
  3. Test how the application behaves in response to those different Repositories/Repository combinations.
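
Here’s a minimal C# sketch of the idea (the types are illustrative, borrowed from the Amazon example above):

    using System.Collections.Generic;
    using System.Linq;

    public class Product
    {
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    // The application depends only on this abstraction.
    public interface IProductRepository
    {
        IEnumerable<Product> GetProducts();
    }

    // One of many test repositories a tester might define.
    public class ExpensiveProductRepository : IProductRepository
    {
        public IEnumerable<Product> GetProducts() =>
            new[] { new Product { Name = "Diamond ring", Price = 25000m } };
    }

    // The page receives its repository; it never knows where the data came from.
    public class ProductListPage
    {
        private readonly IProductRepository _repository;
        public ProductListPage(IProductRepository repository) => _repository = repository;

        public int CountProductsOver(decimal price) =>
            _repository.GetProducts().Count(p => p.Price > price);
    }

    // Composition root: the single place the data source is chosen.
    // Swap in a SqlProductRepository, AustralianProductRepository, etc. here.
    // var page = new ProductListPage(new ExpensiveProductRepository());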

Even more cool, I could see a workflow develop as such:

  1. Programmer defines a series of IRepositories which need to be used by the application.
  2. Manual testers use some type of visual tool (does this exist? I’ve never heard of such a thing) to create their own repositories of objects for testing.
  3. Testers define various scenarios, each consisting of a set of related Repositories.

Further, automated testing could be set up so that unit tests are run over every combination of every defined Repository, for example.

Publishing an Angular project’s node_modules folder in an ASP.NET project from Visual Studio

This is a quick guide to a huge source of frustration over the last few months, with a way to solve it. This assumes that your Angular project is based on the angular quickstart project, that your Angular project is hosted inside an ASP.NET web app, and that you’re working in Visual Studio.

To skip the exposition, head to the “How it works” section.


An Angular project depends on certain JavaScript libraries in order to run. These libraries are loaded from two locations (that I’ve discovered) in the Angular quickstart project:

  1. src/index.html, in the <head> section. See: https://github.com/angular/quickstart/blob/master/src/index.html
  2. src/systemjs.config.js, in the map section. See: https://github.com/angular/quickstart/blob/master/src/systemjs.config.js

In the Angular quickstart project, the libraries in these sections are located in the node_modules/ folder by default. However, the node_modules/ folder is gigantic (like 120MB and 17,000 files in my case), and the vast majority of those files don’t need to be deployed to the production server to run the Angular app.
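
For reference, here’s roughly what those locations look like (paraphrased from the quickstart; see the links above for the exact files). index.html pulls in the polyfills and the SystemJS loader from node_modules/:

    <script src="node_modules/core-js/client/shim.min.js"></script>
    <script src="node_modules/zone.js/dist/zone.js"></script>
    <script src="node_modules/systemjs/dist/system.src.js"></script>

…and systemjs.config.js defines an npm: path alias pointing at node_modules/, which its map section uses to resolve each package.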

Publishing setup

To publish the site to my IIS server, I’m using Visual Studio’s built-in publishing support. Thus, to push the site to a server, you right-click the ASP.NET website project, click “Publish”, and go from there.

Now, clicking “Publish” will only publish the files that are included in the project. Since node_modules/ is humongous, we don’t track it in our VS project, and therefore the node_modules/ folder isn’t published to the web server. At the beginning of our project, we worked around this by manually copying the node_modules/ folder over to the web server. But no longer!

How it works

  1. Create a dist/ folder somewhere in the website project.
  2. Set up an MSBuild task to, after every build, copy the JavaScript dependencies over into the dist/ folder.
  3. After every dependency has been copied into the dist/ folder, update all references to JS dependencies in the project to point to the dist/ folder, rather than node_modules/.
  4. In Visual Studio, add all of the dependencies in the dist/ folder to the solution, under a corresponding dist/ folder.

Why do this?

  • You don’t have to track the entire node_modules/ folder in the solution.
  • You don’t have to publish the entire node_modules/ folder to the web server.
  • Since updating the dist/ folder happens after every build, anytime you update your npm modules, the updated libraries will be copied to the dist/ folder.


Create dist/ folder

Create a folder structure which is something like this:

[Screenshots: dist/ folder setup and contents]
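
Roughly, the structure is something like this (the project name is hypothetical; only the packages the app actually loads get copied):

    MyWebApp/               <- ASP.NET web project root
        dist/
            @angular/
            core-js/
            rxjs/
            systemjs/
            zone.js/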

Set up MSBuild tasks

Here’s the set of MSBuild tasks I’m using [this should look much nicer pasted into a text editor…darn blog formatting]:

    <!-- Move NPM modules into our dist/ folder. -->
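    <!-- (Sketch: the exact tasks got eaten by the blog formatting, so the
         package globs below are illustrative. The shape is: gather the
         packages the app loads at runtime, then copy them under dist/
         after every build.) -->
    <Target Name="CopyNpmModulesToDist" AfterTargets="Build">
      <ItemGroup>
        <NpmFiles Include="node_modules\core-js\client\**\*.*" />
        <NpmFiles Include="node_modules\zone.js\dist\**\*.*" />
        <NpmFiles Include="node_modules\systemjs\dist\**\*.*" />
        <NpmFiles Include="node_modules\rxjs\**\*.js" />
        <NpmFiles Include="node_modules\@angular\**\bundles\*.umd.js" />
      </ItemGroup>
      <!-- Mirror each file's node_modules-relative path under dist/. -->
      <Copy SourceFiles="@(NpmFiles)"
            DestinationFiles="@(NpmFiles->Replace('node_modules', 'dist'))"
            SkipUnchangedFiles="true" />
    </Target>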


This should, after every build, copy all JS dependencies to the dist/ folder.

Update JS dependency references


In systemjs.config.js (assuming yours matches the quickstart’s layout), the change is something like this:
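
    // systemjs.config.js (sketch): repoint the npm: alias so every mapped
    // package resolves from dist/ instead of node_modules/.
    System.config({
        paths: {
            // was: 'npm:': 'node_modules/'
            'npm:': 'dist/'
        },
        // map and packages sections stay the same...
    });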


default.aspx (or index.html, etc)

Something like this (assuming the quickstart’s script tags):
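
    <!-- Load the polyfills and the SystemJS loader from dist/ instead of node_modules/. -->
    <script src="dist/core-js/client/shim.min.js"></script>
    <script src="dist/zone.js/dist/zone.js"></script>
    <script src="dist/systemjs/dist/system.src.js"></script>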


Add dist/ folder to solution

[Screenshot: the dist/ folder added to the solution in Solution Explorer]

And that’s it! Now if you Publish the site, the dist/ folder should be copied to the IIS server, and the site should use the files in that folder.

For comparison:

  • dist/ folder: 4.08MB, with 356 files
  • node_modules/ folder: 118MB, with 13,015 files

Why are programs written with plain text?

Currently, we write programs by typing text into files and running a compiler over those files to interpret the text we typed. This seems like a historical accident.

A programming language is structured: code that doesn’t fit the structure is rejected by the compiler. Text files, however, are inherently unstructured. Structure has to be imposed on the text file by external tools such as compilers and IDEs.

Why not represent programs as databases of statements/functions/etc.? This would lead to a ton of benefits. For one, syntactically invalid programs would be impossible to write. That saves the huge amount of time programmers currently spend fixing typos in code, and the resources compilers and static-analysis tools spend reading those files to find errors.

It seems like we’re approaching the problem from the wrong angle. Right now, we write down a program in an unstructured format, then build tools to check whether the thing we wrote down is valid. Instead of all that, we should write programs in a structured format (read: a database or something else which isn’t a text file) in the first place. The current approach is akin to writing a bunch of instructions in a Word document and then writing tools to parse that Word document into a program, rather than just designing a tool that lets you construct valid programs directly.


People often mention that it’s good to be grateful for what you have in life. It isn’t a topic I spend much time thinking about: what use is there in noting what you think is good about the world, when that time could be spent experiencing the world or making it better? Today, though, I had a thought: instead of appealing to vague statements like “be happy you’re alive”, I think the notion of gratitude is a lot more impactful when you consider the size of the universe.

The universe is ridiculously large, and it contains a gigantic number of atoms. My body represents an extremely small fraction of those atoms. Based on my (layman’s) understanding of current science, it’s safe to assume that intelligent life (read: life which can perceive the universe, reflect on it, etc.) also represents a tiny fraction of all atoms in the universe.

My atoms could just as easily be a blade of grass, a speck of dirt, or a chip of porcelain on a toilet. The odds that any given atom is going to be part of a being which can perceive the universe are (I’d assume; I haven’t done an actual calculation) very, very low.

The odds of being a perceiving entity are overwhelmingly low. Any day in which a person is alive, sensing, and perceiving the universe, is a day that their atoms are not hanging around as a motionless clump of stone. And that’s something to be very grateful for.