Uncommon Windows shortcuts

While working in Windows, there are a few uncommonly used, but useful, keyboard shortcuts that I’ve found. I’ll keep track of them here as I discover more.

  • Shift+Delete: Cuts the entire line that the cursor is currently placed on.
  • Ctrl+Enter: Inserts a newline below your cursor and moves your cursor to it, no matter where your cursor is currently placed.

Similarities between a record label and a software development department

While working on my personal music project (shameless promotion), I’ve begun to think about how to structure the project for long-term success. This is leading me to see what parallels there could be between a record label and a software development team.

I think this is interesting because, compared to software development, music production is an incredibly unstructured field. Everyone that I talk to is winging it and not thinking about process, optimization, etc. The opposite is the case in software development.

Another interesting note is that, in software development, there is a huge array of tools for creating/managing projects – bug trackers, wikis, version control, CI servers, release management tools, ALM tools like TFS/VSTS, etc. I know of no ready-built analog of this setup in the music production world, and I think it’s an area ripe for experimentation/improvement.

Here’s a graph of how these two organizational structures could be compared.


I think there’s a ton of unexplored territory in the intersection between music and software. Example questions: programs are versioned, so why not musical compositions? What would continuous delivery of music look like? 20 commits to production a day, but for music? Music factories? Etc.

A couple of guidelines for smoothly automating builds/releases for external teams

If you are working with an external software development team to automate their builds/releases, the team and you have different goals. They want to deliver features; you want to deliver a solid build/release experience. These two goals can conflict with each other – through merge conflicts, botched deployments to their testing servers, etc. Hence, a couple of quick guidelines here to make the process run smoothly. This could be the start of some kind of general framework for build/release automation; we’ll see.

  1. Find out which branch the team is actively developing on. Base your pipeline creation off of this branch, to reduce the need for an expensive merge at the end of the process.
  2. Branch off of the team’s active development branch. Periodically, merge the team’s changes into your branch, to make sure everything is working, etc. This way, the delivery team can work independently with no worries about your build/release code impacting them. When your work is done, you can possibly run your changes by the team all at once, in a short code review, to verify everything looks ok.
  3. Clone the team’s existing build definitions/release definitions on their CI server and do your work on the clones. This gives you free rein to make the builds/releases work however you want without impacting the team. This also prevents problems coming from automated builds (CI, scheduled nightly builds, etc) triggering incorrectly – i.e., the wrong build was kicked off, the nightly build was kicked off using the build definition you’re modifying, a CI build was accidentally deployed, etc.
  4. Clone the team’s deployment environment. If you’re working in Azure, do your deployments to a resource group outside of any of the team’s existing resource groups. This ensures that if you botch a deployment, you don’t accidentally wipe the team’s database, mess with their web servers, etc.

Nothing too complicated, but these seem like a good baseline set of practices to follow for doing this type of work.


Systems and dependencies

Note that this is mostly just stream-of-consciousness thinking…there’s nothing immediately valuable/learnable here.


I’m currently transitioning from a pure software-development position to a DevOps position. From the research/practice I’ve been doing, a big part of DevOps is defining your infrastructure as code. So rather than buying a physical server, putting in a Windows Server USB stick, clicking through the installer, and then manually installing services/applications, you just write down the stuff that you want in a text file. Then a program analyzes that file and “makes it so”.

As a result, you can easily create an unlimited number of machines with the same configuration. The system has several dependencies (such as SQL Server, IIS, etc). By making those dependencies explicit in a file, a whole new range of capabilities opens up – no longer do you have to click-click-wait 5 minutes, etc in order to construct the system. It’s all automatic.
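To make that concrete, here’s a toy sketch of the “declare it in a file, let a program make it so” idea. Everything here – the state format, the function, the machine names – is invented for illustration; real infrastructure-as-code tools (PowerShell DSC, Terraform, etc.) are far more capable:

```python
# Toy sketch of "infrastructure as code": describe the desired state as data,
# then let a program converge the machine toward it. All names are made up.

desired_state = {
    "features": ["IIS", "SQL Server Express"],
    "websites": [{"name": "Shop", "port": 8080}],
}

def make_it_so(state):
    """Pretend to provision a machine, returning the list of actions
    a real provisioner would perform to reach the desired state."""
    actions = []
    for feature in state["features"]:
        actions.append(f"install {feature}")
    for site in state["websites"]:
        actions.append(f"create website {site['name']} on port {site['port']}")
    return actions

# Because the configuration is just data, the same file can stand up any
# number of identical machines.
for machine in ["web-01", "web-02"]:
    print(machine, make_it_so(desired_state))
```

The key property is that the “click-click-wait” steps have been replaced by data plus a repeatable function.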


In software development, dependency injection is a really useful technique. It helps on the path to making a software system automatically testable, and it allows the application to be configured in one place. When you combine dependency injection with making your code depend on interfaces, it’s easy to swap different components in and out of your system, such as mock objects. Ultimately, this means that the system is much easier for a 3rd party to test. Injecting dependencies throughout the application exposes several “test points” that can be used to modify components of the system without having to rewrite it.
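Here’s a minimal sketch of that idea (all class names are hypothetical). The application depends on an interface, and the real implementation can be swapped for a mock at the single place where the system is wired together:

```python
# Dependency injection sketch: SignupService depends on an interface,
# so a mock can be injected as a "test point". All names are invented.
from abc import ABC, abstractmethod

class EmailSender(ABC):
    @abstractmethod
    def send(self, to: str, body: str) -> None: ...

class SmtpEmailSender(EmailSender):
    def send(self, to, body):
        raise NotImplementedError("would talk to a real SMTP server")

class MockEmailSender(EmailSender):
    """Records calls instead of sending anything - easy to inspect in tests."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class SignupService:
    def __init__(self, emailer: EmailSender):  # the dependency is injected here
        self.emailer = emailer
    def register(self, user):
        self.emailer.send(user, "Welcome!")

# Composition root: the one spot where the application is configured.
mock = MockEmailSender()
SignupService(mock).register("alice@example.com")
print(mock.sent)
```

`SignupService` never changes; only the wiring at the root decides whether a real or mock sender is used.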

Project management?

I’ve never worked in project management, but projects do have dependencies. “For task X to be complete, task Y has to be completed first.” What would centralized management of a project’s dependencies look like?
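As a rough sketch, a project’s task dependencies can be modeled as a graph, and a valid work order computed with a topological sort. The task names here are made up:

```python
# "For task X to be complete, task Y has to be completed first" as a graph.
# A topological sort yields an order where every task follows its prerequisites.
from graphlib import TopologicalSorter  # Python 3.9+

deps = {
    "deploy": {"build", "provision servers"},
    "build": {"write code"},
    "provision servers": set(),
    "write code": set(),
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # each task appears after all of its prerequisites
```

Centralized dependency management for a project might look like exactly this: one declared graph, with the ordering (and parallelism) derived from it automatically.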


So all this brings to mind a few thoughts/questions…is there any kind of “dependency theory” in the world? Clearly dependencies are important when producing things. If there existed a general theory of dependencies, could we create tools that help us manage dependencies across all levels of a project, rather than keeping infrastructure dependencies in one place, project dependencies in another, and code dependencies in another? The pattern I’m observing so far is that (at least across devops and software dev) it’s a Good Thing to centralize your dependencies in a single location. Doing so makes your application/server much more easily configurable.

I don’t have any answers…interesting to think on, though. Maybe I’ll write a followup later on after some more time stewing on the topic.

Visual Studio tip: multi-monitor window setup

For the last couple years, I’ve been using my second monitor as basically a second place to throw code files in Visual Studio, if I want to view files side-by-side for example.

However, over the last few months, I’ve been adopting a different workflow which offers some nice advantages. Basically the idea is to throw all IDE/property windows on the right monitor, and a full-screen code window on the left monitor.

Benefits of this layout:

  • Less window-juggling. Greatly reduces the need to resize tool windows in order to make more space for code, or resize code to make more space for tool windows.
  • No more guessing where your code is. With two monitors, code’s always on the left monitor, and options are always on the right monitor. With three monitors, tools are on the right monitor, and the left two monitors are used for code.
  • More space for code on-screen. This isn’t a huge deal, but having 10 or 12 extra lines of code on-screen is handy.
  • It’s a system. With free-floating tool windows, there’s a lot of ad-hoc moving stuff around, reshuffling windows, etc. There’s no ambiguity with this setup – really simple.

Old layout:


Left monitor – Crowded, icky


Right monitor – Nice full-screen code window

New layout:


Left monitor – Nice full-screen code window


Right monitor – *All* of my options…not just a random collection of some options, etc.


Testing multiple application types with the Repository pattern and dependency injection

I’ve been doing a good amount of reading and learning recently about some new (to me) programming techniques: dependency injection and the Repository pattern.

A thought I had today is that these techniques could be combined to create a great way of testing different versions of a database-backed application. The idea is this:

  1. Dependency injection makes code externally configurable. Meaning, if DI is used throughout a code base, then there is only one spot in the code base where dependencies are defined. For example: Amazon needs a ProductList in order to display products on its site. If that ProductList is injected into various points in the application from the root level, then the ProductList can be swapped out for any number of other ProductLists.
  2. The Repository pattern lets us abstract away the notion of interacting with a database. Instead, with a Repository, the application works directly with instances of objects – where those objects came from doesn’t matter.
  3. Combining these two, it would be possible to inject Repositories throughout an entire application, so that the data sources the application uses can be configured in a single place.
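A quick sketch of the combination (all names hypothetical): the application code depends only on the Repository interface, and the actual data source is chosen at one configuration point:

```python
# Repository pattern + DI sketch: the app works with objects; where those
# objects came from doesn't matter. All class names are invented.
from abc import ABC, abstractmethod

class ProductRepository(ABC):
    @abstractmethod
    def all(self): ...

class SqlProductRepository(ProductRepository):
    def all(self):
        raise NotImplementedError("would query a real database")

class InMemoryProductRepository(ProductRepository):
    """A stand-in data source, handy for testing."""
    def __init__(self, products):
        self._products = products
    def all(self):
        return list(self._products)

class ProductList:
    def __init__(self, repo: ProductRepository):  # repository injected
        self.repo = repo
    def names(self):
        return [p["name"] for p in self.repo.all()]

# Single configuration point: swap the data source without touching ProductList.
repo = InMemoryProductRepository([{"name": "Widget"}, {"name": "Gadget"}])
print(ProductList(repo).names())
```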

The cool thing about this, IMO, is that a huge number of Repositories could be created for testing. For Amazon, an InternationalProductRepository could be created, or an AustralianProductRepository, or an ExpensiveProductRepository, or an AutomotiveProductRepository. So a tester could:

  1. Define a series of Repositories which exhibit characteristics that they care about.
  2. Mix and match different Repositories in different combinations.
  3. Test how the application behaves in response to those different Repositories/Repository combinations.

Even more cool, I could see a workflow develop as such:

  1. A programmer defines a series of IRepositories which need to be used by the application.
  2. Manual testers use some type of visual tool (does this exist? I’ve never heard of such a thing) to create their own repositories of objects for testing.
  3. Testers define various scenarios, which consist of a set of related Repositories.

Further, automated testing could be set up so that unit tests are run over every combination of every defined Repository, for example.
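That “every combination of every defined Repository” step could be sketched with a plain Cartesian product – the repository names here are invented for illustration:

```python
# Generating a test scenario for every combination of defined repositories.
from itertools import product

product_repos = ["InternationalProductRepository", "AustralianProductRepository"]
user_repos = ["AdminUserRepository", "GuestUserRepository"]

# Every (product repo, user repo) pairing becomes one scenario.
scenarios = list(product(product_repos, user_repos))

for product_repo, user_repo in scenarios:
    # A real harness would wire these into the app and run the test suite.
    print(f"run suite with {product_repo} + {user_repo}")
```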