Unreal Engine: Error: GiveAbility called with an invalid Ability Class

Recently I ran across this error message while working with the Gameplay Ability System in Unreal Engine 5. Thanks to the Unreal Slackers discord community for helping me get unstuck on this one.

In my case, the issue was that the AbilityClass I was using (the TSubclassOf<UGameplayAbility> variable on my class) was set to NULL when I called my Ability System component’s GiveAbility() method. As a result, UE wasn’t able to determine which gameplay ability class I wanted it to use – hence the error message.

A solution to this problem

One solution is to:

  1. Create a Blueprint class, derived from your C++ character class.
  2. Open that Blueprint class in the Unreal editor.
  3. Set the AbilityClass variable in the Details view to the class that you want to use for your ability.
  4. From this point forward – spawn instances of your Blueprint class in the game world, instead of your C++ parent class.

And voila, this will let you initialize the AbilityClass before it’s given to the Ability System – fixing this error.

Getting started with Azure networking

Learning Azure can be overwhelming. There are a massive number of services available, and every day a new service is announced or a new capability is released. It’s difficult to keep up.

In that vein, over the last months/years, I’ve been piecing together parts of how networking works in Azure. However, up to now I’ve been working with isolated bits of knowledge – I think the problem is that it’s not easy to know where to start when learning networking. So here’s a first, easy project that someone can build on in order to learn Azure networking.

  1. Research the difference between point-to-site/site-to-site/ExpressRoute connectivity.
  2. Set up a point-to-site connection from a local machine to an Azure virtual network (this requires deploying a VPN Gateway into the VNet).
  3. Connect to the P2S connection.
  4. Verify that the connection is working – that you are connected using a private IP address pulled from the VPN Gateway’s address pool.
  5. Deploy a Windows Server (or other server OS) VM into the same Virtual Network that the VPN Gateway lives in.
  6. Allow ‘ping’ through the server’s firewall.
  7. Verify that you can ping the server from your local machine.
  8. Install IIS (or another web server) on the server VM.
  9. Verify, while logged into the VM, that the web server displays a default page on localhost.
  10. Navigate to the server VM’s IP address in your browser. Verify that the default web server page is displayed.
  11. Disconnect the P2S VPN connection on your local machine.
  12. Attempt to ping the server VM and navigate to its IP address in a browser.

After doing the above, you will have a fully functional, private connection to a VM running in Azure – and the last two steps prove that devices that are not connected to the Azure VNet cannot contact the server VM.
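
Step 4 can also be sanity-checked programmatically. Here’s a minimal Python sketch – the pool CIDR and client IP are hypothetical examples, so substitute whatever address pool you actually configured on your VPN Gateway – that checks whether the IP you were assigned falls inside the gateway’s client address pool:

```python
import ipaddress

def is_from_pool(client_ip: str, pool_cidr: str) -> bool:
    """Return True if the VPN client's IP falls inside the gateway's address pool."""
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(pool_cidr)

# Hypothetical values -- substitute the pool configured on your VPN Gateway.
print(is_from_pool("172.16.201.5", "172.16.201.0/24"))   # True
print(is_from_pool("192.168.1.10", "172.16.201.0/24"))   # False
```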

After this, you’ll know how to create and connect to a basic private network in Azure. The same principles apply to Site-to-Site connectivity and ExpressRoute connectivity. From this point, it’s easy to see that you can branch off into playing with firewall rules, peering your VNet to other VNets, and all of the other more advanced features that Azure has to offer.

Can’t delete Azure SQL Managed Instance

Recently, while testing out SQL Managed Instance in Azure, I ran into problems trying to delete the MI. After the instance is deleted, a few resources hang around in its resource group – a route table, a network security group, a virtual network, and a network intent policy, at least.

None of these resources can be deleted because they all depend on each other in some way.

To solve this, the current solution I’ve found is to try to delete the resource group, and then wait a few days. Here’s what happened in the resource group after I did that, viewed from Activity Log:

NetworkIntentPolicy Deletion

First, I submitted a request to delete the managed instance. Then, five days later, “Azure SQL Managed Instance to Microsoft.Network” came around and actually deleted the resource.

Hopefully this will be fixed in an upcoming update, or at least hopefully more visibility will be provided in the portal to show what’s going on.

Problems with current build systems

Note: my experience is mostly with VSTS, so it’s possible that these problems are solved elsewhere.

Hopefully this will be a running tally.

  • When I rename a class/method/etc. in C#, my program breaks until I fix all references to that class. Build systems offer no such check: I have hard-coded references to specific files in my build configuration, and if a file is renamed/moved/removed, I receive no warning until I run a build. Ideally, this would work the same as compiling code – I would know immediately.
    • Possible solution – Define build/release definitions offline, and have a compiler that verifies references to files?
    • Ultimately, the problem is that build definitions have a dependency on a codebase, and that codebase can change. Therefore, it makes sense to create a tool that scans all of a build definition’s dependencies on the codebase (each file/project/etc. referenced) and fails if any are missing. This would give you an early, clear error message about what’s going on.
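
As a rough sketch of what such a tool could look like, here’s a toy checker in Python. The `files` key and the overall definition shape are invented for illustration – real VSTS build definitions are far more complex – but the idea of failing fast on broken file references is the same:

```python
import os
import tempfile

def missing_references(definition: dict, repo_root: str) -> list:
    """Scan a (simplified, hypothetical) build definition for file references
    that no longer exist in the codebase, and return the broken ones."""
    missing = []
    for path in definition.get("files", []):
        if not os.path.isfile(os.path.join(repo_root, path)):
            missing.append(path)
    return missing

# Demo with a throwaway repo containing only one of the two referenced files.
with tempfile.TemporaryDirectory() as repo:
    open(os.path.join(repo, "App.csproj"), "w").close()
    definition = {"files": ["App.csproj", "Old.sln"]}  # Old.sln was renamed/removed
    print(missing_references(definition, repo))  # ['Old.sln']
```

Running a scan like this on every commit would surface broken build references as early as a compiler surfaces broken code references.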

 

Similarities between a record label and a software development department

While working on my personal music project (shameless promotion), I’ve begun to think about how to structure the project for long-term success. This is leading me to see what parallels there could be between a record label and a software development team.

I think this is interesting because, compared to software development, music production is an incredibly unstructured field. Everyone that I talk to is winging it and not thinking about process, optimization, etc. The opposite is the case in software development.

Another interesting note is that, in software development, there is a huge array of tools for creating/managing projects – bug trackers, wikis, version control, CI servers, release management tools, ALM tools like TFS/VSTS, etc. I know of no ready-made analog of this setup in the music production world, and I think it’s an area ripe for experimentation/improvement.

Here’s a diagram of how these two organizational structures compare.

SoftwareDevMusicProductionComparison.png

I think there’s a ton of unexplored territory in the intersection between music and software. Example questions: programs are versioned, why not musical compositions? What would continuous delivery of music look like? 20-commits to production a day of music? Music factories? Etc.

A couple of guidelines for smoothly automating builds/releases for external teams

If you are working with an external software development team to automate their builds/releases, the team and you have different goals: they want to deliver features, while you want to deliver a solid build/release experience. These goals conflict with each other – merge conflicts, botched deployments to their testing servers, etc. Hence, a few quick guidelines to make the process run smoothly. This could be the start of some kind of general framework for build/release automation – we’ll see.

  1. Find out which branch the team is actively developing on. Base your pipeline creation off of this branch, to reduce the need for an expensive merge at the end of the process.
  2. Branch off of the team’s active development branch. Periodically, merge the team’s changes into your branch, to make sure everything is working, etc. This way, the delivery team can work independently with no worries about your build/release code impacting them. When your work is done, you can possibly run your changes by the team all at once, in a short code review, to verify everything looks ok.
  3. Clone the team’s existing build definitions/release definitions on their CI server and do your work on the clones. This gives you free rein to make the builds/releases work however you want without impacting the team. It also prevents problems caused by automated builds (CI, scheduled nightly builds, etc.) triggering incorrectly – i.e., the wrong build was kicked off, the nightly build ran using the definition you’re modifying, a CI build was accidentally deployed, etc.
  4. Clone the team’s deployment environment. If you’re working in Azure, do your deployments to a resource group outside of any of the team’s existing resource groups. This ensures that if you botch a deployment, you don’t accidentally wipe the team’s database, mess with their web servers, etc.

Nothing too complicated, but these seem like a good baseline set of practices to follow for doing this type of work.

 

Systems and dependencies

Note that this is mostly just stream-of-consciousness thinking…there’s nothing immediately valuable/learnable here.

Ops

I’m currently transitioning from a pure software-development position to a DevOps position. From the research/practice I’ve been doing, a big part of DevOps is defining your infrastructure as code. So rather than buying a physical server, putting in a Windows Server USB stick, clicking through the installer, and then manually installing services/applications, you just write down the stuff that you want in a text file. Then a program analyzes that file and “makes it so”.

As a result, you can easily create an unlimited number of machines with the same configuration. The system has several dependencies (such as SQL Server, IIS, etc). By making those dependencies explicit in a file, a whole new range of capabilities opens up – no longer do you have to click-click-wait 5 minutes, etc in order to construct the system. It’s all automatic.
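
As a toy illustration of the “write down what you want, and a program makes it so” idea, here’s a Python sketch that diffs a declared desired state against a machine’s current state and emits the actions needed to converge. The service names are hypothetical, and real tools (DSC, Ansible, Terraform, etc.) are vastly more sophisticated, but the core loop is the same:

```python
def plan(desired: dict, current: dict) -> list:
    """Compute the actions needed to converge a machine's current state
    onto the desired state declared in a config file."""
    actions = []
    for name, state in desired.items():
        if current.get(name) != state:
            actions.append(("set", name, state))
    for name in current:
        if name not in desired:
            actions.append(("remove", name))
    return actions

# Hypothetical service names, for illustration only.
desired = {"IIS": "installed", "SQLServer": "installed"}
current = {"IIS": "installed", "Telnet": "installed"}
print(plan(desired, current))
# [('set', 'SQLServer', 'installed'), ('remove', 'Telnet')]
```

Because the desired state lives in a file, you can run the same plan against any number of machines and get identical configurations.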

Dev

In software development, dependency injection is a really useful technique. It helps on the path to making a software system automatically testable, and it allows the application to be configured in one place. When you combine dependency injection with coding against interfaces, it’s easy to swap different components in and out of your system, such as mock objects. Ultimately, this means the system is much easier for a third party to test. Injecting dependencies throughout the application exposes several “test points” that can be used to modify components of the system without having to rewrite it.
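
Here’s a minimal Python sketch of the idea – the `MessageSender`/`SignupService` names are invented for illustration. The service depends on an interface, so a test can inject a fake and inspect what was sent, while production injects the real implementation:

```python
from typing import Protocol

class MessageSender(Protocol):
    """The interface the application codes against."""
    def send(self, to: str, body: str) -> None: ...

class SmtpSender:
    """Production implementation (stubbed out here)."""
    def send(self, to: str, body: str) -> None:
        raise RuntimeError("would talk to a real SMTP server")

class FakeSender:
    """Test double: records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

class SignupService:
    # The dependency is injected, so callers choose the implementation.
    def __init__(self, sender: MessageSender):
        self.sender = sender
    def register(self, email: str) -> None:
        self.sender.send(email, "Welcome!")

fake = FakeSender()
SignupService(fake).register("a@example.com")
print(fake.sent)  # [('a@example.com', 'Welcome!')]
```

The constructor parameter is exactly one of those “test points”: swapping `FakeSender` for `SmtpSender` requires no changes to `SignupService` itself.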

Project management?

I’ve never worked in project management, but projects do have dependencies. “For task X to be complete, task Y has to be completed first.” What would centralized management of a project’s dependencies look like?
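
For what it’s worth, the code-level analog of “task Y before task X” is a topological sort over the task graph. A small Python sketch, with hypothetical task names, using the standard library’s `graphlib`:

```python
from graphlib import TopologicalSorter

# Hypothetical project tasks: each task maps to the tasks it depends on.
tasks = {
    "deploy": {"build", "provision servers"},
    "build": {"write code"},
    "provision servers": set(),
    "write code": set(),
}

# static_order() yields a valid execution order respecting all dependencies.
order = list(TopologicalSorter(tasks).static_order())
print(order)
```

A centralized project-dependency tool could build on exactly this: declare the graph once, then derive schedules (and detect cycles) automatically.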

 

So all this brings to mind a few thoughts/questions…is there any kind of “dependency theory” in the world? Clearly dependencies are important when producing things. If there existed a general theory of dependencies, could we create tools that help us manage dependencies across all levels of a project, rather than keeping infrastructure dependencies in one place, project dependencies in another, and code dependencies in another? The pattern I’m observing so far is that (at least across DevOps and software dev) it’s a Good Thing to centralize your dependencies in a single location. Doing so makes your application/server much more easily configurable.

I don’t have any answers…interesting to think on, though. Maybe I’ll write a followup later on after some more time stewing on the topic.

Visual Studio tip: multi-monitor window setup

For the last couple years, I’ve been using my second monitor as basically a second place to throw code files in Visual Studio, if I want to view files side-by-side for example.

However, over the last few months, I’ve been adopting a different workflow which offers some nice advantages. Basically, the idea is to put all IDE/tool windows on the right monitor and a full-screen code window on the left monitor.

Benefits of this layout:

  • Less window-juggling. Greatly reduces the need to resize tool windows in order to make more space for code, or resize code to make more space for tool windows.
  • No more guessing where your code is. With two monitors, code’s always on the left monitor, and options are always on the right monitor. With three monitors, tools are on the right monitor, and the left two monitors are used for code.
  • More space for code on-screen. This isn’t a huge deal, but having 10 or 12 extra lines of code on-screen is handy.
  • It’s a system. With free-floating tool windows, there’s a lot of ad-hoc moving stuff around, reshuffling windows, etc. There’s no ambiguity with this setup – really simple.

Old layout:

VSWindowLayout-OneMonitor-Left

Left monitor – Crowded, icky

VSWindowLayout-OneMonitor-Right

Right monitor – Nice full-screen code window

New layout:

VSWindowLayout-SeparateMonitors-Left

Left monitor – Nice full-screen code window

VSWindowLayout-SeparateMonitors-Right

Right monitor – *All* of my options…not just a random collection of some options, etc.