Optimizing Your Optimization
I’ve been working on a game called Conquering Ciros, a 2D action roguelite heavily inspired by Vampire Survivors. I, along with other students at Indiana University, will publish this game as part of our senior project. In our game we opted for some procedural generation in our levels, which meant we had lots of different environmental props that had to be scattered around the game world. I was assigned the task of getting that system up and running.
Scattering objects around a space sounds simple in theory, but as with most large systems in games, it quickly became clear that a lot more thought would have to go into this. The first question was “how should the objects be scattered?” If they were scattered truly at random it looked awful: props would overlap, there could be large sections of very sparse land that weren’t interesting to the player, and the player or our enemies could get stuck between props. Thankfully, this type of problem has already been solved in many different ways, and I knew of a few of them. Poisson disk sampling immediately stood out as a great candidate: there were lots of online resources about it, it prevented overlapping objects, and it gave a nice-looking uniform distribution of points.
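To make that concrete, here’s a minimal, self-contained sketch of Poisson disk sampling in the style of Bridson’s algorithm. This is illustrative Python, not our actual Unity code, and the function names are my own:

```python
import math
import random

def poisson_disk(width, height, r, k=30, seed=0):
    """Poisson disk sampling: points at least distance r apart.

    A background grid with cell size r / sqrt(2) holds at most one
    point per cell, so neighbor checks only look at nearby cells.
    """
    rng = random.Random(seed)
    cell = r / math.sqrt(2)
    cols, rows = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * cols for _ in range(rows)]
    points, active = [], []

    def emit(p):
        points.append(p)
        active.append(p)
        grid[int(p[1] / cell)][int(p[0] / cell)] = p

    def fits(p):
        if not (0 <= p[0] < width and 0 <= p[1] < height):
            return False
        gx, gy = int(p[0] / cell), int(p[1] / cell)
        # Points closer than r can only live within two cells of p.
        for y in range(max(gy - 2, 0), min(gy + 3, rows)):
            for x in range(max(gx - 2, 0), min(gx + 3, cols)):
                q = grid[y][x]
                if q and (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < r * r:
                    return False
        return True

    emit((rng.uniform(0, width), rng.uniform(0, height)))
    while active:
        i = rng.randrange(len(active))
        px, py = active[i]
        for _ in range(k):
            # Try up to k candidates in the annulus [r, 2r) around
            # a randomly chosen active point.
            ang = rng.uniform(0, 2 * math.pi)
            d = rng.uniform(r, 2 * r)
            cand = (px + d * math.cos(ang), py + d * math.sin(ang))
            if fits(cand):
                emit(cand)
                break
        else:
            active.pop(i)  # no candidate fit; retire this point
    return points
```

The background grid is what keeps this fast: each candidate only checks a handful of neighboring cells instead of every point placed so far.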
That last point, however, wasn’t necessarily a good thing. Props being evenly spaced out was, in fact, a bad thing! First off, it looked too even. This wasn’t as big of an issue – modulating the point density based on some smooth noise broke up the uniformity enough for our tastes. However, there was an even bigger issue (literally): big props. Not all environment props are the same size – we had big fields that were many times larger than the player, and also tiny mushrooms that were barely a fourth of the player’s height.
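Here’s roughly what that density modulation can look like. This is an illustrative Python sketch, not our Unity code, and the summed sines are just a stand-in for a real smooth-noise function like Perlin noise:

```python
import math
import random

def smooth_noise(x, y):
    # Stand-in for a real noise function (e.g. Perlin noise):
    # a few summed sines give smoothly varying values in [0, 1].
    v = (math.sin(x * 0.12) + math.sin(y * 0.17) +
         math.sin((x + y) * 0.07)) / 6 + 0.5
    return min(max(v, 0.0), 1.0)

def thin_by_density(points, seed=0):
    """Keep each point with probability equal to the local noise value,
    so dense and sparse patches emerge from the noise field."""
    rng = random.Random(seed)
    return [p for p in points if rng.random() < smooth_noise(p[0], p[1])]
```

Running the evenly spaced Poisson points through a filter like this is what broke up the uniformity for us.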
Maybe a mushroom overlapping a field isn’t a bad thing, but what about two fields overlapping? Or a rock overlapping a mushroom? Or a tree overlapping a rock? It quickly became clear that we had a problem: some props cannot overlap other props. Okay, now that we had a clear idea of what needed to be solved, we could start implementing. I started off this “no overlapping props” system the way I believe all complex systems should start: hacky and naive. I did just about the simplest thing I could – iterating over all props and checking each prop’s bounds against every other prop’s. If any two props overlapped, I would cull one of them. For a time, this worked great! We didn’t have a very big world when I implemented this, so performance wasn’t a concern, and we got to see exactly what this kind of system would make our game look like.
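In illustrative Python (our real version worked on Unity bounds, and which prop “wins” a conflict was a design choice), that naive first pass looked something like this:

```python
def overlaps(a, b):
    """Axis-aligned bounding boxes as (min_x, min_y, max_x, max_y).
    Boxes that merely touch edges do not count as overlapping."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def cull_overlapping(props):
    """Naive O(n^2) pass: keep a prop only if it doesn't overlap any
    already-kept prop, so earlier props win conflicts."""
    kept = []
    for p in props:
        if not any(overlaps(p, q) for q in kept):
            kept.append(p)
    return kept
```

Every prop gets compared against every kept prop, so this is quadratic in the number of props – completely fine for a small fixed world, and exactly the kind of thing that stops scaling later.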
Fast forward a few weeks, and we now wanted an infinite world. At this point I knew the first go at the overlap culling system wouldn’t cut it – it was time to optimize. My first thought was “ooo, it’s time to pull out some cool game dev algorithms I’ve learned about. We’ll need spatial partitioning to break up the world, and maybe I can make the partitions sparse for fast lookups and to keep memory usage low… I think I should use a quadtree”. While this might have been a great way to go about it, and I’m sure our performance and memory usage would have been great, there’s something to take away from the two mindsets I’d had at this point. During the first go at implementing this system I was not thinking about performance, memory, or complexity. I just wanted to see the system in game – that’s all. This time, I immediately jumped to thinking about performance, memory, and complexity.

That’s reasonable, considering the rewrite was born of a need for better performance in a possibly infinite world, but one other thing is ultimately more important than any of those three: time. When making a game (especially a small game with a small team), we are trying to produce the highest quality game in the smallest amount of time. This is the real optimization small teams face. Because working hours are scarce, iterating is key. We need to see our ideas in the game as fast as possible so we can throw out the bad ones and iterate on the good ones. If I had spent an extra week implementing a fantastic prop culling algorithm from the get-go, we wouldn’t have gotten to see what our prop spawns looked like in game. What if we then decided against an infinite procedural world? All that work would have been essentially wasted. Taking smaller steps, doing only what we needed in the moment, helped our team make creative decisions quickly.
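For context, the kind of structure I had in mind looks roughly like this uniform-grid spatial hash – a simpler cousin of the quadtree – sketched here in Python purely for illustration, not as anything we shipped:

```python
from collections import defaultdict

class SpatialHash:
    """Uniform-grid spatial hash: buckets AABBs by cell so an overlap
    query only tests boxes sharing a cell, not every box in the world."""

    def __init__(self, cell_size):
        self.cell = cell_size
        self.buckets = defaultdict(list)

    def _cells(self, box):
        # Yield every grid cell the box (min_x, min_y, max_x, max_y) touches.
        x0, y0, x1, y1 = box
        for gx in range(int(x0 // self.cell), int(x1 // self.cell) + 1):
            for gy in range(int(y0 // self.cell), int(y1 // self.cell) + 1):
                yield (gx, gy)

    def insert(self, box):
        for c in self._cells(box):
            self.buckets[c].append(box)

    def query(self, box):
        """Return stored boxes overlapping `box`, each reported once."""
        seen, hits = set(), []
        for c in self._cells(box):
            for other in self.buckets[c]:
                if other in seen:
                    continue
                seen.add(other)
                if (other[0] < box[2] and box[0] < other[2] and
                        other[1] < box[3] and box[1] < other[3]):
                    hits.append(other)
        return hits
```

A sparse dictionary of buckets means empty space costs nothing, which is why this shape of structure suits an infinite world – but as the rest of this post argues, building it up front would have been the wrong trade.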
Instead of implementing that quadtree algorithm, I opted for something even simpler and yet even better: Unity’s built-in systems. This seems obvious, like something one wouldn’t really need to talk about. Of course you should use Unity’s built-in features… duh. But this is something programmers (myself included, of course) fall into sometimes. I like programming. I like these cool game development algorithms, how they work, and how they make games better. I would have enjoyed implementing that quadtree. Because of this, my mind often jumps to doing things that are performant, sure – efficient, sure… but also sometimes unnecessary, time consuming, and overly complex. Unity has a built-in function that does literally exactly what I would have implemented myself (Physics2D.OverlapBoxNonAlloc). So while I miss out on the opportunity for a bit of “handmade” programming, I save myself and my team a whole boatload of time and frustration.
My takeaway from all this – and hopefully yours – is to keep it simple. Plan for tomorrow, but implement for today. Your crazy cool idea for a super flexible system that will stand the test of time and be useful forever sounds great, and may in fact be great, but it’ll take time. When working with small teams on small projects with limited time, it’s really important to optimize your optimizations. Make sure any optimizations or big systems you’re implementing are really, genuinely necessary. If you can build something simple today and iterate on it tomorrow, prefer that over doing both of those steps at once.