2024 In Review

As my first year of solo development comes to a close, now feels like a good time to look back and reflect on the milestones achieved in 2024.

Introduction

I’ve accomplished a lot this year, with all of my efforts going towards building a custom game engine that I intend to use as the foundation of everything I build in the future.

To better understand my goals for this year, and why I took on the work of building my own engine, it helps to step back a little to the final few months of 2023.

When I started pre-production of my next game in the second half of 2023, I had every intention of using Unity Engine once again. This quickly came to an end in September when Unity Technologies broke my trust in them. I made the difficult decision to switch to a different engine before starting actual production.

Between September and December 2023, I evaluated multiple alternatives to Unity Engine by spending two or three weeks in each to get a feel for their workflow and capabilities.

Finally, I picked up my own Deep Engine to play with again. This is an XNA-powered game engine I built in 2012 for a Daggerfall total conversion called Ruins of Hill Deep. While that concept never saw the light of day, it paved the way for Daggerfall Unity a few years later.

I quickly ported Deep Engine over to MonoGame to get it working again (practically a 1:1 swap from XNA) and bashed a few scenes together. As I worked, I realised a custom engine provided more control than I could find with any other solution. Not only would I have total ownership over its design and direction, I could optimise it for the style of game I want to create. I already had a vision to create a highly moddable game like Daggerfall Unity, and with a custom engine I could explore this in ways not possible with other engines.

However, the original 2012 Deep Engine was still very limited and held back by its XNA roots. It was stuck with older shader models, lacking compute shaders, and its asset pipeline could not support the modding potential I had in mind.

Over the Christmas through New Year break of 2023, I came to a decision on how I would move forward. I decided to put my game on hold and rebuild Deep Engine from the ground up, using everything I’ve learned over the last decade of development. I would step back to a fresh start in 2024 and build something new. Something I owned, made exclusively for my goals.

This is how that year unfolded.

January – Deep Engine Next

I started 2024 mapping out my goals for the engine – which parts I wanted to build from scratch, which parts to salvage, and what open source tools I could use to accelerate my work. The following rough outline emerged for what I had started thinking of as Deep Engine Next.

  • Cross-platform supporting Windows, Linux, and Mac. Other platforms a plus.
  • C++ wrapper for native libraries, C# for engine runtime.
  • .NET 8/9 and modern C# language features.
  • SDL2 for window context, input handling, hardware events, and basic audio.
  • BGFX to abstract rendering APIs. DirectX 11/12 and Vulkan to start with.
  • OzzAnimation for skeletal animation support, blending, IK, and more.
  • DotRecast and Detour for building navmesh and handling navigation.
  • BepuPhysics2 for physics simulation.
  • Reuse entity-component based scene graph from DeepEngine.
  • Reuse deferred rendering pipeline from DeepEngine.
  • Rebuild foundation types such as textures, buffers, models, materials, fonts, shaders, etc.
  • Rebuild asset pipeline with support for foundation types.
  • Use .NET System classes wherever available, and high quality libraries everywhere else.
  • Tie it all together!

Real work commenced with the construction of a serializable project format and command line tools. The project defines how to locate and compile assets into engine formats. The following screenshots show the initial version of the asset builder at work.

Once compiled, the engine can load assets using code like below.
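The original snippet was shown as a screenshot, so here is a rough stand-in sketch of the idea. The `Assets.Load<T>` API and the asset paths below are hypothetical stand-ins, not the engine's real interface; only the `Texture2D` and `ModelAsset` type names come from the post.

```csharp
using System;

// Hypothetical stand-in types -- the real engine classes carry pixel data,
// GPU buffers, and so on.
public sealed class Texture2D { }
public sealed class ModelAsset { }

public static class Assets
{
    // In a real engine this would look the path up in the compiled project
    // manifest and deserialize the engine-format payload from disk.
    public static T Load<T>(string path) where T : new()
    {
        Console.WriteLine($"Loading {typeof(T).Name} from {path}");
        return new T();
    }
}

public static class Example
{
    public static void Run()
    {
        // Load compiled assets by their project-relative paths.
        var texture = Assets.Load<Texture2D>("Textures/Bricks");
        var model   = Assets.Load<ModelAsset>("Models/DiveBar");
    }
}
```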

Of course, this already hides a lot of complexity. Before I could load something as a Texture2D, ModelAsset, or ShaderAsset, I first had to build those classes and make them functional.

Easily the driest part of creating a new game engine is the vast amount of work needed to service the most basic things I took for granted in other engines. The entirety of January was consumed building foundation types and integrating them into the asset builder.

February – Materials & Rendering

With primitive foundation types now in place, I could start building compound assets with them. Materials are one example of such a type. Materials combine shader programs, texture inputs, and parameters to define how the surface of an object is rendered.

This required building a parameter system for materials and rebuilding my shaders to convert the deferred rendering pipeline to BGFX. I also needed to determine how models would be imported, settling on the GLTF/GLB format, which can be readily exported from Blender.

By early February, I had model import working with my material and rendering system. Following are two of the very first screenshots of my new engine rendering something other than a flat cube.

At this stage, I had rebuilt the 2012 render pipeline in linear space for BGFX with new shaders and even basic PBR support. I decided on a metallic-roughness workflow and ORM texture packing. This is a good standard for texture management shared by many other products.
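ORM packing stores three greyscale maps in the colour channels of a single texture: R holds ambient Occlusion, G holds Roughness, B holds Metallic, which is the channel layout glTF and most tools agree on. Unpacking a texel is trivial (the type names below are illustrative):

```csharp
// One ORM texel packs three maps: R = ambient Occlusion, G = Roughness,
// B = Metallic. Channels are stored as bytes and mapped back to [0, 1].
public readonly record struct OrmSample(float Occlusion, float Roughness, float Metallic);

public static class Orm
{
    public static OrmSample Unpack(byte r, byte g, byte b) =>
        new(r / 255f, g / 255f, b / 255f);
}
```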

I would revisit PBR in the coming months, but I was happy with how the asset pipeline, materials, and rendering worked so far.

Also in February, I started learning how to use Blender more effectively from online courses. The goal was to empower myself to build assets to my own specification, even if they’re just prototypes to start with. Below is the result from one course I took. Very basic stuff, but it’s a huge boost to my confidence knowing I can build something more advanced than a cube if I need it.

I would continue to expand my Blender knowledge in 2024, but hardly as much as hoped. I find working on modelling to be very relaxing compared to coding, and I plan to make more time for this activity in 2025.

March – Importing Scenes & Lighting

Now I needed a scene format to load in more complex environments. I started with GLTF/GLB here as this format describes not only single models, but entire scene layouts too. Blender and Unreal could both export GLTF readily, and I had a whole lot of test assets lying around to play with.

Following is the first scene imported into Deep Engine using assets I had on hand. This dive bar became one of my favourite little test scenes early on, as it was just complicated enough to be interesting without stressing my unoptimised engine too far.

Up until now, I had only a single directional light. I started work on formalising light objects in scenes so that I could render more detailed lighting.

The following screenshots display the scene as visualised by the Albedo and Normal outputs of the GBuffer. Together with the depth buffer, these outputs contain all the information required to render lights touching visible parts of the scene.


The Albedo buffer describes unlit colours in a frame

The Normal buffer describes the direction surfaces are pointing

Point lighting works by rendering a sphere to represent the light area. Whichever pixels are touched by these spheres are considered to be lit. This is combined with surface normals and other information about the light, such as falloff, colour, and intensity, to produce a lit pixel in the lighting buffer. The following two images show the raw lighting buffer once all lights are rendered.

Combining the lighting and albedo buffers results in a fully lit scene.
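As a rough CPU-side sketch of those two steps (the real versions run in BGFX shaders, and the function names and the linear falloff here are mine, not the engine's), a point light's contribution to a pixel and the final composite might look like:

```csharp
using System;
using System.Numerics;

public static class DeferredLighting
{
    // A single point light's contribution to one pixel: Lambertian N.L
    // scaled by colour, intensity, and a simple linear distance falloff.
    public static Vector3 PointLight(
        Vector3 surfacePos, Vector3 surfaceNormal,
        Vector3 lightPos, Vector3 lightColour, float intensity, float radius)
    {
        Vector3 toLight = lightPos - surfacePos;
        float distance = toLight.Length();
        if (distance >= radius) return Vector3.Zero;   // outside the light sphere

        float nDotL = MathF.Max(0f, Vector3.Dot(
            Vector3.Normalize(surfaceNormal), toLight / distance));
        float falloff = 1f - distance / radius;
        return lightColour * (intensity * nDotL * falloff);
    }

    // Composite step: the lit pixel is unlit albedo modulated by the
    // accumulated lighting buffer.
    public static Vector3 Composite(Vector3 albedo, Vector3 lighting) =>
        albedo * lighting;
}
```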

Other work completed in March improved the scene description of objects to include their bounding volumes. This is important to compute visibility for cameras and lighting, handle scene picking, and more.

In the following screenshot, object bounds are represented by wireframe boxes. Bounding volumes combine upwards, and the hierarchical nature of scenes means that if a parent object is not visible, then neither will be its children. This enables fast culling as entire parts of the scene can be rejected quickly if not visible to a camera or a light.
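The culling logic described above can be sketched in a few lines, with the camera frustum simplified to an axis-aligned box for brevity (the types and names are illustrative, not the engine's actual scene graph):

```csharp
using System.Collections.Generic;
using System.Numerics;

// A node's Bounds are the combined bounds of itself and all children,
// so one failed intersection test rejects the entire subtree.
public readonly record struct Aabb(Vector3 Min, Vector3 Max)
{
    public bool Intersects(Aabb o) =>
        Min.X <= o.Max.X && Max.X >= o.Min.X &&
        Min.Y <= o.Max.Y && Max.Y >= o.Min.Y &&
        Min.Z <= o.Max.Z && Max.Z >= o.Min.Z;
}

public sealed class SceneNode
{
    public Aabb Bounds;
    public List<SceneNode> Children { get; } = new();

    public void CollectVisible(Aabb view, List<SceneNode> visible)
    {
        if (!Bounds.Intersects(view)) return;   // reject whole subtree early
        visible.Add(this);
        foreach (var child in Children)
            child.CollectVisible(view, visible);
    }
}
```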

I spend the remainder of March fixing bugs and making small optimisations. By the end of the month, my engine is able to render over 1000 random point lights floating through the scene.

April – Skeletal Animation & Physics

With all the basics of assets, scenes, rendering, lighting, and more now up and running, I decide it’s time to work on animation and physics. This was easily the most fun I had all year.

I first integrated OzzAnimation with the engine. To do this, I built a native C++ wrapper exporting methods that can be bound in C#. I found OzzAnimation a blast to use. It deftly handled all the hard parts of loading skeletons and driving their animation with minimal complexity on my end.

OzzAnimation is multithreaded and easily drives hundreds of models.

Next up, I implemented physics using BepuPhysics2. The 2012 version of Deep Engine used the older BepuPhysics1, which despite being a totally different codebase gave me some experience and an existing roadmap I could follow in my new engine.

Within a few days, I’m joyfully spraying around physics confetti over my scenes.

I resume work on animation, now adding 1D blend trees to smoothly blend animations between states like idle > walk > run > sprint. Once again, OzzAnimation handles all of this perfectly.
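A 1D blend tree picks the two clips neighbouring a parameter value (such as movement speed) and a weight for interpolating between them. The sampling and pose blending itself is handled by OzzAnimation; the sketch below only shows the weight-selection logic, with illustrative threshold values:

```csharp
public static class BlendTree
{
    // Given clip thresholds along an axis (e.g. speed: idle=0, walk=2,
    // run=5, sprint=8 -- illustrative values), return the indices of the
    // two clips to blend and the interpolation weight between them.
    public static (int A, int B, float T) Blend1D(float[] thresholds, float value)
    {
        if (value <= thresholds[0]) return (0, 0, 0f);
        for (int i = 0; i < thresholds.Length - 1; i++)
            if (value <= thresholds[i + 1])
                return (i, i + 1,
                    (value - thresholds[i]) / (thresholds[i + 1] - thresholds[i]));
        int last = thresholds.Length - 1;   // past the end: clamp to final clip
        return (last, last, 0f);
    }
}
```

For idle > walk > run > sprint, a speed of 3.5 lands halfway between walk and run, so both clips are sampled and blended at equal weight.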

With blend trees working, I built a third-person character controller. I put together everything I had so far with animation, physics, and rendering in a fun test scene where I can charge through a mess of physics confetti.

Before April wrapped up, I spent a little more time in Blender chopping the arms off models and working through the basics of cameras and creating FPS arm animations.

I didn’t make as much progress with this as I would have liked, but it gave me a novice’s understanding of 3D rigging and posing models to chain together animations. I have a lot more learning to do here in 2025.

May – Shapes, Navigation, Cameras, UI

I started the month building a unified shape system for Deep Engine Next. The intent here is to abstract objects in a way common to both the physics and navigation systems. Because I’m integrating two completely different libraries, they don’t natively share the same data structures; I overcame this by crafting a common layer above both physics and navigation.

This allows me to define a scene’s physical space in a unified way with boxes, spheres, capsules, convex hulls, meshes, point clouds, and more. This unified representation can then be consumed by physics or navigation despite them being very different internally.
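A minimal sketch of what such a common layer can look like (all names here are illustrative, not the engine's actual types): one shared shape description, with each consumer translating it into whatever BepuPhysics2 or DotRecast needs internally.

```csharp
using System.Numerics;

// One common shape vocabulary shared by physics and navigation.
public abstract record Shape;
public sealed record BoxShape(Vector3 HalfExtents) : Shape;
public sealed record SphereShape(float Radius) : Shape;
public sealed record CapsuleShape(float Radius, float Length) : Shape;
public sealed record MeshShape(Vector3[] Vertices, int[] Indices) : Shape;

// Each consumer (physics world, navmesh builder) implements this and
// converts the common shapes into its own internal representation.
public interface IShapeConsumer
{
    void Add(Shape shape, Matrix4x4 worldTransform);
}
```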


Scene as viewed by player with detailed meshes and materials

Scene as viewed by physics and navigation with optimal shapes

With this physical representation in place, I can now generate navigation meshes for agents to walk over. In the following screenshot, the green area represents the walkable parts of the scene as generated by DotRecast. The engine can use this to steer animated actors around the scene, such as NPCs and enemies.

Next, I prepare camera layers and culling masks. This allows me to stack camera outputs together where each camera sees different elements of a scene depending on their culling mask.

For example, there are two cameras in the scene below. Depending on the culling mask, a camera might see only the robot or only the environment, and these different views can be combined in any number of ways.
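At its core, a culling mask is just a bitwise test between an object's layer and the camera's mask. A minimal sketch (the layer names are mine, not the engine's):

```csharp
using System;

// Layers are bit flags; a camera draws an object only when the object's
// layer overlaps the camera's culling mask.
[Flags]
public enum Layer : uint
{
    Environment = 1 << 0,
    Characters  = 1 << 1,
    FpsArms     = 1 << 2,
}

public static class Culling
{
    public static bool IsVisibleTo(Layer objectLayer, Layer cameraMask) =>
        (objectLayer & cameraMask) != 0;
}
```

With this scheme, a scene camera with a mask of `Environment | Characters` never draws the `FpsArms` layer, while a second camera masked to `FpsArms` alone draws nothing else.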

The use I have in mind for stacked cameras and culling masks is to render first-person arms that are visible to the player and interact with lighting, but will not clip into the environment or inside of enemies. This is a common technique in FPS games.

To test this use-case, I built a first-person player controller with stacked cameras. The first camera renders the scene, the second camera renders the FPS arms. These combine together in the final output such that the player’s virtual arms are physically part of the same scene and lighting, despite being rendered by a camera layer with a different FOV.

With all of the above now in place, I’m at a point where I need an editor to tie all of these systems together. Other than a tool for me to create my game, the editor will be central to mod creation, one of my core pillars.

I start with simple text rendering using MSDF fonts.

Then I start rebuilding my UI system from scratch, beginning with a docking UI frame where individual tools can be docked into view. Following is the first screenshot of the editor tool with some empty tool windows docked.

After a few days’ work on the UI system, I have a docking window setup where I can drag around tools and dock them as tabs or as another window.

The first tool window I create is a searchable console view. This allows me to review output from the engine and enter console commands.

The console command input proves to be very useful over the coming months, as I can quickly write up and automate actions for the engine to execute from the command line.

June – Humanoids & Animation Retargeting

I felt like returning to animation, as I was having so much fun with it back in April. I decide to make a start on a humanoid system to shape, dress, and animate human characters. Other than the player character, this system will be used for NPCs and humanoid foes.

I start with defining a rigged humanoid template. The coloured areas represent parts that can be skinned by clothing, either by painting UVs or swapping out meshes.

Due to time and asset limitations, I couldn’t make as much progress on this as I hoped. I wrap up the month with some basic animation retargeting and move on.

July – Body Morphs & Improved PBR

I could write a devlog series on this process alone. The short version is that the facial morphs are generated with Mixamo Fuse and then exported. My asset builder can then repackage these down into tiny delta files the engine can apply as morphs to the humanoid template.

Other than facial features, the system can morph body shapes and retarget animations to the humanoid template. At this time, the system is only somewhat useful and mostly incomplete. But it makes for a good foundation I can pick up and improve on later.

After two months on the humanoid system and not very satisfied with my progress, I decide to redirect my time elsewhere. I take a short break to improve the PBR rendering in Deep Engine Next.

Working from the excellent tutorial code in the Nadrin/PBR repository, I implement more advanced PBR and image-based lighting in Deep Engine Next. Following are a couple of before/after images.

This work allows me to use HDRI maps as environment lighting and a source of reflections, instantly improving the lighting of my scenes.

August – Project Arroa: Revenwood

In August, I finally announced the game project I’ve been planning since 2023. If you haven’t already, you can read the full announcement below.

I still have a substantial amount of engine work to get through, particularly on the tools front, before I can use my engine to build my game. But by this time, I was fielding a lot of questions about what I was planning and decided it was time to share something of my plans.

With that now out in the open, I redouble my efforts on the editor tooling so that I can start building scenes for my game. Here’s how the editor looks at the end of August. Things are slowly beginning to take shape.

September – Gizmos & Scene Interaction

In September, I began cutting the code required for editing my scenes. This involved building a gizmo system to render tool handles inside the scene view to visualise and move around scene objects. The following shows scene picking at work.

To draw the gizmos, I needed a way of rendering objects separately from the main scene that can also intersect with it. I spend time improving my renderer to handle more render queues and even forward rendering, not just the deferred pipeline I’ve been using this whole time.

This will be useful later as I build more tooling and need to combine transparent objects into scenes.

At the end of September, a new puppy comes into our life. This good girl makes me happy on the slow coding days.

October – Terrain

I wrote a whole devlog post about this one! If you haven’t already, check it out below.

November – Grids, Thumbnails, Shadows, Instancing

I added a nice grid system gizmo to the scene view, along with grid snapping when dragging around objects. My editor is starting to look more like a useful tool every day now.

Still primarily working with test assets, I built a scrolling thumbnail view that allows me to select assets and drag them into the scene. This required building an editor-wide, context-aware drag-and-drop system. I’ll get a lot more use out of this in the future.

I then decide to switch gears a little by adding shadows to the renderer. I had actually made a small start on this earlier in the year but didn’t get it finished.

After a couple of days work, I had the basics of shadow maps working.

I then rounded out my linear HDR rendering pipeline with ACES filmic tone mapping. This really helps the contrast between light and dark areas pop while preserving colours.
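ACES filmic tone mapping compresses linear HDR values into displayable range while rolling off highlights gracefully. One widely used version, and a reasonable sketch of what any such implementation looks like, is Krzysztof Narkowicz's curve fit, applied per colour channel after lighting:

```csharp
using System;

public static class ToneMap
{
    // Narkowicz's fitted approximation of the ACES filmic curve:
    // x * (2.51x + 0.03) / (x * (2.43x + 0.59) + 0.14), clamped to [0, 1].
    public static float AcesFilmic(float x)
    {
        const float a = 2.51f, b = 0.03f, c = 2.43f, d = 0.59f, e = 0.14f;
        return Math.Clamp(x * (a * x + b) / (x * (c * x + d) + e), 0f, 1f);
    }
}
```

Bright HDR values (e.g. 10.0) map to pure white instead of clipping harshly, while mid-range values keep most of their contrast.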

Toward the end of November, I’m working with larger scenes to stress-test my engine ahead of further optimisations.

I also integrate full scene serialization. Deep Engine Next can now save/load scenes in its own custom format, a snippet of which is below.

This serialization work is all-important to building my own bespoke scenes moving forward. I’m no longer limited to creating scenes inside Blender and attaching components with the console; I can construct scenes natively in my editor tool and work on them there.

Finally, I upgrade my renderer to support instanced rendering. This reduces the number of draw calls of a test scene from over 50k down to 4.4k.
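The reason instancing slashes draw calls can be reduced to a simple count (the types here are illustrative): objects sharing the same mesh and material collapse into a single instanced draw with a buffer of per-instance transforms, so the call count scales with unique (mesh, material) pairs rather than with the number of objects.

```csharp
using System.Collections.Generic;
using System.Linq;

public static class Batching
{
    // Each unique (mesh, material) pair becomes one instanced draw call,
    // regardless of how many objects share that pair.
    public static int InstancedDrawCalls(
        IEnumerable<(string Mesh, string Material)> objects) =>
        objects.Distinct().Count();
}
```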

December – More Optimisations, Storytime, UI Again

After a long year of hard work, I’m ready to make some actual progress on my game starting in 2025. I want this work to be performed entirely in my own editor tool, which means engine work must continue a little bit longer to build out the required editor capabilities.

My plan at this time is to build a functional stamp-based terrain editor into my engine, then start constructing the first area of my game. Shown below is the test terrain I used in October.

The first step is to migrate the model thumbnail viewer to become part of the Content view. This view has been awaiting full capabilities for some time now.

I also implemented modal and modeless windows so that I can draw things like tooltips, popup menus, draggable items, and more. Fonts get an upgrade with line wrapping, and my Content view starts taking shape.

Content view with thumbnails and line-wrapping labels
Dragging an asset out of Content view to drop somewhere, e.g. a material input

These facilities will be used to drag stamp assets into the terrain graph, compose materials, and much more. However, most of it won’t start taking shape until early next year.

On the optimisation front, I found combined savings that cut my frame time overheads by over 60%. Some of my heavier test scenes went from around 20ms down to <5ms per frame in areas.

Finally, I dropped the first of a series of short stories to introduce the world of Project Arroa and set the stage for the events in Revenwood. You can start reading the first story at the link below.

Looking Forward to 2025

This brings us to the present. I’m working madly to wrap up as much UI tooling as I can ahead of the new year. I hope to begin 2025 with a renewed focus on working on an actual game and not just a game engine.

However, work on the game engine and its tools is central to my vision. If you really enjoy reading about the engine side of things, I’ll continue posting about this for some time yet.

I do plan to take a break and spend time with my loved ones over the Christmas through New Year period. If you follow me on socials or my Discord, you’ll have noticed I’ve already started slowing down. Work will resume at my usual pace again next year. I’ll be around though if you want to chat on Discord. I’m always happy to answer questions about the project.

Until then, I want to wish you all a very Merry Christmas and Happy New Year! Best wishes for 2025!

Special Thanks To My Patrons

I started this year working full-time on this project. I have big dreams and few means to realise them. So I want to express my heartfelt gratitude to everyone who believed in me enough to support me financially on Patreon this year.

Aina The Khajiit
anchorlight
B Murphy
Corentin Lamy
Drew Bachman
Helioavis
jeremy laumon
Joris van Eijden
Kab the Bird Ranger
Marcin Danysz
Michael Avent
Sharoy Veduchi
Simon Forsling
Squid
Trash Medellon

Your contributions have so far helped me acquire assets to use in my game and courses to accelerate my learning across multiple subjects.

You’re all rockstars in my eyes. Thank you! ❤️

