
Welcome back, and Happy New Year!
After a strong year building my own game engine (see 2024 In Review for more about that), I’ve finally arrived at a point where I can start executing on my vision for Project Arroa: Revenwood.
However, there’s still one vital piece missing from the line-up of work I completed last year. You can’t have a forest without trees, and I hadn’t devoted any engineering time towards efficiently drawing trees and foliage. I decided to fix this over the holidays and into January. Here’s how all that panned out.
Assets
I’m a solo developer working full-time on a huge and complex game project with no outside financial support from investors or publishers. I am very fortunate to receive a small monthly boon from generous patrons over on Patreon.
From the outset, I committed to working within my means and using whatever resources are available to me. For the moment, that means purchased assets and building the skill sets required to create my own.
When it comes to nature assets, I’m using purchased asset packs where possible and gradually reworking these into something more uniquely my own. If my budget expands in future, I can look into hiring artists to create bespoke assets for my project.
Until then, I’ll keep working within my means and skilling up with Blender and Substance.
Model Importing
Up until now, Deep Engine was only capable of importing GLTF/GLB models. This was more than adequate for bootstrapping and getting detailed test scenes out of Blender, but it’s quite limiting when working with a broad variety of commercial-grade assets, especially animations, and the pipeline of converting everything into something my engine could use was becoming painful.
Performance was also a factor, as the asset pipeline didn’t build models into an engine-native binary format, instead parsing each model at load time.
Wouldn’t it be great if I could just drop in almost any model format and have it prebuilt into a custom binary that loads directly into my engine’s runtime? This would save me precious time and make everything load faster.
The first step here is Assimp, which can import 40+ different model formats including FBX and GLTF/GLB. Once this was integrated into my asset pipeline, I could open almost any model file and receive a nicely structured intermediate format courtesy of Assimp.
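For the curious, the Assimp side of this is pleasantly small. Here’s a minimal sketch using Assimp’s C++ API; the flags and types are Assimp’s own, but Deep Engine’s actual integration code isn’t shown in this post, so treat the surrounding function as illustrative.

```cpp
// Minimal sketch: import any supported model format through Assimp's C++ API.
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <cstdio>

const aiScene* ImportModel(Assimp::Importer& importer, const char* path)
{
    // Ask Assimp to normalise the input: triangulate polygons, generate
    // missing normals/tangents, and weld duplicate vertices.
    const aiScene* scene = importer.ReadFile(path,
        aiProcess_Triangulate |
        aiProcess_GenSmoothNormals |
        aiProcess_CalcTangentSpace |
        aiProcess_JoinIdenticalVertices);

    if (!scene || (scene->mFlags & AI_SCENE_FLAGS_INCOMPLETE) || !scene->mRootNode)
    {
        std::fprintf(stderr, "Assimp import failed: %s\n", importer.GetErrorString());
        return nullptr;
    }

    // Meshes live in scene->mMeshes; the node hierarchy hangs off mRootNode.
    return scene;
}
```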
Next, I had to create a new format so that the asset pipeline could take files like this:

And process them into a custom model format designed for my engine:

Deep Engine’s .model format is a container storing the model scene (i.e. the layout of all its nodes) and a binary blob of the model data suited to engine internals. So when it comes time to load a model, all that needs to happen is marshalling the binary data into some buffers, and the model is ready to draw!

The .model container can also pack additional information such as materials, textures, skeletal animations, and whatever else I might require in future.
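To make the container idea concrete, here’s a sketch of what a header for a format like this might look like. Every field name below is hypothetical; the post doesn’t document the actual .model layout.

```cpp
// Hypothetical header for a .model container: a scene section plus an
// engine-native binary blob that can be copied straight into vertex/index
// buffers at load time.
#include <cstdint>

struct ModelHeader
{
    uint32_t magic;        // identifies the file as a .model container
    uint32_t version;      // bumped whenever the layout changes
    uint32_t nodeCount;    // nodes in the serialized scene hierarchy
    uint32_t meshCount;    // mesh ranges within the binary blob
    uint64_t sceneOffset;  // byte offset of the scene (node layout) section
    uint64_t blobOffset;   // byte offset of the engine-ready binary blob
    uint64_t blobSize;     // blob size in bytes
};

struct MeshRange
{
    uint32_t vertexOffset;   // first vertex of this mesh within the blob
    uint32_t vertexCount;
    uint32_t indexOffset;    // first index of this mesh within the blob
    uint32_t indexCount;
    uint32_t materialIndex;  // into the container's packed material table
};
```

Loading then reduces to reading the header, walking the scene section, and handing the blob to the GPU.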
Now that I can load practically any model format, all I have to do is drop in some tree assets and I have a video game, right? Here’s how that looks.

What you’re seeing above is the first tree asset I imported. Before this can look anything like a proper tree, I need to work through several other problems first:
- Level of Detail. The meshes above are technically all the same tree model, each representing a different level of detail. LOD0 is the highest level of detail and LOD2 the lowest. When done properly, LOD0 is drawn close to the viewer and LOD2 farther away.
- Materials. Without the correct textures and surface properties, these trees will only ever look like some kind of vaporwave dream.
- Shaders. Materials need shaders to render correctly, and these tree assets need a few specific shaders.
- Performance. One does not simply walk into Mordor, and one does not simply render thousands of trees to make a forest. This will only melt your CPU and GPU. Special rendering pipelines need to be implemented to make this possible.
Materials
Deep Engine already has a very robust Material and Effect pipeline. What it lacked was a convenient front-end to compose materials quickly and easily. If I’m going to manage hundreds of commercial-grade assets, I need a faster way to set them up and iterate with them.
So the next round of work I completed was building a material editor tool where I could simply drag-and-drop in effects (shaders), set texture channels, and save changes back into the asset pipeline. Here’s how it turned out.

The Content view is now integrated with the asset pipeline. It supports asset drag-and-drop and populates the Details inspector with asset properties. For example, in the screenshot below I’ve adjusted the properties of a texture to let the engine know it belongs in a normal channel, which in turn changes how the texture is built. I can also set the compression format, max size, and filter mode, rename the file, and so on. Each asset type can have its own inspector properties.

With all of this in place, I can very efficiently compose new materials by simply dropping in the appropriate effect and dragging textures into material slots. This is how it looks in action (click through for video).
I did have to cut a couple of corners to save time. The colour property is a hex value for now until I write a colour picker UI, and the render state flags are set as a value rather than with a friendly set of properties. As always, I’ll keep improving this over time.
Level of Detail
Now I have to teach Deep Engine how to understand model LODs and render the correct level of detail based on how large the model is in screen space. Screen-space size is a better measure of a model’s impact on the viewer than distance alone, and it’s easier to tune for.
This required building a new mesh rendering component that draws models in LOD groups rather than individually. This is easier than it sounds, as Deep Engine has a very flexible setup for this kind of thing.
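Here’s a minimal sketch of screen-space LOD selection, assuming each model carries a bounding sphere. The projection maths is standard; the thresholds are illustrative placeholders, not Deep Engine’s actual values.

```cpp
// Sketch of screen-space LOD selection from a bounding sphere.
#include <cmath>

// Approximate fraction of the viewport height covered by a bounding sphere
// of the given radius, at the given distance from a camera with vertical
// field of view fovY (in radians).
float ScreenCoverage(float radius, float distance, float fovY)
{
    return radius / (distance * std::tan(fovY * 0.5f));
}

int SelectLod(float coverage)
{
    if (coverage > 0.25f) return 0; // large on screen: full-detail LOD0
    if (coverage > 0.08f) return 1; // mid-range: LOD1
    return 2;                       // small on screen: cheapest LOD2
}
```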
I started building an editor tool to adjust model properties and set LODs in more detail. Shown here is a prototype version of the tool visualising the model bounding box (blue box) and its screen area (yellow rectangle).

I haven’t progressed this tool very far yet, but it was enough to get LODs working and test some things out before moving on.
Shaders
These tree assets required some custom shaders to render correctly with all of their texture channels. For an engine like Unity, the content creator also supplies the shaders and materials needed to draw a model correctly. Because my engine is totally unique, I have to do that work myself before I can render these assets.


Most notably, the trunk materials require masking and detail textures, and the leaves require specific alpha cutout and culling properties so they render and light correctly when placed on billboard cards.
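As a sketch, the leaf case boils down to a handful of material properties like these. The names are invented, but the intent matches the description above: clip low-alpha fragments and render both faces of each card.

```cpp
// Hypothetical material properties for foliage billboard cards.
struct FoliageMaterialState
{
    bool  alphaCutout     = true;  // discard fragments with alpha below cutoff
    float alphaCutoff     = 0.5f;  // a common threshold for foliage cards
    bool  doubleSided     = true;  // disable backface culling on the cards
    bool  flipBackNormals = true;  // so back faces still light plausibly
};
```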
With all of the above now in place, I can finally render some good-looking trees that produce lovely dappled shadows. They even self-shadow nicely.



Performance
Even with LODs, rendering trees and other foliage as standalone objects can be performance intensive. Beyond drawing all those triangles, there are bottlenecks created by uploading vertex/index data and texture samplers to the GPU. Even if everything is batched perfectly, the overhead of drawing individual models hundreds or thousands of times over becomes impractical for games.
This is where I use two techniques to help improve performance. The first is instancing, which tells the GPU to draw the same object many times over. This doesn’t require any round trips back to the CPU, so the instances can execute very quickly on the GPU.
The second technique is called indirect drawing. This takes instancing even further by allowing the required parts of a scene, trees in this case, to be processed entirely by a compute shader, which stuffs draw commands into GPU buffer objects. These buffers are then executed in a subsequent draw indirect step. This way, not only are the models drawn purely on the GPU, all of the setup and processing like visibility culling and LOD selection happens on the GPU as well. Once set up, there’s almost no CPU involvement each frame beyond kicking off the process.
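To illustrate the shape of this, here’s a sketch using OpenGL as a stand-in API (the post doesn’t say which graphics API Deep Engine targets). The struct below is OpenGL’s standard indexed indirect command layout; a compute shader culls instances, selects LODs, and writes one command per visible batch, then a single multi-draw call executes the whole buffer with no CPU readback.

```cpp
// GPU-driven drawing: the compute shader fills a buffer of these commands,
// and one multi-draw call consumes them all.
#include <cstdint>

struct DrawElementsIndirectCommand
{
    uint32_t count;         // index count of the selected LOD mesh
    uint32_t instanceCount; // how many instances survived culling
    uint32_t firstIndex;    // offset into the shared index buffer
    uint32_t baseVertex;    // offset into the shared vertex buffer
    uint32_t baseInstance;  // where this batch's per-instance data begins
};

// Conceptually, each frame:
//   glDispatchCompute(...);                       // cull + pick LODs, write commands
//   glMemoryBarrier(GL_COMMAND_BARRIER_BIT);      // make the writes visible
//   glMultiDrawElementsIndirect(GL_TRIANGLES,     // execute every command
//       GL_UNSIGNED_INT, nullptr, drawCount, 0);  // straight from the buffer
```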
With the above working, I can now render many more trees in real time.


My indirect drawing pipeline only came online a few days ago and still needs substantial improvement before it reaches its full potential. But I’m happy with it as a starting point and pleased with how neatly this all fit into my rendering pipeline.
Something not visible in the above screenshots is how this system works on the other side of the API. I can simply hand it a bunch of models, say “draw these all over”, and it does. This paves the way for a future editor tool to easily paint tree instances onto terrain.
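In code, the front end described above might look something like this. Every type and method name here is invented for illustration; only the shape of the API reflects the description.

```cpp
// Invented front-end: register models once, add instances anywhere, and let
// the GPU handle culling, LOD selection, and draw submission.
#include <string>
#include <vector>

struct Transform { float position[3]; float yaw; float scale; };
struct ModelHandle { int id; };

class InstancedModelRenderer
{
public:
    ModelHandle RegisterModel(const std::string& path);      // loads a .model
    void AddInstance(ModelHandle model, const Transform& t); // queues instance data
    void Render();                                           // GPU-side cull + indirect draw
};

void PlantForest(InstancedModelRenderer& renderer,
                 const std::vector<Transform>& placements)
{
    ModelHandle oak = renderer.RegisterModel("trees/oak.model");
    for (const Transform& t : placements)
        renderer.AddInstance(oak, t);
    renderer.Render(); // "draw these all over"
}
```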
Grass & Small Details
As a bonus, the work done so far on drawing tree instances can also be applied to smaller ground details like grass and stones!


Doing this correctly, however, requires a different management setup than the one used for trees. I still have more work to do before I have highly performant trees and ground details, but all the pieces are now in place for me to do that.
Conclusion
At time of writing, Deep Engine is finally at the point where I can start realising my vision for Project Arroa: Revenwood and move towards something that looks more like an actual game.
All of the core functionality I required is now working to one degree or another. My tooling foundation is already proving to be very flexible and extensible.
From today, I’m changing gears to lean more into the creative work of roughing out my first gameplay map.
And it will be rough to start with. Please don’t expect a polished-looking environment right out of the gate. My focus will be more on creating playable spaces with good flow that serve gameplay. Even big-budget titles need several art passes before they look good, and I don’t expect my small game to be any different.
Thank you for reading! Until next time, you can also find me at the links below.