Hi,

Thanks for using The Machinery. We're excited to share with the world what we have cooked up and happy to have you among the people that are trying out the engine. This book is here to give you a little bit of background and information about what you're looking at.

Besides this book, we have several other resources that might be worth checking out.

Also feel free to check out our internal Programming Guidebook.

The purpose of this Programming Guidebook is to lay down principles and guidelines for how to write code and work together at Our Machinery.

Enjoy!

The Machinery Team

This book is currently a work in progress. Feel free to contribute to it via OurMachinery/themachinery-books: either create a Pull Request or submit an issue.

ℹ️ Dropbox usage: This book uses Dropbox for image storage, so if you are using an aggressive ad blocker some images might not load.

Introduction to the Engine

What is The Machinery?

The Machinery is a framework for building different kinds of 3D software: editors, tools, pipeline components, games, visualizations, simulations, toys, experiments etc. You can think of it as a game engine, but its intended use stretches beyond games, covering a wide range of applications. What makes The Machinery special is that it is lightweight and completely plugin-based. This means that you are not limited to a single editor and runtime. Rather, you can mix and match components as you need (or write your own) to create your own unique experience. The Machinery can also be stripped down and run embedded, as part of a larger application.

A toolbox of building blocks

The Machinery is completely plugin-based. You can pick and choose the parts you need to customize it to your specific needs. You can extend the engine, and the editor, by writing your own plugins. You can even build completely new applications on top of our API, or embed our code into your existing applications or workflows.

Powerful editing model

The Machinery uses a powerful data model called The Truth. This model is used to represent assets and has built-in support for serialization, streaming, copy/paste, drag-and-drop as well as unlimited undo/redo. It supports an advanced hierarchical prototyping model for making derivative object instances and propagating changes. It even has full support for real-time collaboration: multiple people can work together in the same game project, Google Docs-style. Since all of these features are built into the data model itself, your custom game-specific data gets them automatically, without you having to write a line of code.

Easy to build tools

The Machinery uses an in-house, lightweight immediate-mode GUI (IMGUI) implementation that is used both for the editor UI and for any runtime UI the end user needs. Extending the editor UI with custom plugins is simple, and the ability to hot-reload code makes creating new UIs a breeze.

Using our UI APIs, it is easy to create custom UI controls. And everything has been optimized to feel snappy and responsive. In fact, the entire editor UI is rendered with just a single draw call.

Modern rendering architecture

The renderer has been designed to take full advantage of modern low-level graphics APIs. We currently provide a Vulkan backend; a Metal 2 backend is in the works. You can reason explicitly about advanced setups such as multiple GPUs and GPU execution queues. As with the rest of the engine, the built-in rendering pipeline is easy to tweak and extend -- we ship the source code for the high-level parts of the rendering pipeline with all versions of The Machinery, including the Indie Free version.

High performance

The Machinery puts a lot of effort into making the engine fast by focusing on data flows and cache-friendly memory layouts; we strive to follow data-oriented design principles. Code that needs to be heavily parallelized can run on top of our fiber-based job system, taking full advantage of the parallel processing power of modern CPUs. We also have a thread-based task system for longer-running tasks.

Simplicity

The Machinery aims to be simple, minimalistic and easy to understand. In short, we want to be "hackable". All our code is written in plain C, a significantly simpler language than modern C++. The entire code base compiles in less than 60 seconds and we support hot-reloading of DLLs, allowing for fast iteration cycles.

Our APIs are exposed as C interfaces, which means they can easily be used from C, C++, Rust or any other language that has an FFI for calling into C code.
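To give a flavour of what this looks like, here is a minimal sketch of the struct-of-function-pointers pattern our C interfaces follow. The my_example_api struct and its functions below are made up purely for illustration; the real APIs live in the SDK headers.

#include <math.h>
#include <stdio.h>

// Hypothetical API struct in the style of a C interface: a plain struct of
// function pointers that any language with a C FFI can call into.
struct my_example_api {
    float (*length)(float x, float y, float z);
    void (*log)(const char *message);
};

static float length_impl(float x, float y, float z) { return sqrtf(x * x + y * y + z * z); }
static void log_impl(const char *message) { printf("%s\n", message); }

int main(void)
{
    // In the engine, such a struct would be fetched from the API registry;
    // here we fill it in locally just to show the calling pattern.
    struct my_example_api api = { length_impl, log_impl };
    printf("length = %f\n", api.length(3.0f, 4.0f, 0.0f));
    api.log("hello from C");
    return 0;
}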

License

We provide three different licenses:

  • The Indie Free license is completely free for indie developers and includes all of The Machinery, except for the source code. It does come with the full SDK for making new plugins, as well as some sample plugins and the source code for the high-level parts of our rendering pipeline.

  • The Indie Pro license is the same as Indie Free, but also includes the full source code of the engine and all associated tools. Price: $100 / user / year.

  • The Business version is aimed towards larger businesses. Feature-wise it is the same as Indie Pro, but it comes with prioritized support. Price: $900 / user / year.

The definition of "larger business" above is any company with a yearly revenue exceeding $100K; such companies are not eligible to buy the Indie Pro license.

All of the licenses allow for full royalty-free use of The Machinery. You can build and sell commercial games, tools, applications, and plugins. For more details, see our pricing page.

When you download The Machinery, you will be on the Indie Free license. If you do not fall in the "indie" category, you can still use this license to evaluate The Machinery, but you need to buy a business license to put it into production.

2021 Early Adopter Program FAQ

I bought an early adopters license, will I continue to pay the discounted price when my subscription renews?

Yes, as long as you don't cancel your subscription, you will continue to pay the discounted price for the next 5 years.

Frequently asked questions

Do you have a public roadmap?

Yes, you can find the roadmap online on our website: Open Roadmap.

I found a bug, what do I do?

Have you checked the Troubleshooting section? If you cannot solve your issue with its help, you can report the bug here: GitHub issues page.

Do you have a Discord server?

Yes, here's an invite link: https://discord.com/invite/SHHSZaH

What counts as a "user"?

An individual that is using The Machinery as part of their work or working on a team building a game in The Machinery, whether they are a programmer, artist, animator, level designer, etc.

I have multiple machines and platforms, do I need separate licenses?

No, the license is per user. You can use your license on as many different machines and platforms as you like.

Who can use the indie license?

As a company, you can use the indie license for your employees if the total revenue and funding of your company is less than $100K/year. As an individual, you can use the indie license if your total revenue from The Machinery or related work is less than $100K/year.

Can I use the indie license for hobby/side project unrelated to my "day job"?

Yes, you can get an indie license for you as an individual (to use for hobby/side projects) even if you work for a company that makes more than $100K/year. This license is tied to you as an individual and you may not use it as part of your "day job" work (if you want to do that, you need a Business license). Also, if your side project starts earning more than $100K/year, you will need a Business license for your side project.

Do you offer Academic licenses?

Yes. You can use our Indie Free license for academic purposes. If you are interested in source code access for academic purposes, see our Academic License page.

Do you offer discounts for emerging markets? (South America, Africa, etc.)

We'd like to. If you are located in one of these markets and want to offer us more insight into your needs, please contact us.

Do you offer volume discounts?

We offer pricing for large companies through our Enterprise license. Please contact us.

I'm not sure what pricing structure fits my studio/myself, but I don't consider myself an "Enterprise". Can I contact you for a bespoke license?

Yes! We understand some studios and individuals might have different needs regarding what tools they need or even payment plans. Please contact us.

If I get a Pro license just for myself, can I build custom versions of the engine and distribute them to the rest of my team?

No. If you are working on a game as a team, everybody on the team needs to be on the same license. So in this case, everybody would need to be on the Pro license. If you have a loosely organized team with contractors or part-time workers, you need at least one license per full-time equivalent. So if you have 10 people working 50 % on the team, you need 5 licenses.

If I make a game with The Machinery, do I have to keep paying for the license to sell the game?

No. As long as you are actively working on the game (making updates, patches, etc) you need an active license, but you don't need a license just to distribute or sell a game that you are no longer actively working on.

Can I evaluate the engine without buying a license?

Yes, you can download the binary version of the engine and use it for evaluation purposes without obtaining a license. You may not use the engine in production while evaluating it. If you are interested in evaluating the source code, please contact us.

I bought an early adopters license, will I continue to pay the discounted price when my subscription renews?

Yes, as long as you don't cancel your subscription, you will continue to pay the discounted price for the next 5 years.

How does source code access work?

If you buy a license with source code included, you will be given direct access to our GitHub repository which contains all the source code of The Machinery.

To link your GitHub account, enter your GitHub account information on your Profile page. You should get an invite to the repository in a couple of hours.

I signed up for source code but didn't get access.

Make sure your GitHub account is correctly entered on the Profile page. It should be your account name, not your email.

GitHub invites frequently end up in the Spam folder. Check there or go to the repository to see your invite.

Can I sell tools developed using The Machinery APIs?

Yes, you can commercialize any game, tool, application, content, etc created with The Machinery as long as you have a valid license.

Can I sell plugins for The Machinery?

Yes, if you develop your own plugins for extending and enhancing The Machinery, you may sell those plugins for use by others.

Can I blog, stream, and tweet about The Machinery? Can I monetize that work?

Yes, you can make tutorials, videos, screen captures, etc about The Machinery and distribute them for free or sell them for money.

Is there anything I can't sell?

Yes. You can't sell The Machinery itself, a reskin of The Machinery or a game engine/editor built on The Machinery. Anyone using a derivative game engine/editor based on The Machinery would themselves need a The Machinery license.

Glossary

The following list provides the most common terms used in The Machinery.

The Truth: The Truth holds the authoritative data state of a The Machinery project and allows it to be shared between various systems. In particular, it supports the following features:

- Objects with properties (bools, ints, floats, strings).
- Buffer properties for holding big chunks of binary data.
- Subobjects (objects of other types belonging to this object).
- References to other objects.
- Sets of subobjects and references.
- Prototypes and inheritance.
- Change notifications.
- Safe multi-threaded access.
- Undo/redo operations (from multiple streams).

More information is available in the documentation.

Truth Aspect: An "aspect" is an interface (struct of function pointers) identified by a unique identifier. The Truth allows you to associate aspects with object types. This lets you extend The Truth with new functionality.

Creation Graph: The Creation Graph system is a node-based way of creating various types of assets and primitives such as images, buffers, draw calls, etc. It can be used by tech artists to create shaders, but also to set up data processing pipelines. More

Creation Graph GPU Nodes: Creation Graphs contain nodes that can execute on both the CPU and GPU. The GPU nodes usually become part of a shader program when the creation graph is compiled. There are nodes that transition data from the CPU to the GPU and vice versa.

Entity Graph: The Machinery's visual scripting language for gameplay. More

Prototype: A "template" Truth object from which other Truth objects can be instantiated, inheriting the properties of the prototype. (Also called Prefabs in other engines.) For more information, see Prototypes.

Properties Tab: Whenever an object gets focused in the Asset Browser or Entity Tree, its properties can be edited from the Properties tab.

Asset Browser: Shows all assets in your project in an explorer-style fashion. An asset can be thought of as the equivalent of a file in the OS. Some assets, such as entities, can be dragged and dropped from the Asset Browser into the Scene tab.

Graph Tab/View: A tab that can represent either an Entity Graph or a Creation Graph.

Entity Tree: Hierarchical view of the Entity currently being edited in the Scene tab. In The Machinery, every entity has a parent except for the root entity of the world. This tab allows the user to quickly reason about and re-organize the hierarchy by adding, removing, expanding and reordering entities.

Simulation Entry: Provides start, stop and update functions for gameplay code to the Simulation. This concept consists of two parts: the Simulation Entry interface (tm_simulation_entry_i) and the Simulation Entry component. The interface provides the functionality, and the component assigns that functionality to a specific Entity, so that it runs when that entity is spawned.

Engine (ECS): An engine is an update function that runs on all entities that have certain components. (Jobified.)

System (ECS): A system is an update function that runs on the entire entity context.

Entity Context: The object that holds the state of all entities. Can be thought of as holding the state of a "world".

Entity: A world is built from different entities, which in turn can contain child entities and components. The components are the building blocks of each entity and give it its meaning.

Components: Provide functionality and data to an Entity. Can be added / removed both from the editor and at runtime, during Simulation.

DCC Asset: Stands for Digital Content Creation Asset. This is our intermediate data format that contains meshes, materials and images. From these assets the user can extract entities and Creation Graphs that represent the materials and images.

Plugin: The Machinery is built around a plugin model. All features, even the built-in ones, are provided through plugins. You can extend The Machinery by writing your own plugins. When The Machinery launches, it loads all the plugins named tm_*.dll in its plugins/ folder. If you write your own plugins, name them so that they start with tm_ and put them in this folder; they will be loaded together with the built-in plugins.

Plugin assets: A special type of asset that contains the library (executable code) for a plugin. This makes it possible to add compiled plugins without having to copy the DLL into a specific folder; instead, you can just drag and drop DLLs into the project. The user will be asked for consent to run the code in the plugin.

Project: A self-contained directory or database file that holds all the game data used during development.

tmsl: .tmsl stands for The Machinery Shader Language. It is a data-driven JSON front-end to create a tm_shader_declaration_o. The tm_shader_declaration_o can be anything from a complete shader configuration (all needed shader stages, any states and input/output it needs, etc.) that can be compiled and used when rendering a draw call (or dispatching a compute job), to just fragments of that. Typically, a .tmsl file only contains fragments of what's needed, and our shader compiler combines a bunch of them to build a functional shader. .tmsl files can define new Creation Graph nodes.

Troubleshooting

This section addresses common problems that can arise when using The Machinery.

System Requirements

Windows

Operating System Version: 10, 11
GPU: A Vulkan 1.2 capable GPU with the latest drivers.

Linux

Operating System: 64-bit Linux machine running Ubuntu 20.04 or a recent ArchLinux.
GPU: A Vulkan 1.2 capable GPU with the latest drivers.
Packages: sudo apt-get install libxcb-ewmh2 libxcb-cursor0 libxcb-xrm0 unzip

Note: Other Linux distributions have not been extensively tested, but you are welcome to try.

A Crash happened

Did The Machinery Crash? There are a few first troubleshooting steps:

  1. Is your system (Windows or Linux) up-to-date?
  2. Do you have the latest Graphics Driver installed?
  3. Does your system support Vulkan 1.2?
  4. Does your system fulfill our system requirements?
  5. Did someone else have this issue before? Check on Discord or GitHub issues page

In case you cannot debug the crash yourself, you should create an issue on our Issue Tracker: GitHub issues page. To obtain the logs or crash dumps, follow these steps:

Windows 10 & Windows 11 Editor

For crashes on Windows that you cannot debug yourself, you can enable full crash dumps via the file utils/enable-full-dumps.reg that we ship as part of the engine. After enabling full dumps, you can find them in %AppData%\..\Local\The Machinery, in the CrashDumps folder. In case of an error report, it can be very helpful to provide access to the crash dump. You can submit bugs on our public GitHub issues page. Please do not forget to mention your current Engine version.

In order to obtain log files, go to the same folder where you find the crash dumps (%AppData%\..\Local\The Machinery); the logs are in the Logs subfolder.
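For convenience, you can open both folders directly from a command prompt; on a default Windows setup, %LOCALAPPDATA% expands to the same %AppData%\..\Local location mentioned above:

explorer "%LOCALAPPDATA%\The Machinery\CrashDumps"
explorer "%LOCALAPPDATA%\The Machinery\Logs"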

Linux

For crashes on Linux that you cannot debug yourself, the dumps can be found in /home/YOUR_USER/.the_machinery, in the CrashDumps folder. In case of an error report, it can be very helpful to provide access to the crash dump. You can submit bugs on our public GitHub issues page. Please do not forget to mention your current Engine version.

In order to obtain log files, go to the same folder where you find the crash dumps (/home/YOUR_USER/.the_machinery); the logs are in the Logs subfolder.

Graphics

If the crash happens on the GPU or in graphics related code, then the crash error message will say so. The first step in a Vulkan related crash is to update your graphics drivers. If this doesn't help then please report the issue to us with the following information:

  • The error message you got when the crash happened. It should include file information and a Vulkan error code; it is vital to share these.
  • The log file, see the previous section on how to obtain this.
  • A crash dump file, see the previous section on how to obtain this.

You can submit bugs on our public GitHub issues page. Please do not forget to mention your current Engine version and provide a copy of your logs.

tmbuild cannot find build tools

On Windows, make sure you have Visual Studio installed. If you do, but you did some sort of non-typical installation, set one of the following environment variables before running tmbuild: TM_VS2017_DIR or TM_VS2019_DIR. It needs to point to the root directory of your Visual Studio installation.
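For example, from a Windows command prompt (the installation path below is only an illustration; point it at wherever your Visual Studio actually lives):

set TM_VS2019_DIR=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community
tmbuild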

tmbuild cannot find environment variables

Before you can build any project, you need to set the following environment variable:

  • TM_SDK_DIR - This should point to your The Machinery root folder, i.e. where the headers live.

The following variable is optional:

  • TM_LIB_DIR - This is where the libraries we depend upon are downloaded and extracted. If not set, libraries will be downloaded to the lib subfolder of TM_SDK_DIR.
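As an example, setting these up could look like this; the paths are placeholders for wherever you unzipped the engine and where you want the libraries to go:

rem Windows (cmd)
set TM_SDK_DIR=C:\work\themachinery
set TM_LIB_DIR=C:\work\themachinery-lib

# Linux (bash)
export TM_SDK_DIR=$HOME/themachinery
export TM_LIB_DIR=$HOME/themachinery-lib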

You can also refer to this guide on tmbuild.

clang-format pollutes my git commits

When you commit to the git repository, we automatically run clang-format as a git commit hook. clang-format is an application that auto-formats code; the hook only changes files that already have changes. However, make sure you have the correct version of clang-format installed: different versions format code differently, and having the wrong version can result in changes to code you never touched (although, as mentioned, only in files you already modified). Therefore, we provide clang-format as a library, put alongside the other libraries that tmbuild downloads. Visual Studio comes with its own version, so make sure to go into the Visual Studio settings and point it to our version. To make command-line git find it, you may want to add it to your PATH environment variable.
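A quick way to check which clang-format version command-line git will pick up, and to re-format a single file by hand, looks like this (the file path is just an example; -style=file makes clang-format pick up the nearest .clang-format file):

clang-format --version
clang-format -i -style=file path/to/my_file.c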

Where to report bugs or feedback

  1. If you have any problems running the software, encounter bugs or crashes, etc, then please report them on our public bug tracker. We will fix bugs as soon as we can and provide updated executables for download on the website. If you have a source code license and fixed something yourself, we'd gladly review and accept Pull Requests.
  2. If you have other feedback or questions, ask them on our Discord Server or post them on our forum. We appreciate candid, honest opinions.

Contributing

We welcome all contributions to the Engine and the Books. All contributions must follow these requirements.

Contributing to Books

If you want to make a contribution to this repository (other than a small spelling or formatting fix), please first create an Issue or a Discussion thread, discussing the addition you want to make. Otherwise, there is a chance your submission will be rejected if we decide it does not fit into the structure of this document.

We reserve the right to edit and reject contributions.

Contributor's License Agreement

By submitting pull requests to this repository you represent that:

  • You are the copyright owner of the text you are submitting.
  • You grant to Our Machinery and to recipients of our software and source code a perpetual, worldwide non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense and distribute your contributions and derivative work.
  • You are legally entitled to grant the above license. If your employer, if any, has rights to intellectual property that you create, you represent that you have received permissions to make this contribution on behalf of your employer.

If you do not agree with any of these representations, do not submit pull requests to the repository.

Contribution to The Machinery

If you want to make a contribution to the main repo, please first create an Issue or a Discussion thread discussing the addition you want to make. You should also read the Code Guide Book before you start your Pull Request. Otherwise, there is a chance your Pull Request will be rejected.

We reserve the right to edit and reject contributions.

Contributor's License Agreement

By submitting pull requests to this repository you represent that:

  • You are the copyright owner of the code you are submitting.
  • You grant to Our Machinery and to recipients of our software and source code a perpetual, worldwide non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense and distribute your contributions and derivative work.
  • You are legally entitled to grant the above license. If your employer, if any, has rights to intellectual property that you create, you represent that you have received permissions to make this contribution on behalf of your employer.

If you do not agree with any of these representations, do not submit pull requests to the repository.

Code of Conduct

  • Please be nice and respectful, don't be rude.

  • Any kind of hate speech leads to being banned.

  • Think before you type, especially when you feel that a discussion is becoming heated.

  • Listen to Our Machinery Team members.

  • Keep discussions in the correct forums.

Getting started

To run The Machinery you need:

  • A 64-bit Windows 10 machine with the latest Vulkan drivers
  • Or a 64-bit Ubuntu 20.04 Linux machine with the latest Vulkan drivers (ArchLinux should also work, no guarantees are made for other distros)
  • And an ourmachinery.com account. Sign up here!

On Linux, you also need to install the following packages for the runtime:

sudo apt-get install libxcb-ewmh2 libxcb-cursor0 libxcb-xrm0 unzip

Does this not work on your distro? No problem, visit our guide on the Linux installation process across distributions.

Getting up and running

Quick steps to get up and running:

  1. Download The Machinery at https://ourmachinery.com/download.html.

  2. Sign up for an ourmachinery.com account here. (It's free!)

  3. Unzip the downloaded zip file to a location of your choosing.

  4. Run bin/the-machinery.exe in the downloaded folder to start The Machinery.

  5. Log in with your ourmachinery.com account at the login screen and approve the EULA.

  6. To find some samples to play with go to Help > Download Sample Projects in the main menu.

  7. Pick one of the sample projects (for example Physics), click Get and then Open.

  8. Play around with it, try some other samples and read the rest of this document to find out what else you can do with The Machinery.

If you get errors that mention Vulkan or if you see weird rendering glitches, make sure to update your GPU drivers to the latest version. If that doesn't work, post an issue on our issue tracker or ping us on Discord and we will help you.

Source Code access

You can use tmbuild (from the binary build) to download the engine source code and install all needed dependencies. This is done via tmbuild --install; in this case, you may want to use --github-token as well and provide your token. Alternatively, you can manually clone the repo as you would with any other git repository.
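For example (YOUR_GITHUB_TOKEN is a placeholder for your own GitHub personal access token):

tmbuild --install --github-token YOUR_GITHUB_TOKEN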

I signed up for source code but didn't get access.

Make sure your GitHub account is correctly entered on the Profile page. It should be your account name, not your email.

GitHub invites frequently end up in the Spam folder. Check there or go to the repository to see your invite.

If you still have problems despite this, then contact us on [email protected]

Contents of the binary distribution

For those who use the binary distribution, i.e. downloaded a prebuilt package of the engine, this page describes the contents of that package.

headers/, lib/, bin/: The headers, libraries, and DLLs that make up The Machinery SDK.
bin/the-machinery.exe: The Machinery's main editor.
doc/: SDK documentation.
samples/: Sample code that shows how to extend the editor and SDK with plugins, as well as how to build new executables on top of The Machinery.
code/: For reference: code for our utility programs.
bin/simple-draw.exe: A simple drawing program built on top of the SDK.
bin/simple-3d.exe: A simple 3D viewport built on top of the SDK.
bin/tmbuild.exe: Our build tool, which can be used to build samples and plugins.
bin/docgen.exe: Our tool for generating documentation.
bin/hash.exe: Our tool for generating static hash strings.
bin/localize.exe: Our tool for generating localization data.
bin/runner.exe: An executable that can load a The Machinery project and do what the Simulate tab does, but stand-alone. It is copied whenever you Publish a project from within The Machinery.
utils/.clang-format: The clang-format settings we use to format our code.
utils/pre-commit: The pre-commit hook we use in our git repository to auto-format the code.
utils/enable-full-dumps.reg: A registry file that enables full crash dumps (see the Troubleshooting section).

You can use the Download tab inside The Machinery to download sample projects for the engine.

Here's a list of the sample projects that are available:

Animation: A sample project that features an animated character.
Creation Graphs: Sample use of creation graphs.
Gameplay First Person: A sample first-person game.
Gameplay Interaction System: A sample first-person game with interactable entities.
Gameplay Third Person: A sample third-person game.
Modular Dungeon Kit: A sample modular project that lets you compose dungeon scenes out of modular components.
Physics: A sample project that demonstrates the use of physics.
Pong: A sample visual scripting and gameplay project.
Ray Tracing: Hello Triangle: A sample project showing how to use the ray tracing APIs.
Sound: A sample project demonstrating sound playback.
All Sample Projects: A zip containing all sample projects.

The All Sample Projects download contains all these projects, and also some sample engine plugins that can get you started with extending the engine.

Note that some of the content in these projects was created by other people and licensed under Creative Commons or other licenses. See the *-license.txt files in the projects for attribution and license information.

The editor that is included allows you to:

  • Import various types of DCC assets (FBX, GLTF, etc).
  • Create simple entities/scenes by placing and arranging assets.
  • Add scripted behaviors to entities using the visual scripting language in the Entity Graphs.
  • Add physics collision to objects and modify their physical properties.
  • Import animations and create advanced animation setups such as an Animation State Machine.
  • Import WAV files and play them or place them on entities.
  • Run and simulate these behaviors using the Simulate tab.
  • Extend the engine and the editor with your own plugins that implement new editor tabs, entity components, gameplay code, etc.
  • Write your own applications, using The Machinery as an SDK.

Sign in

When you first run the-machinery.exe, you will encounter a login screen:

Login screen.

To use the editor, you must log in with an Our Machinery account. If you haven't done so already, press the Sign Up button to create an account, or go directly to https://ourmachinery.com/sign-up.html.

In addition, you also need to agree to our EULA.

Project Setup

In The Machinery, we have two kinds of projects: A database project and a directory project. It is possible to save a database project as a directory project and vice versa.

Note: When saving a Database project as a Directory project or vice versa, be aware that these are two different projects. Hence changes to one will not apply to the other.

The Directory Project

A directory project saves all your assets, etc., in a specified project folder. The assets that make up the project are saved as a combination of JSON files and binary buffers. This file format is better suited for source control.

view of a directory project in the Windows explorer

The Asset database Project

In contrast to the directory project, the database project results in one file rather than multiple files. It has the file ending .the_machinery_db. This database will contain all your assets. It loads faster, and it is the file format used by the Runner, the stand-alone application that is used to run games made within The Machinery.

database project in the file explorer

Project Management

Can I directly add Assets to my project from the File Explorer of my OS?

No, the editor will not import assets directly added to the project folder via your OS File Explorer. However, you can modify The Machinery's files (in a directory project) while the editor is running or before you open the project. If you do this, the Engine will warn you:

Changes to the project on disk were detected: [Import] [Ignore]

More information on this topic here.

You handle the main project management steps, such as Create and Save, through the File menu. By default (1), The Machinery will save your project as a directory project. However, you can also save your current project as an Asset Database (2).

Possible folder structure for a project

The following shows a possible folder structure of a game project. This is not a recommendation, just a suggestion. In the end, how your projects should be structured depends on your needs and your workflows.

  1. game_project contains a The Machinery Directory Project
  2. plugins contains the DLLs of your plugins (these should not be checked into source control)
  3. raw_assets may contain the raw assets your DCC tool needs to process. Your The Machinery project can point here and you may be able to just re-import things from there if needed
  4. src contains the source code of your plugins. It can be split into multiple subfolders depending on your liking and needs.

Example src folder

Here we have one single premake file and a single libs.json, as well as the libs folder. This allows you to run tmbuild in just this folder and build all plugins (or only the ones you want) at once. It will also generate a single Visual Studio solution. In this example, all plugins copy their .dll/.so files into the ../plugins folder.
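Such a src folder could look something like this (the plugin folder names are placeholders; the premake file is typically named premake5.lua):

src/
    premake5.lua
    libs.json
    libs/                  (dependencies downloaded here by tmbuild)
    my_gameplay_plugin/
    my_editor_extension/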

A possible .gitignore

plugins/*
src/libs/*

as well as the default Visual Studio, Visual Studio Code and C/C++ gitignore content.

Version Control

At Our Machinery we use Git as our version control tool. This guide shows how you can use Git with both the binary version and the source version of the Engine. It also gives some insight into what a potential setup of The Machinery with Git, Perforce (Helix Core), or Plastic SCM could look like.

What are Git, Perforce and Plastic SCM?

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. It is a decentralized source control tool: there is no central repository that you have to use to fetch the latest changes or push your own. All your changes, as well as the entire project history, are stored locally. If you have set up a remote repository, you can push your changes to it. This means you can work offline and still make use of the benefits of version control. To support large files such as binary files, you need to set up Git Large File Storage (LFS). The default workflow is CLI (command-line) based, but there are many different alternatives for a GUI (see below).

In contrast, Perforce (Helix Core) is a centralized version control tool that requires a central repository to always be available; you check your changes in directly to that repository. This also means that you can only get a different version (revision) of the repository when you are online, so working offline with the benefits of source control is not possible. Helix Core has built-in support for storing binaries; you do not need to do anything other than check them in. The main workflow is GUI based and bundled into a few applications.

In contrast to both Git and Perforce (Helix Core), Plastic SCM is designed to support both workflows: decentralized (like Git) and centralized (like Perforce). Moreover, Plastic SCM has built-in support for large files such as binaries. As with Perforce, Plastic's workflow is mainly GUI based.

An overview of the three source control systems

Feature: Plastic / Git / Perforce
Work centralized (just check in, no push/pull): Yes / No / Yes
Work distributed (push/pull + local repo): Yes / Yes / No
Can handle huge repos: Yes / Yes / Yes
Good with huge files (binary files): Yes / No (unless using LFS) / Yes
File locking (binaries, art): Yes / No / Yes
Comes with a GUI: Yes / No / Yes
Special GUI & workflow for artists (and anyone not a coder): Yes / No / Yes
Branches: Yes / Yes / Yes (but they work differently than in Git or Plastic)
Detecting merges between branches: Yes / Yes / No
Cloud hosting: Yes / Yes / Yes
License: Paid / Free / Paid

Where can I host my version control?

Git

GitHub: Generally free, but to get more space, GitHub Actions time, etc. you need to pay.
GitLab: Generally free, but to get more space, CI (GitLab CI) hours, etc. you need to pay.
GitLab Self Hosted: It is possible to host it yourself; in this case you need a server (Linux / Windows).
Bitbucket: Generally free, but to get more space, CI (TeamCity) hours, etc. you need to pay.
Bitbucket Self Hosted: You need to contact the sales team.
Helix TeamHub: Helix TeamHub can host your source code repository, whether it's Mercurial, Git, or SVN. You can add multiple repositories in one project or create a separate project for each repository. It is a paid solution.

Perforce (Helix Core)

Helix Core: Perforce can host Helix Core for you.
Helix Core Self Hosted: You can host your own Helix Core server for free for up to 5 team members and 20 workspaces.

Plastic SCM

Official Plastic SCM Cloud Service: Free for up to 3 team members and 5 GB of cloud storage. You can only self-host Plastic SCM if you have the Enterprise license.

UI Clients

Git

More UIs: https://git-scm.com/downloads/guis

Perforce (Helix Core)

  • There is only the official client.

Plastic SCM

  • There is only the official client, but it is split between Plastic Gluon (version control for artists) and the normal Plastic client, which is aimed more at programmers.

Git Setup for The Machinery

You should check in (push/commit) your entire directory project into your Git repository. When it comes to your plugins, we recommend checking in the source code, not the binaries (unless they are project plugins and therefore live in the project). For this we recommend the following .gitignore files:

Start from the default C/C++ .gitignore file that you can find online (C gitignore or C++ gitignore) and add the following modifications:

plugins/*
src/libs/*
bin/*
build/*

We also suggest the Visual Studio gitignore file as well as the one for Visual Studio Code. We do not recommend checking the binary version of the Editor into git, and we also recommend that you do not check in the contents of TM_LIB_DIR.
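If the repository has not used Git LFS before, remember to enable it once so that the filters in the .gitattributes file below actually take effect:

git lfs install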

.gitattributes

Here is an example .gitattributes file for a Git LFS repository with a The Machinery directory project (we also use this for the normal GitHub repo). If you are using Large File Storage, you can improve repository performance for asset files that use a binary format and tend to be big (models, textures, etc.).

* text=auto

/bin/** filter=lfs diff=lfs merge=lfs -text
**/*.tm_buffers/** filter=lfs diff=lfs merge=lfs -text

*.4coder text
*.bash text
*.c text
*.clang-format text
*.cpp text
*.gitattributes text
*.gitignore text
*.gltf text
*.h text
*.html text
*.inl text
*.js text
*.json text
*.lua text
*.m text
*.md text
*.rc text
*.reg text
*.shader text
*.the_machinery_dir text
*.tm_creation text
*.tm_dir text
*.tm_entity text
*.tm_meta text
*.yaml text

*.ico binary

# List all file extensions found in repository:
#
# git ls-files  | sed -e 's/.*\///' -e '/^[^.]*$/d' -e 's/.*\././' | sort -u

Perforce Setup for The Machinery

You should check in (push/commit) your entire directory project to Perforce. When it comes to your plugins, we recommend checking in the source code, not the binaries (unless they are project plugins and therefore live in the project). You should make use of the P4IGNORE functionality, which is similar to .gitignore files.

Note that you must be running a 2012.1 (or higher) Perforce Server in order to take advantage of the new P4IGNORE functionality.

In order to use this functionality with P4V, two configuration steps are needed:

  1. Create a text file in the client root containing a list of the filetype(s) to be ignored. The name of the file is not important, but we suggest using something meaningful like "p4ignore.txt". The User's Guide includes a section entitled "Ignoring groups of files when adding" which describes the use of P4IGNORE files.

  2. Set the P4IGNORE environment variable to the name of the file you created in step 1 above. For example:

p4 set P4IGNORE=p4ignore.txt

Restart P4V for the changes to take effect.

Windows Example

  1. Set up a P4V icon on your Windows desktop

  2. Right-click P4V icon, Properties

  3. Change Start in: directory to your client workspace root directory (or it can be a directory of your choice)

  4. Open a command prompt

  5. Set the P4IGNORE variable

p4 set P4IGNORE=C:\Perforce\p4ignore.txt

where C:\Perforce is an existing directory

  6. Create the p4ignore.txt file

In this example we will ignore the addition of any new .o files.

cd C:\Perforce

notepad p4ignore.txt

In notepad you add:

*.o
  7. Close and save the file

  8. Restart P4V

  9. Verify P4IGNORE is working

Attempt to add a .o file in P4V.

Note the message: "The following files were not marked for add, since they are 'ignored'."

Resource: This example and the instructions above come from the official documentation.

We recommend the following entries:

*.o
*.d
*.sln
*.vcxproj
*.make
*.xcworkspace
*.xcodeproj
*.hlsl.inl
*.aps
*.exe
*.mine-clang-format

Makefile
.DS_Store
.tags*

/.vscode/settings.json
/.vscode/ipch/
/.vs
/bin
/build
/lib
/tmbuild*
/*.dmp
/samples/bin
/samples/build
/samples/.vs
/samples/lib
/utils/bin
/utils/build
/utils/.vs
/utils/lib
/foundation/bin
/foundation/build
/foundation/.vs
/foundation/lib
/zig-cache
/zigbuild
/zig-out
/gitignore

feature_flags.json
.modules/

Note: This is similar to the Git Ignore file.

Plastic Setup for The Machinery

You should check in (push/commit) your entire directory project into your Plastic workspace. When it comes to your plugins, we recommend checking in the source code, not the binaries (unless they are project plugins and therefore live in the project). We recommend creating an ignore.conf file located at the workspace root path.

You can just create the ignore.conf in your workspace root path and add similar content as for Perforce and Git:

*.o
*.d
*.sln
*.vcxproj
*.make
*.xcworkspace
*.xcodeproj
*.hlsl.inl
*.aps
*.exe
*.mine-clang-format

Makefile
.DS_Store
.tags*

/.vscode/settings.json
/.vscode/ipch/
/.vs
/bin
/build
/lib
/tmbuild*
/*.dmp
/samples/bin
/samples/build
/samples/.vs
/samples/lib
/utils/bin
/utils/build
/utils/.vs
/utils/lib
/foundation/bin
/foundation/build
/foundation/.vs
/foundation/lib
/zig-cache
/zigbuild
/zig-out
/gitignore

feature_flags.json
.modules/

For more information, check the official guide.

The Machinery Source Code & Git Workflow

Our Machinery uses a more or less trunk-based Git workflow, which is described in more detail in our Code Guide Book: OMG-GIT: Git workflow.

If you have source code access to our GitHub Repository you can create Pull Requests. Pull Requests are changes you would like to contribute to the engine. You should read the Code Guide Book before you start your Pull Request.

Getting Started with a New Project

This walkthrough shows how to create a new project. It will also show you what comes by default with the Engine.

This part will cover the following topics:

  • Project Pipeline
    • What is the difference between a directory project and a database project?
    • What is a Scene in The Machinery?
    • What comes by default with a project?

About Scenes

In The Machinery, we do not have a concept of scenes in the traditional sense. All we have are Entities; therefore, any Entity can function as a scene. All you need to do is add child entities to a parent Entity. The Editor will remember the last opened Entity. When publishing a game, the Engine will ask you to select your "world" entity. You can choose any of your entities as the "world" Entity.

For more information on publishing, check here.

New Project

After The Machinery has been launched for the first time and you have logged in successfully, the Engine will automatically show you a new, empty project. If you do not have an account, you can create one for free here, or if you have any trouble, don't hesitate to get in touch with us here.

At any other point in time, you can create a new project via Files → New Project.

A new project is not empty. It comes with the so-called "Core," a collection of valuable assets. They allow you to start your project quickly. Besides the Core, the project will also contain a default World Entity, functioning as your current scene. By default, the world entity includes 2 child entities: light and post_process. Those child entities are instances of prototypes.

A prototype in The Machinery is an entity that has been saved as an Asset. Prototypes are indicated by yellow text in the Entity Tree View.

For more information about Prototypes, click here.

the content of the world entity

The Core

Let us discuss the Core a bit. As mentioned before, the Core contains a couple of useful utilities. They illustrate the core concepts of the Engine and are a great starting point. They can be found in the Asset Browser under the core folder:

A content overview of the core folder and its subfolders may look like this:

  • A light entity

  • A camera entity

  • A post-processing stack entity (named post-process in the world entity)

  • A default light environment entity

  • A post-processing volume entity

  • A default world entity (the blueprint of the world entity)

  • A bunch of helpful geometry entities. They are in the geometry folder.

    • A sphere entity
    • A box entity
    • A plane entity
    • as well as their geometry material
  • A bunch of default creation graphs. They are in the creation_graphs folder.

    • import-image
    • DCC-mesh
    • editor-icon
    • drop-image
    • DCC-material
    • DCC-image

How to add some life to your project

All gameplay can be written in C, or in any language that binds to C, such as C++ or Zig. You can also create gameplay code via the Entity Graph. The Entity Graph lives inside a Graph Component, which you can add to an Entity.

You can find more information about gameplay coding in the "Gameplay Coding" Section

Project Structure Recommendation

It is recommended to separate your gameplay source code and plugin code from the actual project and store them in a separate folder:

my_project/game_project // the directory project
my_project/game_plugins // the main folder for all your gameplay code, plugins

First Gameplay Project

This walkthrough shows how you make a simple scene via the Entity Graph.

This part will cover the following topics:

  • How to create a simple playable scene with the visual scripting language (the Entity Graph)

How to add some life to your project

Let us create a new project and then add some gameplay to it. Before that, let us define a goal: this part aims to create a plane and a cube that we can move across the plane with the W key.

It requires multiple steps:

  • Add a plane to the Scene
  • Add a box to the Scene
  • Add a camera
  • Make the box and the plane physical so the box cannot fall through
  • Add an Entity graph for the movement

The first step is to add a plane to our Scene. As we remember, the Core's geometry folder contains a plane entity. All we need to do is drag it into the Scene and scale it up a bit.

Open the core/geometry folder and add the plane.entity to the scene by dragging it into the scene

The next step is as straightforward as the first step. We repeat what we did before and drag and drop the box from the geometry folder into the scene tab. After this, we can see what happens in the simulation. To start the simulation and open the simulation tab, you need to click on the "play" button.

The moment the simulation tab opens, the simulation starts. As you might have guessed, nothing happens yet. You can navigate in the tab using the WASD keys if no custom camera is selected. We can also pause or reset the current simulation, or increase the simulation speed.

Pause, reset and increase simulation speed

Let us go back to the scene tab and add some more exciting things. The next point on our task list is to add physics. To make our entities aware of physics, we need to add the Physics Shape and Physics Body components to the corresponding entities. In this case, the plane should have a Physics Shape component set to a plane shape, and the box should have both a Physics Shape component and a Physics Body component. Adding a component to an entity can be done either by right-clicking the entity in the Entity Tree and choosing Add Component, or via the Properties tab and the Add Component button. It is also possible to select an entity in the Entity Tree and press Space.

box entity plane entity

To visualize the box shape or the plane shape, we can use the visualization menu in either the scene or simulation tab.

When the simulation starts, the box - if placed above the plane - will fall on the plane. Isn't this already more exciting?

Add some movement

The next step is to make the box move. There are two approaches: a physics-based movement or just a placement change. We start with the simpler option: placement change.

We need to add a Graph Component to the box entity. (We could also add it to any other entity or to the world entity, but for simplicity and organization's sake we add it to the box.)

Add Menu opened via right click “Add Component”

When the Graph component has been added, all we need is to double-click the Component, and the Graph Editor opens.

The Graph Editor view is empty at first. We can add new nodes by pressing Space or right-clicking; this opens the node-adding menu, where we can search for nodes.

The Entity Graph is an event-based Visual Scripting Language. It means events will trigger actions.

There are three main events we should use for a particular type of action:

  • Init Event - This gets called when the Graph component gets initialized. The perfect place to set up variables etc.
  • Tick Event - For all actions that need to happen on Tick (Be aware that this might influence your game performance if you do expensive things always on tick)
  • Terminate Event - For all actions that need to happen when the Component gets removed. It mainly happens when the Entity gets destroyed.

You can also create custom events and listen to them. You can define those events in the graph itself or in C.

Our first iteration will change the placement of our box whenever we press the W key. The Machinery's input system is based on polling rather than on events: the "Key Poll" node that checks whether a key was pressed needs to be evaluated every frame, so a Tick Event is needed. The "Key Poll" node requires you to specify the key and returns a Boolean (true/false) indicating whether the key has been pressed, is down, is released, or is up. This leads to the following setup:

In this setup, on every tick we check whether the key W is down. If that is the case, the graph executes the logic.

What is the actual logic? The actual logic is

  1. to get the Entity
  2. get its transform
  3. get the components of the transform (split the vector in x,y,z)
  4. manipulate the x value
  5. set the new transform to the Entity

In the Entity Graph, any entity of the current Scene can be referenced by name. It happens via the "Scene Entity" node.

If no name is provided, it will return the graph component's Entity.

The return type is an Entity. When we drag the wire into the void, the Add Node menu will suggest only nodes which take the Entity as input:

With expanded Entity Category

We should connect the entity wire with a Get Transform node to get the world transform. Then the transform position should be split into its components. After this, component X should be modified by adding 10 * delta time.

To get the delta time, all that is needed is to add the delta time node and multiply its float value by 10.
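As a sanity check of the math: delta time is the duration of the last frame in seconds, so at 60 FPS it is roughly 1/60 ≈ 0.0167. Adding 10 * delta time to X each frame therefore moves the box about 0.167 units per frame, i.e. 10 units per second, independent of the frame rate.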

When all this is complete, we should set the position of the Entity to the new value. First, a new position vector needs to be constructed from the old values and the new, modified X value.

The next step is to tell the Entity to use this position instead of the old one. This happens through the Set Transform node.

Note that we leave all the other values as they were, so they remain unchanged.

If we simulate the Scene now, the box moves, but it won't fall into the void if we move it beyond the plane. This is to be expected, because the physics system has not been updated and therefore the velocity has not changed.

Add physics-based movement

To make the box move with the physics system, we can use the node "PhysX Push":

The node will move the box by applying velocity to it.

What is next?

If you are more interested in physics, download the Physics sample via the Download tab; in that project, you will learn more about the use of physics in the Engine.

Sample Projects

You can download any sample project you like from our website: OurMachinery: Samples. Alternatively, you can find them under Help > Download Sample Projects in the main menu. After you have downloaded them, you can open them via the Download tab or directly from the hard drive.

A few sample projects come with the Engine by default; you can find them in the root folder under samples/. If you are interested in how our tools work, you can look up their source code in the code/ folder.

The Machinery for Unity Devs

When migrating from Unity to The Machinery, there are a few things that are different.

Quick Glossary

The following table contains common Unity terms on the left and their The Machinery equivalents (or rough equivalent) on the right.

GameObjects: Entities, Components and Systems
Prefabs: Prototypes
Materials, Shaders, Textures, Particle Effects, Mesh, Geometry, Shader Graph, Material Editor: Creation Graphs
UI: UI
Hierarchy Panel: Entity Tree
Inspector: Properties Tab
Project Browser: Asset Browser
Scene View: Scene Tab
Programming: Programming
Bolt: Entity Graph
C#: C

UI Differences

Unity

The Machinery

  1. The Main Menu: It allows you to navigate through the Engine, for example opening new tabs or importing assets.
  2. The Entity Tree shows a tree view of the entity you are editing. It shows the entity's components and child entities. You start editing an entity by double-clicking it in the asset browser.
  3. The Scene shows an editable graphic view of the edited entity. You can manipulate components and child entities by selecting them. Use the Move, Rotate, and Scale gizmos for your desired action.
  4. The Simulate current scene button will open the Simulate tab that lets you "run" or "simulate" a scene.
  5. The Properties tab shows the properties of the currently selected object in the Scene. You can modify the properties by editing them in the properties window.
  6. The Console tab shows diagnostic messages from the application.
  7. The Asset Browser shows all the assets in the project and enables you to manage them.
  8. The Preview shows a preview of the currently selected asset in the asset browser.

The Editor is a collection of editing Tabs, each with its specific purpose. You can drag tabs around to rearrange them. When you drag them out of the window, a new window opens. Use the View menu to open new tabs.

Questions you might have

Where are my GameObjects?

The Machinery has no concept of GameObjects in the sense that Unity does. The Engine is based around Entities and Components. In the game world (not the editor), everything lives within the Entity Component System (ECS). To be exact, it lives within an Entity Context, an isolated world of entities. Your GameObjects are split into data and behaviour.

In Unity, you would usually couple your logic together with data in your C# MonoBehaviour scripts. In The Machinery, you separate them into Components, which represent your data, and Systems or Engines, which represent your behaviour and operate on multiple entities at the same time. Each Entity Context (the isolated world of entities) has several systems/engines registered to it.

What is the difference between a System and an Engine?

An engine is an update function that runs on all entities that have a certain set of components. In some other entity component systems these are referred to as systems instead, but we chose the term engine because it is less ambiguous.

A system, on the other hand, is an update function that runs on the entire entity context. Therefore you cannot filter for specific components.
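To make the distinction concrete, here is a minimal conceptual sketch in C. The struct and function names are invented for illustration and are not the actual entity API; see the SDK headers for the real interfaces.

// Purely illustrative pseudo-API, not the real The Machinery headers.
#include <stdint.h>

struct example_entity_context_t;   // the "world" that holds all entities

// An engine declares which components it requires; its update only runs over
// entities that have all of them, and the work can be jobified.
typedef struct example_engine_t {
    const char *name;
    const char *required_components[8];   // e.g. "transform", "velocity"
    uint32_t num_required;
    void (*update)(void *component_arrays[], uint32_t num_entities, float dt);
} example_engine_t;

// A system is just an update over the whole entity context. There is no
// component filtering; it queries whatever data it needs itself.
typedef struct example_system_t {
    const char *name;
    void (*update)(struct example_entity_context_t *ctx, float dt);
} example_system_t;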

Where are my Prefabs?

What Unity calls Prefabs is more or less what we call Prototypes. Our prototype system allows entity assets to be used inside other entities. For example, you can create an entity asset that represents a room and then create a house entity that has a bunch of these room entities placed into it. For more information, check out the Prototypes chapter.

How do I script?

The Machinery supports two ways of gameplay coding by default:

  1. using our Visual Scripting Language (Entity Graph)
  2. using our C APIs to create your gameplay code. This way, you can create your own Systems/Engines to handle your gameplay.

You do not like C? Do not worry! You can use C++, Zig, Rust, or any other language that binds to C.

Where are my Materials, Shaders, Textures, Particle Effects?

All of these can be represented via the Creation Graphs.

Project data?

The Machinery supports two types of Project formats:

  1. The Directory Project (Default)

A source-control- and human-friendly project format in which your project is stored on disk as separate files (text files, plus binary files for binary data).

  2. The Database Project

A single binary file that contains all your assets and data. This format is mainly used at the end to pack your data for the shipping/publishing process.

Where do I put my assets?

At this point in time, you can only add assets by dragging & dropping them into the Asset Browser or via the Import menu. See the section about importing assets: How to import assets.

Import difference between Unity and The Machinery: .dcc_asset

When you import assets created in e.g. Maya, they will be imported as a dcc_asset. A dcc_asset can hold all types of data that were used to build the asset in the DCC tool, such as objects, images, materials, geometry, and animation data.

During the import step, The Machinery only runs a bare minimum of data processing, just enough so that we can display a visual representation of the asset in the Preview tab. Imports run in the background so you can continue to work uninterrupted. When the import finishes the asset will show up in the Asset Browser.

Note that import of large assets can take a significant amount of time. You can monitor the progress of the import operation in the status bar.

For more information see How to import assets or checkout our Example Workflow for Importing an Asset and create an Entity

What are common file formats supported?

Asset Type | Supported Formats
3D | .fbx, .obj, .gltf
Texture | .png, .jpeg, .bmp, .tga, .dds
Sound | .wav

Our importer is based on Assimp, so we support most things Assimp supports. (We do not support .blend files.)

See the Import Chapter for more details

Where do my source code files go?

In The Machinery, all we care about is your plugins. If you want your plugins (tm_-prefixed shared libraries) to be globally accessible, store them in the plugins folder of the Engine. An alternative approach is to create a plugin asset in the Engine; then your plugin becomes part of your project.

Please check out the introduction to the Plugin System as well as the Guide about Plugin Assets.

Using Visual Scripting

Visual Scripting is a perfect solution for simple in-game logic flow and sequencing of actions. It is a great system for artists, designers, and visually oriented programmers. Keep in mind that the Visual Scripting language comes with an overhead that you would not pay in C (or any other language you may use for your gameplay code).

The Machinery for Unreal 4 Devs

When migrating from Unreal Engine 4 (UE4) to The Machinery, there are a few things that are different.

Table of Content

Quick Glossary

The following table contains common UE4 terms on the left and their rough equivalents in The Machinery on the right.

UE4 | The Machinery
Actor, Pawn | Are composed of Entities, Components, and Systems
Blueprint Class (only the inheritance aspect and that they can represent actors) | Prototypes
Material Instance, Shaders, Textures, Particle Effects, Static Mesh, Geometry, Skeletal Mesh, Material Editor | Creation Graphs
UI | UI
World Outliner | Entity Tree
Details Panel | Properties Tab
Content Browser | Asset Browser
Viewport | Scene Tab
Programming | Programming
Blueprints | Entity Graph
C++ | C

Questions you might have

Where are my Actors?

The Machinery has no concept of Actors in the sense that UE4 has. The Engine is based around Entities and Components. In the game world (not the editor), everything lives within the Entity Component System (ECS). To be exact, it lives within an Entity Context, an isolated world of entities. Your Actors are split into data and behaviour.

In UE4 you would usually couple your logic together with your data in your Actor classes. In The Machinery, these are separated into Components and Systems / Engines, and different behaviour is achieved via composition rather than via inheritance.

Components represent data, while Systems or Engines represent your behaviour. They operate on multiple entities at the same time. Each Entity Context (the isolated world of entities) has several systems/engines registered to it.

Where are my Blueprints?

The Machinery supports two ways of gameplay coding by default:

  1. using our Visual Scripting Language (Entity Graph)
  2. using our C APIs to create your gameplay code. This way, you can create your own Systems/Engines to handle your gameplay.

You do not like C? Do not worry! You can use C++, Zig, Rust, or any other language that binds to C.

What is the difference between a System and an Engine?

An Engine is an update function that runs on a subset of entities, namely those that possess some specified set of components. Some entity component systems refer to these as systems as well, but we chose Engine because it is less ambiguous.

A System, on the other hand, is an update function that runs on the entire entity context. Therefore you cannot filter for specific components.

Where are my Material Instance, Shaders, Textures, Particle Effects, Static Mesh, Geometry, Skeletal Mesh, Material Editor?

All of these can be represented via the Creation Graphs.

Project data?

The Machinery supports two types of Project formats:

  1. The Directory Project (Default)

A source-control- and human-friendly project format in which your project is stored on disk as separate files (text files, plus binary files for binary data).

  2. The Database Project

A single binary file that contains all your assets and data. This format is mainly used at the end to pack your data for the shipping/publishing process.

Where do I put my assets?

At this point in time, you can only add assets by dragging & dropping them into the Asset Browser or via the Import menu. See the section about importing assets: How to import assets.

What are common file formats supported?

Asset Type | Supported Formats
3D | .fbx, .obj, .gltf
Texture | .png, .jpeg, .bmp, .tga, .dds
Sound | .wav

Our importer is based on Assimp, so we support most things Assimp supports. (We do not support .blend files.)

Where do my source code files go?

In The Machinery, all we care about is your plugins. If you want your plugins (tm_-prefixed shared libraries) to be globally accessible, store them in the plugins folder of the Engine. An alternative approach is to create a plugin asset in the Engine; then your plugin becomes part of your project.

Please check out the introduction to the Plugin System as well as the Guide about Plugin Assets.

Using Visual Scripting

Visual Scripting is a perfect solution for simple in-game logic flow and sequencing of actions. It is a great system for artists, designers, and visually oriented programmers. Keep in mind that the Visual Scripting language comes with an overhead that you would not pay in C (or any other language you may use for your gameplay code).

The Machinery for Godot Devs

When migrating from Godot to The Machinery, there are a few things that are different.

Table of Content

Quick Glossary

The following table contains common Godot terms on the left and their rough equivalents in The Machinery on the right.

Godot | The Machinery
Nodes | Are composed of Entities, Components, and Systems
Materials, Shaders, Textures, Particle Effects, Mesh, Geometry, Shader Graph, Material Editor | Creation Graphs
UI | UI
Scene | Entity Tree
Inspector | Properties Tab
FileSystem | Asset Browser
Viewport | Scene Tab
Programming | Programming
VisualScript | Entity Graph
C#, GDScript, C++ | C

Questions you might have

Where are my Nodes?

The Machinery has no concept of Nodes in the sense that Godot has. The Engine is based around Entities and Components.

Everything within the Game World lives within the Entity Component System (ECS). To be exact, it lives within the Entity Context, an isolated world of entities. Your Nodes are split into data and behaviour.

When working in Godot, you would usually couple your logic together with your data. This coupling happens because Godot uses an object-oriented approach while The Machinery uses a data-oriented approach: in Godot, you inherit classes to compose different kinds of behaviour.

In The Machinery, these are separated into Components, which represent your data, and Systems or Engines, which represent your behaviour. They operate on multiple entities at the same time. Each Entity Context (the isolated world of entities) has several systems/engines registered to it. In the following image we break down an example player in Godot into separate parts (components) and its methods into separate Engines.

It is important to understand that the Player class on the left does not equal the Entity on the right! The Entity is just a weak reference to its components; it does not own the data, unlike the Player class. The Components together form the player, and the Systems/Engines on the far right each consume only a few of those components; they do not need to understand them all!

This setup allows you to compose entities that reuse the same engines/systems! For example, all your other entities that can move can use the Movement Engine and the Jump Engine to get the same movement and jump behaviour. All you need to do is compose entities with the:

  • Transform Component
  • Physics Mover Component
  • Jump Component
  • Movement Component

The Movement Engine and Jump Engine will pick them up and apply the same logic to them!

What is the difference between a System and an Engine?

An Engine is an update function that runs on a subset of entities, namely those that possess some specified set of components. Some entity component systems refer to these as systems as well, but we chose Engine because it is less ambiguous.

A System, on the other hand, is an update function that runs on the entire entity context. Therefore you cannot filter for specific components. For more information, see the chapter about the Entity Component System.

How do I script?

The Machinery supports two ways of gameplay coding by default:

  1. using our Visual Scripting Language (Entity Graph)
  2. using our C APIs to create your gameplay code. This way, you can create your Systems/Engines to handle your gameplay.

You do not like C? Do not worry! You can use C++, Zig, Rust, or any other language that binds to C.

Where are my Materials, Shaders, Textures, Particle Effects?

All of these can be represented via the Creation Graphs.

Project data?

The Machinery supports two types of Project formats:

  1. The Directory Project (Default)

A source-control- and human-friendly project format in which your project is stored on disk as separate files (text files, plus binary files for binary data).

  2. The Database Project

A single binary file that contains all your assets and data. This format is mainly used at the end to pack your data for the shipping/publishing process.

Where do I put my assets?

At this point in time, you can only add assets by dragging & dropping them into the Asset Browser or via the Import menu. See the section about importing assets: How to import assets.

What are common file formats supported?

Asset Type | Supported Formats
3D | .fbx, .obj, .gltf
Texture | .png, .jpeg, .bmp, .tga, .dds
Sound | .wav

Our importer is based on Assimp, so we support most things Assimp supports. (We do not support .blend files.)

Where do my source code files go?

In The Machinery, all we care about is your plugins. If you want your plugins (tm_-prefixed shared libraries) to be globally accessible, store them in the plugins folder of the Engine. An alternative approach is to create a plugin asset in the Engine; then your plugin becomes part of your project.

Please check out the introduction to the Plugin System as well as the Guide about Plugin Assets.

Using Visual Scripting

Visual Scripting is a perfect solution for simple in-game logic flow and sequencing of actions. It is a great system for artists, designers, and visually oriented programmers. Keep in mind that the Visual Scripting language comes with an overhead that you would not pay in C (or any other language you may use for your gameplay code).

Introduction to C Programming

This page will link to useful resources about the C Programming language.

Table of Content

Reference

Tutorials

Videos

The Machinery specials

These are things that differ from standard C practice. This is what you will find in the following sub-chapters.

Memory Management

Table of Content

While programming plugins for The Machinery, you will encounter the need to allocate memory on the heap. In standard C code, you might reach for malloc, free, or realloc. Since we try to be as allocator-aware as possible, we actively pass allocators down to systems. This means that wherever you need a long-lived allocator (where you would normally use malloc), we pass down a tm_allocator_i object, which lets you allocate memory just like you would with malloc. As in standard C, you need to free the memory allocated via a tm_allocator_i at the end of its use; otherwise you may leak. Using our built-in allocators gives you the benefit of automatic leak detection: since all allocations are registered and analyzed when the application shuts down, you will be notified if there is a leak.

Note: For more about leak detection and memory usage, check the chapter about the Memory Usage Tab.
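As a minimal sketch (assuming you have been handed a tm_allocator_i *a, for example through an API's create function), allocating and freeing looks like this:

uint32_t *values = tm_alloc(a, 64 * sizeof(uint32_t));
// ... use values ...
tm_free(a, values, 64 * sizeof(uint32_t));

Note that, unlike free(), tm_free() also takes the size of the allocation, as you can see in the tab example below.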

Child Allocators

If you check our projects, you will find that we make extensive use of child allocators. A child allocator lets us track its memory usage separately in our Memory Usage Tab.

If you check our Write a custom Tab example, you will find this code in its create function:

static tm_tab_i* tab__create(tm_tab_create_context_t* context, tm_ui_o *ui)
{
    tm_allocator_i allocator = tm_allocator_api->create_child(context->allocator, "my tab");
    uint64_t* id = context->id;

    static tm_the_machinery_tab_vt* vt = 0;
    if (!vt)
        vt = tm_global_api_registry->get(TM_CUSTOM_TAB_VT_NAME);

    tm_tab_o* tab = tm_alloc(&allocator, sizeof(tm_tab_o));
    *tab = (tm_tab_o){
        .tm_tab_i = {
            .vt = (tm_tab_vt*)vt,
            .inst = (tm_tab_o*)tab,
            .root_id = *id,
        },
        .allocator = allocator,
    };

    return &tab->tm_tab_i;
}

static void tab__destroy(tm_tab_o* tab)
{
    tm_allocator_i a = tab->allocator;
    tm_free(&a, tab, sizeof(*tab));
    tm_allocator_api->destroy_child(&a);
}

The tm_tab_create_context_t context gives you access to the system allocator, which in turn allows you to create a child allocator. We can now use the new allocator in our tab, which is why we store it within our tm_tab_o object. When the tab is destroyed, we free the tab object and destroy the child allocator with tm_allocator_api->destroy_child(&a). If we now shut down the engine and forgot to free any of the allocations made between create and destroy, we get a nice log:

D:\git\themachinery\plugins\my_tab\my_tab.c(100): error leaked 1 allocations 4112 bytes
D:\git\themachinery\foundation\memory_tracker.c(120): error: Allocation scope `application` has allocations

Rule of thumb

As in standard C, any allocation done with tm_alloc, tm_alloc_at, or tm_realloc should eventually be followed by a tm_free!

Temporary Allocator

Sometimes we need to allocate data within a function, but we do not have access to the default allocator, or we do not want to pass an allocator around. Do not worry: the temp allocator and the frame allocator come to the rescue. These are two concepts for quick allocations that you can mostly forget about, because the memory is freed automatically at the end of the function or the frame!

How to use the temp allocator?

The temp allocator is part of the foundation and lives in its own header file: foundation/temp_allocator.h. Some APIs require a temp allocator as input and will use it to allocate data needed for processing. First, we create one using the following macro: TM_INIT_TEMP_ALLOCATOR(ta)

A temp allocator is now created and can be used! Importantly, do not forget to call TM_SHUTDOWN_TEMP_ALLOCATOR(ta) at the end; otherwise, you have a memory leak. Do not worry: you won't be able to compile without calling this macro! Back to our example. Let's ask The Truth for all objects of a specific type:

TM_INIT_TEMP_ALLOCATOR(ta);
tm_tt_id_t* all_objects = tm_the_truth_api->all_objects_of_type(tt, type, ta);
// do some magic
TM_SHUTDOWN_TEMP_ALLOCATOR(ta);

The Truth API will now use the temp allocator to create the list. We do not need to call tm_free anywhere; this is all done at the end by TM_SHUTDOWN_TEMP_ALLOCATOR!

Sometimes an API requires a normal tm_allocator_i, but you are not interested in creating an actual allocator or do not have access to one. No worries, we have your back! The temp allocator gives you the following macro: TM_INIT_TEMP_ALLOCATOR_WITH_ADAPTER(ta, a). It generates a normal tm_allocator_i that uses the temp allocator as its backing allocator. Hence, all allocations done with this allocator are actually done via the ta allocator! Again, at the end, all your memory is freed by TM_SHUTDOWN_TEMP_ALLOCATOR(ta).
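A small sketch of how the adapter might be used. Here we assume the macro declares a as a tm_allocator_i value (so we pass &a to APIs that take a pointer), and some_id is just a placeholder; check foundation/temp_allocator.h for the exact definition:

TM_INIT_TEMP_ALLOCATOR_WITH_ADAPTER(ta, a);

// `a` behaves like a regular tm_allocator_i, but is backed by the temp allocator `ta`.
tm_tt_id_t *ids = 0;
tm_carray_push(ids, some_id, &a); // placeholder value, just to show an allocation through `a`
// ... use ids ...

// No individual frees needed; this also releases everything allocated through `a`.
TM_SHUTDOWN_TEMP_ALLOCATOR(ta);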

Note: There is also tm_temp_alloc(), which expects a tm_temp_allocator_i instead of a tm_allocator_i.

What is the Frame Allocator?

The tm_temp_allocator_api also provides a frame allocator. A frame allocator allocates memory for one frame; at the end of the frame, the memory is wiped. This means any allocation done with it stays valid until the end of the frame! You use it via tm_temp_allocator_api.frame_alloc() or tm_frame_alloc().
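A minimal sketch (the exact signature is an assumption based on the names above):

// Scratch memory that is reclaimed automatically at the end of the current frame.
float *scratch = tm_frame_alloc(1024 * sizeof(float));
// ... use scratch during this frame; no free call is needed ...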

Great use case: Formatted Strings

Both the frame and the temp allocator are great to use when you need a formatted string! In fact, the tm_temp_allocator_api provides extra functions for this: tm_temp_allocator_api.printf() / tm_temp_allocator_api.frame_printf().

Note: For more about formatting, check out the tm_sprintf_api or the logging chapter.
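A hedged sketch of both variants. We assume tm_temp_allocator_api has been fetched from the API registry as a pointer, and num_spawned and file_name are variables of your own; the exact signatures may differ from this sketch:

// Lives until the end of the frame:
const char *frame_str = tm_temp_allocator_api->frame_printf("spawned %u entities", num_spawned);

// Lives until TM_SHUTDOWN_TEMP_ALLOCATOR(ta) is called:
TM_INIT_TEMP_ALLOCATOR(ta);
const char *temp_str = tm_temp_allocator_api->printf(ta, "loading %s", file_name);
TM_LOG("%s -- %s", frame_str, temp_str);
TM_SHUTDOWN_TEMP_ALLOCATOR(ta);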

Arrays, Vectors, Lists: where is my std::vector<> or List<>?

Table of Content

In the foundation we have a great inline header file, foundation/carray.inl, which contains our solution for dynamically growing arrays, lists, etc. When you use it, you are responsible for its memory and have to free the used memory at the end! Do not worry, the API makes this quite easy.

Let us create an array of tm_tt_id_t. All we need to do is declare our variable as a pointer to tm_tt_id_t:

tm_tt_id_t* our_ids = 0;

After this we can, for example, push / add some data to our array:

tm_carray_push(our_ids, my_id, my_allocator);

Now my_id is stored in our_ids, allocated with my_allocator! When I no longer need my array, I can call tm_carray_free(our_ids, my_allocator), and my memory is freed!

This is not very pleasant when working with a lot of data that only needs to live in a temporary list. For this case, you can use our temp allocator! Every tm_carray_ macro has a tm_carray_temp_ equivalent.

Note: It is also recommended to use tm_carray_resize or tm_carray_temp_resize if you know how many elements your array will have. This reduces the number of actual allocations.
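For example, a hedged sketch of reserving the final size up front (count, source_ids, and my_allocator are placeholders of your own):

tm_tt_id_t *ids = 0;
tm_carray_resize(ids, count, my_allocator); // one allocation instead of repeated grows
for (uint64_t i = 0; i < count; ++i)
    ids[i] = source_ids[i];
// ... use ids ...
tm_carray_free(ids, my_allocator);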

Going back to our previous example:

TM_INIT_TEMP_ALLOCATOR(ta);
tm_tt_id_t* all_objects = tm_the_truth_api->all_objects_of_type(tt, type, ta);
// do some magic
TM_SHUTDOWN_TEMP_ALLOCATOR(ta);

tm_the_truth_api->all_objects_of_type actually returns a carray, and you can operate on it with the normal carray functions, e.g. tm_carray_size() or tm_carray_end(). Since it is allocated with the temp allocator, you can forget about freeing it, as long as you call TM_SHUTDOWN_TEMP_ALLOCATOR at the end.

How to access an element?

You can access a carray element just like you would access an element in a plain C array:

TM_INIT_TEMP_ALLOCATOR(ta);
tm_tt_id_t* all_objects = tm_the_truth_api->all_objects_of_type(tt, type, ta);
if (all_objects[8].u64 == other_objects[8].u64) { // other_objects: assumed to be another carray of tm_tt_id_t
    // what happens now?
}
TM_SHUTDOWN_TEMP_ALLOCATOR(ta);

Iterate over them

You can iterate over the carray like you would iterate over a normal array:

TM_INIT_TEMP_ALLOCATOR(ta);
tm_tt_id_t* all_objects = tm_the_truth_api->all_objects_of_type(tt, type, ta);
for (uint64_t i = 0; i < tm_carray_size(all_objects); ++i) {
    TM_LOG("%llu", all_objects[i].u64); // we could also use TM_LOG("%p{tm_tt_id_t}", &all_objects[i]);
}
TM_SHUTDOWN_TEMP_ALLOCATOR(ta);

An alternative is a more for-each-like approach:

TM_INIT_TEMP_ALLOCATOR(ta);
tm_tt_id_t* all_objects = tm_the_truth_api->all_objects_of_type(tt, type, ta);
for (tm_tt_id_t *id = all_objects; id != tm_carray_end(all_objects); ++id) {
    TM_LOG("%llu", id->u64); // we could also use TM_LOG("%p{tm_tt_id_t}", id);
}
TM_SHUTDOWN_TEMP_ALLOCATOR(ta);

Hashmap and set

In The Machinery we make use of our own hashmap, implemented in the hash.inl file. Our hashmap is the foundation for our set implementation as well.

Table of Content

Hashmap

Example:

#include <foundation/hash.inl>
struct TM_HASH_T(key_t, value_t) hash = {.allocator = a};

tm_hash_add(&hash, key, val);
value_t val = tm_hash_get(&hash, key);

The hashes in The Machinery map from an arbitrary 64-bit key type K (e.g. uint64_t, T *, tm_tt_id_t, tm_entity_t, ...) to an arbitrary value type V.

Only 64-bit key types are supported. If your hash key is smaller, extend it to 64 bits. If your hash key is bigger (such as a string), pre-hash it to a 64-bit value and use that as your key.

If you use a pre-hash, note that the hash table implementation here doesn't provide any protection against collisions in the pre-hash. Instead, we just rely on the fact that such collisions are statistically improbable.

Note: Should such collisions become a problem in the future, we might add support for 128-bit keys to reduce their probability further.

The hash table uses two sentinel key values to mark unused and deleted keys in the hash table: TM_HASH_UNUSED = 0xffffffffffffffff and TM_HASH_TOMBSTONE = 0xfffffffffffffffe. Note that these values can't be used as keys for the hash table. If you are using a hash function to generate the key, we again rely on the statistical improbability that it would produce either of these values. (You could also modify your hash function so that these values are never produced.)

Commonly used hash types

Our implementation comes with some predefined commonly used hash types:

Name | Description
tm_hash64_t | Maps from a uint64_t key to a uint64_t value.
tm_hash32_t | Maps from a uint64_t key to a uint32_t value.
tm_hash64_float_t | Maps from a uint64_t key to a float value.
tm_hash_id_to_id_t | Maps from a tm_tt_id_t key to a tm_tt_id_t value.
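As a small sketch of using one of these predefined types, here assuming `a` is a tm_allocator_i *. The tm_hash_has()/tm_hash_free() macros and the string hashing helper are assumptions on our part; check hash.inl for the exact macro set available in your version:

tm_hash64_t lookup = { .allocator = a };

// Plain 64-bit keys work directly.
tm_hash_add(&lookup, 17, 42);

// A string key is pre-hashed to 64 bits first (any 64-bit string hash will do;
// tm_murmur_hash_string() is just an assumed example).
uint64_t name_key = tm_murmur_hash_string("player_spawn_point");
tm_hash_add(&lookup, name_key, 1);

if (tm_hash_has(&lookup, 17)) {
    uint64_t v = tm_hash_get(&lookup, 17); // 42
    (void)v;
}

tm_hash_free(&lookup); // releases the bucket storage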

How to iterate over the map?

You can iterate over a hashmap like this:

for (uint64_t *k = lookup.keys; k != lookup.keys + lookup.num_buckets; ++k) {
    if (tm_hash_use_key(&lookup, k)) {
        // ...
    }
}

Sets

Example:

#include <foundation/hash.inl>
struct TM_SET_T(key_t) hash = {.allocator = a};

tm_set_add(&hash, key);
if (tm_set_has(&hash, key)) {
    // ...
}

The sets in The Machinery store an arbitrary 64-bit key type K (e.g. uint64_t, T *, tm_tt_id_t, tm_entity_t, ...).

Only 64-bit key types are supported. If your set key is smaller, extend it to 64 bits. If your set key is bigger (such as a string), pre-hash it to a 64-bit value and use that as your key.

Commonly used set types

Our implementation comes with some predefined, commonly used set types:

Name | Description
tm_set_t | Represents a set of uint64_t keys.
tm_set_id_t | Represents a set of tm_tt_id_t keys.
tm_set_strhash_t | Represents a set of hashed strings.

How to iterate over the set?

You can iterate over a set like this:

for (uint64_t *k = lookup.keys; k != lookup.keys + lookup.num_buckets; ++k) {
    if (tm_set_use_key(&lookup, k)) {
        // ...
    }
}

Concurrency in The Machinery

In The Machinery we make use of our Fiber Job System as well as our Task System. We have a blog post that explains the core ideas of our fiber job system: Fiber based job system. For more details on how they are implemented, see our source code or check the API docs: Job System and Task System. This guide introduces how to use them and how you can synchronize your data.

Table of Content

Synchronization Primitives

TBA

Task System

TBA

Job System

TBA

String processing

Types: void* and struct padding

The Editor

After opening the Engine, you should see the Editor's interface, with menus along the top and the basic tabs opened. The following image shows the default engine layout. Here's a brief description of what you can see:

  1. The Main Menu: It allows you to navigate through the Engine, for example to open new tabs or import assets.
  2. The Entity Tree shows a tree view of the entity you are editing. It shows the entity's components and child entities. You start editing an entity by double-clicking it in the asset browser.
  3. The Scene shows an editable graphic view of the edited entity. You can manipulate components and child entities by selecting them. Use the Move, Rotate, and Scale gizmos for your desired action.
  4. The Simulate current scene button will open the Simulate tab that lets you "run" or "simulate" a scene.
  5. The Properties tab shows the properties of the currently selected object in the Scene. You can modify the properties by editing them in the properties window.
  6. The Console tab shows diagnostic messages from the application.
  7. The Asset Browser shows all the assets in the project and enables you to manage them.
  8. The Preview shows a preview of the currently selected asset in the asset browser.

The Editor is a collection of editing Tabs, each with its specific purpose. You can drag tabs around to rearrange them. When you drag them out of the window, a new window opens. Use the View menu to open new tabs.

Note that you can have multiple tabs of the same type. For example, you can open various Asset Browser tabs to drag assets between them easily. Tabs docked in the same window work together. Therefore if you dock a Preview tab in the same window as an Asset Browser, it will show a preview of the selected asset in that browser. You can create multiple windows to view numerous assets simultaneously:

Editor layouts

You can rearrange the Editor layouts by dragging tabs around in the Editor. The best structure for the Editor depends on what you are doing and your personal preferences. You can save layouts via the Window Menu. It is also possible to restore your layout to the default one or load a custom-defined one in this menu.

About Tabs

The Machinery is based around a collection of editing Tabs, each with its specific purpose. You can drag the tabs around to rearrange them. Use the Tab menu to open new tabs. It is possible to have multiple tabs of the same type.

Table of Content

About Tab-Wells

Windows in The Machinery have a root tab-well covering the whole window. A tab-well is a rectangular area containing one or more tabs. You can split a tab-well either horizontally or vertically to form two child tab-wells. You can also switch between tabs within a tab-well via the keyboard, using Ctrl + 1-9 or Ctrl + Page Up/Down.

Pinning tabs

You can also pin tabs to the current content or other settings with the pin icon.

It is also possible to use the context menu if you click on the tab label:

right click on the tab label will show this.

Pinning options

In the context menu, you have more options for pinning. These allow you to manage and arrange the window layout in a way that suits your workflow. The following table will show all the possible options:

Option | Description
Pin via icon ⚲ | Pins the tab to the currently shown content.
Pin to View 🗖 | Pins the tab's content to another tab view that is currently open in the current window. You can pin a tab to multiple other tabs at the same time.
Pin to Window 🗗 | Pins the current tab to the selected window. For example, if you pin the Properties tab to Window 2 and choose an asset from the Asset Browser there, the Properties tab in Window 1 will display the selected asset.

In addition, it is possible to extend the Engine with custom tabs. You can do this via File → New Plugin → Editor Tabs. How to write your own custom tab is out of the scope of this article, but it is covered here.

Keyboard bindings

Key | Description
Ctrl + Tab | Switch between tabs
Ctrl + 1-9 | Switch between tabs in the current tab-well
Ctrl + Page Up/Down | Switch between tabs in the current tab-well

Entity Tree Tab

The Entity Tree shows the hierarchy of the entity you choose to edit. This view allows you to organize and add new entities as well as new components to the entities.

Note: Just be aware that the Entity Tree does not reflect the current runtime state.

Besides that, it is essential to remember that The Machinery does not have specific Scene assets. A scene in The Machinery is just an entity with a lot of child entities.

Table of Content

How to edit a Scene / Entity

In this tab, you can edit any entity from the current project. To start editing, either use Right Click → Open or double-click the entity to open it. This action will update the Scene Tab and the Entity Tree Tab. If you close the Scene Tab, the Entity Tree Tab will display an empty tree.

Managing Entities

A Scene is composed of child Entities and Parent Entities. When you create a new project, it starts with a "world" entity. This entity can be edited and lives as world.entity in your Asset Browser. You can create other Entities that function as your Scene. Any entity in the Asset Browser can serve as your Scene. Later, when publishing your game, you can choose the world entity you like.

Click a parent entity's drop-down arrow (on the left-hand side of its name) to show or hide its children. This action is also possible via the Keyboard: Arrow keys down and up let you navigate through the tree, while Arrow keys left and right allow you to show or hide the entity children.

Prototypes in the Entity Tree View

In the Entity Tree view, prototype instances are shown in yellow to distinguish them from locally owned child entities (which are shown in white).

Prototypes: Prototypes are just entities that live in the Asset Browser as assets (.entity), and the current entity in the Entity Tree View is based upon those entity assets. You can override them in the Entity Tree View and make them unique, but any change to the asset will propagate to the entities based on the original prototype. More here

If you expand an instance, you will notice that most of its components and child entities are grayed out. They cannot be selected because they are inherited from the prototype, and the prototype controls their values.

The light entity is an instance of a prototype and the Render Component is inherited

If the prototype is modified — for example, if we scatter some more props on the floor — those changes reflect everywhere the prototype is placed.

Adding new child entities

When you want to re-parent one entity as a child of another, you can drag and drop them onto each other.

You can add new children to an Entity through right-click → "Add Child Entity", or by dragging an Entity Asset from the Asset Browser into the Entity Tree View.

Searching and changing the visibility of Entities

You can search for entities via the filter icon, and you can hide all the components with the little gear icon next to it. You can also change the visibility of each entity via the little eye icon next to its name. If you change the visibility, you will hide the entity in the Scene View.

The Lock icon makes sure you cannot select the entity in the Scene Tab. It can help you avoid mis-clicking.

Keyboard bindings

Key | Description
Arrow Up & Down | Navigate through the tree
Arrow Left & Right | Expand or collapse the selected Entities
F | Selected Entity will be framed in the Scene Tab
H | Selected Entity will be hidden in the Scene Tab
F2 | Start renaming the selected Entity
F3 | Adds a child entity to the parent
Space | Opens the Add Component menu for the selected Entity
Space + Shift | Opens the Add Entity menu to add a child entity
CTRL + L | Moves selected entities one level up in the Entity hierarchy
CTRL + F | Opens the filter / search menu
CTRL + H | Don't show components in the Entity Tree
CTRL + D | Duplicates selected Entities
CTRL + C | Copies Entities
CTRL + V | Pastes Entities
CTRL + X | Cuts Entities

Scene Tab

The Scene tab is the view into your world. You can use the Scene tab to select and manipulate assets, and to position new ones in your world.

Table of Content

The Scene Tab allows for different ways of navigating through your world. The primary method of navigating through the Scene is via the mouse and the keyboard.

Movement

  • Middle Mouse Button: Keep pressed down to move through the scene.
  • Left Mouse Button: Keep pressed down to rotate the view by moving the mouse. While the button is pressed, you can also use WASD to move through the Scene. To increase or decrease the movement speed, move the mouse wheel.

Zoom in

  • Mouse Wheel: Zoom in or out via the mouse wheel.

Frame Entities or the scene

  • Press F: Frames the currently selected entity, or the whole Scene if nothing is selected. Alternatively, you can double-click an entity in the Entity Tree Tab.

Opening an Entity Asset

Double-clicking an Entity Asset in the Asset Browser loads that asset. If you want to move between previously loaded entities, the toolbar provides back and forward navigation.

Alternatively, you can use the context menu of the tab label and navigate through the previously focused entities.

Working in the Scene

The Scene tab comes with tools that allow for editing and moving entities in the current Scene.

The main tools you will be working with to edit the scene are:

  • Select Tool: To select Entities in the Scene
  • Move Tool: For moving Entities in the Scene
  • Rotate Tool: Rotates selected Entities
  • Scale Tool: Scales Entities
  • Snapping: Enable or disable snapping and the snap distance

You can manipulate the Grid via Main Menu → Scene → Grid Settings.

If you do not like the layout of the current toolbar, you can change it by dragging the tools around.

Box Select in the Scene Tab

The Machinery now supports a long awaited feature — box selection.

To select multiple items in the scene, simply drag out a selection rectangle with the mouse:

Box dragging to select multiple entities.

The touched entities will become selected in the scene:

The resulting selection.

Simulate your Scene or change visualization modes

You can simulate your current Scene and change the way your Scene is visualized with this toolbar:

  • The simulation button ▶: Simulates your scene in a new tab if no simulation tab is open.
  • The camera button 📷: Allows you to change the camera in your viewport.
  • The light button 💡: Use Lighting Environment Asset. Will create a lighting environment in the scene. Automatically enabled for assets with no light.
  • Visualize button: Allows to enable more visualization modes.
    • Lighting Model
      • Visualize Albedo
      • Visualize Normals
      • Visualize Specular
      • Visualize Roughness
      • Visualize Shadow Maps
      • Visualize Pixel Velocity
      • Visualize NaN / INF
      • Show as Overlay
    • Exposure
      • Visualize EV100
      • Visualize Histogram

Keyboard bindings

Key | Description
F | Frames either the current scene (if nothing is selected) or the selected objects
G | Enables and disables the Grid
ESC | Deselects objects
CTRL + D | Duplicates selected objects
Shift + Drag with mouse | Duplicates selected objects
CTRL + C | Copies objects
CTRL + V | Pastes objects
CTRL + X | Cuts objects

Properties Tab

The Properties tab shows the properties of the currently selected object. You can modify the properties by editing them in the properties window.

Use mathematical expression

We also have support for mathematical expressions in our property editor, so you can type both numerical values and expressions.

You can use x in the expression to mean whatever value the property had before, so if you type x + 1 you will increase the current value by 1.

Using expressions in the property editor.

Multiple tabs with different properties

You can have multiple Properties tabs open if you wish. In this case, it comes in very handy that you can pin a Properties tab to a specific object. Otherwise, every Properties tab would reflect the next selected object, and you would end up with the same thing open multiple times. You can pin content by clicking the Pin icon on a Properties tab; it binds the current object to this instance.

Pin the properties tab for the prop floor barrel (module dungeon kit example)

Moreover, if you dock a Preview tab in the same window as an Asset Browser, it will show a preview of the selected asset in that browser.

About Prototypes

The Machinery has a prototype system that allows entity assets to be used within each other. Therefore you can create an Entity-Asset that represents a room and then create a house Entity with a bunch of these room entities placed within. We call the room asset a prototype, and we call each placed room entity an instance of this prototype.

Any Entity-Asset can be a prototype, with instances of it placed in another entity asset. Note that prototypes are not special assets. More about this here.

The overridden entities and components are drawn in blue. We change the x and z components of the position to move the box. Note how the changed values are shown in white, while the values inherited from the prototype are shown in grey.

Missing

  • link somehow somewhere the prototype content

Asset Browser

The Asset Browser shows all your project's assets and enables you to manage them. It has multiple views at your disposal, which can make managing assets easier. The asset browser also allows you to search for assets in the current folder.

The asset browser’s grid view

As the image shows, the asset browser contains two components: The Directory Tree View of your project and the actual Asset View Panel.

Table of Content

Structure

The Directory Tree View reflects all directories and subdirectories of your project. You can use it to navigate quickly through your project. The navigation works either via mouse or via keyboard. You have the same management functionality as in the Asset View Panel:

  • Add a new folder or assets to a folder
  • Rename, Delete, Copy, Cut, Paste, Duplicate folder
  • Drag Drop folder and reorganize the folder structure
  • Show in explorer if the current project is a directory project
  • Copy the path if the current project is a directory
  • Change Views: Change to Grid, Details or List view
  • Change the Filter & Sorting

All of those actions can be either done via the Main Menu or via the context menu.

The Asset View Panel reflects all directories and sub directories as well as their assets in your project. This is the main place to organize your assets and folders. You have the same management functionality as in the Directory Tree View.

The Asset View Panel comes in three different views: Grid (default), Detail, and List-View. You can change the views via the context menu or the Change View button next to the search field. Besides, there are also shortcuts to change the views: Shift + C will switch to the Grid View, Shift + L will change to the List view, and Shift + D will change to the details view.

Detail View

Different View Modes

The Grid View displays your assets in a grid, which shrinks or grows depending on the tab size. The Details View allows you to see the details of your assets: what type they are, how big they are, and where on disk they are located (if it's a directory project). The List View displays your project's content as a list.

About filtering, sorting, and changing the view

You have the option to filter by file extension or Asset Labels. You can filter all assets by file extension / Asset Label via the little filter icon or the context menu. You can also mix and match as you require.

You can sort assets via the context menu or the sort arrow up/down buttons (▲▼). There are also shortcuts to change the sorting: Shift + N sorts by Name, Shift + S by Size, and Shift + T by type. The sorting is applied in all view modes.

You can search your project; by default the search is local. If you want to search globally, click the button with the globe, which searches the entire project. Be aware that your filters influence your search!

Switch from local to global view

When searching globally, you can right-click on any search result and open the location of the selected asset.

Asset management

Editing specific assets

Double-clicking an asset may open the asset in its corresponding tab or window. Not all assets have a specific action associated with them.

Asset | Action on double click
Animation State Machine (.asm) | Opens a new window with the Animation State Machine layout.
Creation (.creation) | Opens a graph tab if no graph tab is open. If a graph tab is open, it will load this graph. Whenever we have a .creation file, the graph is called a Creation Graph and is used for working on graphics-related workflows such as materials.
Entity (.entity) | Opens a scene tab if no scene tab is open. If a scene tab is open, it will change the view to this entity.
Entity Graph (.entity_graph) | Opens a graph tab if no graph tab is open. If a graph tab is open, it will load this graph. Whenever we have an .entity_graph file, the graph is called an Entity Graph and is used to make Entity Graph functionality reusable and shareable between multiple graphs.

A single click will always focus an associated Properties Tab on the selected asset.

Dragging assets into the Scene

You can drag assets around in the asset browser. It allows for quick reorganization of assets. It is also possible to drag assets from the asset browser directly into the Scene. Assets can be dragged from the Asset Browser Tab to other tabs if they support the asset type. You can also drag assets from the Windows-Explorer into your project. This action supports the same formats as the regular importer via File → Import Assets.

Asset Labelling

To organize your project, you can use Asset Labels. An asset can be associated with one or more different asset labels. You can use them to filter your asset in the asset browser or plugins via the asset label api.

There are two types of Asset labels:

  • System Asset Labels: They are added by plugins and cannot be assigned to an asset by the user.
  • User Asset Labels: You add them, and they are part of the project with the file extension .asset_label; they can be found in the asset_label directory.

The user can manage them like any other asset and delete user-defined Asset Labels via the asset browser. There you can also rename them, and this will automatically propagate to all users of those asset labels.

Add an Asset Label to an Asset

You can add asset Labels via the property view to any asset:

  • You select an Asset.
  • You expand the Label View in the Property View Tab.
  • You can type in any asset label name you want. The system will suggest existing labels or allow you to create a new one.

1. Asset Label View in the Asset Property View Tab 2. The home of all your asset labels.

Keyboard bindings

Key | Description
Arrow Keys | Navigate through the Asset Browser
Enter | Opens the selected folder or asset
F2 | Renames the asset or folder in the Asset View Panel and Directory Tree View
Ctrl + F | Search in the current project
CTRL + D | Duplicates selected Assets
CTRL + C | Copies Assets
CTRL + V | Pastes Assets
CTRL + X | Cuts Assets
Ctrl + Alt + O | Opens the location of the asset in the Asset Browser view if in search view
Ctrl + Shift + N | New folder
Ctrl + Shift + E | Open in Explorer
Shift + D | Change to Details View
Shift + L | Change to List View
Shift + C | Change to Grid View
Shift + N | Sort by Name
Shift + S | Sort by Size
Shift + T | Sort by File Extension
Shift + O | Opens a Directory or Asset

Simulate Tab

In The Machinery, we make a distinction between simulating and editing. When you are editing, you see a static view of the scene. (Editing the scene with everything moving around would be very tricky.) All the runtime behaviors like physics, animation, destruction, entity spawning, etc., are disabled. In contrast, when you are simulating or running, all the dynamic behaviors are enabled, which allows you to see the runtime behavior of your entities. If you are building a game, the simulation mode corresponds to running the game. To simulate a scene, open the scene in the Simulate tab.

Control over your simulation

While your simulation is running, you can stop, reset, or speed up the simulation.

If your scene contains multiple cameras, you can pick between them via the camera toolbar. The default camera is a free-flight camera.

In the same toolbar, you can enable Debug Rendering Tags from various components. For example, if enabled for the Volume Component, it will render a box around the volume.

Within this toolbar, you also find the statistics button, which opens several overlays, such as Frame Time.

Besides those options, you have the Render option, which allows for the same options as in the Scene Tab.

Preview Tab

The Preview Tab displays selected objects for you.

Camera Controls

It allows for the same free-camera controls as the Scene Tab.

Movement

  • Middle Mouse Button: Keep pressed down to move through the Scene.
  • Left Mouse Button: Keep pressed down to rotate the view by moving the mouse. While the button is pressed, you can also use WASD to move through the Scene. To increase or decrease the movement speed, move the mouse wheel.

Zoom in

  • Mouse Wheel: Zoom in or out via the mouse wheel.

Interface Customizations

In this guide, you will learn about how to customize the Engine's interface.

  • Change a theme
  • Export/Import a theme
  • Change the window scale.
  • Add layouts.
  • Modify the Global Grid settings.

Change Theme

Sometimes we do not like the default theme, or, for reasons such as color blindness, we cannot use it. In The Machinery, you can change the theme via Window -> Theme, where you will find a list of themes to select and use.

The Engine comes by default with some base themes you can build your themes on top of:

  • Dark
  • Light
  • High Contrast Dark
  • High Contrast Light

Custom Theme

If you would like to customize the default themes or create a new theme, click the "New Theme" item in the same menu as the theme selection. After clicking this, the current theme will be used as your base, and the Theme Editor Tab opens.

If you do not like the base, you can choose a different theme as a base.

change base of theme

All changes are applied and saved directly. All changes will be immediately visible since your new Theme is selected as your current Theme.

changes are directly visible

Export / Import a theme

You can export a custom theme from Window -> Theme and later import it there as well. The theme is saved as a .tm_theme file. These files are simple JSON-like files.

Change the Scale of the UI

In the Window menu, there is a menu item called Zoom.

zoom option

This allows you to zoom in or out. You can also use the key bindings:

Meaning | Keys
Zoom In | CTRL + Equal, CTRL + Num Plus
Zoom Out | CTRL + Minus, CTRL + Num Minus

Custom Layout

In case you do not like your current layout, you can always restore the default layout by using the Window -> Restore Default Layout menu item.

restore default layout

If you want to keep your current layout for later, you can save your current window layout.

save current layout

You can create a new window or workspace with a layout in case you need it.

Note: The Engine should restore the last used layout when it is restarted.

If you need to change some details of your Window layout you can do this via the Edit layout menu.

edit layout

This will open the settings of the current Layout:

World Grid

In case you need to adjust the World Grid, you can do this in two places:

  1. Via the Application Settings

  2. Via the Application Menu entry of any Scene or Preview Tab

Changes made via a Scene or Preview Tab will only be applied to that specific tab.

Basic editing workflow

The basic scene editing workflow in The Machinery looks something like this:

  1. Import some asset files into the Asset Browser using File > Import… If you don't have any models to work with you can find free ones at Sketchfab for example.

  2. Organize the files by right-clicking the Asset Browser and choosing New > New Folder and by dragging and dropping.

  3. Any imported model, for example fbx or gltf, will appear as a dcc_asset.

  4. Rig an entity from the dcc_asset by selecting it and clicking Import Assets in the property panel. This also imports the materials and images inside the dcc_asset.

  5. Double click the imported entity to open it for editing.

  6. Add components for physics, animation, scripting, etc to the entity.

  7. Open a scene entity that you want to place your imported entity inside. A new project has a scene entity called world.entity that you can use. Or you can create your own scene entity by right clicking in the Asset Browser and choosing New > New Entity.

  8. Drag your asset entities into the scene entity to position them in the scene.

  9. Use the Move, Rotate, and Scale tools in the Scene tab to arrange the sub-entities.

  10. Holding down shift while using the move tools creates object clones.

  11. Select entities or components in the Entity Tree and Scene tabs to modify their properties using the Properties Tab.

  12. Drag and drop in the Entity Tree Tab to re-link entities.

  13. Each tool (Move, Rotate and Scale) has tool-specific settings, such as snapping and pivot point, in the upper-left corner of the scene tab.

  14. Use Scene > Frame Selection and Scene > Frame Scene to focus the camera on a specific selected entity, or the entire scene. Or just use the F key.

  15. When you are done, use File > Save Project… to save the scene for future work.

Known issues: When you drag a tab out of the current window to create a new one, you don’t get any visual feedback until the new window is created.

Entities

The Machinery uses an entity-component based approach as a flexible way of representing “objects” living in a “world”.

An entity is an independently existing object. An entity can have a number of components associated with it. These components provide the entity with specific functionality and features. The components currently available in The Machinery are:

Component | Description
Animation Simple Player | Plays individual animations on the entity.
Animation State Machine | Assigns an Animation State Machine to the entity which can be used to play and blend between animations.
Camera | Adds a camera to the entity. The Scene > Camera menu option lets you view the scene through this camera.
Cubemap Capture | Used to capture cubemaps to be used for rendering reflections.
Dcc Asset | Renders an asset from a DCC tool. For more advanced rendering you would use the Render component.
Entity Rigger | Used to "rig" an imported DCC Asset as an entity. See below.
Graph | Implements a visual scripting language that can be used to script entity behaviors without writing code.
Light | Adds a light source to the entity.
Physics Body | Represents a dynamic physics body. Entities that have this component will be affected by gravity.
Physics Joint | Represents a physics joint, such as a hinge or a ball bearing.
Physics Mover | Represents a physics character controller. Used to move a character around the physics world.
Physics Shape | Represents a physics collision shape. If the entity has a physics body, this will be a dynamic shape, otherwise a static shape.
Render | Renders models output by a Creation Graph.
Scene Tree | Represents a hierarchy of nodes/bones inside an entity. Entities with skeletons, such as characters, have scene trees.
Sculpt | Component used to free-form sculpt with blocks.
Sound Source | Component that will play a looping sound on the entity.
Spin | Sample component that spins the entity.
Tag | Assigns one or more "tags" to an entity, such as "bullet", "player", "enemy", etc. When implementing gameplay, we can query for all entities with a certain tag.
Tessellated Plane | Sample component that draws a tessellated plane.
Transform | Represents the entity's location in the world (position, scale, rotation). Any entity that exists at a specific location in the world should have a Transform Component. Note that you can have entities without a Transform Component. Such entities can for example be used to represent abstract logical concepts.
Velocity | Gives the entity a velocity that moves it through the world.

In addition to components, an entity can also have Child Entities. These are associated entities that are spawned together with the entity. Using child entities, you can create complex entities with deep hierarchies. For example, a house could be represented as an entity with doors and windows as child entities. The door could in turn have a handle as a child entity.

In The Machinery entity system, an entity can't have multiple components of the same type. So you can’t for example create an entity with two Transform Components. This is to avoid confusion because if you allowed multiple Transform Components it would be tricky to know which represented the “actual” transform of the entity.

In cases where you want multiple components (for example, you may want an entity with multiple lights) you have to solve it by creating child entities and have each child entity hold one of the lights.

Note that The Machinery does not have specific Scene assets. A scene in The Machinery is just an entity with a lot of child entities.

Import assets

This walkthrough shows how to import assets into a project and how to use them.

This part will cover the following topics:

  • How to import Assets into the Editor
  • How to import Assets into the Editor via url
  • Import Asset pipeline

Table of Content

Import differences between Meshes and Textures

Assets created in a DCC tool (e.g. Maya) will be of the dcc_asset type after they are imported, where DCC stands for Digital Content Creation. Materials won't be imported until the dcc_asset has been dragged into the scene or used otherwise.

A dcc_asset can hold all types of data that was used to build the asset in the DCC-tool, such as objects, images, materials, geometry and animation data.

During the import step, The Machinery only runs a bare minimum of data processing, just enough so that we can display a visual representation of the asset in the Preview tab. Imports run in the background so you can continue to work uninterrupted. When the import finishes the asset will show up in the Asset Browser.

Note that import of large assets can take a significant amount of time. You can monitor the progress of the import operation in the status bar.

Textures, on the other hand, will be imported as creation graphs.

Major supported formats

At the time of writing this walkthrough, The Machinery supports the following formats:

Note: Not all formats have been tested that extensively.

Format | File Ending
fbx | .fbx
GLTF Binary | .glb
GLTF | .gltf
wav | .wav
dae | .dae
obj | .obj
stl | .stl
jpeg | .jpeg
pg | .pg
png | .png
tga | .tga
bmp | .bmp
Windows shared lib (dll) | .dll
Linux shared lib (so) | .so
The Machinery Theme | .tm_theme
The Machinery Spritesheet | .tm_spritesheet
The Machinery database project | .the_machinery_db
The Machinery directory project | .the_machinery_dir
zip | .zip
7z | .7z
tar | .tar

A complete list can be found at the bottom of this page.

How to import assets into the project

The Machinery has three different ways of importing assets: importing local files, importing remote files, and drag and drop.

Import via the file menu

The first method of importing an asset is via the File menu. There, we have an entry called Import File, which opens a file dialog where you can import any of the supported file formats. Import File also allows importing any supported asset archive of type zip or 7zip. The archive will be unpacked and recursively checked for supported assets.

Import from URL

It is possible to import assets from a remote location. In the File menu, the entry Import from URL allows for importing any supported asset archive of the type zip or 7zip. This archive will be unpacked and recursively checked for supported assets.

Note: The URL import does not support implicitly provided archives or files, such as https://myassetrepo.tld/assets/0fb778f1ef46ae4fab0c26a70df71b04; only explicit file paths are supported. For example: https://myassetrepo.tld/assets/tower.zip

Drag and drop

The next method is to drag and drop either a zip/7zip archive or an asset of a supported type into the asset browser.

Adding the asset to our scene

In The Machinery a scene is composed of entities. The engine does not have a concept of scenes like other engines do. A dcc_asset that is dragged into the scene automatically extracts its materials and textures etc. into the surrounding folder and adds an entity with the correct mesh etc. to the Entity view.

Another way of extracting the important information of a DCC asset is to click on the DCC asset in the asset browser and click the "Extract Assets" button in the properties panel. This works exactly like the previous method, except that it creates a new entity asset that is not added to the scene.

Entity assets define a prototype in The Machinery. They are distinguished in the Entity Tree with yellow instead of white text. This concept allows having multiple instances of the same entity in the scene; they all change if the Prototype changes.

About Import Settings

You can define the import creation graph prototypes there as well:

  • For Images
  • For Materials
  • For Meshes

Every DCC asset allows changing of the extraction configuration, so it is possible to define the extraction locations for outputs, images and materials.

Instead of importing assets and changing their configuration per asset, it is possible to define the settings per folder. All you need to do is add an "Import Settings Asset" to the relevant folder. This can be done via the asset browser: Right Click -> New -> Import Settings.

Note: This is somewhat of a power-user feature and not something you need to have a detailed understanding of to get started working with The Machinery.

Video about importing and creating an Entity

Complete list of supported file formats

At the time of writing this walkthrough, The Machinery supports the following formats:

Note: Not all formats have been tested that extensively.

Format | File Ending
3d | .3d
3ds | .3ds
3mf | .3mf
ac | .ac
ac3d | .ac3d
acc | .acc
amf | .amf
ase | .ase
ask | .ask
assbin | .assbin
b3d | .b3d
bvh | .bvh
cob | .cob
csm | .csm
dae | .dae
dxf | .dxf
enff | .enff
fbx | .fbx
glb | .glb
gltf | .gltf
hmp | .hmp
ifc | .ifc
ifczip | .ifczip
irr | .irr
irrmesh | .irrmesh
lwo | .lwo
lws | .lws
lxo | .lxo
m3d | .m3d
md2 | .md2
md3 | .md3
md5anim | .md5anim
md5camera | .md5camera
md5mesh | .md5mesh
mdc | .mdc
mdl | .mdl
mesh | .mesh
mesh.xml | .mesh.xml
mot | .mot
ms3d | .ms3d
ndo | .ndo
nff | .nff
obj | .obj
off | .off
ogex | .ogex
pk3 | .pk3
ply | .ply
pmx | .pmx
prj | .prj
q3o | .q3o
q3s | .q3s
raw | .raw
scn | .scn
sib | .sib
smd | .smd
stl | .stl
stp | .stp
ter | .ter
uc | .uc
vta | .vta
x | .x
x3d | .x3d
x3db | .x3db
xgl | .xgl
xml | .xml
zae | .zae
wav | .WAV
dds, exr, jpg | .dds, .exr, .jpg
jpeg | .jpeg
pg | .pg
png | .png
tga | .tga
bmp | .bmp
psd | .psd
gif | .gif
hdr | .hdr
pic | .pic
Windows Shared lib (dll) | .dll
Linux shared lib (so) | .so
The Machinery Theme | .tm_theme
The Machinery Spritesheet | .tm_spritesheet
The Machinery database project | .the_machinery_db
The Machinery Directory Project | .the_machinery_dir
zip | .zip
7z | .7z
tar | .tar

Import Projects

The Machinery allows you to share and remix the content of projects made within the Engine via the Import Project feature.

Project Import provides an easy way to import assets from one The Machinery project to another. To use it, select File > Import File… and pick a The Machinery project file to import. The project you select is opened in a new Import Project tab and from there, you can simply drag-and-drop or copy/paste assets into your main project’s Asset Browser.

Importing assets from another project.

When you drag-and-drop or copy-paste some assets, all their dependencies are automatically dragged along so that they are ready to use.

Here is a video showing this in action. We start with a blank project, then we drag in a level from the physics sample and a character from the animation sample, put them both in the same scene, and play:

To make it even easier to share your stuff, we’ve also added File > Import from URL… This lets you import any file that The Machinery understands: GLTF, FBX, JPEG, or a complete The Machinery project directly from a URL. You can even import zipped resource directories in the same way.

For example, in the image below, we imported a Curiosity selfie from NASA (using the URL https://www.nasa.gov/sites/default/files/thumbnails/image/curiosity_selfie.jpg ) and dropped it into the scene we just created:

JPEG imported from URL.

Have you made something interesting in The Machinery that you want to share with the world? Save your project as an Asset Database and upload it to a web server somewhere.

Other people can use the Import from URL… option to bring your assets into their own projects.

Note: Be aware that when you download plugins from the internet, they might contain plugin assets. Only trust them if you can trust the source! For more on this, see Plugin Assets.

The asset pipeline by Example

In this section we focus on how to set up a simple entity from an imported dcc_asset.

You will learn the basics about:

  • What the creation graph is
  • How the asset pipeline works
  • How to create (rig) an Entity from an imported asset

Introduction

There are lots of things you might want to do to an imported asset coming from a DCC-tool. For example, extracting images and materials into a representation that can be further tweaked by your artists, or rigging (creating) an entity from the meshes present in the asset. In The Machinery, we provide full control over how data enters the engine and which data-processing steps get executed, allowing technical artists to better optimize content and set up custom, game-specific asset pipelines.

This is handled through Creation Graphs. A Creation Graph is essentially a generic framework for processing arbitrary data on the CPU and GPU, exposed through a graph front-end view, and it can be used for any type of data processing.

For more information visit the Creation Graphs section.

Tip: if you wish to see other use cases such as particle systems, sky rendering and sprite sheets, then have a look in the creation_graphs sample that we provide.

Importing a DCC asset

You can import an asset by selecting File > Import... in the main menu, pressing Ctrl-I, or dropping a DCC file on the Asset Browser tab. When you do this, it ends up in our data-model as a dcc_asset.

For a more detailed explanation of how to import assets, check out the Asset Import section.

Basic entity rigging, with image and material extraction

If you click on a dcc_asset that contains a mesh in the Asset Browser, you will be presented with importer settings in the Properties tab:

Inspecting a dcc_asset

The Preview tab does just enough to show you what the dcc_asset contains, but what you probably want is an entity that contains child entities representing the meshes found inside the dcc_asset. Also, you probably want it to extract the images and materials from the dcc_asset so you can continue to tweak those inside The Machinery. There are two ways to do this. Either you drag the DCC asset onto the Scene Tab, or you click the Import Asset button. The Import Assets button will automatically create a prototype Entity Asset in your project, while dropping it into the scene will rig the entity inside the scene, without creating a prototype.

In either case, for our door.dcc_asset we will get a door.resources directory next to it. This directory will contain materials and images extracted from the DCC asset. If you prefer dropping the assets into the scene directly, but also want an entity prototype, then you can check the Create Prototype on Drop check box.

Each image and material in the resources folder is a Creation Graph, which is responsible for the data-processing of those resources. You can inspect these graphs to see what each one does. They are described in more detail below.

Creation Graphs for dcc_asset import

In The Machinery, there are no specific asset types for images or materials, instead, we only have Creation Graphs (.creation assets). To extract data from a dcc_asset in a creation graph, the dcc_asset-plugin exposes a set of helper nodes in the DCC Asset category:

  • DCC Asset/DCC Image -- Takes the source dcc_asset together with the name of the image as input and outputs a GPU Image that can be wired into various data processing nodes (such as mipmap generation through the Image/Filter Image node) or directly to a shader node.

  • DCC Asset/DCC Material -- Takes the source dcc_asset together with the name of the material as input and outputs all properties of the material model found inside the dcc_asset. This material representation is a close match to GLTF 2.0's PBR material model. Image outputs are references to other .creation assets which in turn output GPU Images.

  • DCC Asset/DCC Mesh -- Takes the source dcc_asset together with the name of the mesh as input and outputs a GPU Geometry that can be wired to an Output/Draw Call output node for rendering together with the minimum and maximum extents of the geometry that can be wired to an Output/Bounding Volume output node for culling.

The steps for extracting images and material .creation assets from a dcc_asset involve deciding what data-processing should be done to the data before it gets wired to an output node of the graph. This can either be done by manually assembling creation graphs for each image and material, or by building a generic creation graph for each asset type and using that as a prototype when running a batch processing step we refer to as Resource Extraction.

Here's an example of what a generic creation graph prototype for extracting images might look like:

Simple image processing.

To quickly get up and running we provide a number of pre-authored creation graphs for some of the more common operations:

  • import-image -- Operations applied to images imported directly into the editor (not as part of a dcc-asset).

  • dcc-image -- Operations applied to images extracted from an imported dcc_asset.

  • dcc-material -- Shader graph setup to represent materials extracted from an imported dcc_asset.

  • dcc-mesh -- Operations for generating a draw call that represents a mesh from an imported dcc_asset.

  • drop-image -- Operations for generating a draw call to display the image output of another creation graph in the context of an entity, making it possible to drag-and-drop images into the Scene Tab.

These pre-authored creation graphs are shipped as part of our Core project which is automatically copied into new projects. How they are used when working with assets is exposed through the import_settings asset:

Default Import Settings.

By exposing these settings through an asset it is possible to easily change the default behavior of how imported assets are treated when placed under a specific folder.

Note: It's worth noting that this is somewhat of a power-user feature and not something you need to have a detailed understanding of to get started working with The Machinery.

Prototypes

The Machinery has a prototype system that allows entity assets to be used inside other entities.

So you can for example create an entity asset that represents a room, and then create a house entity that has a bunch of these room entities placed into it:

Three Room entities placed to form a House entity.

Difference between a Prototype, an Instance and an Asset

We call the room asset a prototype, and we call each placed room entity an instance of that prototype. Note that prototypes are not special assets, any entity asset can be used as a prototype, with instances of it placed in another entity asset.

In the Entity Tree tab, prototype instances are shown in yellow to distinguish them from locally owned child entities (which are shown in white).

Instance Properties: Inspect

If you expand an instance you will notice that most of its components and child entities are grayed out and can't be selected. This is because they are inherited from the prototype, and the prototype controls their values. If the prototype is modified — for example if we scatter some more props on the floor — those changes are reflected everywhere the prototype has been placed. In the example above, all three room instances would get the scattered objects.

Instance Properties: Override

If we want to, however, we can choose to override some of the prototype’s properties. When we override a property, we modify its value for this instance of the prototype only. Other instances will keep the value from the prototype.

To initiate an override, right-click (or double click) the component or child entity whose properties you want to override and choose Override in the context menu. In addition to modifying properties, you can also add or remove components or child entities on the overridden entity.

Note: If you override the property of some node deep in the hierarchy of the placed entity, all its parents will automatically get overridden too. Let's modify the position of the barrel in the front-most room:

Barrel position overridden.

The overridden entities and components are drawn in blue. We change the x and z components of the position to move the barrel. Note how the changed values are shown in white, while the values that are inherited from the prototype are shown in gray.

If you look back to the first picture you will see that the Link Component was automatically overridden when the prototype was instanced. This is needed because if we didn’t override the Link Component, it would use the values from the prototype, which means all three room instances would be placed in the same position.

When we override something on an instance, all the things not explicitly overridden are still inherited from the prototype. If we modify the prototype — for example change the barrel to a torch — all instances will get the change, since it was only the x and z positions of the object that we changed.

Instance Properties: Reset or Propagate

The context menus can be used to perform other prototype operations. For example, you can Reset properties back to the prototype values. You can also Propagate the changes you have made to an overridden component or entity back to the prototype so that all instances get the changes. Finally, you can Remove the overrides and get the instance back to being an exact replica of the prototype.

Other forms of prototypes: e.g. Graphs

Prototypes can also be used for other things than entities. For example, if you have a graph that you want to reuse in different places you can create a Graph Asset to hold the graph, and then instantiate that asset in the various places where you want to use it. Just as with the entity instances, you can override specific nodes in the graph instance to customize the behavior.

TODO: Add link

What is next?

Creating Prototype Assets

This walkthrough shows you how to create Prototype Assets.

Prototypes act as "templates" or "prefabs" for other assets. When you instantiate a prototype, the instance will inherit all the properties of the prototype unless you specifically override them.

In The Machinery there is no distinction between "prototype assets" and "ordinary assets". Any asset can be used as a prototype for other assets. The prototype system is also hierarchical. I.e., prototypes may themselves have prototypes. This lets you mix and match assets in lots of interesting ways.

Create a Prototype as a New Asset

In The Machinery, the assets most commonly used as prototypes are:

  • Entities
  • Entity Graphs
  • Creation Graphs

Since prototypes are just ordinary assets, you can create an empty prototype by creating an asset of the desired type in the Asset Browser: Right Click → New → Entity/Entity Graph/Creation Graph.

This will add a new asset to your project. Any changes made to the asset will be applied to all instances of the prototype.

Entity Prototype: Drag and Drop

You can create a Prototype from an Entity by simply dragging and dropping it from the Entity Tree into the Asset Browser.

This creates a new asset with the file extension .entity. It also replaces the entity in the Entity Tree with an instance of the newly created prototype.

Entity Prototype: Create Prototype from Entity

You can also create a prototype by using the context menu in the Entity Tree View on the Entity you want to turn into a Prototype:

Graph Prototypes from Subgraphs

You can turn a Subgraph into a prototype by choosing Create Subgraph Prototype in the Subgraph node's context menu. This creates a Subgraph Prototype Asset (.entity_graph) in your Asset Browser. It will also change the Subgraph to become an instance of the newly created prototype. If you open the Subgraph node at this point all the nodes will be grayed out. This shows that they are inherited from the prototype. Any changes you make there will be local to that instance.

To make a change that propagates to all instances of the prototype, open the prototype from the Asset Browser, or use the Open Prototype button in the Properties view of the Subgraph node.

Instantiating Prototypes

How you create prototype instances depends on what kind of prototype you want to instance.

Entity Prototypes

To create an instance of an entity prototype, you can simply drag and drop the entity from the Asset Browser into the Entity Tree or the Scene Tab:

You can also use the context menu in the Entity Tree to replace any entity with an instantiated asset:

At runtime, you can create instances of an entity asset by spawning them from the Entity Graph:

Subgraph Prototypes

You can add an instance of a Subgraph by dropping the .graph asset from the Asset Browser into the Graph editor.

Note that the dropped graph must be of the same type as the graph you are editing. I.e. Creation Graph Subgraphs can only be used in Creation Graphs, Entity Graph Subgraphs can only be used in Entity Graphs.

Another way of creating an instance of a Subgraph is to create an empty Subgraph node in the Graph and then picking a prototype for it in the Properties view:

Creation Graph Prototypes

The Render Component and other components that make use of Creation Graphs typically let you specify the prototype to use for the Creation Graph in the Properties View:

The Edit button in this view lets you open the specific Creation Graph instance used by this component for editing.

Meshes / Materials / Shaders

In The Machinery, Creation Graphs represent Meshes, Materials and Shaders all at the same time.

The Creation Graph is used to create asset pipelines and also to define GPU-related things such as materials and shaders. In essence, it lets you set up a graph that takes some inputs and processes them using nodes that may run on either the CPU or GPU; the graph can then output things such as shader instances, images or draw calls. The big power of the Creation Graph is that it lets you reason about data that lives in the borderland between GPU and CPU within one single graph.

Creation Graphs live separately from Entity Graphs. You often find Creation Graphs living under Render Components (for issuing draw calls), or within .creation_graph assets, where they are used to define materials and images. As an example, a Creation Graph that outputs an image probably takes an image that lives on disk as input, uploads it to the GPU and then outputs a GPU image. However, the user is free to add nodes in between these steps, for example nodes for compression or mipmap generation. Compared to other game engines, things that often end up in the properties panel of an imported image can here be done dynamically in a graph, depending on the needs of your project.

Within the core folder that we ship with the engine you will find several creation graphs; many of these are used for defining default materials and also as default graphs for use within our DCC asset import pipeline.

Simple red material

Image loading and material creation are just a few examples of what can be achieved with the creation graph. The table below shows when a creation graph is used compared to the tools one could use in Unity and Unreal.

Asset Type | Unity | Unreal | The Machinery
Images | Texture | Texture | Creation Graph with Image Output node
Materials | Shader | Material | Creation Graph with Shader Instance Output node
Particles | Particle Effect | Cascade | Creation Graph with GPUSim nodes that emit particles
Procedural materials | Procedural Materials | Material | Creation Graph with Image Output node with dynamic logic
Meshes | Mesh | Static Mesh | DCC Asset referred to by Creation Graph within Render Component

The engine comes with a Creation Graphs sample that contains examples of how to make materials and particle systems.

Like the entity graph, the creation graph can execute nodes in sequence from an event. Some examples of this are the Tick, Init, and Compile events which are executed at known points or intervals. However, creation graphs commonly work using a reverse flow where an output node is triggered and then all the nodes it depends on are run, in order to supply the output. Examples of these outputs are Draw Call, Shader Instance, Image, and Physics Shape. Note that these outputs are just blobs of data interpreted by the code that uses them. You can in other words add your own output nodes and types from code.

Simulation

In The Machinery, we make a distinction between simulating and editing. When you are editing, you see a static view of the scene. All the runtime behaviors like physics, animation, destruction, entity spawning, etc are disabled. (Editing the scene with everything moving around would be very tricky.)

In contrast, when you are simulating or running, all the dynamic behaviors are enabled. This allows you to see the runtime behavior of your entities. If you are building a game, the simulation mode would correspond to running the game.

To simulate a scene, open a scene in the Simulate tab.

Simulate tab.

You can use the controls in the tab to pause, play, or restart the simulation or change the simulation speed. Note that if you haven't added any simulation components to the scene, the Simulate tab will be just as static as the Scene tab. In order to get something to happen, you need to add some runtime components.

The Entity Graph gives you a visual scripting language for controlling entity behavior. It will be described in the next section.

You can launch the engine in simulation mode from the command line:

the-machinery.exe --load-project project.the_machinery --simulate scene

Here project.the_machinery should be the name of your project file and scene the name of the entity you want to simulate. This will open a window with just the Simulate tab, simulating the scene that you specified.

Entity Graphs

The Entity Graph implements a visual scripting language based on nodes and connections. To use it, right-click on an entity to add a Graph Component and then double click on the Graph Component to open it in the Graph Editor:

Graph editor.

The visual scripting language uses Events to tick the execution of nodes. For example, the Tick Event node will trigger its out connector whenever the application ticks its frame. Connect its output event connector to another node to run that node during the application's update.

Nodes that just fetch data and don't have any side-effects are considered "pure". They don't have any event connectors and will run automatically whenever their data is needed by one of the non-pure nodes. Connect the data connectors with wires to pass data between nodes.

In addition to connecting wires, you can also edit input data on the nodes directly. Click a node to select it and edit its input data in the properties.

There are a lot of different nodes in the system and we will not attempt to describe all of them. Instead, here is a simple example that adds a super simple animation to an entity using the graph component:

What is next?

For more information, check out the Gameplay Coding / Visual Scripting chapter.

For more examples, check out the pong and animation projects in the samples.

Physics

The Machinery integrates Nvidia's PhysX toolkit and uses it for physics simulation of entities. This section will not attempt to describe in detail how physics simulation works; for that we refer to the PhysX documentation. We will only talk about how physics is set up in The Machinery.

The physics simulation system

The physics simulation system introduces two new assets: Physics Material and Physics Collision as well as four new components: Physics Shape Component, Physics Body Component, Physics Joint Component, and Physics Mover Component.

Physics Assets

A Physics Material asset specifies the physical properties of a physics object: friction (how "slippery" the object is) and restitution (how "bouncy" the object is). Note that if you don't assign a material to a physics shape it will get default values for friction and restitution.

A Physics Collision asset describes a collision class. Collision Classes control which physics shapes collide with each other. For example, a common thing to do is to have a debris class for small objects and set it up so that debris collide with regular objects, but not with other debris. That way, you are not wasting resources on computing collisions between lots of tiny objects. (Note that the debris objects still need to collide with regular objects, or they would just fall through the world.)

In addition to deciding who collides with who, the collision class also decides which collisions generate callback events. These events can be handled in the Entity Graph.

If you don't assign a collision class to a physics shape, it will get the Default collision class.

Physics Components

The Physics Shape Component can be added to an entity to give it a collision shape for physics. Entities with shape components will collide with each other when physics is simulated.

A physics shape can either be specified as geometry (sphere, capsule, plane, box) or it can be computed from a graphics mesh (convex, mesh). Note that if you use computed geometry, you must press the Cook button in the Properties UI to explicitly compute the geometry for the object.

Convex shape.

If you just give an entity a Physics Shape Component it will become a static physics object. Other moving objects can still collide with it, but the object itself won't move.

To create a moving physics object, you need to add a Physics Body Component. The body component lets you specify dynamic properties such as damping, mass, and inertia tensor. It also lets you specify whether the object should be kinematic or not. A kinematic object is being moved by animation. Its movement is not affected by physics, but it can still affect other physical objects by colliding with them and pushing them around. In contrast, if the object is not kinematic it will be completely controlled by physics. If you place it above ground, it will fall down as soon as you start the simulation.

Note that parameters such as damping and mass do not really affect kinematic objects, since the animations will move them the same way, regardless of their mass or damping. However, these parameters can still be important because gameplay code could at some point change the object from being kinematic to non-kinematic. If the gameplay code never makes the body non-kinematic, the mass doesn't matter.

The Physics Joint Component can be used to add joints to the physics simulation. Joints can tie together physics bodies in various ways. For example, if you tie together two bodies with a hinge joint they will swing as if they were connected by a hinge. For a more thorough description of joints, we refer to the PhysX documentation.

The Physics Mover Component implements a physics-based character controller. If you add it to an entity, it will keep the entity's feet on the ground, prevent it from going through walls, etc. For an example of how to use the character controller, check out the animation or gameplay sample projects.

Physics scripting

Physics can be scripted using the visual scripting language in the Entity Graph.

We can divide the PhysX scripting nodes into a few categories.

Nodes that query the state of a physics body:

  • Get Angular Velocity
  • Get Velocity
  • Is Joint Broken
  • Is Kinematic

Nodes that manipulate physics bodies:

  • Add Force
  • Add Torque
  • Break Joint
  • Push
  • Set Angular Velocity
  • Set Kinematic
  • Set Velocity

Event nodes that get triggered when something happens in the scene:

  • On Contact Event
  • On Joint Break Event
  • On Trigger Event

Nodes that query the world for physics bodies:

  • Overlap
  • Raycast
  • Sweep

Note that the query nodes may return more than one result. They will do that by triggering their Out event multiple times, each time with one of the result objects. (In the future we might change this and have the nodes actually return arrays of objects.)

From C you can access those features via the tm_physx_scene_api.
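
As an illustration, calling into this API from gameplay code might look roughly like the sketch below. The set_velocity call mirrors the "Set Velocity" node listed above, but treat the exact function names and signatures as assumptions and verify them against the physx plugin headers.

static struct tm_physx_scene_api *tm_physx_scene_api;

// Hedged sketch -- set_velocity and its signature are assumed; check the
// physx plugin headers before relying on this.
void give_upward_kick(tm_physx_scene_o *physx_scene, tm_entity_t e)
{
    // Mirrors the "Set Velocity" graph node: set the body's linear velocity.
    tm_physx_scene_api->set_velocity(physx_scene, e, (tm_vec3_t){ 0, 10, 0 });
}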

Missing Features

Note that The Machinery doesn't currently support all the features found in PhysX. The most glaring omissions are:

  • D6 joints and joint motors.
  • Vehicles.

We will add more support going forward.

For an example of how to use physics, see the Physics Sample Project.

Tutorials

For more information and guides, check out the tutorial chapter as well as our Physics Sample.

Animation

The Animation system lets you play animations on entities. You can also create complicated animation blends, crossfades, and transitions using an Animation State Machine.

The animation system adds two new assets to The Machinery: Animation Clip and Animation State Machine as well as two new components: Animation Simple Player and Animation State Machine.

To get an animation into The Machinery you first export it from your DCC tool as FBX or another suitable file format. Then you import this using File > Import....

An Animation Clip is an asset created from an imported DCC animation asset that adds some additional data. First, you can set a range of the original animation to use for the clip, so you can cut up a long animation into several individual clips. Second, you can specify whether the animation should loop or not as well as its playback speed. You can use a negative playback speed to get a "reverse" animation.

Finally, you can specify a "locomotion" node for the animation. If you do, the delta motion of that node will be extracted from the animation and applied to the entity that the animation is played on, instead of to that bone. This lets an animation "drive" the entity and is useful for things like walking and running animations. The locomotion node should typically be the root node of the skeleton. If the root node is animated and you "don't" specify a locomotion node, the result will be that the root node "walks away" from the animation.

The Animation Simple Player Component is a component that you can add to an entity to play animations on it. The component lets you pick a single animation to play on the entity. This is useful when you want to play simple animations such as doors opening, coins spinning, flags waving, etc. If you want more control over the playback and be able to crossfade and blend between animations you should use an Animation State Machine instead.

Animation state machines

The Animation State Machine Asset represents an Animation State Machine. If you double-click an Animation State Machine Asset in the Asset Browser, an Animation State Machine Editor will open:

Animation State Machine Editor

The Animation State Machine (ASM) introduces a number of concepts: Layers, States, Transitions, Events, Variables, and Blend Sets.

The ASM represents a complex animation setup with multiple animations that can be played on a character by dividing it into States. Each state represents something the character is doing (running, walking, swimming, jumping, etc) and in each state, one particular animation, or a particular blend of animations is being played. The states are the boxes in the State Graph.

The ASM can change state by taking a Transition from one state to another. The transitions are represented by arrows in the graph. When the transition is taken, the animation crossfades over from the animations in one state to the animations in the other state. The properties of the transition specify the crossfade time. Note that even though the crossfade takes some time, the logical state transition is immediate. I.e. as soon as the transition is taken, the state machine will logically be in the new state.

The transitions are triggered by Events. An event is simply a named thing that can be sent to the ASM from gameplay code. For example, the gameplay may send a "jump" event and that triggers the animation to transition to the "jump" state.

Variables are float values that can be set from gameplay code to affect how the animations play. The variables can be used in the states to affect their playback. For example, you may create a variable called run_speed and set the playback Speed of your run state to be run_speed. That way, gameplay can control the speed of the animation.

Note that the Speed of a state can be set to a fixed number, a variable, or a mathematical expression using variables and numbers. (E.g. run_speed * 2.) We have a small expression language that we use to evaluate these expressions.

The ASM supports multiple Layers of state graphs. This works similar to image layering in an application such as Photoshop. Animations in "higher" layers will be played "on top" of animations in the lower layers and hide them.

As an example of how to use layering, you could have a bottom layer that controls the player's basic locomotion (walking, running, etc). Then you could have a second layer on top of that for controlling arm movements. That way, you could play a reload animation on the arms while the legs are still running. Finally, you could have a top layer to play "hurt" animations when the player for example gets hit by a bullet. These hurt animations could then interrupt the reload animations whenever the player got hit.

Blend Sets can be used to control the per-bone opacity of animations playing in higher layers. They simply specify a weight for each bone. In the example above, the animations in the "arm movement" layer would have opacity 1.0 for all arm bones, but 0.0 for all leg bones. That way, they would hide the arm movement from the running animation below, but let the leg movement show through.

The Animation State Machine Editor has a Tree View to the left that lets you browse all the layers, states, transitions, events, variables, and blend sets. The State Graph lets you edit the states and transitions in the current layer. The Properties window lets you set the properties of the currently selected objects and the Preview shows you a preview of what the animation looks like. Note that for the preview to work, you must specify a Preview Entity in the properties of the state machine. This is the entity that will be used to preview the ASM. When you select a state in the State Graph, the preview will update to show that state.

In the Preview window, you also find the Motion Mixer. This allows you to send events to the ASM and change variables to see how the animation reacts.

The ASM currently supports the following animation states:

Regular State

Plays a regular animation.

Random State

Randomly plays an animation out of a set of options.

Empty State

A state that doesn't play any animation at all. Note that this state is only useful in higher layers. You can transition to an empty state to "clear" the animation in the layer and let the animations from the layers below it shine through.

Blend State

Allows you to make a 1D or 2D blend between animations based on the value of variables. This is often used for locomotion. You can use different animations based on the character's running speed and whether the character is turning left or right, and position them on a map to blend between them.

Animation Blending

Once you have created an Animation State Machine, you can assign it to a character by giving it an Animation State Machine Component.

For an example of how the animation system works, have a look at the mannequin sample project.

Missing features

Note that the animation system is still under active development. Here are some features that are planned for the near future:

  • Ragdolls.
  • Animation compression.
  • Triggers.
  • More animation states.
    • Offset State.
    • Template State.
    • Expression-based Blend State.
    • Graph-based Blend State.
  • Beat transitions.
  • Constraints.

Animation Compression

We support compressed animations. Compressed animations have the extension .animation. Note that with this, we have three kinds of animation resources:

Resource | Description
.dcc_asset | Animation imported from a Digital Content Creation (DCC) software, such as Max, Maya, Blender, etc. Note that .dcc_asset is used for all imported content, so it could be animations, textures, models, etc.
.animation | A compressed animation. The compressed animation is generated from the .dcc_asset by explicitly compressing it.
.animation_clip | Specifies how an animation should be played: playback speed, whether it plays forward or backward, if it drives the root bone or not, etc.

An Animation Clip references either an uncompressed .dcc_asset or a compressed .animation to access the actual animation data.

To create a compressed animation, right-click a .dcc_asset file that contains an animation and choose Create xxx.animation in the context menu:

A compressed animation.

When you first do this, the animation shows a white skeleton in T-pose and a moving blue skeleton. The blue skeleton is the reference .dcc_asset animation and the white skeleton is the compressed animation. By comparing the skeletons you can see how big the error in the animation is.

At first, the white skeleton is in T-pose because we haven’t actually generated the compressed data yet. To do that, press the Compress button:

Compressed animation with data.

This will update the Format and Buffer fields and we can see that we have 9.2 KB of compressed data for this animation and that the compression ratio is x 6.66. I.e., the compressed data is 6.66 times smaller than the uncompressed one. The white and the blue skeletons overlap. The compression error is too small to be noticed in this view; we have to really zoom in to see it:

Zoomed in view of one of the fingers.
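
As a quick sanity check on those numbers: 9.2 KB of compressed data at a ratio of 6.66 corresponds to roughly 9.2 × 6.66 ≈ 61 KB of uncompressed animation data for this clip.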

When you compress an animation like this, The Machinery tries to come up with some good default compression settings. The default settings work in a lot of cases, but they’re not perfect, because The Machinery can’t know how the animation is intended to be viewed in your game.

Are you making a miniature fighting game, and all the models will be viewed from a distant overhead camera? In that case, you can get away with a lot of compression. Or are you animating a gun sight that will be held up really close to the player’s eye? In that case, a small error will be very visible.

To help the engine, you can create an .animation_compression asset. (New Animation Compression in the asset browser.) The Animation Compression asset controls the settings for all the animations in the same folder or in its subfolders (unless the subfolders override with a local Animation Compression asset):

Animation Compression settings.

The Animation Compression settings object has two properties:

Max Error specifies the maximum allowed error in the compressed animation. The default value is 0.001 or 1 mm. This means that when we do the compression we allow bones to be off by 1 mm, but not more. The lower you set this value, the less compression you will get.

Skin Size specifies the size we assume for the character’s skin. It defaults to 0.1, or 10 cm. We need the skin size to estimate the effects of rotational errors. For example, if the rotation of a bone is off by 1°, the effect of that in mm depends on how far away from the bone the mesh is.

10 cm is a reasonable approximation for a human character, but notice that there are situations where the skin size can be significantly larger. For example, suppose that a 3 m long staff is attached to the player’s hand bone. In this case, rotational errors in the hand are amplified by the full length of the staff and can lead to really big errors in the position of the staff end. If this gives you trouble, you might want to up the skin size to 3 for animations with the staff.
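
As a rough back-of-the-envelope illustration (a small-angle approximation, not the engine's exact error metric): the positional error caused by a rotational error is roughly the skin size times the angle in radians. With the default 0.1 m skin size, a 1° (about 0.0175 rad) bone error moves the skin by about 0.1 × 0.0175 ≈ 1.75 mm, which is already above the default Max Error of 1 mm. With a 3 m staff attached, the same 1° error moves the staff tip by about 3 × 0.0175 ≈ 5 cm, which is why a larger skin size is warranted there.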

We don’t support setting a per-bone skin size, because it’s unclear if the effort of specifying per-bone skin sizes is really worth it in terms of the memory savings it can give. (Also, even a per-bone skin size might not be enough to estimate errors perfectly. For example, an animator could have set up a miter joint where the extent of the skin depends on the relative angle of two bones and goes to infinity as the angle approaches zero.)

Note that sometimes animations are exported in other units than meters. In this case, the Skin Size and the Max Error should be specified in the same units that are used in the animation file.

Sound

The Machinery comes with a low-level sound system that can import WAV files into the project and play them back. The sound system can position sounds in 3D space and mix together 2D and 3D sounds for output on a stereo, 5.1, or 7.1 sound system.

You can play a sound by adding a Sound Source Component to an object in the level or by using one of the sound playing nodes in the visual scripting language.

Missing features

The sound system is rudimentary. Here are some features that are planned for the future:

  • Sound streaming
  • Sound compression
  • WASAPI backend
  • React to the user plugging in or unplugging headphones
  • Hermite-interpolated resampling
  • Reverb
  • Directional sound sources
  • Doppler effect
  • Multiple listeners
  • HRTF
  • High-level sound system
    • Random sounds
    • Composite sounds
    • Streaming sounds
    • Compressing sounds

Publishing your game

You publish your game via File -> Publish. The Engine opens the publishing tab.

In there, you have a couple of options:

Option | Description
Executable Name | The name of the executable, e.g. test.exe.
Window Title | The text which is displayed in the window title.
World Entity | The entry point of your game.
Resolution | The default resolution to use when running the published project.
Fullscreen | If checked, the game will launch in fullscreen.
Directory Project | Decides if the game data is published as binary data or as a human-readable directory.

Directory vs. Non-Directory Project

If you check this option, the game is exported as a human-readable project. Otherwise the game data will be compressed and stored in a binary .the_machinery format.

The human-readable directory format is not recommended for anything other than debug purposes.

Sculpt Tool

Note: This tool is in a preview state.

With the Sculpt Tool, The Machinery supports rapid prototyping when it comes to level whiteboxing. You make use of the tool by adding a Sculpt Component to an Entity. Using the sculpt component, you can quickly sketch out levels or make beautiful blocky art:

A blocky character in a blocky forest setting.

How to use the Tool

To use the Sculpt Component, first add it to an entity, by right-clicking the entity in the Entity Tree and selecting Add Component. Then, select the newly created Sculpt component in the Entity Tree.

This gives you a new sculpt tool in the toolbar:

Sculpt tool.

With this tool selected, you can drag out prototype boxes on the ground. You can also drag on an existing prototype box to create boxes attached to that box.

The standard Select, Move, Rotate, and Scale tools can be used to move or clone (by shift-dragging) boxes.

You can add physics to your sculpts, by adding a Physics Shape Component, just as you would for any other object.

Note: If you are cooking a physics mesh or convex from your sculpt data, you need to explicitly recook whenever the sculpt data changes.

Here is a video of sculpting in action:

Note: Currently, all the sculpting is done with boxes. We may add additional shape support in the future.

Additional thoughts

In addition to being a useful tool, the Sculpt Component also shows the deep integration you can get with custom plugins in The Machinery. The Sculpt Component is a separate plugin, completely isolated from the rest of The Machinery and if you wanted to, you could write your own plugins to do similar things.

The Truth

The Machinery uses a powerful data model to represent edited assets. This model has built-in support for serialization, streaming, copy/paste, drag-and-drop as well as unlimited undo/redo. It supports an advanced hierarchical prefab model for making derivative object instances and propagating changes. It even has full support for real-time collaboration. Multiple people can work together in the same game project, Google Docs-style. Since all of these features are built into the data model itself, your custom, game-specific data will get them automatically, without you having to write a line of code.

The Data Model

The Machinery stores its data as objects with properties. Each object has a type and the type defines what properties the object has. Available property types are bools, integers, floats, strings, buffers, references, sub-objects and sets of references or sub-objects.

The object/properties model gives us forward and backward compatibility and allows us to implement operations such as cloning without knowing any details about the data. We can also represent modifications to the data in a uniform way (object, property, old-value, new-value) for undo/redo and collaboration.

The model is memory-based rather than disk-based. I.e. the in-memory representation of the data is considered authoritative. Read/write access to the data is provided by a thread-safe API. If two systems want to cooperate, they do so by talking to the same in-memory model, not by sharing files on disk. Of course, we still need to save data out to disk at some point for persistence, but this is just a “backup” of the memory model and we might use different disk formats for different purposes (i.e. a git-friendly representation for collaborative work vs a single binary for solo projects).

Since we have a memory-based model which supports cloning and change tracking, copy/paste and undo can be defined in terms of the data model. Real-time collaboration is also supported, by serializing modifications and transmitting them over the network. Since the runtime has equal access to the data model, modifying the data from within a VR session is also possible.

We make a clear distinction between “buffer data” and “object data”. Object data is stuff that can be reasoned about on a per-property level. I.e. if user A changes one property of an object, and user B changes another, we can merge those changes. Buffer data are binary blobs of data that are opaque to the data model. We use it for large pieces of binary data, such as textures, meshes and sound files. Since the data model cannot reason about the content of these blobs it can’t for example merge changes made to the same texture by different users.

Making the distinction between buffer data and object data is important because we pay an overhead for representing data as objects. We only want to pay that overhead when the benefits outweigh the costs. Most of a game’s data (in terms of bytes) is found in things like textures, meshes, audio data, etc and does not really gain anything from being stored in a JSON-like object tree.

In The Truth, references are represented by IDs. Each object has a unique ID and we reference other objects by their IDs. Since references have their own property type in The Truth, it is easy for us to reason about references and find all the dependencies of an object.

Sub-objects in The Truth are references to owned objects. They work just as references, but have special behaviours in some situations. For example, when an object is cloned, all its sub-objects will be cloned too, while its references will not.

For more information, check out the documentation and these blog posts: The Story behind The Truth: Designing a Data Model or this one.

Access values

The truth objects (tm_tt_id_t) are immutable objects unless you explicitly make them writable. Therefore you do not have to be afraid of accidentally changing a value when reading from an object property.

To read from an object property we need access to the correct Truth instance as well as to an object ID. We also need to know what kind of property we want to access. That is why we always want to define our properties in a header file, which allows us and others to quickly find our type definitions. A good practice is to comment on what kind of data type each property contains.

Let us assume our object is of type TM_TT_TYPE__RECT:

enum {
    TM_TT_PROP__RECT__X, // float
    TM_TT_PROP__RECT__Y, // float
    TM_TT_PROP__RECT__W, // float
    TM_TT_PROP__RECT__H, // float
};

When we know what we want to access, we call the correct function and access the value. In our example we want to get the width of an object. The width is stored in TM_TT_PROP__RECT__W.

The function we need to call:

float (*get_float)(tm_the_truth_o *tt, const tm_the_truth_object_o *obj, uint32_t property);

With this knowledge we can assemble the following function that logs the width of an object:

void log_width(tm_the_truth_o *tt, tm_tt_id_t my_object) {
    const float width = tm_the_truth_api->get_float(tt, tm_tt_read(tt, my_object), TM_TT_PROP__RECT__W);
    TM_LOG("the width is %f", width);
}

Make the code robust

To ensure we are actually handling the right type we should check this at the beginning of our function. If the type is not correct we should early out and log a warning.

All we need to do is compare the tm_tt_type_t's of our types. Therefore we need to obtain the type id from the object id and from our expected type. From a tm_tt_id_t we can obtain the type by calling tm_tt_type() on it. tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__MY_TYPE); will give us back the object type from a given hash. After that we can do our comparison.

void log_width(tm_the_truth_o *tt, tm_tt_id_t my_object) {
  const tm_tt_type_t type = tm_tt_type(my_object);
  const tm_tt_type_t expected_type =
      tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__RECT);

  if (type.u64 != expected_type.u64) {
    TM_LOG("The provided type does not mmatch! %p{tm_tt_type_t} != "
           "%p{tm_tt_type_t}",
           &type, &expected_type);
    return;
  }

  const float width = tm_the_truth_api->get_float(tt, tm_tt_read(tt, my_object),
                                                  TM_TT_PROP__RECT__W);
  TM_LOG("the width is %f", width);
}

Note: Check out the logger documentation (log.h) for more information.

Create an Object

You can create an object of a Truth Type via two steps:

  1. You need to obtain the Type from the type hash. We call the object_type_from_name_hash to obtain the tm_tt_type_t
  2. You need to create an Object from that Type. We call create_object_of_type to create an object (tm_tt_id_t). We pass TM_TT_NO_UNDO_SCOPE because we do not need an undo scope for our example.

First, we need access to a Truth instance; otherwise, we could not create an object. In this example, we wrap the steps in a function.

tm_tt_id_t create_my_type_object(tm_the_truth_o *tt) {
  const tm_tt_type_t my_type = tm_the_truth_api->object_type_from_name_hash(
      tt, TM_TT_TYPE_HASH__MY_TYPE);
  const tm_tt_id_t my_type_object =
      tm_the_truth_api->create_object_of_type(tt, my_type, TM_TT_NO_UNDO_SCOPE);
  return my_type_object;
}

Wherever we call this function we can then edit and modify the object and add content to it!

The alternative approach is to use the "Quick Object Creation function".

tm_tt_id_t quick_create_my_type_object(tm_the_truth_o *tt) {
  return tm_the_truth_api->quick_create_object(tt, TM_TT_NO_UNDO_SCOPE,
                                               TM_TT_TYPE_HASH__MY_TYPE, -1);
}

Note: We need to pass -1 to tell the function that we have reached the end of the argument list. More info here.
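
For example, assuming the RECT type from the Access values section and the (property, value) vararg convention terminated by -1, creating an object and setting its width in one call might look like the sketch below; verify the exact convention in the_truth.h.

// Hedged sketch -- assumes quick_create_object accepts (property, value)
// pairs terminated by -1; check the_truth.h for the exact convention.
tm_tt_id_t quick_create_rect(tm_the_truth_o *tt) {
  return tm_the_truth_api->quick_create_object(tt, TM_TT_NO_UNDO_SCOPE,
                                               TM_TT_TYPE_HASH__RECT,
                                               TM_TT_PROP__RECT__W, 100.0, -1);
}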

What is next?

If you want to learn more about how to create your own custom type, follow the "Custom Truth Type" walkthrough.
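
To give a taste of what that walkthrough covers, here is a minimal sketch of registering a hypothetical type with a single float property. The type name, property enum and registration function are purely illustrative; create_object_type and tm_the_truth_property_definition_t come from the_truth.h, but verify the exact signatures there.

// Hypothetical type used only for illustration.
#define TM_TT_TYPE__MY_TYPE "my_type"

enum {
    TM_TT_PROP__MY_TYPE__WIDTH, // float
};

// Registers the type with the Truth instance (typically done at plugin load).
static void create_truth_types(tm_the_truth_o *tt)
{
    static const tm_the_truth_property_definition_t properties[] = {
        [TM_TT_PROP__MY_TYPE__WIDTH] = { "width", TM_THE_TRUTH_PROPERTY_TYPE_FLOAT },
    };
    tm_the_truth_api->create_object_type(tt, TM_TT_TYPE__MY_TYPE, properties, TM_ARRAY_COUNT(properties));
}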

Modify an object

To manipulate an object, you need to have its ID (tm_tt_id_t). When you create an object, you should keep its ID around if you intend to edit it later.

In this example, we have a function that gets an object and the Truth instance of that object.

void modify_my_object(tm_the_truth_o *tt, tm_tt_id_t my_object) {}

Important: you can only edit an object that is part of the same instance! Hence your my_object must be created within this instance of the Truth (tt).

1. Make the object writable

To edit an object, we need to make it writable first. In the default state, objects from the Truth are immutable. The Truth API has a function that is called write. When we call it on an object, we make it writable.

tm_the_truth_object_o *my_object_w = tm_the_truth_api->write(tt, my_object);

2. Write to the object.

We need to know what kind of property we want to edit. That is why we always want to define our properties in a header file. A good practice is to comment on what kind of data type each property contains.

Let us assume our object is of type TM_TT_TYPE__RECT:

enum {
    TM_TT_PROP__RECT__X, // float
    TM_TT_PROP__RECT__Y, // float
    TM_TT_PROP__RECT__W, // float
    TM_TT_PROP__RECT__H, // float
};

In our example we want to set the width to 100. The width is stored in TM_TT_PROP__RECT__W.

When we know what we want to edit, we call the correct function and change the value.

The function we need to call:

void (*set_float)(tm_the_truth_o *tt, tm_the_truth_object_o *obj, uint32_t property, float value);

Let us bring all of this together:

tm_the_truth_object_o *my_object_w = tm_the_truth_api->write(tt, my_object);
tm_the_truth_api->set_float(tt, my_object_w, TM_TT_PROP__RECT__W, 100);

3. Save the change

In the end, we need to commit our change to the system. In this example we do not care about the undo scope. That is why we provide the TM_TT_NO_UNDO_SCOPE define. This means this action is not undoable.

tm_the_truth_object_o *my_object_w = tm_the_truth_api->write(tt, my_object);
tm_the_truth_api->set_float(tt, my_object_w, TM_TT_PROP__RECT__W, 100);
tm_the_truth_api->commit(tt, my_object_w, TM_TT_NO_UNDO_SCOPE);

If we wanted to provide an undo scope we need to create one:

const tm_tt_undo_scope_t undo_scope = tm_the_truth_api->create_undo_scope(tt, "My Undo Scope");
tm_the_truth_object_o *my_object_w = tm_the_truth_api->write(tt, my_object);
tm_the_truth_api->set_float(tt, my_object_w, TM_TT_PROP__RECT__W, 100);
tm_the_truth_api->commit(tt, my_object_w, undo_scope);

Now this action can be reverted in the Editor.

4. Get a value

Instead of changing the value of width to 100 we can also increment it by 100! All we need to do is first get the value from the Truth object and then add 100 to it. To access a property we need to use the macro tm_tt_read. This will give us an immutable (read-only) pointer to the underlying object, which allows us to read the data from it.

void modify_my_object(tm_the_truth_o *tt, tm_tt_id_t my_object) {
  float width = tm_the_truth_api->get_float(tt, tm_tt_read(tt, my_object),
                                            TM_TT_PROP__RECT__W);
  width += 100;
  tm_the_truth_object_o *my_object_w = tm_the_truth_api->write(tt, my_object);
  tm_the_truth_api->set_float(tt, my_object_w, TM_TT_PROP__RECT__W, width);
  const tm_tt_undo_scope_t undo_scope =
      tm_the_truth_api->create_undo_scope(tt, "My Undo Scope");
  tm_the_truth_api->commit(tt, my_object_w, undo_scope);
}

Note: If we do a lot of reads, we should call tm_tt_read only once, store the result in a const tm_the_truth_object_o * variable, and reuse it.
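For instance, a sketch of reading all four rect properties through one cached read pointer (using only the calls already shown above):

const tm_the_truth_object_o *my_object_r = tm_tt_read(tt, my_object);
const float x = tm_the_truth_api->get_float(tt, my_object_r, TM_TT_PROP__RECT__X);
const float y = tm_the_truth_api->get_float(tt, my_object_r, TM_TT_PROP__RECT__Y);
const float w = tm_the_truth_api->get_float(tt, my_object_r, TM_TT_PROP__RECT__W);
const float h = tm_the_truth_api->get_float(tt, my_object_r, TM_TT_PROP__RECT__H);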

5. Make the code robust

To ensure we are actually handling the right type we should check this at the beginning of our function. If the type is not correct we should early out.

All we need to do is compare the tm_tt_type_t values. We need to obtain the type from the object ID and from our expected type: from a tm_tt_id_t we can obtain the type by calling tm_tt_type() on it, and tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__MY_TYPE) gives us back the object type for a given hash. After that we can do the comparison.

void modify_my_object(tm_the_truth_o *tt, tm_tt_id_t my_object) {
  const tm_tt_type_t type = tm_tt_type(my_object);
  const tm_tt_type_t expected_type =
      tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__RECT);
  // Early out if my_object is not of the expected type.
  if (type.u64 != expected_type.u64)
    return;

  float width = tm_the_truth_api->get_float(tt, tm_tt_read(tt, my_object),
                                            TM_TT_PROP__RECT__W);
  width += 100;
  tm_the_truth_object_o *my_object_w = tm_the_truth_api->write(tt, my_object);
  tm_the_truth_api->set_float(tt, my_object_w, TM_TT_PROP__RECT__W, width);
  const tm_tt_undo_scope_t undo_scope =
      tm_the_truth_api->create_undo_scope(tt, "My Undo Scope");
  tm_the_truth_api->commit(tt, my_object_w, undo_scope);
}

Common Types

The Truth comes with several useful common types. You can find them in the_truth_types.h (API Documentation).

Macro | Description
TM_TT_TYPE__BOOL / TM_TT_TYPE_HASH__BOOL | The first property contains the value.
TM_TT_TYPE__UINT32_T / TM_TT_TYPE_HASH__UINT32_T | The first property contains the value.
TM_TT_TYPE__UINT64_T / TM_TT_TYPE_HASH__UINT64_T | The first property contains the value.
TM_TT_TYPE__FLOAT / TM_TT_TYPE_HASH__FLOAT | The first property contains the value.
TM_TT_TYPE__DOUBLE / TM_TT_TYPE_HASH__DOUBLE | The first property contains the value.
TM_TT_TYPE__STRING / TM_TT_TYPE_HASH__STRING | The first property contains the value.
TM_TT_TYPE__VEC2 / TM_TT_TYPE_HASH__VEC2 | The first property contains the x value and the second the y value.
TM_TT_TYPE__VEC3 / TM_TT_TYPE_HASH__VEC3 | The first property contains the x value, the second the y value, and the third the z value.
TM_TT_TYPE__VEC4 / TM_TT_TYPE_HASH__VEC4 | The first property contains the x value, the second the y value, the third the z value, and the last one the w value.
TM_TT_TYPE__POSITION / TM_TT_TYPE_HASH__POSITION | Same as vec4.
TM_TT_TYPE__ROTATION / TM_TT_TYPE_HASH__ROTATION | Based on a vec4. Used to represent the rotation of an object via quaternions.
TM_TT_TYPE__SCALE / TM_TT_TYPE_HASH__SCALE | Same as vec3.
TM_TT_TYPE__COLOR_RGB / TM_TT_TYPE_HASH__COLOR_RGB | Represents an RGB colour.
TM_TT_TYPE__COLOR_RGBA / TM_TT_TYPE_HASH__COLOR_RGBA | Represents an RGBA colour.
TM_TT_TYPE__RECT / TM_TT_TYPE_HASH__RECT | The first property contains the x value, the second the y value, the third the width value, and the last one the height value.

There is a helper API to handle all of these types in an easy way, to reduce the boilerplate code: tm_the_truth_common_types_api.

Note: There is a list of all Truth Types the Engine comes with available on our API Documentation

Aspects

An “aspect” is an interface (struct of function pointers) identified by a unique identifier. The Truth allows you to associate aspects with object types. This lets you extend The Truth with new functionality. For example, you could add an interface for debug printing an object:

#define tm_debug_print_aspect_i_hash                                           \
  TM_STATIC_HASH("tm_debug_print_aspect_i", 0x39821c78639e0773ULL)

typedef struct tm_debug_print_aspect_i {
  void (*debug_print)(tm_the_truth_o *tt, tm_tt_id_t object);
} tm_debug_print_aspect_i;

Note: to generate the TM_STATIC_HASH value you need to run hash.exe or tmbuild.exe --gen-hash. For more info, open the hash.exe guide.

You could then use this code to debug print an object o with:

static void example_use_case(tm_the_truth_o *tt, tm_tt_id_t object) {
  tm_debug_print_aspect_i *dp =
      tm_tt_get_aspect(tt, tm_tt_type(object), tm_debug_print_aspect_i);
  if (dp)
    dp->debug_print(tt, object);
}

Note that plugins can extend the system with completely new aspects.

The best example of how the Engine is using the aspect system is the tm_properties_aspect_i, which lets us define custom UIs for Truth objects.
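As a rough sketch of how a plugin might attach such an aspect to one of its own types (this assumes the Truth API's set_aspect() function; the type name and implementation names below are illustrative, not part of the SDK):

static void my_type__debug_print(tm_the_truth_o *tt, tm_tt_id_t object)
{
    // Read the object's properties with tm_tt_read() and log them here.
    (void)tt;
    (void)object;
}

static tm_debug_print_aspect_i debug_print_aspect = {
    .debug_print = my_type__debug_print,
};

static void create_truth_types(tm_the_truth_o *tt)
{
    const tm_tt_type_t type = tm_the_truth_api->create_object_type(tt, "tm_my_debug_printable_type", 0, 0);

    // Associate the aspect with the type so other code can query it with tm_tt_get_aspect().
    tm_the_truth_api->set_aspect(tt, type, tm_debug_print_aspect_i_hash, &debug_print_aspect);
}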

Create a custom Truth Type

This walkthrough shows you how to create a type for the Truth. The Truth is our centralized data model for editing data in the Engine. For more details on the system itself, click here: The Truth.

You should have basic knowledge about how to write a custom plugin. If not, you might want to check this Guide.

We will cover the following topics:

  • How to define a Type.
  • Type Properties

After this walkthrough you could check out the "Create a custom asset" tutorial!

Define a Type

A Truth-Type in The Machinery consists of a name (its identifier) and properties.

Note: In theory, you could also define a Type without properties.

To add a Type to the system, you need access to the Truth instance. The Engine may have more than one instance of a Truth.

Example: There is a Project Truth to keep all the project-related settings and an Engine/Application Truth that holds all the application-wide settings.

Generally speaking, you want to define Truth Types at the beginning of the Engine's life cycle. Therefore the designated place is the tm_load_plugin function. The Truth has an interface to register a truth type creation function: tm_the_truth_create_types_i.

This interface expects a function of the signature: void create_truth_types(tm_the_truth_o *tt). Whenever the Engine creates a Truth, it invokes this interface on all loaded plugins, and their types are registered. If you only want to register your Type with one specific Truth instance, you do not need to go through this interface.

Note: By convention, this function is usually called create_truth_types.

Let us define a type. To do that, we need to get the tm_truth_api first:

// beginning of the source file
static struct tm_the_truth_api *tm_the_truth_api;
#include <foundation/api_registry.h>
// include macros.h (https://ourmachinery.com/apidoc/foundation/macros.h.html#macros.h)
// to access TM_ARRAY_COUNT for convenience:
#include <foundation/macros.h>
#include <foundation/the_truth.h>
// ... other code
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) {
  tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
}

After this, we define our type name twice: once as a plain string define and once as a hashed version. There are some conventions to keep in mind:

  1. The plain text define should start with: TM_TT_TYPE__.
  2. The hashed define should start with: TM_TT_TYPE_HASH__.
  3. The name may or may not start with tm_, but the plain text name and the hashed version need to match!

#pragma once
#include <foundation/api_types.h>

#define TM_TT_TYPE__MY_TYPE "tm_my_type"
#define TM_TT_TYPE_HASH__MY_TYPE TM_STATIC_HASH("tm_my_type", 0xde0e763ccd72b89aULL)

Tip: Do not forget to run hash.exe. Otherwise, the TM_STATIC_HASH macro will cause an error. You can also run tmbuild --gen-hash

It is good practice to place the types into a header file so others can use them as well! When that is done, we can call tm_the_truth_api->create_object_type() to create the actual type. It returns a tm_tt_type_t, which is the identifier of our type. (The tm_tt_id_t of any object of this type will also carry this type information.)

The function expects:

Argument | Description
tm_the_truth_o *tt | The Truth instance. The function will add the type to this instance.
const char *name | The name of the type. It will be hashed internally, therefore the hash values of the TM_TT_TYPE__* and TM_TT_TYPE_HASH__* defines should match! If a type with this name already exists, that type is returned. Different types with the same name are not supported!
const tm_the_truth_property_definition_t *properties | The definitions of the properties of the type.
uint32_t num_properties | The number of properties. Should match the length of the properties array.

The home of this call should be our void create_truth_types(tm_the_truth_o *tt) function, which we add to our source file. Since we have no properties yet, the call looks like this:

static void create_truth_types(tm_the_truth_o *tt) {
  tm_the_truth_api->create_object_type(tt, TM_TT_TYPE__MY_TYPE, 0, 0);
}

The last step is to tell the plugin system that we want to register our create_truth_types() function with the tm_the_truth_create_types_i interface.

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) {
  tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
  tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i,
                                  create_truth_types);
}

The full source code should look like this:

my_type.h

#pragma once
#include <foundation/api_types.h>

#define TM_TT_TYPE__MY_TYPE "tm_my_type"
#define TM_TT_TYPE_HASH__MY_TYPE TM_STATIC_HASH("tm_my_type", 0xde0e763ccd72b89aULL)

(Tip: Do not forget to run hash.exe)

my_type.c

// beginning of the source file
static struct tm_the_truth_api *tm_the_truth_api;
#include <foundation/api_registry.h>
#include <foundation/the_truth.h>
#include "my_type.h"
static void create_truth_types(tm_the_truth_o *tt)
{
    tm_the_truth_api->create_object_type(tt, TM_TT_TYPE__MY_TYPE, 0, 0);
}
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
    tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i, create_truth_types);
}

After all of this, you have registered your type and it can be used. However, a type without properties is not very useful.

About Properties

In The Truth, an Object-Type is made of one or multiple properties. Properties can represent the basic types:

  • bool, string, float, double, uint32_t, uint64_t, buffer
  • subobject - An object that lives within this property
  • reference - A reference to another object
  • subobject set - A Set of subobjects
  • reference set - A Set of references.

What is the difference between a reference and a subobject?

To see the difference, consider how clone_object() works in both cases:

  • When you clone an object with references, the clone will reference the same objects as the original, i.e. they now have multiple references to them.
  • When you clone an object with subobjects, all the subobjects will be cloned too. After the clone operation, there is no link between the object's subobjects and the clone's subobjects.

An arbitrary number of objects can reference the same object, but a subobject only has a single owner.

When you destroy an object, any references to that object become NIL references — i.e., they no longer refer to anything.

When you destroy an object that has subobjects, all the subobjects are destroyed with it.

Note: For more information please check: The API Documentation

Adding properties

Let us add some properties to our Type! As you remember, when we created the Type, the function create_object_type() required a pointer to the definition of properties. You can define properties via the tm_the_truth_property_definition_t struct.

{{$include env.TM_SDK_DIR/foundation/the_truth.h:407:473}}

(API Documentation)

Within our create_truth_types we create an array of type tm_the_truth_property_definition_t. For this example, we define two properties: one of type bool and one of type string.

// beginning of the source file
static struct tm_the_truth_api *tm_the_truth_api;
#include <foundation/api_registry.h>
// include macros.h (https://ourmachinery.com/apidoc/foundation/macros.h.html#macros.h) to access TM_ARRAY_COUNT for convenience:
#include <foundation/macros.h>
#include <foundation/the_truth.h>

#include "my_type.h"

static void create_truth_types(tm_the_truth_o *tt)
{
    tm_the_truth_property_definition_t properties[] = {
        {"my_bool", TM_THE_TRUTH_PROPERTY_TYPE_BOOL},
        {"my_string", TM_THE_TRUTH_PROPERTY_TYPE_STRING},
    };
    tm_the_truth_api->create_object_type(tt, TM_TT_TYPE__MY_TYPE, properties, TM_ARRAY_COUNT(properties));
}
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
    tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i, create_truth_types);
}

That is all we need to do to define properties for our Type! Also thanks to our automatic "reflection" system you do not have to worry about providing a UI for the type. The Properties View will automatically provide a UI for this type.
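As a usage sketch (the property index enum and function name below are illustrative, and create_object_of_type(), set_bool() and set_string() are assumed Truth API calls), creating an object of the new type and filling in its properties follows the same write/commit flow described earlier:

enum {
    TM_TT_PROP__MY_TYPE__MY_BOOL,   // bool
    TM_TT_PROP__MY_TYPE__MY_STRING, // string
};

static tm_tt_id_t create_my_object(tm_the_truth_o *tt)
{
    const tm_tt_type_t type = tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__MY_TYPE);
    const tm_tt_id_t object = tm_the_truth_api->create_object_of_type(tt, type, TM_TT_NO_UNDO_SCOPE);

    tm_the_truth_object_o *object_w = tm_the_truth_api->write(tt, object);
    tm_the_truth_api->set_bool(tt, object_w, TM_TT_PROP__MY_TYPE__MY_BOOL, true);
    tm_the_truth_api->set_string(tt, object_w, TM_TT_PROP__MY_TYPE__MY_STRING, "hello");
    tm_the_truth_api->commit(tt, object_w, TM_TT_NO_UNDO_SCOPE);
    return object;
}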

What is next?

You can find more in-depth and practical tutorials in the tutorials chapter.

Creation Graphs

The Creation Graph is used to create asset pipelines and also define GPU-related things such as materials and shaders. In essence it lets you set up a graph that takes some inputs and processes them using nodes that may run on either the CPU or GPU; the graph can then output things such as shader instances, images or draw calls. The big power of the Creation Graph is that it lets you reason about data that lives in the borderland between GPU and CPU, while being able to do so within one single graph.

Creation Graphs live separately from Entity Graphs. You often find Creation Graphs living under Render Components (for issuing draw calls), or within .creation_graph assets, where they are used to define materials and images. As an example, a Creation Graph that outputs an image probably takes an image that lives on disk as input, uploads it to the GPU and then has the graph output a GPU image. However, the user is then free to add nodes in-between these steps, for example nodes for compression or mipmap generation. Compared to other game engines, things that often end up in the properties panel of an imported image can here be done dynamically in a graph, depending on the needs of your project.

Within the core folder that we ship with the engine you will find several creation graphs; many of these are used for defining default materials and also as default graphs for use within our DCC asset import pipeline.

Simple red material

Image loading and material creation are just a few examples of what can be achieved with the creation graph. The table below shows when a creation graph is used compared to the tools one could use in Unity and Unreal.

Asset Type | Unity | Unreal | The Machinery
Images | Texture | Texture | Creation Graph with Image Output node
Materials | Shader | Material | Creation Graph with Shader Instance Output node
Particles | Particle Effect | Cascade | Creation Graph with GPUSim nodes that emit particles
Procedural materials | Procedural Materials | Material | Creation Graph with Image Output node with dynamic logic
Meshes | Mesh | Static Mesh | DCC Asset referred to by Creation Graph within Render Component

The engine comes with a Creation Graphs sample; it contains examples of how to make materials and particle systems.

Like the entity graph, the creation graph can execute nodes in sequence from an event. Some examples of this are the Tick, Init, and Compile events, which are executed at known points or intervals. However, creation graphs commonly work using a reverse flow, where an output node is triggered and then all the nodes it depends on are run in order to supply the output. Examples of these outputs are Draw Call, Shader Instance, Image, and Physics Shape. Note that these outputs are just blobs of data interpreted by the code that uses them. You can in other words add your own output nodes and types from code.

Creation Graphs for Unity Developers

When a creation graph is used as a surface shader, it most closely resembles Unity's Shader Graph. This is what we will focus on first.

Simple surface shader

In the example above the editor's layout was made to resemble Unity's Shader Graph view. When creating a material shader you need to have a Shader Instance output node. From here we can specify our shader by adding nodes to the left of the Shader Instance node. In this example the Lit node closely resembles Unity's PBR Master node.

Creation Graphs for Unreal Engine developers

Creation graphs are used for many different assets in The Machinery. When a creation graph is used for a shader it most closely relates to Unreal’s materials (any domain). This is what we will focus on first.

Simple brick material

In the example above the editor closely resembles the material editor from Unreal; this is however not the default layout. You can see the creation graph in the center with its output being a Shader Instance. Adding this allows any consuming code to query the material from this creation graph, and it allows the preview tab to display your material.

Simple rotating particle

The previous example showed a surface or material shader. This example shows a creation graph that fully defines a simple particle. The Shader Instance (material) is now passed to a Draw Call node; with this combination we can fully render the particle without the need for an explicit mesh. Instead we use the Construct Quad node for a procedural quad mesh. Note that we specify the Instance Count and Num Vertices (for a single quad that is 6).

Node types

Nodes in the creation graph can be subdivided into four types, and understanding the difference between them is important when creating new nodes. The diagram below shows how each node can be categorized.

GPU nodes are somewhat special, as they will be compiled down into a single shader instead of being interpreted like the CPU part of the creation graph. Note that GPU nodes also have a different background color to distinguish them. GPU nodes will not connect to any CPU node unless their output is a Shader Instance; this is the point at which the GPU portion of the graph is compiled and passed to the CPU.

The CPU portion of a creation graph is very similar to the entity graph in terms of layout, with one exception: the creation graph often works by querying output nodes and working its way back from there. Event nodes, on the other hand, allow you to follow the same flow as the entity graph, beginning from some event and continuing into other nodes.

In the example above you can see a creation graph that uses the Draw Call and Bounding Volume output nodes. A Creation Graph with these kinds of nodes is commonly found living under the Render Component, since such a component will automatically process any draw calls. The inputs to this graph are a DCC mesh and a Lit Shader Instance, where the latter can be thought of as a shader or material. Note the Variable GPU node that is used to pass the color from the CPU side to the GPU side; this is the only way to connect CPU nodes to GPU nodes. Currently we support the following output nodes; note that multiple of these can be present in a single creation graph.

Name | Information
Image Output | Often used by Image assets and can be supplied to materials as textures etc. Allows preview of the Image.
Bounding Volume | Used for culling.
Draw Call | Generally used with the Render Component, allows preview.
Shader Instance | Can be used as a material, allows preview.
Physics Shape | Generally used with a Physics Shape Component.
Ray Trace Instance | Used to generate acceleration structures and hit shaders.
Entity Spawner - Output Transforms | Can be used to query transforms from the Entity Spawner node.

Shader system interaction

A creation graph interacts with the shader system in three main ways:

  • Its GPU nodes are defined using .tmsl shaders.
  • GPU output nodes call special linker functions to evaluate the creation graph.
  • Shader instances in a creation graph are constructed using the shader system.

The last point is a technical detail that doesn’t matter for anyone extending or using the creation graph so it won’t be covered in this guide. Additional information about the creation_graph_node shader block can be found in the Shader System Reference.

Any GPU node that can be used in the creation graph has an associated .tmsl shader file. Most of these can be found here: the_machinery/shaders/nodes/*.tmsl. We also supply a Visual Studio extension for this file format which adds syntax highlighting; this extension will be used in this guide.

The shader code for the Sin node, for example, defines one input (a) and one output (res, which is the same type as a). Such a shader file is constructed by the shader system into a single .hlsl function. For more information on how to create basic GPU nodes see Creating custom GPU Nodes.

The creation graph output nodes need some additional shader code. When a creation graph node outputs a Shader Instance and has any inputs, it should define three functions in its shader code block so the graph can be evaluated. The tm_graph_read function passes all the stage input variables to the graph (like position, color, uv, etc.). The tm_graph_evaluate function does most of the work: it uses the tm_graph_io_t struct to evaluate the graph by calling the functions generated by the normal nodes. Finally, the tm_graph_write function passes all the graph variables to the stage output. It is important to note that while the tm_graph_evaluate function is necessary for graph evaluation, tm_graph_read and tm_graph_write are not; they are helper functions. For more information on how to create GPU output nodes see Creating custom GPU Nodes.

Graphics

Modern rendering architecture

The renderer has been designed to take full advantage of modern explicit graphic APIs like Vulkan. You can reason explicitly about advanced setups such as multiple GPUs and GPU execution queues. Similar to the rest of the engine, the built-in rendering pipeline is easy to tweak and extend.

Supported graphics backends

  • Vulkan 1.2
  • Nil

Camera

In The Machinery a camera is an object that converts the simulated world into a two-dimensional image. It can do this in a physically plausible way or in a more arcade way, depending on how it is set up. The method chosen is called the projection mode, and The Machinery offers three modes: Perspective, Orthographic, and Physical.

Adding a camera to the scene

A camera must be used to view any scene, so the Scene Tab starts with a default camera. But once you wish to simulate this world, an additional camera is needed. This can be done by adding a Camera Component to any entity in the scene and setting it as the viewing camera using the Set Camera node.

You can see a preview of the newly added camera in real time in the Preview Tab when selecting the camera component.

Customizing the camera

The first thing to consider when adding a camera to a scene is which projection mode it should use.

Perspective is the default projection mode. It linearly projects three-dimensional objects into two dimensions, which has the effect that objects farther away from the camera appear smaller than objects near the camera. This camera is controlled by changing the Vertical Field of View property.

Physical cameras use the same projection as perspective cameras, but instead of controlling the camera using FoV, this camera is controlled using focal length. This camera is intended for users familiar with real-world cameras and aims to be more physically descriptive than its perspective counterpart.

Orthographic cameras use parallel projection instead of linear projection. This means that depth has no impact on an object's scale. These cameras are often useful when making a 2D or isometric game. This camera is controlled using the Box Height property.

Orthographic camera

Additionally all projection modes define near and far plane properties. These directly correlate to the visible range of the camera. Lowering the visible range can improve precision within that range which might reduce depth artifacts. But this can also be used to create impossible shots, like being able to view through walls.

Setting the near plane to 4 allows us to place the camera outside of the room whilst still being able to view into it.

All camera properties

Property | Description
Projection Mode | Specifies which projection mode to use.
Near Plane | Specifies the near clipping plane of the camera.
Far Plane | Specifies the far clipping plane of the camera.
Vertical FoV (Perspective Only) | Specifies the vertical field of view of the camera (in degrees in the editor, and in radians in code).
Box Height (Orthographic Only) | Specifies the height of the box used for orthographic projection in meters.
Focal Length (Physical Only) | Specifies the distance between the lens and the image sensor in millimeters.
ISO | Specifies the sensor's ISO sensitivity, with 100 being native ISO. This property can be used by the exposure and film grain post-process effects if these are set to use the camera properties. Otherwise this property is ignored.
Shutter Speed | Specifies the time (in seconds) the camera shutter is open. The longer the shutter is open, the more light hits the sensor, which brightens the image but also increases motion blur. This property is only used when the respective post-processing effects are set to use the camera properties.
Sensor Size (Physical Only) | Specifies the size of the camera's sensor in millimeters. This property can be used for image gating when desired. It also plays a role in the focus breathing effect.
Focus Distance | Specifies the distance in meters to the point of perfect focus. If the depth-of-field post-process effect is active, then any point not on the focal plane will become blurry.
Aperture | Specifies the aperture ratio of the lens, expressed as an f-number or f-stop. Smaller f-numbers relate to larger diaphragm openings in the lens, which allow more light onto the sensor. This brightens the image but also lowers the depth of field. This property is only used when the respective post-processing effects are set to use the camera properties.

Example

The properties of the camera can be manipulated to create interesting and film-like effects. For example, by decreasing the focal length whilst dollying the camera backwards we can create a Dolly Zoom effect.

Camera Effects

The Machinery can simulate various camera effects as desired. These are implemented as separate components to allow full freedom in their application. Currently the following effects are available:

Effect | Description
Bloom | The bloom effect adds fringes of light extending from the borders of bright areas of the scene. This simulates the real-world glow that comes from viewing bright lights through a lens.
Exposure | Exposure controls the amount of light that hits the sensor. This has the effect of brightening or darkening the scene as desired. It can either be set using real-world camera parameters or automatically, as the human eye would.
Depth of Field | Real-world lenses cannot focus on the entire scene at the same time. The depth of field effect simulates this by blurring out-of-focus areas of the scene.

Physically based light and cameras

The Machinery’s default render pipeline uses exclusively physically based rendering for accurate and predictable results based on real-life measurements. Using physically based units to set light intensity and camera parameters is therefore paramount to synthesizing accurate images. These units are widely used in the real world, for instance on light bulb packaging, DSLR cameras, and light meters.

Basics of physical light

For the purposes of real-time rendering we can split light up into two parts: the luminous flux and the chromaticity. Here the luminous flux is the strength or power of the light source, i.e. the more luminous flux, the brighter the light. The chromaticity on the other hand is the color of the light regardless of its luminance.

Note that when we talk about luminous flux we are referring to the total amount of light emitted from the source, not the light received by an object. For the latter we refer to illuminance, which is the total amount of light falling on a given surface. This distinction is important as The Machinery allows the user to specify light intensity in both luminance and illuminance, where illuminance assumes a unit area (1 m^2) for the light to fall on.

By Jrh.main - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=69452076

The Machinery supports a wide range of light units. It is up to the user to determine which light unit they wish to use. The available units are:

Unit | Type | Short Description
Candela (cd) | Luminous Intensity | Scientific measurement of light being emitted in a certain direction.
Lumen (lm) | Luminous Flux | Total amount of light emitted from a source. This is easiest to reference with real-world artificial lights.
Lux (lx) | Illuminance | Amount of light hitting one square meter. Most useful for directional lights.
Nits | Luminance | Total light intensity of an area light. Easy to reference with real-world monitors and TVs.
Exposure Value (EV) | Camera | Units relative to the viewing camera. Easiest unit to use if you’re not familiar with physically based units.

Candela (cd)

Candela is the SI base unit of luminous intensity (not to be confused with luminous flux). It measures the amount of light emitted in a particular direction as perceived by the human eye. Candela is mainly useful when accurate measurements relative to the human eye are required. A common wax candle emits light with a luminous intensity of roughly one candela. Because of its scientific nature, we only allow the use of candela for punctual light sources (point and spot lights) as they have a well defined area of effect.

Lumen (lm)

Lumen is the SI unit used to measure luminous flux. This measures the total amount of light emitted by a light source relative to the human eye. Lumens are useful units when describing artificial light sources like lamps. Therefore they are available for punctual light sources (point and spot lights) and area lights. The amount of lumens produced by an artificial light source can often be found on the light bulb’s box. For instance a decorative light might emit about 30 lm, whilst an interior light might emit 1000 lm.

Lux (lx)

Lux is an SI derived unit for illuminance and is equal to one lumen per square meter. The Machinery uses a scaled version of lux as its main light unit when rendering, meaning this unit is almost directly mapped to the rendering pipeline. Lux is particularly useful when describing a light source that doesn’t have a well-defined source (like directional or IBL lights). Lux can be measured by devices available to the consumer. For reference, a starlit night might produce 0.001 lx, whilst an overcast day might produce around 1000 lx.

Nits (cd/m^2)

The Nit is an SI derived unit for luminance and is equal to one candela per square meter. It describes how much light is emitted from a particular area and is therefore only available for area lights, as they have a well defined area. Nits are particularly useful when modeling a virtual display, as most displays specify their brightness on the box using Nits. For example, the sRGB specification targets monitors at 80 Nits, most LCD consumer monitors are around 300 Nits, and HDR monitors can range from 450 to 1600 Nits.

Exposure Value (EV)

Exposure Value is a derived unit for illuminance relative to a camera at ISO level 100. Although not a unit of light, exposure value is a very useful unit as it describes light intensity in a more human readable way. For example 1 EV is about a moonlit night, whilst 10 EV is an overcast scene. Exposure value is available for all light sources as it describes intensity as perceived by a camera.

Chromaticity

Light color can be described using various specifications in The Machinery. The most artist-friendly ways of specifying light color are RGB and HSV. Both of these methods employ a trichromacy model to describe color, which is assumed by most models in color science. Another way to specify color is using color temperature in Kelvin. This is often used in the real world for light bulbs to indicate which hue is emitted by them, ranging from a dark orange to a light blue. For example, 1850 K specifies the hue emitted by a wax candle, whilst 3000 K is often closer to LEDs.

Basics of physical cameras

The use of physically based light can create a scene with greater accuracy than a scene made without regard for physical correctness; however, any scene is inevitably viewed through a camera. Using the physical camera with post-processing effects based on that camera greatly increases the accuracy of the synthesized scene. Note that this is not always desired, as it comes at the cost of artist flexibility and requires the user to be familiar with real-world cameras. This section will focus on the physical camera; for more information about cameras in general see Camera.

Real-world camera setups can be roughly divided into two parts, the camera itself and the lens. If a first-person view is desired, then the brain can be thought of as the camera, whilst the eye functions as the lens. Here the camera is the object that transforms the visible scene into an image, whilst the lens modifies the view that the camera has into the scene.

http://www.rags-int-inc.com/phototechstuff/lens101/LensDiagram_1024.gif

The diagram above shows a simplified diagram of how a real-world camera projects the scene onto the sensor. In The Machinery we don’t simulate all aspects of a real-world camera, as that would be too expensive for games. Instead we use a thin lens model or a simple pinhole camera based on the desired post-processing effects. This means that the physical camera has the following properties which should be familiar to anyone who has worked in the field of photography.

Property | Short Description
Focal Length | The distance in millimeters from the center of the lens to the focal plane.
Shutter Speed | The amount of seconds that the shutter allows light onto the sensor.
Aperture | The aperture size in f-stops.
ISO | The sensor's sensitivity, with 100 being the native sensitivity.
Sensor Size | The physical size of the sensor in millimeters.
Focus Distance | The distance in meters from the focal plane to the object in perfect focus.

Focal Length

The focal length of the lens specifies how strongly the lens converges or diverges the incoming light. This is mainly determined by the curvature of the lens, i.e. the thicker the lens, the shorter the focal length. This is also the main driver behind the field of view of the camera. A long focal length (i.e. a thin lens) creates a narrow field of view, whilst a short focal length (i.e. a thick lens) creates a wide field of view. When zooming in on a physical camera you would increase the focal length of the camera by physically moving one or more lens elements forwards in the lens. Conversely, to zoom out you would decrease the distance between one or more lens elements and the image sensor.

Shutter Speed

The shutter speed of the camera determines how long light is allowed to fall onto the camera sensor. The more light is allowed onto the sensor, the brighter the image, which might be required in dark scenes. But this introduces motion blur as subjects move whilst the frame is being taken. Conversely, in bright scenes the shutter speed can be very short, which makes motion blur less noticeable. In The Machinery motion blur is a (not yet implemented) optional post-processing effect, and increasing the shutter speed doesn’t actually change the time it takes to render a frame.

Aperture

The aperture of a lens describes the size of the opening that allows light to pass onto the sensor. Just like with shutter speed, the more light is allowed onto the sensor, the brighter the image. But when the aperture is opened it lowers the depth of field. The aperture of the lens isn’t actually specified as a physical size; instead a ratio is used between the focal length of the lens and the diameter of the opening, and this ratio is often referred to as f-stops or f-numbers. Perhaps unintuitively, as the f-number decreases the lens diaphragm opens, increasing the lens opening.
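For a concrete (illustrative) example of that ratio: a 50 mm lens with a 25 mm diaphragm opening is at 50 / 25 = f/2, while stopping down to a 12.5 mm opening gives 50 / 12.5 = f/4. Since the light gathered scales with the area of the opening, the f/4 setting lets in roughly a quarter of the light of the f/2 setting.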

ISO

The ISO of the camera specifies the sensitivity of the camera sensor to light. A more sensitive sensor captures more light which brightens the image, but this introduces noise and film grain on the final frame. Note that film grain is an (not yet implemented) optional post-processing effect.

Sensor Size

The width and height of the sensor in millimeters. This is the second driver behind the final field of view of the camera. The size of the sensor should also impact the aspect ratio of the final frame, but this is not yet implemented. Typically the sensor size would be set to a constant value and a fitting algorithm would determine how the image is displayed on the user’s screen, for example by stretching or cropping the image.
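As a rough rule of thumb (a thin-lens/pinhole approximation, not necessarily the exact formula the engine uses): vertical FoV is approximately 2 * atan(sensor height / (2 * focal length)). For example, a 24 mm tall sensor behind a 50 mm lens gives roughly 2 * atan(24 / 100), or about 27 degrees of vertical field of view.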

Focus Distance

The focus distance of the camera determines at which distance an object needs to be from the camera to be in perfect focus. This parameter is only used when the depth-of-field post-processing effect is active. The focus distance is proportional to the focal length and the image distance.

Additional Resources

Shaders

The Creation Graph provides an artist-friendly way to create custom shaders by wiring together nodes into a shader network. Each node in the graph represents a snippet of HLSL code that gets combined by the shader system plugin into full HLSL programs. It can sometimes be nice to work directly with HLSL code for more advanced shaders, either by exposing new helper nodes to the Creation Graph or by directly writing a complete shader program in HLSL. This is typically done by adding new .tmsl files (where tmsl stands for The Machinery Shading Language) that The Machinery loads on boot up.

A .tmsl file is essentially a data-driven JSON front-end for creating and populating a tm_shader_declaration_o structure, which is the main building block that the compiler in the shader system plugin operates on. While a tm_shader_declaration_o can contain everything needed to compile a complete shader (all needed shader stages, any states and input/output it needs, etc.), it is more common that it contains only a fragment, and multiple tm_shader_declaration_o are combined into the final shader source that gets compiled into a tm_shader_o that can be used when rendering a draw call (or dispatching a compute job).

Note: You can find all built-in shaders in the folder: ./bin/data/shaders shipped with your engine version. (For source access: ./the_machinery/shaders)

Inserting the creation_graph block in a .tmsl file exposes the shader as a node in the Creation Graph. Nodes exposed to the Creation Graph can either be function nodes (see: data/shaders/nodes/) or output nodes (see: data/shaders/output_nodes/). A function node won't compile into anything by itself unless it's connected to an output node responsible for declaring the actual shader stages and evaluating the branches of connected function nodes.

Note: You can find more information about creating creation graph nodes in the Creation Graph section.

Typically these are function nodes (see data/shaders/nodes) that won't compile into anything without getting connected to an "output" node. We ship with a few built-in output nodes (see data/shaders/output_nodes) responsible for declaring the actual shader stages and gluing everything together.

Note: For more details on the Shader Language itself, please check the Shader Reference or the Chapter The Machinery Shading Language.

The whole Shader System is explained in more detail within these posts:

Custom shaders how?

If you intend to write custom shaders, you can. All your custom shaders need to be placed under the bin\data\shaders folder of the engine. They will be automatically compiled (if needed) when the Editor boots up. For help with writing a custom shader, please follow The Machinery Shading Language guide.

The Machinery Shader Language

Shaders in The Machinery are defined using The Machinery Shader Language (tmsl). Traditionally shaders (like those written in glsl or hlsl) only contain the shader code itself and some I/O definitions. The Machinery (like other engines) stores not only the shader code, but also the pipeline state in its shader files. Additionally The Machinery allows shaders to define variations and systems that allow for more complex shader generation and combinations. For a complete list of what can be in a tmsl file see the Shader System Reference. For an in depth look at the design goals of these shader files see The Machinery Shader System blog posts.

A shader file can be divided into three distinct sections:

  • Code blocks, these define HLSL code blocks that contain the main shader code.
  • Pipeline state blocks, these define the Pipeline State Objects (PSO) or shader environment required for the code to run.
  • Compilation blocks, these define meta information about how the shader should be compiled. This also allows for multiple variations of shaders, for instance one with multi-sampling enabled and one with multi-sampling disabled.

Let’s have a look at a Hello Triangle shader for The Machinery.

imports: [
    { name: "color" type: "float3" }
]

vertex_shader: {
    import_system_semantics : [ "vertex_id" ]

    code: [[
        const float2 vertices[] = {
            float2(-0.7f, 0.7f),
            float2(0.7f, 0.7f),
            float2(0.0f, -0.7f)
        };

        output.position = float4(vertices[vertex_id], 0.0f, 1.0f);
        return output;
    ]]
}

pixel_shader: {
    code: [[
        output.color = load_color();
        return output;
    ]]
}

compile: {}

In this example we have some shader code, no explicit pipeline state, and an empty compile block. The first thing to note is that tmsl files use a JSON-like format. The main sections of code are the vertex_shader and the pixel_shader blocks. Within these are code blocks which specify the HLSL code that needs to run at the respective pipeline stage. In this example we create a screen-space triangle from three constant vertices and give it a color passed in as an input.

If we want to pass anything to a shader we need to define it in the imports block. Anything defined in here will be accessible through load_# or get_# functions. See the Shader System Reference for more information.

We also need to define a compile or system block in order for our shader to be compiled. If neither block is defined then the shader is assumed to be a library type shader which can be included into other shaders.

Note: You can find all built-in shaders in the folder: ./bin/data/shaders/ in the shipped engine (for source code access this is: ./the_machinery/shaders/).

Procedural shaders

Note that shaders don’t have to be written and compiled in this way. You can generate shaders directly from code using the tm_shader_repository_api. You can create a new shader declaration by calling create_shader_declaration(), populate it with your custom code by using the tm_shader_declaration_api, and compile it using create_from_declaration(). Any tmsl file will go through the same pipeline.

Note: Shader are also used to create GPU nodes for the Creation Graph, see Creation Graph: Shader System Interaction for more information.

The Machinery Shader Language Visual Studio Extension

This Visual Studio extension adds The Machinery's .tmsl language support:

  • Syntax highlighting
  • Snippets

Installation

Download the extension from the Visual Studio Marketplace and use it in Visual Studio.

VS Code will follow at some point.

Lighting

The Machinery’s default render pipeline provides a basic lighting stack for common lighting effects. All of these are handled through components and are capable of using the volume component to localize their effect to a spatial domain. This section will provide some clarity on how to best use these effects.

Currently we support the following lighting effects:

Ambient Occlusion

Ambient Occlusion is a global illumination effect that approximates the attenuation of light due to occlusion.

SSAO

The easiest way to get AO in your scene is by adding an SSAO component, which calculates ambient occlusion in screen space using the depth and normal buffers from the GBuffer rendering pass.

Property | Description
Radius | Defines the sample radius in world-space units. A larger radius needs a higher step count to stay accurate, but a higher step count can hurt performance. A larger radius also makes cutoffs at the edges of the screen more visible, because screen-space effects have no scene information outside the screen.
Power | Controls the strength of the darkening effect and increases the contrast.
Bias | Depth buffer precision in the distance and high-frequency details in normals can cause artifacts and noise. This parameter allows you to tweak that, but a higher value will reduce detail.
Step Count | The number of depth samples for each sample direction. This property has a direct correlation with performance. Keeping it in the 4-6 range gives an optimal performance/quality ratio.

(Comparison images: SSAO with radius 1 vs radius 5; power 1.5 vs 3.0; bias 0.0 vs 0.1.)

Technical Details

The SSAO implementation is based on the slides from Practical Real-Time Strategies for Accurate Indirect Occlusion presented at Siggraph 2016. It consists of 4 passes:

  • Half Screen Trace Pass: Calculates the horizon for a sample direction and the corresponding occlusion with 4x4 noise.
  • Half Screen Spatial Denoiser: Resolves that 4x4 noise with a bilateral 4x4 box blur.
  • Half Screen Temporal Denoiser: Temporally stabilizes the result from the spatial blur. Increases the effective sample count and reduces flickering.
  • Full Screen Bilateral Upscale: Depth-aware upscale pass that brings the denoised AO target to full resolution.

Post Processing

The Machinery’s default render pipeline provides a basic post processing stack for common effects. All of these are handled through components and are capable of using the volume component to localize their effect to a spatial domain. This section will provide some clarity on how to best use these effects.

Currently we support the following post-processing effects:

Adjusting Anti-Aliasing

Anti-Aliasing (AA) refers to various techniques for smoothing aliased, or jagged edges when dealing with a finite number of samples, like a monitor. The Machinery supports Temporal Anti-Aliasing (TAA) as a post-processing effect. Multisample Anti-Aliasing (MSAA) support is planned on a per image level, but is currently not available to all render backends.

No TAA (left) vs TAA (right)

TAA Settings

Adjusting exposure

In photography, exposure is the amount of light per unit area. The Machinery uses physically based lighting, materials, and cameras, so setting up the scene exposure correctly is important to getting the final image to look correct. To start using exposure, add an Exposure Component to your scene.

Tools

Exposure is generally measured in EV100 (Exposure Value at ISO 100) or IRE. Both can be visualized in The Machinery in the Render/Exposure/ menu.

EV100 Visualization

The EV100 visualizer shows the luminance per pixel relative to the camera’s exposure range. By default the camera has a range of [-1, 15], but in this scene it is [-1.2, 15]. In this view, higher values mean higher luminance. Note that this is an absolute scale, values in the higher and lower ranges might be clipped after exposure is applied.

IRE Visualization

The IRE visualizer (false color) shows the luminance per pixel after exposure. This scale is relative to the dynamic range of sRGB. The view works by splitting the luminance range into bands (where red is fully clipped at the top range). This view is useful when exposing a specific element in your scene. The [43, 47] (green) band is the typical range for middle grey and the [77, 84] band is a typical Caucasian skin tone.

The Machinery offers three workflows for metering exposure:

  • Manual: this mode is easy to use for static scenes where you want to focus on a specific luminance range.
  • Camera Driven: this mode is best used if you prefer to recreate a real world camera. This mode requires some understanding of real world camera properties.
  • Automatic: this mode (also known as eye adaptation) is best used on characters or other moving cameras in your scene.

Using Manual Exposure

Manual exposure just has one setting to change, Exposure Compensation. This value is added to the target exposure (zero for manual mode). Therefore this value should be increased if the luminance of the scene is increased as well (higher compensation corresponds to a darker scene).

Using Camera Driven Exposure

Camera Driven Exposure has no settings of its own. Instead, the Shutter Speed, Aperture, and ISO settings are taken from the viewing camera. This mode is best used if you have a good understanding of camera properties, but in general: lowering shutter speed darkens the scene, increasing aperture (f-number) darkens the scene, and lowering ISO darkens the scene.
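A common photographic formulation of this relation (whether the engine uses exactly this form is an assumption) is: target EV100 = log2(aperture^2 / shutter speed) - log2(ISO / 100). For example, f/8 at 1/125 s and ISO 100 gives log2(64 * 125), roughly 13 EV100; a higher target EV100 means less exposure is applied, so the image gets darker.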

Using Automatic Exposure

Automatic Exposure uses a histogram to calculate the average exposure value of the scene and exposes it accordingly. The histogram can be visualized in the Render/Exposure/ menu. The settings for this mode are:

  • Min/Max EV100: these settings define the acceptable range of the automatic exposure, any value outside of this range is clamped to the outer buckets.
  • Exposure Compensation: this allows you to offset the target exposure by a specific amount. Unlike in manual mode, this setting is linear.
  • Speed Up/Down: these settings allow you to alter the speed at which the automatic exposure interpolates to the target value. By default it will expose faster to a change upwards rather than downwards.
  • Mask Mode: this allows you to weigh the scene samples based on a mask. Currently, the only supported mask is Favor Center which weighs the samples at the center of the screen higher than the edges.

General workflow

Let’s use the tools to manually expose this scene. Note that there is no right way to apply exposure to a scene. The example shown below is meant to be a bit dark, so making it brighter might go against the artistic direction.

In the false color visualization we can see that there are many areas where the scene is too dark to distinguish the colors. We can also see a little bit of bright clipping on the top side of the rock. The background sky on the other hand is pretty well exposed. Let’s try to brighten up the scene to focus more on the foreground rather than the background. I’ve done this by decreasing the exposure compensation from -0.4 to -1.3.

You can see that the scene feels a lot brighter (and somewhat warmer) than before. The foliage has noticeably more detail and the highlight on the rock is more pronounced. The sky in the background is now clipping a lot, but it is not as noticeable.

Localized Exposure

Often, you want to have exposure settings localized to a specific region in your scene. This is done using the Volume Component. Once the camera enters a region defined by the Volume Component it will use the highest order Exposure Component it can find. In this example it would use the Exposure Component on the same entity as the Volume Component if the camera is inside the volume and the global exposure component if it’s outside the volume. For more information see the Volume Component documentation.

Bloom

Bloom (or glow) is an effect of real-world lenses that produces fringes of light extending from the borders of bright areas in the scene. It is produced by diffraction patterns of light sources through a lens aperture. This particularly affects light sources and emissive materials. In The Machinery this is implemented as multiple Gaussian blurs that only affect the bright areas of the scene.

Bloom On (left) and Off (right)

Property | Description
Threshold | The luminance threshold for bloom to start considering a sample. This is roughly measured in Lux.
Falloff | Defines the size of the bloom fringes. More falloff means larger fringes.
Tint | A chromatic tint mask that will be applied to the bloom effect. Setting this to black will render the bloom effect useless without any performance benefit.

Color Grading

The Machinery supports industry standard methods for color grading HDR colors. Currently, there is no support for custom tone mappers.

Controls

The Color Grading component allows you to use either Lift/Gamma/Gain controls or ASC-CDL controls. Regardless of the method used, the shader will apply a single ASC-CDL transform per channel. The resulting transform is visualized using the graph. Color grading is applied just before the tone mapper in ACEScg space.

The Color Scopes Tab can be used to visualize the color spread in your scene. This might aid in color grading.

Extending The Machinery

In The Machinery, everything is a plugin. You can extend, modify or replace existing engine functionality with your plugins.

The Engine explicitly aims to be simple, minimalistic, and easy to understand. All our code is written in plain C, a significantly more straightforward language than modern C++. The entire codebase compiles in less than 30 seconds, and we support hot-reloading of DLLs, allowing for fast iteration cycles. You can modify your plugin code while the editor or the game runs since the plugin system supports hot-reloading. In short, we want to be "hackable." Our APIs are exposed as C interfaces, which means you can easily use them from C, C++, D, or any other language with an FFI for calling into C code.

Guides to follow:

The plugin system

The Machinery is built around a plugin model. All features, even the built-in ones, are provided through plugins. You can extend The Machinery by writing your own plugins.

When The Machinery launches, it loads all the plugins named tm_*.dll in its plugins/ folder. If you write your own plugins, name them so that they start with tm_ and put them in this folder, they will be loaded together with the built-in plugins.

Note: When you create a new plugin via the Engine, the premake file will not copy the plugin into your global plugin folder. The reason behind this is that we do not know if you want to create a plugin asset.

Table of Content

What are the types of plugins?

In The Machinery you can have two types of plugins: Engine Plugins and Plugin Assets.

| | Engine Plugins | Plugin Assets |
| --- | --- | --- |
| Storage | Stored in SDK_DIR/plugins | In your project via drag & drop or imported |
| Availability | All projects | Only the project they were imported into |
| Hot-Reload Support? | Yes | Yes |
| Collaboration Support | No, unless the client also loads them. They are not synchronized. | Yes, they are automatically synchronized since they are part of the project. |

What are plugins?

In The Machinery, a plugin is a shared library (.dll or .so) that contains the plugin entry function TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load). This is the only function that needs to be present; without it, the Engine cannot load your plugin. Every plugin is a collection of APIs or interfaces. Together they form a plugin and give the Engine its extensibility and flexibility.

About APIs

The Machinery is organized into individual APIs that can be called to perform specific tasks. A plugin is a DLL that exposes one or several of these APIs. In order to implement its functionality, the plugin may in turn rely on APIs exposed by other plugins.

A central object called the API registry is used to keep track of all the APIs. When you want to use an API from another plugin, you ask the API registry for it. Similarly, you expose your APIs to the world by registering them with the API registry.

This may seem a bit abstract at this point, so let’s look at a concrete example, unicode.h which exposes an API for encoding and decoding Unicode strings:

{{$include {TM_SDK_DIR}/foundation/unicode.h:0:97}}

Let’s go through this.

First, the code includes <api_types.h>. This is a shared header with common type declarations; it includes things like <stdbool.h> and <stdint.h> and also defines a few Machinery-specific types, such as tm_vec3_t.

In The Machinery we have a rule that header files can't include other header files (except for <api_types.h>). This helps keep compile times down, but it also simplifies the structure of the code. When you read a header file you don’t have to follow a long chain of other header files to understand what is happening.

Next follows a block of forward struct declarations (in this case only one).

Next, we have the name of this API defined as a constant tm_unicode_api, followed by the struct tm_unicode_api that defines the functions in the API.

To use this API, you would first use the API registry to query for the API pointer, then using that pointer, call the functions of the API:

static struct tm_unicode_api *tm_unicode_api;
#include <foundation/api_registry.h>
#include <foundation/unicode.h>

static void demo_usage(char *utf8, uint32_t codepoint)
{
    tm_unicode_api->utf8_encode(utf8, codepoint);
    //more code...
}

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_unicode_api = tm_get_api(reg, tm_unicode_api);
}

The different APIs that you can query for and use are documented in their respective header files, and in the apidoc.md.html documentation file (which is just extracted from the headers). Consult these files for information on how to use the various APIs that are available in The Machinery.

In addition to APIs defined in header files, The Machinery also contains some header files with inline functions that you can include directly into your implementation files. For example <math.inl> provides common mathematical operations on vectors and matrices, while <carray.inl> provides a “stretchy-buffer” implementation (i.e. a C version of C++’s std::vector).
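As a rough illustration of how such a stretchy buffer is used, here is a minimal sketch. The macro names tm_carray_push, tm_carray_size and tm_carray_free follow the header's naming convention, but consult <carray.inl> for the exact API:

#include <foundation/allocator.h>
#include <foundation/api_types.h>
#include <foundation/carray.inl>

static void carray_sketch(struct tm_allocator_i *a)
{
    float *values = 0; // a carray is just a plain pointer, starting out empty

    // Push ten values -- the array grows automatically using the allocator.
    for (uint32_t i = 0; i != 10; ++i)
        tm_carray_push(values, (float)i, a);

    // Number of elements pushed so far.
    const uint64_t n = tm_carray_size(values);
    (void)n;

    // Release the memory when done.
    tm_carray_free(values, a);
}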

About Interfaces

The API registry has support for both APIs and interfaces. The difference is that APIs only have a single implementation, whereas interfaces can have many implementations. For example, all code that can be unit-tested implements the unit test interface (we will add such an implementation to the registry in the example further down). Unit test programs can query the API registry to find all these implementations and run all the unit tests.

To extend the editor you add implementations to the interfaces used by the editor. For example, you can add implementations of the tm_the_truth_create_types_i in order to create new data types in The Truth, and add implementations of the tm_entity_create_component_i in order to define new entity components. See the sample plugin examples.
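As a sketch of what registering such an interface implementation can look like (assuming, as in the sample plugins, that tm_the_truth_create_types_i is a function interface taking a single tm_the_truth_o * argument -- check the_truth.h for the exact definition):

#include <foundation/api_registry.h>
#include <foundation/the_truth.h>

// Called by the engine so we can create our own data types in The Truth.
static void create_truth_types(struct tm_the_truth_o *tt)
{
    // Create and configure custom Truth types here.
    (void)tt;
}

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i, create_truth_types);
}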

It does not matter in which order the plugins are loaded. If you query for a plugin that hasn’t yet been registered, you get a pointer to a nulled struct back. When the plugin is loaded, that struct is filled in with the actual function pointers. As long as you don’t call the functions before the plugin that implements them has been loaded, you are good. (You can test this by checking for NULL pointers in the struct.)
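For example, the demo_usage() function from the Unicode example above could guard its call like this (just a defensive sketch):

static void demo_usage_guarded(char *utf8, uint32_t codepoint)
{
    // If the plugin implementing tm_unicode_api hasn't been loaded yet, the
    // struct returned by tm_get_api() is still zeroed, so check before calling.
    if (tm_unicode_api && tm_unicode_api->utf8_encode)
        tm_unicode_api->utf8_encode(utf8, codepoint);
}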

My first plugin

This walkthrough shows you how to create your first plugin.

  • How to create a plugin from scratch?
  • What are the parts a plugin contains?
  • Writing a basic API.
  • Writing a basic Interface.
  • Making use of Plugin Callbacks.
  • How to build a plugin?
  • What is the difference between an Engine Plugin and a Plugin Asset?

Table of Content

Programming in C

The Machinery uses C99 as its interface language. I.e., all the header files that you use to communicate with The Machinery are C99 header files, and when you write a plugin, you should expose C99 headers for your APIs. The implementation of a plugin can be written in whichever language you like, as long as it exposes and communicates through a C99 header. In particular, you can write the implementation in C++ if you want to. (At Our Machinery, we write the implementations in C99.)

Note: Not used to C? No problem, we have a collection of extra resources for working with C: Introduction to C

Basic Plugin Template

The Engine provides an easy way to create plugins via the File -> New Plugin menu. There you can choose from the default plugin templates. Let us choose the minimal plugin.

The Tab template looks very similar; the only difference is that the minimal template leaves out the custom_tab code.

The generated folder for the minimal template is called minimal and comes with the following default files:

  • premake5.lua - Your build configuration; on Windows it will generate a .sln file for you.
  • libs.json - Defines the binary dependencies of your projects. tmbuild will automatically download them for you.
  • *.c - Your source file. It contains the sample template code to guide you on what is needed.
  • build.bat / build.sh - quick build files to make building simpler for you.

What do we have?

Premake5

We are using Premake as our meta-build system generator at Our Machinery. This file defines our plugin's binary dependencies and build options for all the Machinery platforms. Premake generates the actual build scripts that we then build with tmbuild, our one-click build tool. More on tmbuild here.

We recommend using a single premake file that manages all your plugins. Having one main premake file avoids having to go into each project folder to build it. As recommended in the chapter Project Setup: Possible folder structure for a project, we also recommend separating your plugins into subfolders.

Note: The current plugin templates always create all metafiles directly for you, but you can just adjust the main premake file and delete the other ones. This workflow is under review.

In the Book Chapter Premake you can find more in-depth information about the premake file.

Libs.json

This file tells tmbuild what kind of binary dependencies you have and what versions you need. tmbuild will automatically download them for you. For more information on the libs.json file, read its chapter.

The Build Scripts

Every plugin that is generated with the Engine currently comes with a build.bat or build.sh. They are there to help you with your workflow. When you execute one of them for the first time (double-click on it), the script will ask you whether you want your plugin to be copied into the plugins folder or whether you want a plugin asset. If you decide to create a plugin asset, you need to import the shared library once into your project and then use the Import Change option. More info on Plugin Assets here.

What is the difference between a Plugin Asset and an Engine Plugin?

The significant difference is that a plugin asset is an imported shared library that lives within your project as a binary data blob. This means it is only available within this project and not within other projects. On the other hand, an Engine plugin lives in the engine's plugin folder and is available in all projects. More on the difference here.

Source Code

Your actual plugin code lives in the plugin's source files, accompanied by header files that allow the outside world to make use of the plugin. Every plugin has one entry point: the source file that contains the tm_load_plugin function.

Entry Point

The tm_load_plugin() function is our entry point. In this function, we get access to the API registry. All APIs and interfaces live within this registry.

The difference is that APIs only have a single implementation, whereas interfaces can have many implementations. For more information: Check the Plugin System Chapter.

Here we register everything we need to register with the Engine's plugin system. You must not execute heavy code in this function or rely on other plugins, since they might not be loaded yet! This function is just there to perform load and register operations.

It is not recommended to use this function to initialize or deinitialize data. For such things, we recommend using the init and shutdown callbacks, since they are guaranteed to be called only when an actual initialization or shutdown happens. This is in contrast to tm_load_plugin(), which is also called on reload.

More about hot reload here: Hot-Reloading

Plugin callbacks (Init, Shutdown, Tick)

The plugin system also provides for plugin call-backs. It is recommended to rely on these calls as little as possible. You should not rely on those for your gameplay code!

tm_plugin_init_i - This is typically called as early as possible after all plugins have been loaded.

Note: It is not called when a plugin is reloaded.

tm_plugin_shutdown_i - This is typically called as early as possible during the application shutdown sequence.

Note: It is not called when a plugin is reloaded.

tm_plugin_tick_i - This is typically called as early as possible in the application main loop “tick”.

They are stored in the foundation/plugin_callbacks.h.
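A minimal sketch of hooking into these callbacks might look like the following. (The exact layout of the interface structs -- an inst pointer plus a callback function -- is an assumption here; check foundation/plugin_callbacks.h for the real definitions.)

#include <foundation/api_registry.h>
#include <foundation/plugin_callbacks.h>

// Called once after all plugins have been loaded, but not on reload.
static void my_init(struct tm_plugin_o *inst, struct tm_allocator_i *allocator)
{
    (void)inst;
    (void)allocator;
}

// Called every frame of the application main loop -- not meant for gameplay code.
static void my_tick(struct tm_plugin_o *inst, float dt)
{
    (void)inst;
    (void)dt;
}

static struct tm_plugin_init_i *my_init_i = &(struct tm_plugin_init_i){.init = my_init};
static struct tm_plugin_tick_i *my_tick_i = &(struct tm_plugin_tick_i){.tick = my_tick};

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_add_or_remove_implementation(reg, load, tm_plugin_init_i, my_init_i);
    tm_add_or_remove_implementation(reg, load, tm_plugin_tick_i, my_tick_i);
}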

Our very first API - The Command API

This API shall allow us to register commands in any plugin and execute them later if needed.

In our first API, we want a function that creates an API context, which we need to initialize at the start and deinitialize at the end. Moreover, we want to create an interface that we can use to register commands that can then be executed via the Command API. Since requesting all commands every time might be slow, we want to cache them at plugin load and unload time, as well as on reload.

Note: This might not be the best design choice, e.g., thread safety, but this works for demonstration purposes. PS: Treat this like you would treat slide code.

Write your own API

Let us extend the current minimal plugin and add an API. APIs are only useful if they can be used from the outside. Therefore a header file is needed.

my_plugin.h:

#include "foundation/api_types.h"

struct my_api
{
    void (*foo)(void);
};

#define my_api_version TM_VERSION(1, 0, 0)

my_plugin.c:

static struct tm_api_registry_api *tm_global_api_registry;
static struct tm_error_api *tm_error_api;
static struct tm_logger_api *tm_logger_api;

#include "my_api.h"

#include "foundation/api_registry.h"
#include "foundation/error.h"
#include "foundation/log.h"
#include "foundation/unit_test.h"

static void foo(void)
{
    // ...
}

static struct my_api *my_api = &(struct my_api){
    .foo = foo,
};

static void my_unit_test_function(tm_unit_test_runner_i *tr, struct tm_allocator_i *a)
{
    // ...
}

static struct tm_unit_test_i *my_unit_test = &(struct tm_unit_test_i){
    .name = "my_api",
    .test = my_unit_test_function,
};

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_global_api_registry = reg;

    tm_error_api = tm_get_api(reg, tm_error_api);
    tm_logger_api = tm_get_api(reg, tm_logger_api);

    tm_set_or_remove_api(reg, load, my_api, my_api);

    tm_add_or_remove_implementation(reg, load, tm_unit_test_i, my_unit_test);
}

When The Machinery loads a plugin DLL, it looks for the tm_load_plugin() function and calls it. If it can't find the function, it prints an error message. We store the API registry pointer in a static variable so that we can use it everywhere in our DLL. We also tm_get_api() some of the API pointers that we will use frequently and store them in static variables so that we don’t have to use the registry to query for them every time we want to use them. Finally, we add our own API to the registry, so others can query for and use it.

Basic Steps towards the Command API

To be added...

Sample Plugins

The easiest way to build a plugin is to start with an existing example. There are three places where you can find plugin samples:

  1. The samples folder in the SDK has a number of plugin samples.

  2. The All Sample Projects package in the Download tab has a plugins folder with some small samples. You can also find their source code here: https://github.com/OurMachinery/sample-projects

  3. You can create a new plugin with the menu command File > New Plugin. This will create a new .c file for the plugin together with some helper files for compiling it. (Follow this guide)

The distribution already comes with pre-built .dlls for the sample plugins, such as bin/plugins/tm_pong_tab.dll. You can see this plugin in action by selecting Tab > Pong in the editor to open up its tab:

Pong tab.

Table of Content

What are the build Requirements

To build plugins you need three things:

  1. You need to have Visual Studio 2019 installed including the MS C++ Build Tools on your computer. Note that the Community Edition works fine. (Or clang and the build essentials on Linux)
  2. You need to set the TM_SDK_DIR environment variable to the path of the SDK package that you installed on your computer. When you compile a plugin, it looks for The Machinery headers in the %TM_SDK_DIR%/headers folder.
  3. You need the tmbuild.exe from the SDK package. tmbuild.exe does all the steps needed to compile the plugin. Put it in your PATH or copy it to your plugin folder so that you can run it easily from the command line.

Build the sample plugin

To compile a plugin, simply open a command prompt in the plugin folder and run the tmbuild.exe executable:

sample-projects/plugins/custom_tab> %TM_SDK_DIR%/bin/tmbuild.exe
​~~~ cmd output
Installing 7za.exe...
Installing premake-5.0.0-alpha14-windows...
Building configurations...
Running action 'vs2019'...
Generated custom_tab.sln...
Generated build/custom_tab/custom_tab.vcxproj...
Done (133ms).
Microsoft (R) Build Engine version 16.4.0+e901037fe for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.

  custom_tab.c
  custom_tab.vcxproj -> C:\work\themachinery\build\bin\plugins\tm_custom_tab.dll

-----------------------------
tmbuild completed in: 23.471 s

tmbuild.exe will perform the following steps to build your plugin:

  1. Create a libs folder and download premake5 into it. (You can set the TM_LIB_DIR environment variable to use a shared libs directory for all your projects.)
  2. Run premake5 to create a Visual Studio project from the premake5.lua script.
  3. Build the Visual Studio project to build the plugin.

Note: You can learn more about tmbuild in its own section.

Sample Plugins:

| Project | Description |
| --- | --- |
| samples\plugins\assimp | Shows how to write a complex plugin. This is the default assimp importer plugin of the engine. |
| samples\plugins\atmospheric_sky | Shows how to interact with the ECS and the renderer. |
| samples\plugins\default_render_pipe | The source code of our default render pipeline. This can help you in case you want to learn more about the render pipeline. |
| samples\plugins\gltf | The source code of our glTF importer. |
| samples\plugins\graph_nodes | Shows how to implement graph nodes. |
| samples\plugins\pong_tab | Shows how to implement a more complex tab. |
| samples\plugins\spin_component | Shows how to implement a component and an engine. |
| samples\plugins\ui_sample_tab | A great playground for our IMGUI UI System. |

Write a Tab

This walkthrough shows you how to add a custom Tab to the Engine.

During this walkthrough, we will cover the following topics:

  • How to create a tab from scratch.
  • Where and how do we register the Tab to the Engine.

You should have basic knowledge about how to write a custom plugin. If not, you might want to check this Guide and the Write a plugin guide. The goal of this walkthrough is to dissect the Tab plugin provided by the Engine.

Table of Content

Where do we start?

In this example, we want to create a new plugin which contains our Tab. We open the Engine and go to File -> New Plugin -> Editor Tab. The file dialog will pop up and ask us where we want to save our file. Pick a location that suits you.

Tip: Maybe store your plugin in a folder next to your game project.

After this, we see that the Engine created some files for us.

folder structure new plugin

Now we need to ensure that we can build our project. In the root folder (the folder with the premake file), we run tmbuild and check that there are no issues. This builds the project once and generates the .sln file (on Windows).

If there is an issue, we should ensure we have set up the Environment variables correctly and installed all the needed dependencies. For more information, please read this guide.

Now we can open the .c file with our favorite IDE. The file will contain the following content:

static struct tm_api_registry_api *tm_global_api_registry;

static struct tm_draw2d_api *tm_draw2d_api;
static struct tm_ui_api *tm_ui_api;
static struct tm_allocator_api *tm_allocator_api;

#include <foundation/allocator.h>
#include <foundation/api_registry.h>

#include <plugins/ui/docking.h>
#include <plugins/ui/draw2d.h>
#include <plugins/ui/ui.h>
#include <plugins/ui/ui_custom.h>

#include <the_machinery/the_machinery_tab.h>

#include <stdio.h>
#define TM_CUSTOM_TAB_VT_NAME "tm_custom_tab"
#define TM_CUSTOM_TAB_VT_NAME_HASH TM_STATIC_HASH("tm_custom_tab", 0xbc4e3e47fbf1cdc1ULL)
struct tm_tab_o
{
    tm_tab_i tm_tab_i;
    tm_allocator_i allocator;
};
static void tab__ui(tm_tab_o *tab, tm_ui_o *ui, const tm_ui_style_t *uistyle_in, tm_rect_t rect)
{
    tm_ui_buffers_t uib = tm_ui_api->buffers(ui);
    tm_ui_style_t *uistyle = (tm_ui_style_t[]){*uistyle_in};
    tm_draw2d_style_t *style = &(tm_draw2d_style_t){0};
    tm_ui_api->to_draw_style(ui, style, uistyle);
    style->color = (tm_color_srgb_t){.a = 255, .r = 255};
    tm_draw2d_api->fill_rect(uib.vbuffer, *uib.ibuffers, style, rect);
}
static const char *tab__create_menu_name(void)
{
    return "Custom Tab";
}

static const char *tab__title(tm_tab_o *tab, struct tm_ui_o *ui)
{
    return "Custom Tab";
}
static tm_tab_vt *custom_tab_vt;

static tm_tab_i *tab__create(tm_tab_create_context_t *context, tm_ui_o *ui)
{
    tm_allocator_i allocator = tm_allocator_api->create_child(context->allocator, "Custom Tab");
    uint64_t *id = context->id;
    tm_tab_o *tab = tm_alloc(&allocator, sizeof(tm_tab_o));
    *tab = (tm_tab_o){
        .tm_tab_i = {
            .vt = custom_tab_vt,
            .inst = (tm_tab_o *)tab,
            .root_id = *id,
        },
        .allocator = allocator,
    };
    *id += 1000000;
    return &tab->tm_tab_i;
}
static void tab__destroy(tm_tab_o *tab)
{
    tm_allocator_i a = tab->allocator;
    tm_free(&a, tab, sizeof(*tab));
    tm_allocator_api->destroy_child(&a);
}
static tm_tab_vt *custom_tab_vt = &(tm_tab_vt){
    .name = TM_CUSTOM_TAB_VT_NAME,
    .name_hash = TM_CUSTOM_TAB_VT_NAME_HASH,
    .create_menu_name = tab__create_menu_name,
    .create = tab__create,
    .destroy = tab__destroy,
    .title = tab__title,
    .ui = tab__ui};
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_global_api_registry = reg;

    tm_draw2d_api = tm_get_api(reg, tm_draw2d_api);
    tm_ui_api = tm_get_api(reg, tm_ui_api);
    tm_allocator_api = tm_get_api(reg, tm_allocator_api);

    tm_add_or_remove_implementation(reg, load, tm_tab_vt, custom_tab_vt);
}

Code structure

Let us dissect the code structure and discuss all the points of interest.

API and include region

The file begins with all includes and API definitions:

static struct tm_api_registry_api *tm_global_api_registry;

static struct tm_draw2d_api *tm_draw2d_api;
static struct tm_ui_api *tm_ui_api;
static struct tm_allocator_api *tm_allocator_api;

#include <foundation/allocator.h>
#include <foundation/api_registry.h>

#include <plugins/ui/docking.h>
#include <plugins/ui/draw2d.h>
#include <plugins/ui/ui.h>
#include <plugins/ui/ui_custom.h>

#include <the_machinery/the_machinery_tab.h>

#include <stdio.h>
#define TM_CUSTOM_TAB_VT_NAME "tm_custom_tab"
#define TM_CUSTOM_TAB_VT_NAME_HASH                                             \
  TM_STATIC_HASH("tm_custom_tab", 0xbc4e3e47fbf1cdc1ULL)

The API pointers declared at the top are filled in later, in the tm_load_plugin() function.

The most important parts here are the two defines at the bottom:

#define TM_CUSTOM_TAB_VT_NAME "tm_custom_tab"
#define TM_CUSTOM_TAB_VT_NAME_HASH                                             \
  TM_STATIC_HASH("tm_custom_tab", 0xbc4e3e47fbf1cdc1ULL)

The first one defines the name of our Tab and the second one represents its hash value. The hash value can be used later to look up the Tab via the tm_docking_api.

Note: If you modify these values, please make sure you run hash.exe again (or tmbuild --gen-hash) so that the hash value is updated!

Define your Data

In the next section, we define the data the Tab can hold. This can be any data the Tab needs to do its job. The tab instance owns the data; it is not shared between tab instances. Therefore its lifetime is bound to the current instance.

struct tm_tab_o {
  tm_tab_i tm_tab_i;
  tm_allocator_i allocator;
};

A tm_tab_i represents a tab object. A tab object is represented as a vtable that defines its function interface and an opaque pointer to the Tab's internal data. This design is used so that the application layer can extend the vtable with its own interface.

Define the actual Tab

Every Tab in The Machinery is based on the tm_tab_vt and is registered as an implementation of it in the tm_load_plugin() function.

The default tm_tab_vt offers multiple options and settings we can set for our Tab.

| Name | Description |
| --- | --- |
| tm_tab_vt.name | Name uniquely identifying this tab type. |
| tm_tab_vt.name_hash | A hash of the name. |
| tm_tab_vt.create_menu_name() | Optional. Returns the (localized) name that should be shown for this tab type in menus that allow you to create new tabs. If this function returns NULL, the tab type won't appear in these menus. This can be used for tabs that should only be accessible when certain feature flags are set. |
| tm_tab_vt.create_menu_category() | Optional. Returns the (localized) category that should be shown for this tab type in menus that allow you to create new tabs. If this function returns NULL or is not set, the tab type will appear at the root level of the menu, uncategorized. |
| tm_tab_vt.create() | Creates a new tab of this type and returns a pointer to it. tm_tab_create_context_t is an application defined type containing all the data a tab needs in order to be created. ui is the UI that the tab will be created in. |
| tm_tab_vt.destroy() | Destroys the tab. |
| **Object methods** | |
| tm_tab_vt.ui() | Callback for drawing the content of the tab into the specified rect. The uistyle is the tm_ui_api.default_style() with the clipping rect set to rect. |
| tm_tab_vt.ui_serial() | Optional. If implemented, called from the main UI job once all parallel UI rendering (fork/join) has finished. This can be used for parts of the UI that need to run serially, for example because they call out to non-thread-safe functions. |
| tm_tab_vt.hidden_update() | Optional. If the Tab wants to do some processing when it is not the selected Tab in its tabwell, it can implement this callback. It will be called for all created tabs whose content is currently not visible. |
| tm_tab_vt.title() | Returns the localized title to be displayed for the tab. This typically consists of the name of the tab together with the document that is being edited, such as "Scene: Kitchen*". |
| tm_tab_vt.set_root() | Optional. Sets the root object of the tab. If a new Truth is loaded, this is called with set_root(inst, new_tt, 0). |
| tm_tab_vt.root() | Returns the root object and The Truth that is being edited in the tab. This is used, among other things, to determine the undo queue that should be used for Undo/Redo operations when the tab has focus. |
| tm_tab_vt.restore_settings() | Optional. Allows the tab to restore its own state from the settings. For example, the Asset Browser will use this to restore the view size of the assets. |
| tm_tab_vt.save_settings() | Optional. Allows the tab to save its own state to the settings. For example, the Asset Browser will use this to save the view size of the assets. |
| tm_tab_vt.can_close() | Optional. Returns true if the tab can be closed right now and false otherwise. A tab might not be able to close if it's in the middle of an important operation. Tabs that do not implement this method can be closed at any time. |
| tm_tab_vt.focus_event() | See the documentation. |
| tm_tab_vt.feed_events() | Optional. For feeding events to the tab. Useful for feeding events to UIs that are internal to a tab. |
| tm_tab_vt.process_dropped_os_files() | Optional. If set, the tab will receive the path to the files that were dropped from the OS since the previous frame. |
| tm_tab_vt.toolbars() | Optional. Returns a carray of toolbars to be drawn in the tab, allocated using ta. How to add toolbars |
| tm_tab_vt.need_update() | Optional. Allows the tab to decide whether its UI needs an update. Tabs that have animated components, like the pong tab, will always return true, while other tabs may decide to return true only under certain circumstances. If not provided, the assumed default value is true, so the tab will be updated every frame. If it returns false, the UI will be cached and .ui() will not be called. |
| tm_tab_vt.hot_reload() | Optional. Will be called after any code hot reload has happened. |
| tm_tab_vt.entity_context() | Optional. Should be implemented if the tab owns an entity context. |
| tm_tab_vt.viewer_render_args() | Optional. Should be implemented if the tab owns an entity context that supports being rendered outside of its UI callbacks. |
| **Flags** | |
| tm_tab_vt.cant_be_pinned | If set to true, the tab can't be pinned even though it has a root function. |
| tm_tab_vt.run_as_job | If set to true, the tab's UI will run as a background job, parallel to the rest of the UI rendering. Warning: Setting this to true indicates to the docking system that the ui() function is thread-safe. If the function is not actually thread-safe you will see threading errors. |
| tm_tab_vt.dont_restore_at_startup | If set to true, the tab will be considered volatile, and it won't be restored when the last opened project is automatically opened at startup, even if the user had the tab opened when the project was closed. |
| tm_tab_vt.dont_restore_root_asset_at_startup | If set to true, the tab will be restored at startup, but the root of the tab won't be set to the one that was set during application shutdown. Basically, the tab will be restored, but it will always be empty. |

In this example, we make use of the following options:

static tm_tab_vt *custom_tab_vt =
    &(tm_tab_vt){.name = TM_CUSTOM_TAB_VT_NAME,
                 .name_hash = TM_CUSTOM_TAB_VT_NAME_HASH,
                 .create_menu_name = tab__create_menu_name,
                 .create = tab__create,
                 .destroy = tab__destroy,
                 .title = tab__title,
                 .ui = tab__ui};

In the course of the rest of this walkthrough, we will discuss: tab__create_menu_name, tab__create, tab__destroy, tab__title and tab__ui.

Define the metadata functions

As we can see in our definition of the custom_tab_vt object, we provide the tm_tab_vt.create_menu_name() and the tm_tab_vt.title() functions. The create_menu_name is an optional function that allows you to provide a name for the create-tab menu. In contrast, the title() function is required: it provides the name of the Tab that the editor shows in the tab bar.

static const char *tab__create_menu_name(void) { return "Custom Tab"; }

static const char *tab__title(tm_tab_o *tab, struct tm_ui_o *ui) {
  return "Custom Tab";
}

Define create and destroy the Tab

As mentioned before, the data of a tab is bound to its lifetime. Therefore you should create the data on create and let go of it on destroy.

The create function provides you with a tm_tab_create_context_t, which gives access to many essential things, such as an allocator. You should either use this allocator directly or create a child allocator from it.

Note: for more information check tm_tab_create_context_t's documentation.

static tm_tab_vt *custom_tab_vt;

static tm_tab_i *tab__create(tm_tab_create_context_t *context, tm_ui_o *ui) {
  tm_allocator_i allocator =
      tm_allocator_api->create_child(context->allocator, "Custom Tab");
  uint64_t *id = context->id;
  tm_tab_o *tab = tm_alloc(&allocator, sizeof(tm_tab_o));
  *tab = (tm_tab_o){
      .tm_tab_i =
          {
              .vt = custom_tab_vt,
              .inst = (tm_tab_o *)tab,
              .root_id = *id,
          },
      .allocator = allocator,
  };
  *id += 1000000;
  return &tab->tm_tab_i;
}

We use the provided allocator to allocate the Tab struct, and then we initialize it with the data we deem to be needed.

tm_tab_o *tab = tm_alloc(&allocator, sizeof(tm_tab_o));
*tab = (tm_tab_o){
    .tm_tab_i =
        {
            .vt = custom_tab_vt,
            .inst = (tm_tab_o *)tab,
            .root_id = *id,
        },
    .allocator = allocator,
};

Since we have allocated something, we need to keep track of the used allocator! Hence we have it as a member in our Tab struct.

In the end, we return a pointer to the Tab interface.

 return &tab->tm_tab_i;

When it comes to freeing the Tab data, we can just call tm_free() on our Tab:

static void tab__destroy(tm_tab_o *tab) {
  tm_allocator_i a = tab->allocator;
  tm_free(&a, tab, sizeof(*tab));
  tm_allocator_api->destroy_child(&a);
}

Define the UI update

In the default example, we create a Tab that only updates when the Tab is active and visible. Therefore we do not need the tm_tab_vt.hidden_update() function and can just implement the required one: tm_tab_vt.ui().

The Tab itself will not be jobified, since run_as_job is not set (its default value is false). Therefore our ui() function may contain non-thread-safe code. If we wanted the Tab's UI to run as a background job, we would set run_as_job to true instead.

If the Tab wants to do some processing when it is not the selected Tab in its tabwell, it can also implement the optional tm_tab_vt.hidden_update() callback. It will be called for all created tabs whose content is currently not visible.

Let us digest the current code line by line:

static void tab__ui(tm_tab_o *tab, tm_ui_o *ui, const tm_ui_style_t *uistyle_in,
                    tm_rect_t rect) {
  tm_ui_buffers_t uib = tm_ui_api->buffers(ui);
  tm_ui_style_t *uistyle = (tm_ui_style_t[]){*uistyle_in};
  tm_draw2d_style_t *style = &(tm_draw2d_style_t){0};
  tm_ui_api->to_draw_style(ui, style, uistyle);
  style->color = (tm_color_srgb_t){.a = 255, .r = 255};
  tm_draw2d_api->fill_rect(uib.vbuffer, *uib.ibuffers, style, rect);
}

The tm_docking_api, which will call our Tab's update, provides us with the essential information:

  • tm_tab_o* tab our tab data to access any data we need
  • tm_ui_o* ui an instance of the UI, needed to call the tm_ui_api
  • const tm_ui_style_t* uistyle_in an instance of the current UI style, can be used to create a local version of it to modify the UI Style for this Tab.
  • tm_rect_t rect the render surface of the Tab.

In the first line of the function body, we get a new instance of the UI buffers. You may use them to access the underlying buffers for calls to the tm_draw2d_api. This object also gives access to the commonly shared metrics and colors.

tm_ui_buffers_t uib = tm_ui_api->buffers(ui);

After this, we define our local copy of the UI style and create an empty tm_draw2d_style_t instance. We then need to create a draw style from the UI style, because the tm_draw2d_style_t *style is what you need later for drawing anything with the draw2d API.

tm_ui_buffers_t uib = tm_ui_api->buffers(ui);
tm_ui_style_t *uistyle = (tm_ui_style_t[]){*uistyle_in};
tm_draw2d_style_t *style = &(tm_draw2d_style_t){0};
tm_ui_api->to_draw_style(ui, style, uistyle);

Now we are all set and can finally color our tab background red using the tm_draw2d_api.fill_rect() call. Before calling it, we change our style's color to red. We pass in the vertex buffer and the index buffer pointer so the function can draw into them.

style->color = (tm_color_srgb_t){.a = 255, .r = 255};
tm_draw2d_api->fill_rect(uib.vbuffer, *uib.ibuffers, style, rect);

Note: For more information on the rationale behind the UI System, please check out this blog post: https://ourmachinery.com/post/one-draw-call-ui/

Register the Tab

The last thing before we can compile our project and test it in the Engine is registering the Tab with the plugin system. As mentioned before, you register the Tab as an implementation of tm_tab_vt.

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) {
  tm_global_api_registry = reg;

  tm_draw2d_api = tm_get_api(reg, tm_draw2d_api);
  tm_ui_api = tm_get_api(reg, tm_ui_api);
  tm_allocator_api = tm_get_api(reg, tm_allocator_api);

  tm_add_or_remove_implementation(reg, load, tm_tab_vt, custom_tab_vt);
}

Plugin assets

When you put a plugin in the plugin folder, it will be loaded every time you start The Machinery and used by all projects. This is convenient, but sometimes you want plugins that are project specific, e.g., the gameplay code for a particular game.

Table of Content

The two ways of achieving project-only plugins

There are two ways of doing this.

First, you could create a separate The Machinery executable folder for that specific project. Just make a copy of The Machinery folder and add any plugins you need. Whenever you want to work on that project, make sure to start that executable instead of the standard one.

In addition to adding project-specific plugins, this method also lets you do other things, such as using different versions of The Machinery for different projects and removing any of the standard plugins that you don't need in your project.

The second method is to store plugins as assets in the project itself. To do this, create a New Plugin in the Asset Browser and set the DLL path of the plugin to your DLL. We call this a Plugin Asset.

The Plugin Assets will be loaded whenever you open the project and unloaded whenever you close the project. Since the plugin is distributed with the project, if you send the project to someone, they will automatically get the plugin too -- they don't have to manually install into their plugin folder. This can be a convenient way of distributing plugins.

WARNING: Security Warning

Since plugin assets can contain arbitrary code and there is no sandboxing, when you run a plugin asset, it will have full access to your machine. Therefore, you should only run plugin assets from trusted sources. When you open a project that contains plugin assets, you will be asked if you want to allow the code to run on your machine or not. You should only click [Allow] if you trust the author of the project.

NOTE: Version Issues

Since The Machinery is still in early adopters mode and doesn't have a stable API, plugins will only work with the specific version they are developed for. If you send a plugin to someone else (for example as a plugin asset in a project), you must make sure that they use the exact same version of The Machinery. Otherwise, the plugin will most likely crash.

How to create a plugin asset

You can create a plugin asset in the Asset Browser: Right-click -> New -> New Plugin. This will create a plugin asset in your asset browser. On its own this is not very useful yet. When you select it, you can set the DLL path for your plugin on Windows or Linux. The moment you have selected the path to the DLL, it will be imported and stored in the asset.

Note: The plugin asset will store the path as an absolute path.

The plugin asset settings look as following:

You would have to repeat the workflow described above every time you change the code of your plugin. This is very annoying, but do not worry, hot-reloading comes to the rescue!

You can enable hot-reload for plugin assets by checking the Import When Changed checkbox (1) in the plugin properties. If checked, the editor will monitor the plugin's import path for changes and if it detects a file change, it will reimport the plugin.

The Windows & Linux DLL Path (2) can be used to provide the path to the DLLs for the importing the plugin. Plugin Assets need to obey the same rules as normal plugins. Therefore they need to provide the TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) function. In case this is not possible because the DLL is a helper the helper check box can be called.

Hot-Reloading

We support hot-reloading of plugins while The Machinery is running. This allows you to work on a plugin and see the changes in real-time without having to shut down and restart the application between each change to your code.

Hot-reloading is enabled by default, but can be disabled with the --no-hot-reload parameter.

When a reload happens, the function pointers in the plugin's API struct are replaced with function pointers to the new code. Since clients hold pointers to this struct, they will use the new function pointers automatically -- they don't have to re-query the system for the API.

Note that hot-reloading is not magical and can break in a lot of situations. For example, if you remove functions in the API or change their parameters, any clients of your API will still try to call them using the old parameter lists, and things will most likely crash. Similarly, if a client has stashed away a function pointer to one of your API functions somewhere, such as in a list of callbacks, there is no way for us to patch that copy and it will continue to call the old code. Also, if you make changes to the layout of live data objects (such as adding or removing struct fields) things will break because we make no attempts to transfer the data to the new struct format.
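To make the "stashed function pointer" pitfall concrete, here is a small sketch reusing the hypothetical my_api from the "Write your own API" section above:

// Good: keep the pointer to the API struct itself. The registry patches the
// struct's function pointers in place on reload, so my_api->foo() always
// calls the newest code.
static struct my_api *my_api;

// Risky: stashing a copy of one of the function pointers. This copy is never
// patched, so after a hot reload it still points into the old code.
static void (*stashed_foo)(void);

static void setup(void)
{
    stashed_foo = my_api->foo; // avoid this pattern in hot-reloaded plugins
}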

But adding or removing static functions, or changing the code inside functions should work without problems. We find hot-reloading to be a big time saver even if it doesn't work in all circumstances.

If you want to use global variables in your DLL you should do so using the tm_api_registry_api->static_variable() function in your tm_load_plugin() code. If you just declare a global variable in your .c file, that variable will be allocated in the DLL's memory space and when the DLL is reloaded you will lose all changes to the variable. When you use static_variable(), the variable is allocated on the heap, and its content is preserved when the DLL is reloaded.

If you are using hot-reloading together with a debugger on Windows, be aware that the debugger will lock .pdb files which will prevent you from rebuilding your code. The suggested workflow is something like this:

  • Detach the debugger if it's currently attached.
  • Rebuild your DLL and fix any compile errors.
  • When the DLL is built successfully, The Machinery will automatically reload it.
  • If you need to continue debugging, re-attach the debugger.

Application Hooks

The Machinery allows you to hook your code into specific customization points. These points happen in different phases and have specific purposes. The biggest difference between the Runner and the Editor is which customization points are available in the central update loop.

Table of Content

Application Create

Update

Important side note: tm_plugin_tick_i should not be used for gameplay. To manage your gameplay, you should rely on the provided gameplay hooks:

  • Entity Component Systems
  • Entity Component Engines
  • Simulation Entry

They are the only recommended way of handling gameplay in the Engine.

Note: Plugin reloads only happen if a plugin has been identified as replaced.

Editor

Runner

Project Hooks

Application Shutdown

Overview

| Interface | Description |
| --- | --- |
| TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) | Entry point for all plugins. |
| tm_plugin_init_i | Is typically called as early as possible after all plugins have been loaded. Is not called when a plugin is reloaded. |
| tm_plugin_set_the_truth_i | Is called whenever the "main" Truth of the application changes. The "main" Truth is the primary Truth used for editing data in the application. Under API Review. |
| tm_render_pipeline_vt | |
| tm_the_machinery_project_loaded_i | Is called when a project is loaded. |
| tm_plugin_reload_i | Is called whenever plugins are reloaded, after the reload finishes. |
| tm_plugin_tick_i | Is typically called as early as possible in the application main loop "tick". |
| A tab that is registered to tm_tab_vt and has a tm_tab_vt.ui() or tm_tab_vt.hidden_update() function | Interface name for the tab vtable. Any part of the UI that can act as a tab should implement this interface. |
| tm_entity_register_engines_simulation_i_version | Used to register a tm_entity_register_engines_i that should run in simulation mode. Is called at the beginning of a simulation (start-up phase), when all Systems / Engines (tm_entity_system_i.update() or tm_engine_i.update()) are registered with the entity context. More information in the designated chapter: Entity Component System. |
| tm_entity_register_engines_editor_i_version | Used to register a tm_entity_register_engines_i that should run in editor mode. |
| tm_simulation_entry_i (tm_simulation_entry_i.start(), tm_simulation_entry_i.tick(), tm_simulation_entry_i.stop()) | The Simulation Entry interface tm_simulation_entry_i makes it possible to choose a point of entry for code that should run while the simulation (simulation tab or runner) is active. More information in the designated chapter: Simulation Entry. |
| tm_the_machinery_project_unloaded_i | Is called when a project is unloaded. |
| tm_the_machinery_project_saved_i | Is called when a project is saved. |
| tm_plugin_shutdown_i | Is called on application shutdown for all plugins that have registered an implementation of this interface. Is not called when a plugin is reloaded. |

Premake Guide

At Our Machinery we use Premake as our meta build system generator. Premake generates the actual build scripts, which we then build with tmbuild, our one-click build tool. More on tmbuild here.

Table of Content

Why Use a Build Configurator?

Maintaining different Makefiles and Visual Studio Solution files for a cross platform project can be a lot of work. Especially since you have to adjust the generator a lot for every new platform you support.

  • Writing Makefiles even for the simplest projects can be a tedious effort without automation, and it is difficult to debug.
  • Visual Studio Solution and Project Files are pre-generated, but are not designed to be easy to edit by hand, so resolving merge conflicts with them can be frustrating.

Several build tools exist that allow us to avoid these issues. CMake is widely used, but tends to have a high learning curve and its scripting language is not the easiest to learn. Moreover, it can be very hard to debug.

Source: This information is based on Getting Started With Premake by Johannes Peter

What is premake

Premake5 is a lightweight, open-source, Lua-based alternative to CMake. As with CMake, Premake allows you to define the structure and contents of your project and then dynamically generate whatever build files (Makefiles, VS Solutions, Xcode Projects, etc.) you need at the time.

At the core of Premake5 is a premake5.lua file that describes your project (what programming language it uses, where to find source files, what dependencies it has, etc.). premake5.lua is to Premake what a Makefile is to GNU Make or a project/.sln file is to Visual Studio.

Because premake5.lua allows you to uniquely generate whatever build files you need in seconds, you no longer need to version them in your repository. You configure your version control to ignore the build files, and version the premake5.lua file instead. Resolving merge conflicts on a premake5.lua file is far more sane than on a Visual Studio project file.

Once you have a premake5.lua file, you can run the premake executable to generate your desired project files. For example:

  • To generate a Makefile on Linux you run ./premake5 gmake2
  • To generate a VS 2022 Solution on Windows you run premake5 vs2022

Source: This information is based on Getting Started With Premake by Johannes Peter

The Basic The Machinery Premake Setup

Since Premake is Lua-based, you can make use of Lua's features. Here we show a simple premake file for a plugin:

-- premake5.lua
-- version: premake-5.0.0-alpha14

function snake_case(name)
    return string.gsub(name, "-", "_")
end

-- Include all project files from specified folder
function folder(t)
    if type(t) ~= "table" then t = {t} end
    for _,f in ipairs(t) do
        files {f .. "/**.h",  f .. "/**.c", f .. "/**.inl", f .. "/**.cpp", f .. "/**.m", f .. "/**.tmsl"}
    end
end

function check_env(env)
    local env_var = os.getenv(env)

    if env_var == nil then
        return false
    end
    return true
end

function tm_lib_dir(path)
    local lib_dir = os.getenv("TM_LIB_DIR")

    if not check_env("TM_LIB_DIR") then
        error("TM_LIB_DIR not set")
        return nil
    end

    return lib_dir .. "/" .. path
end

-- Make libdirs() also add the SDK library directories
oldlibdirs = libdirs
function libdirs(dirs)
    if not check_env("TM_SDK_DIR") then
        error("TM_SDK_DIR not set")
        return
    end
    sdk_dir = os.getenv("TM_SDK_DIR")
    oldlibdirs {
        sdk_dir .. "/lib/" .. _ACTION .. "/%{cfg.buildcfg}",
        sdk_dir .. "/bin/%{cfg.buildcfg}",
        dirs
    }
end

-- Make includedirs() also call sysincludedirs()
oldincludedirs = includedirs
function includedirs(dirs)
    if not check_env("TM_SDK_DIR") then
        error("TM_SDK_DIR not set")
        return
    end
    sdk_dir = os.getenv("TM_SDK_DIR")
    oldincludedirs { 
        sdk_dir .. "/headers",
        sdk_dir ,
         dirs
    }
    sysincludedirs { 
        sdk_dir .. "/headers",
        sdk_dir ,
        dirs
    }
end
-- Makes sure the debugger points to the machinery
function set_debugger_to_engine()
    local sdk_dir = os.getenv("TM_SDK_DIR")
    if not check_env("TM_SDK_DIR") then
        error("TM_SDK_DIR not set")
        return
    end
    local debug_path_source = ""
    local debug_path_binary = ""
    if os.target() == "windows" then
         debug_path_source = "/bin/Debug/the-machinery.exe"
         debug_path_binary = "/bin/the-machinery.exe"
    else
         debug_path_source = "/bin/Debug/the-machinery"
         debug_path_binary = "/bin/the-machinery"
    end
    if os.isfile(sdk_dir..""..debug_path_source) then
        debugcommand(sdk_dir..debug_path_source)
    elseif os.isfile(sdk_dir..""..debug_path_binary) then
        debugcommand(sdk_dir..debug_path_binary)
    else
        error("Could not find '"..sdk_dir..""..debug_path_binary.."' nor '"..sdk_dir..""..debug_path_source.."'\nSuggestion: Please make sure the TM_SDK_DIR enviroment variable is pointing to the correct folder.")
    end
end


newoption {
    trigger     = "clang",
    description = "Force use of CLANG for Windows builds"
}

workspace "test-plugin"
    configurations {"Debug", "Release"}
    language "C++"
    cppdialect "C++11"
    flags { "FatalWarnings", "MultiProcessorCompile" }
    warnings "Extra"
    inlining "Auto"
    sysincludedirs { "" }
    targetdir "bin/%{cfg.buildcfg}"

filter "system:windows"
    platforms { "Win64" }
    systemversion("latest")

filter {"system:linux"}
    platforms { "Linux" }

filter { "system:windows", "options:clang" }
    toolset("msc-clangcl")
    buildoptions {
        "-Wno-missing-field-initializers",   -- = {0} is OK.
        "-Wno-unused-parameter",             -- Useful for documentation purposes.
        "-Wno-unused-local-typedef",         -- We don't always use all typedefs.
        "-Wno-missing-braces",               -- = {0} is OK.
        "-Wno-microsoft-anon-tag",           -- Allow anonymous structs.
    }
    buildoptions {
        "-fms-extensions",                   -- Allow anonymous struct as C inheritance.
        "-mavx",                             -- AVX.
        "-mfma",                             -- FMA.
    }
    removeflags {"FatalLinkWarnings"}        -- clang linker doesn't understand /WX

filter "platforms:Win64"
    defines { "TM_OS_WINDOWS", "_CRT_SECURE_NO_WARNINGS" }
    includedirs { }
    staticruntime "On"
    architecture "x64"
    libdirs { }
    disablewarnings {
        "4057", -- Slightly different base types. Converting from type with volatile to without.
        "4100", -- Unused formal parameter. I think unusued parameters are good for documentation.
        "4152", -- Conversion from function pointer to void *. Should be ok.
        "4200", -- Zero-sized array. Valid C99.
        "4201", -- Nameless struct/union. Valid C11.
        "4204", -- Non-constant aggregate initializer. Valid C99.
        "4206", -- Translation unit is empty. Might be #ifdefed out.
        "4214", -- Bool bit-fields. Valid C99.
        "4221", -- Pointers to locals in initializers. Valid C99.
        "4702", -- Unreachable code. We sometimes want return after exit() because otherwise we get an error about no return value.
    }
    linkoptions {"/ignore:4099"}
    buildoptions {"/utf-8"}     

filter {"platforms:Linux"}
    defines { "TM_OS_LINUX", "TM_OS_POSIX" }
    includedirs { }
    architecture "x64"
    toolset "clang"
    buildoptions {
        "-fms-extensions",                   -- Allow anonymous struct as C inheritance.
        "-g",                                -- Debugging.
        "-mavx",                             -- AVX.
        "-mfma",                             -- FMA.
        "-fcommon",                          -- Allow tentative definitions
    }
    libdirs { }
    disablewarnings {
        "missing-field-initializers",   -- = {0} is OK.
        "unused-parameter",             -- Useful for documentation purposes.
        "unused-local-typedef",         -- We don't always use all typedefs.
        "missing-braces",               -- = {0} is OK.
        "microsoft-anon-tag",           -- Allow anonymous structs.
    }
    removeflags {"FatalWarnings"}

filter "configurations:Debug"
    defines { "TM_CONFIGURATION_DEBUG", "DEBUG" }
    symbols "On"
    filter "system:windows"
        set_debugger_to_engine() -- sets the debugger in VS Studio to point to the_machinery.exe

filter "configurations:Release"
    defines { "TM_CONFIGURATION_RELEASE" }
    optimize "On"

project "test-plugin"
    location "build/test-plugin"
    targetname "test-plugin"
    kind "SharedLib"
    language "C++"
    files {"*.inl", "*.h", "*.c"}

Wow, this is a lot of code... let's talk about the basics: filter, workspace and project.

filter {...} allows you to set a filter for a specific configuration or option. For example:

filter "configurations:Debug"
    defines { "TM_CONFIGURATION_DEBUG", "DEBUG" }
    symbols "On"
    filter "system:windows"
        set_debugger_to_engine() -- sets the debugger in VS Studio to point to the_machinery.exe

filter "configurations:Release"
    defines { "TM_CONFIGURATION_RELEASE" }
    optimize "On"

This example tells Premake5 that in the Debug configuration we define TM_CONFIGURATION_DEBUG and generate debug symbols (symbols "On"), and that only on Windows we make sure that we point to the right debugger:

    filter "system:windows"
        set_debugger_to_engine() -- sets the debugger in VS Studio to point to the_machinery.exe

With this knowledge we still have not reduced the amount of code to think about! That is right, so let us just say: we can ignore 90% of the premake file, take it as it is, and only focus on the really important aspects:

--- more code we can ignore
workspace "test-plugin"
    configurations {"Debug", "Release"}
    language "C++"
    cppdialect "C++11"
    flags { "FatalWarnings", "MultiProcessorCompile" }
    warnings "Extra"
    inlining "Auto"
    sysincludedirs { "" }
    targetdir "bin/%{cfg.buildcfg}"
--- more code we can ignore
project "test-plugin"
    location "build/test-plugin"
    targetname "test-plugin"
    kind "SharedLib"
    language "C++"
    files {"*.inl", "*.h", "*.c"}

This defines our workspace. In Visual Studio this would be our Solution:

workspace "test-plugin" --- The name of the Workspace == The Solution name in VS
    configurations {"Debug", "Release"} -- The configurations we offer
    language "C++" -- The Language in this case C++
    cppdialect "C++11" 
    flags { "FatalWarnings", "MultiProcessorCompile" } -- We treat warnings as errors and can compile with more cores
    warnings "Extra" -- That we basically show all warnings
    inlining "Auto"
    sysincludedirs { "" } -- making sure we have the system includes
    targetdir "bin/%{cfg.buildcfg}" -- where do we store our binary files, not our .d files etc...

The next important aspect is the project itself:

project "test-plugin" -- The name of the project in VS or the target in the Makefile
    location "build/test-plugin" -- Where do we want to store all our artifacts such as .d .obj etc
    targetname "test-plugin" -- The name of our executable, shared lib or static lib?
    kind "SharedLib" -- What kind? StaticLib, SharedLib, Executable?
    language "C++"-- What is our languge of implemntation? 
    files {"*.inl", "*.h", "*.c"} -- What files do we want to automagically include in our project?

If you want to add a new project, you can just add a new block such as the one above to your premake file:

project "new-project" 
    location "build/new-project" 
    targetname "new-project" 
    kind "SharedLib" 
    language "C"
	cppdialect "C11" -- you can also add other things here!
    files {"*.inl", "*.h", "*.c"} 

This adds a new project called new-project to your premake file. When you compile now, a DLL named new-project will be built on Windows; on Linux it will be a .so file.

Advanced Premake5: Adding functions to make our lives easier

The code example above is great, but it's quite a lot of work. This is something you can change. Since Premake5 is using Lua, you can just write functions to bundle your code together. For example, let's say we want a base for all our projects:

function base(name)
    project(name)
        language "C++"
        includedirs { "" }
end

And then we want to make sure that our plugins (.dlls) are all set up the same way but are easy to use:

function plugin(name)
    local sn = snake_case(name) -- function that converts a name to snake case
    base(name)
        location("build/plugins/" .. sn)
        kind "SharedLib"
        targetdir "bin/%{cfg.buildcfg}/plugins"
        targetname("tm_" .. sn)
        defines {"TM_LINKS_" .. string.upper(sn)}
        dependson("foundation")
        folder {"plugins/" .. sn}
        language "C++"
        includedirs { "" } -- A override (see above) that makes sure we have the right include dir also with the SDK dirs
end

This allows us to re-write our example from above:

plugin("test-plugin")
plugin("new-project")

What if we also need a utility tool? No problem, we can just write a function for this:

-- Project type for utility programs
function util(name)
    local sn = snake_case(name)
    base(name)
        location("build/" .. sn)
        kind "ConsoleApp"
        targetdir "bin/%{cfg.buildcfg}"
        defines { "TM_LINKS_FOUNDATION" }
        dependson { "foundation" }
        links { "foundation" }
        folder {"utils/" .. sn}
        filter { "platforms:Linux" }
            linkoptions {"-ldl", "-lanl", "-pthread"}
        filter {} -- clear filter for future calls
end

This allows for the following lines of code:

plugin("test-plugin")
plugin("new-project")
util("my-untility")

The Machinery Project Recommendation

We recommend making use of one single premake file that manages all your plugins in one build. This avoids the need to go into each folder to build your project. As recommended in the chapter Project Setup: Possible folder structure for a project, we also recommend separating your plugins into sub folders. The following image shows a potential setup for your game plugins:

In here we have one single premake file and a single libs.json as well as the libs folder. This allows you to run tmbuild just in this folder, and all plugins (or only the ones you want to build) can be built at once.

In this case the premake file could look like this:


-- premake5.lua
-- version: premake-5.0.0-alpha14

function snake_case(name)
    return string.gsub(name, "-", "_")
end

-- Include all project files from specified folder
function folder(t)
    if type(t) ~= "table" then t = {t} end
    for _,f in ipairs(t) do
        files {f .. "/**.h",  f .. "/**.c", f .. "/**.inl", f .. "/**.cpp", f .. "/**.m", f .. "/**.tmsl"}
    end
end

function check_env(env)
    local env_var = os.getenv(env)

    if env_var == nil then
        return false
    end
    return true
end

function tm_lib_dir(path)
    local lib_dir = os.getenv("TM_LIB_DIR")

    if not check_env("TM_LIB_DIR") then
        error("TM_LIB_DIR not set")
        return nil
    end

    return lib_dir .. "/" .. path
end

-- Make libdirs() also search the SDK lib and bin directories
oldlibdirs = libdirs
function libdirs(dirs)
    if not check_env("TM_SDK_DIR") then
        error("TM_SDK_DIR not set")
        return
    end
    sdk_dir = os.getenv("TM_SDK_DIR")
    oldlibdirs { 
        sdk_dir .. "/lib/" .. _ACTION .. "/%{cfg.buildcfg}",
        sdk_dir .. "/bin/%{cfg.buildcfg}",
        dirs
    }
end

-- Make includedirs() also call sysincludedirs()
oldincludedirs = includedirs
function includedirs(dirs)
    if not check_env("TM_SDK_DIR") then
        error("TM_SDK_DIR not set")
        return
    end
    sdk_dir = os.getenv("TM_SDK_DIR")
    oldincludedirs { 
        sdk_dir .. "/headers",
        sdk_dir ,
         dirs
    }
    sysincludedirs { 
        sdk_dir .. "/headers",
        sdk_dir ,
        dirs
    }
end
-- Makes sure the debugger points to the machinery
function set_debugger_to_engine()
    local sdk_dir = os.getenv("TM_SDK_DIR")
    if not check_env("TM_SDK_DIR") then
        error("TM_SDK_DIR not set")
        return
    end
    local debug_path_source = ""
    local debug_path_binary = ""
    if os.target() == "windows" then
         debug_path_source = "/bin/Debug/the-machinery.exe"
         debug_path_binary = "/bin/the-machinery.exe"
    else
         debug_path_source = "/bin/Debug/the-machinery"
         debug_path_binary = "/bin/the-machinery"
    end
    if os.isfile(sdk_dir .. debug_path_source) then
        debugcommand(sdk_dir .. debug_path_source)
    elseif os.isfile(sdk_dir .. debug_path_binary) then
        debugcommand(sdk_dir .. debug_path_binary)
    else
        error("Could not find '" .. sdk_dir .. debug_path_binary .. "' nor '" .. sdk_dir .. debug_path_source .. "'\nSuggestion: Please make sure the TM_SDK_DIR environment variable is pointing to the correct folder.")
    end
end


newoption {
    trigger     = "clang",
    description = "Force use of CLANG for Windows builds"
}

workspace "test-plugin"
    configurations {"Debug", "Release"}
    language "C++"
    cppdialect "C++11"
    flags { "FatalWarnings", "MultiProcessorCompile" }
    warnings "Extra"
    inlining "Auto"
    sysincludedirs { "" }
    targetdir "bin/%{cfg.buildcfg}"

filter "system:windows"
    platforms { "Win64" }
    systemversion("latest")

filter {"system:linux"}
    platforms { "Linux" }

filter { "system:windows", "options:clang" }
    toolset("msc-clangcl")
    buildoptions {
        "-Wno-missing-field-initializers",   -- = {0} is OK.
        "-Wno-unused-parameter",             -- Useful for documentation purposes.
        "-Wno-unused-local-typedef",         -- We don't always use all typedefs.
        "-Wno-missing-braces",               -- = {0} is OK.
        "-Wno-microsoft-anon-tag",           -- Allow anonymous structs.
    }
    buildoptions {
        "-fms-extensions",                   -- Allow anonymous struct as C inheritance.
        "-mavx",                             -- AVX.
        "-mfma",                             -- FMA.
    }
    removeflags {"FatalLinkWarnings"}        -- clang linker doesn't understand /WX

filter "platforms:Win64"
    defines { "TM_OS_WINDOWS", "_CRT_SECURE_NO_WARNINGS" }
    includedirs { }
    staticruntime "On"
    architecture "x64"
    libdirs { }
    disablewarnings {
        "4057", -- Slightly different base types. Converting from type with volatile to without.
        "4100", -- Unused formal parameter. I think unusued parameters are good for documentation.
        "4152", -- Conversion from function pointer to void *. Should be ok.
        "4200", -- Zero-sized array. Valid C99.
        "4201", -- Nameless struct/union. Valid C11.
        "4204", -- Non-constant aggregate initializer. Valid C99.
        "4206", -- Translation unit is empty. Might be #ifdefed out.
        "4214", -- Bool bit-fields. Valid C99.
        "4221", -- Pointers to locals in initializers. Valid C99.
        "4702", -- Unreachable code. We sometimes want return after exit() because otherwise we get an error about no return value.
    }
    linkoptions {"/ignore:4099"}
    buildoptions {"/utf-8"}     

filter {"platforms:Linux"}
    defines { "TM_OS_LINUX", "TM_OS_POSIX" }
    includedirs { }
    architecture "x64"
    toolset "clang"
    buildoptions {
        "-fms-extensions",                   -- Allow anonymous struct as C inheritance.
        "-g",                                -- Debugging.
        "-mavx",                             -- AVX.
        "-mfma",                             -- FMA.
        "-fcommon",                          -- Allow tentative definitions
    }
    libdirs { }
    disablewarnings {
        "missing-field-initializers",   -- = {0} is OK.
        "unused-parameter",             -- Useful for documentation purposes.
        "unused-local-typedef",         -- We don't always use all typedefs.
        "missing-braces",               -- = {0} is OK.
        "microsoft-anon-tag",           -- Allow anonymous structs.
    }
    removeflags {"FatalWarnings"}

filter "configurations:Debug"
    defines { "TM_CONFIGURATION_DEBUG", "DEBUG" }
    symbols "On"
    filter "system:windows"
        set_debugger_to_engine() -- sets the debugger in VS Studio to point to the_machinery.exe

filter "configurations:Release"
    defines { "TM_CONFIGURATION_RELEASE" }
    optimize "On"

function base(name)
    project(name)
        language "C++"
        includedirs { "" }
end

function plugin(name)
    local sn = snake_case(name) -- function that converts a name to snake case
    base(name)
        location("build/plugins/" .. sn)
        kind "SharedLib"
        targetdir "bin/%{cfg.buildcfg}/plugins"
        targetname("tm_" .. sn)
        defines {"TM_LINKS_" .. string.upper(sn)}
        dependson("foundation")
        folder {"plugins/" .. sn}
        language "C++"
        includedirs { "" } -- A override (see above) that makes sure we have the right include dir also with the SDK dirs
end

plugin("plugin-a")
plugin("plugin-b")

Basic Premake5 Cheat Sheet

Command - Documentation

filter - Can be used similar to an if to configure your build only in certain cases.
filter "system:windows", filter "system:Linux", filter "system:web" - Settings below this filter only apply if the system is one of the given platforms.
filter "configurations:Release", filter "configurations:Debug" - Settings below this filter only apply if the configuration is Release/Debug.
filter { "system:windows", "options:clang" } - Settings below this filter only apply if the msvc-clang toolchain is used.
language - Sets the programming language, e.g. language ("C") or language ("C++").
dependson - Specifies one or more non-linking project build order dependencies.
targetname - Specifies the base file name for the compiled binary target.
defines - Adds preprocessor or compiler symbols to a project.
location - Sets the destination directory for a generated workspace or project file.
kind - Sets the kind of binary object being created by the project or configuration, such as a console or windowed application, or a shared or static library, e.g. ConsoleApp, WindowedApp, SharedLib, StaticLib.
targetdir - Sets the destination directory for the compiled binary target.
optimize - Specifies the level and type of optimization used while building the target configuration.
symbols - Turns debug symbol table generation on or off.
toolset - Selects the compiler, linker, etc. which are used to build a project or configuration.
buildoptions - Passes arguments directly to the compiler command line without translation.
architecture - Specifies the system architecture to be targeted by the configuration.
disablewarnings - Disables specific compiler warnings.
removeflags - The remove...() set of functions remove one or more values from a list of configuration values. Every configuration list in the Premake API has a corresponding remove function: flags() has removeflags(), defines() has removedefines(), and so on.
staticruntime - Selects whether to link the C runtime statically or dynamically.
linkoptions - Passes arguments directly to the linker command line without translation.
includedirs - Specifies the include file search paths for the compiler. Note: We are using a modified version in our codebase (see the Premake code above) that overrides the default behaviour by saving the original function in oldincludedirs. In our implementation we make sure that TM_SDK_DIR is correctly set.
libdirs - Specifies the library search paths for the linker. Note: We are using a modified version in our codebase (see the Premake code above) that overrides the default behaviour by saving the original function in oldlibdirs. In our implementation we make sure that TM_SDK_DIR is correctly set.
platforms - Specifies a set of build platforms, which act as another configuration axis when building.
postbuildcommands - Specifies shell commands to run after the build is finished.
prebuildcommands - Specifies shell commands to run before each build.
Pre- and Post-Build Stages - These are the simplest to set up and use: pass one or more command lines to the prebuildcommands, prelinkcommands, or postbuildcommands functions. You can use Tokens to create generic commands that will work across platforms and configurations.

Functions - Documentation

os.getenv() - Gets the value of an environment variable. It receives the name of the variable and returns a string with its value. (https://www.lua.org/pil/22.2.html)
os.target() - Returns the name of the operating system currently being targeted. See system for a complete list of OS identifiers.

How to use tmbuild

We described tmbuild's core idea in our blog post One-button source code builds. tmbuild is our custom one-click "build system", and it is quite a powerful tool. It allows you to do the most important tasks when developing with The Machinery: building your plugin or the whole engine.

You can execute the tool from any terminal such as PowerShell or the VS Code internal Console window.

The key features are:

  • building
  • packaging
  • cleaning the solution/folder
  • downloading all the dependencies
  • running our unit tests

This walkthrough introduces you to tmbuild and shows you how to use and manipulate The Machinery Projects. You will learn about:

  • How to build with it
  • How to build a specific project with it
  • How to package your project

Also, you will learn some more advanced topics such as:

  • How to build/manipulate tmbuild

Table of Contents

Installing tmbuild

When you download and unzip The Machinery either via the website or via the download tab you can find tmbuild in the bin folder in the root.

Alternatively, you can build it from source (code\utils). We will talk about this later in this walkthrough.

Before we use tmbuild, we need to ensure that we have installed build-essential under Linux, Xcode on Mac, or Visual Studio 2017 or 2019 on Windows (either the editor, such as the Community Edition, or the Build Tools).

Windows side notes:

On Windows, it is essential to install the C/C++ Build Tools. If you run into the issue that tmbuild cannot find Visual Studio 2019 on Windows, it could be because you installed it in an atypical path. No problem, you can just set the environment variable TM_VS2017_DIR or TM_VS2019_DIR to the root, e.g. C:\Program Files (x86)\Microsoft Visual Studio\2019. The tool will find the right installed version automagically.

Set up our environment variables

Before we can build any project, we need to set up our environment. You need to set the following environment variable (if it is not set, the tool will not be able to build):

  • TM_SDK_DIR - The path in which to find the headers folder and the lib folder.

If the following variable is not set, the tool will assume that you intend to use the current working directory:

  • TM_LIB_DIR - The folder which determines where to download and install all dependencies (besides the build environments)

How to add environment variables?

Windows

On Windows, all you need to do is add the folder where you installed The Machinery to your environment variables. You can do this like this: Start > Edit the system environment variables > Environment Variables > System variables > click New... > add TM_SDK_DIR or TM_LIB_DIR as the Variable Name and the needed path as the Variable Value. Close and restart the terminal or Visual Studio / Visual Studio Code. As an alternative, you can set an environment variable via PowerShell before you execute tmbuild, which will stay alive until the end of the session: $Env:TM_SDK_DIR="..PATH"

Debian/Ubuntu Linux

You open the terminal or edit with your favorite text editor ~/.bashrc and you add the following lines:

#...
export TM_SDK_DIR=path/to/themachinery/
export TM_LIB_DIR=path/to/themachinery/libs

(e.g. via nano: nano ~/.bashrc)

Let us Build a plugin.

All you need to do is navigate to the root folder of your plugin and run tmbuild.exe in PowerShell.

If you have not added tmbuild.exe to your global PATH, you need to use the correct relative path to where tmbuild is located, e.g.:

user@machine:/home/user/tm/plugins/my_plugin/> ./../../bin/tmbuild

This command does all the magic. tmbuild will automatically download all the needed dependencies for you (either into the location set in TM_LIB_DIR or into the current working directory). You may have noticed that tmbuild will always run unit tests at the end of your build process.

Note: tmbuild will only build something when there is a premake5.lua and a libs.json in the current working directory.

Let us build a specific project.

Imagine you have been busy and written a bunch of plugins, and they are all connected and managed via the same Lua file (premake5 file). Now you do not want to check everything all the time. No problem, you can follow these steps: If you run tmbuild --help/-h, you will see many options. One of those options is --project. This one allows you to build a specific project.

tmbuild.exe --project my-project-name

The tool will automatically find the right project and build it. On Windows, you can also provide the relative/absolute path to the project with its extension:

tmbuild --project /path/to/project.vcxproj

Note: If the project cannot be found, tmbuild will build all projects.

How to package a project via tmbuild?

To package a project via tmbuild, all you need to do is use the -p [package name] or --package [package name] command. A package file needs to be of type .json and follow our package scheme, which you can find here.

How to build or manipulate tmbuild from source

You can find the source code of tmbuild in the folder code\utils\tmbuild. In the folder code\utils, you can also find the source code of all the other utilities the engine uses.

You can build tmbuild via tmbuild. All you need to do is navigate to the code\utils folder and run tmbuild --project tmbuild.

If you do not have access to a built version of tmbuild, but you do have the whole source, you have to follow these steps:

Windows 10

Make sure you have Visual Studio 2019 and the Build Tools installed. Also, check if you can find msbuild in the terminal. You can install msbuild / Visual Studio via PowerShell: https://github.com/Microsoft/vssetup.powershell

To check just run:

msbuild

If you cannot find it, just add it to your PATH environment variable, e.g. C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin\

To build from source:

Open a PowerShell instance in The Machinery folder and run the following commands:

# this part can be skipped if you have already downloaded
# all the dependencies and created a highlevel folder to your lib dependencies:
mkdir lib
cd lib
wget https://ourmachinery.com/lib/bearssl-0.6-r1-win64.zip -OutFile bearssl-0.6-r1-win64.zip
wget https://ourmachinery.com/lib/premake-5.0.0-alpha14-windows.zip -OutFile premake-5.0.0-alpha14-windows.zip
Expand-Archive -LiteralPath './bearssl-0.6-r1-win64.zip' -DestinationPath "."
Expand-Archive -LiteralPath './premake-5.0.0-alpha14-windows.zip' -DestinationPath "."
cd ..
# continue here if the dependencies are already downloaded
# set TM_LIB_DIR if you have not set it already
$env:TM_LIB_DIR="/path/to/themachinery/lib"
$env:TM_SDK_DIR="/path/to/themachinery/"
# run premake
./../../lib/premake-5.0.0-alpha14-windows/premake5 [vs2019|vs2017]
# navigate to the highlevel folder of the code (here you find the libs.json and the
# premake5.lua
cd code/utils
msbuild.exe "build/tmbuild/tmbuild.vcxproj" /p:Configuration="Debug Win64" /p:Platform=x64

Make sure that you choose either vs2019 or vs2017, not the literal [vs2019|vs2017].

On Debian/Ubuntu

Open a terminal instance and run the following commands:

# If you do not have the build essentials installed make sure you do:
sudo apt install build-essential clang zip -y
# otherwise continue here:
cd your-folder-of-tm
# this part can be skipped if you have already downloaded
# all the dependencies and created a highlevel folder to your lib dependencies:
mkdir lib
cd ./lib
wget https://ourmachinery.com/lib/bearssl-0.6-r1-linux.zip
wget https://ourmachinery.com/lib/premake-5.0.0-alpha15-linux.zip
unzip bearssl-0.6-r1-linux.zip
unzip premake-5.0.0-alpha15-linux.zip
chmod +x ./premake-5.0.0-alpha15-linux/premake5
cd ..
# continue here if the dependencies are already downloaded
# set TM_LIB_DIR if you have not set it already
export TM_LIB_DIR=/path/to/themachinery/lib
export TM_SDK_DIR=/path/to/themachinery/
# run premake
./../../lib/premake-5.0.0-alpha15-linux/premake5 gmake
# navigate to the highlevel folder of the code (here you find the libs.json and the
# premake5.lua
cd code/utils
# run make:
make tmbuild

How to make tmbuild globally accessible?

Windows On Windows, all you need to do is add the themachinery/bin folder to your PATH environment variable. This can be done like this: Start > Edit the system environment variables > Environment Variables > System variables > search in the list for Path > click Edit > click New > add the absolute path to themachinery/bin there, then re-login or reboot.

Debian/Ubuntu Linux Open ~/.bashrc in your favorite text editor (e.g. via nano: nano ~/.bashrc) and add the following line: export PATH=path/to/themachinery/bin:$PATH

Gameplay Coding in The Machinery

In this section, you will learn the basics about Gameplay Coding in The Machinery. There are two primary ways of creating a vivid and active world:

Coding within our Entity Component System

The Machinery uses an Entity Component System; therefore, most of your gameplay code will run via Engines or Systems. To learn more about these, please follow this link.

General code entry points using Simulation Entry Component

The Machinery also offers you a Simulation Entry Component which will, when the parent entity is spawned, set up a system that is used to run code at start-up and each frame. Read more here.

Simulation Entry (writing gameplay code in C)

This walkthrough will show you how to create a simulation entry and what a simulation entry is.

If you wish to program gameplay using C code, then you need some way for this code to execute. You can make lots of entity components that do inter-component communication, but if you want a more classic, monolithic approach, then you can use a simulation entry.

In order for your code to execute using a simulation entry you need two things. Firstly you need an implementation of the Simulation Entry interface, tm_simulation_entry_i (see simulation_entry.h) and secondly you need a Simulation Entry Component attached to an entity.

Define a tm_simulation_entry_i in a plugin like this:

static tm_simulation_entry_i simulation_entry_i = {
    .id = TM_STATIC_HASH("tm_my_game_simulation_entry", 0x2d5f7dad50097045ULL),
    .display_name = TM_LOCALIZE_LATER("My Game Simulate Entry"),
    .start = start,
    .stop = stop,
    .tick = tick,
    .hot_reload = hot_reload,
};

Here start, stop and tick are functions that run when the simulation starts, when it stops, and each frame, respectively. Make sure that id is a unique identifier.

Note: There is also a plugin template available that does this, see File -> New Plugin -> Simulation Entry with The Machinery Editor.

Note: To generate the TM_STATIC_HASH value you need to run hash.exe or tmbuild.exe --gen-hash. For more info, see the hash.exe guide.

When your plugin loads (each plugin has a tm_load_plugin function), make sure to register this implementation of tm_simulation_entry_i on the tm_simulation_entry_i interface name, like so:

tm_add_or_remove_implementation(reg, load, tm_simulation_entry_i, &simulation_entry_i);

When this is done and your plugin is loaded, you can add a Simulation Entry Component to any entity and select your registered implementation. Now, whenever you run a simulation (using Simulate Tab or from a Published build) where this entity is present, your code will run.

The same Simulation Entry interface can be used from multiple Simulation Entry Components and their state will not be shared between them.

Note: For more in-depth examples, we refer to the gameplay samples, they all use Simulation Entry.

What happens under the hood?

When the Simulation Entry Component is loaded within the Simulate Tab or Runner, it will set up an entity system. This system will run your start, stop and tick functions. You may then ask, what is the difference between using a Simulation Entry and just registering a system from your plugin? The answer is the lifetime of the code. If you register a system from your plugin, then that system will run no matter what entity is spawned whereas the Simulation Entry Component will add and remove the system that runs your code when the entity is spawned and despawned.

Example: Source Code

static struct tm_localizer_api *tm_localizer_api;
static struct tm_allocator_api *tm_allocator_api;
// beginning of the source file
#include <foundation/api_registry.h>
#include <foundation/localizer.h>
#include <foundation/allocator.h>

#include <plugins/simulation/simulation_entry.h>

struct tm_simulation_state_o
{
    tm_allocator_i *allocator;
    //..
};

// Starts a new simulation session. Called just before `tick` is called for the first
// time. The return value will later be fed to calls to `tick` and `stop`.
tm_simulation_state_o *start(tm_simulation_start_args_t *args)
{
    tm_simulation_state_o *state = tm_alloc(args->allocator, sizeof(*state));
    *state = (tm_simulation_state_o){
        .allocator = args->allocator,
        //...
    };
    //...
    return state;
}

// Called when the entity containing the Simulation Entry Component is destroyed.
void stop(tm_simulation_state_o *state, struct tm_entity_commands_o *commands)
{
    //...
    tm_allocator_i a = *state->allocator;
    tm_free(&a, state, sizeof(*state));
}

// Called each frame. Implement logic such as gameplay here. See `args` for useful
// stuff like duration of the frame etc.
void tick(tm_simulation_state_o *state, tm_simulation_frame_args_t *args)
{
    //...
}

// Called whenever a code hot reload has occurred. Note that the start, tick and
// stop functions will be updated to any new version automatically, this  callback is for other
// hot reload related tasks such as updating function pointers within the simulation code.
void hot_reload(tm_simulation_state_o *state, struct tm_entity_commands_o *commands)
{
    //...
}
static tm_simulation_entry_i simulation_entry_i = {
    .id = TM_STATIC_HASH("tm_my_game_simulation_entry", 0x2d5f7dad50097045ULL),
    .display_name = TM_LOCALIZE_LATER("My Game Simulate Entry"),
    .start = start,
    .stop = stop,
    .tick = tick,
    .hot_reload = hot_reload,
};
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_localizer_api = tm_get_api(reg, tm_localizer_api);
    tm_allocator_api = tm_get_api(reg, tm_allocator_api);
    tm_add_or_remove_implementation(reg, load, tm_simulation_entry_i, &simulation_entry_i);
}

Entity Graphs

The Entity Graph implements a visual scripting language based on nodes and connections. To use it, right-click on an entity to add a Graph Component and then double click on the Graph Component to open it in the Graph Editor:

Graph editor.

Basic Concepts

How to use the Entity Graph?

You need to add a Graph Component to an Entity of your choice.

After that, you have two ways of opening the Graph:

  • Double Click the Graph Component

  • Click in the Property View on Edit

Now the Graph Editor opens and you can start adding nodes via:

  • Right Click -> Add Node
  • Press Space

Execution

The Entity Graph is an event-driven Visual Scripting language. This means everything happens after an event is triggered! By default, the Engine comes with the following built-in Events:

Name - Description

Init Event - Is called when the component is added to the Entity.
Reload Event - Is called when the component is reloaded from The Truth.
Tick Event - Is called every frame.
Terminate Event - Is called before the component is removed from the entity.
Custom Event - Is called when the named event is triggered, either with a "Trigger Event" node or from the outside with a "Trigger Remote Event" node.
Trigger Event - Triggers an event.
Trigger Remote Event - Triggers an event on a remote Entity.
UI Tick - Is ticked every frame, regardless of whether the game is paused or not!

Anatomy

There are six types of nodes:

Type - Function

Event Node - Starting point of the execution.
Query Node - Query nodes are triggered automatically when their output is requested. They are "pure" and do not modify data, unlike other nodes, which need to be triggered explicitly.
Nodes - Normal nodes that have an Event input; they might modify data and produce an output or mutate the graph's state!
Subgraphs - Graphs within a Graph! They allow you to organize your graph into smaller units. They can also be reused if saved as a Subgraph Prototype.
Input Node - Accepts input from the outside world and makes it available to the graph. Mainly used for communication between graphs and subgraphs.
Output Node - Accepts output and passes it on to the outside world, making it available to a parent graph. Mainly used for communication between graphs and subgraphs.

Moreover, the Visual Scripting language knows two different types of wires:

  • Event Wires - They regulate the execution flow.
  • Data Wires - They transport data from node to node!

Inputs

Graphs can have inputs. They can be used to allow the user of your graph to pass data from the outside (e.g. the Editor) to the graph. This happens via the Input Nodes. In the General Graph settings you can add Inputs from the outside world.

Adding a Public Input

  1. Click on the Settings button, which opens the Graph Settings.


  2. Expand the Input accordion and press "Add".

  3. This will add a new Input to your graph! There you have a few options.

To make your input publicly accessible from another graph or from the Editor, check the Public checkbox.

If you now select the Graph Component of your Entity you will be able to change the value:

This can be a nice way to customize behaviour of your graph and entity!

  4. Add an Input node to your graph. There you have access to the data.

    The Input Node also allows you to access the settings. Hover over the name of the Input and a Settings option becomes available.

Variables

You can store data within your Graph! The Set / Get Variable nodes are the way to go; they give you access to this functionality. You can also access variables from distant Entities by using the Set / Get Remote Variable nodes.

Branches and loops

The language comes with built-in support for branches via the If node. It also supports multiple boolean operators to compare values.

Besides, you have two nodes for loops:

  • The Grid node
  • The For node

The Grid node for example:

Subgraphs

You can organize your code into smaller reusable units and combine nodes as a subgraph! Using subgraphs makes your Graph more user-friendly, and it will look less like spaghetti. You can store your subgraph as a .entity_graph asset in your asset browser and allow it to be reused across your project, which gives you maximal flexibility!

What is next?

In the next chapter you will learn more about Subgraphs and the Debugger! In case you want to provide your own nodes, check out the tutorial Extend the Entity Graph.

Subgraphs

Subgraphs are a way to organize your graph better and create smaller units. They make your graphs easier to maintain and easier to follow. In essence, a subgraph is a graph within a graph. Subgraphs are interfaced via a subgraph node. They can take inputs and produce outputs, just like a normal node, and they can also trigger and react to normal Events!

Create a subgraph

You create a new subgraph by simply selecting all nodes that shall be part of the subgraph. After that, right-click on them and select Create Subgraph from the context menu. The subgraph will replace the selected nodes. You can change its label in the property view. Simply double-click it to open the subgraph.

Subgraph Inputs

A subgraph can have inputs and outputs. You can add them the same way as for a normal Graph. But you can also just connect the needed wires to the subgraph node, as the following image shows:

Subgraph Prototypes

The Machinery's Prototype system allows you to create, configure, and store an Entity/Creation Graph, complete with all its subgraphs and input/output nodes, as a reusable Entity / Creation Graph Asset.

Note: Since the Entity Graph and the Creation Graph are conceptually similar, the same aspects apply to both! However, this document will only focus on the Entity Graph.

This Asset acts as a template from which you can create new Prototype instances in other Entity Graphs/Creation Graphs. Any edits that you make to the Asset are automatically reflected in the instances of that Graph, allowing you to easily make broad changes across your whole Project without having to repeatedly make the same edit to every copy of the Asset.

Note: This does not mean all Prototype instances are identical. You can override instances individually and add/remove nodes from them, depending on your needs!

Create a subgraph Prototype

You can turn a subgraph into a prototype by simply using the context menu of the subgraph node and selecting Create Subgraph prototype. This will create a Subgraph Prototype Asset (.entity_graph) in your Asset Browser. When you open the subgraph node you are opening the instanced version. Any change to this version will not be shared with the other instances! Only changes made to the prototype itself will propagate to all instances! To open the prototype you can use the "Open Prototype" button.

Debugger

The Entity Graph has a Debugger. You can use this Debugger to inspect the current values or set a breakpoint to see if the Graph behaves the way it should. Besides, the graph indicates whether a node is being executed with a highlighted border!

Note: The Debugger only works if the simulate tab and the graph tab are open at the same time!

You can find the Debugger when you click on the button in the upper toolbar with the bug symbol (1). It will open the Debugger Overlay. Besides the "Bug" button, you can find a dropdown menu (2). This dropdown menu lets you switch between Graph instances quickly. This is useful if the Graph is part of an Entity Prototype or itself a Subgraph prototype!

Debug Overlay

In this overlay, you find three tabs:

  1. Watch Wires: Contains all data wires you are watching.
  2. Breakpoints: Contains a list of all breakpoints within this Graph and its subgraphs.
  3. Instances: A list of all instances of this Graph.

Watch Wires

Like in a normal code editor, you can hover over any data wire and observe the values during the execution. If a value has changed, it is shown in red, otherwise in white.

This might be cucumber some and difficult for observing multiple wires. This is why you can add them to the watch wire list.

The Watch Wire list will also indicate if a value has changed. You can remove wires from the list again and find the corresponding node with the find node button.

Keep in mind that this list only works within the current graph instance and its subgraphs.

Breakpoints

Unlike watching wires, which requires no extra step, adding a breakpoint will not make the graph break immediately, since such behaviour could be annoying. You can add breakpoints at any point in time via Right-Click on a node -> Add Breakpoint.

Note: You can add breakpoints to all nodes except Event and Query nodes.

To activate the breakpoints, you need to connect to the Simulation by pressing the Connect Button in the Debug Overlay.

(Alternatively, the Breakpoint Overview will inform you that you need to connect to the Simulation)

The moment you are connected, the Simulation will react appropriately, and your breakpoints will be hit.

  1. You can disconnect from the Simulation.
  2. You can continue till the next breakpoint hits.
  3. You can step over to the next node.

Extending the Entity Graph

This walkthrough shows you how to extend the Entity Graph and use the generate-graph-nodes.exe. You will learn about:

  • How to develop the Entity Graph with your nodes.
  • When to run generate-graph-nodes.exe

How to extend the Entity Graph

You can extend the visual scripting language with your nodes. All you need to do is write the code that implements the node's action, together with some macros that specify how to create a visual scripting node from that code. Then you run the generate-graph-nodes.exe executable to generate an *.inl file with glue code.

  1. Create the file

Our goal is to create a node that computes the square of a floating-point number. We can either add a new file to an existing plugin or create a new plugin.

It is important to make sure that the filename contains the graph_nodes string. Otherwise, the node generator will ignore the file.

For example: my_nodes.c will be ignored by the tool, while my_graph_nodes.c won't be.

  2. Write the code

GGN_BEGIN("Sample/Math/Float");
GGN_GEN_REGISTER_FUNCTION();
GGN_NODE_QUERY();
static inline void sample_float_square(float a, float *res) { *res = a * a; }
GGN_END();

Let us digest the code example above. There are some things to note here:

  • Node functions always return their results in pointer parameters (the reason is that they can have more than one result). Mutable pointer parameters are treated as out parameters by the node generator.
  • The function parameter names are also used for the naming of the input/output wires.
  • Two special macros surround all the code for the node(s): GGN_BEGIN() and GGN_END().
  • The Engine will later derive the node name from the function name sample_float_square: Sample Float Square, and you will find it in the defined category: Sample/Math/Float.
  • The GGN_NODE_QUERY() macro marks this node as a Query node. Query nodes are triggered automatically when their output is requested. Nodes that aren't pure (i.e., that modify data) need to be triggered by an explicit event, such as Tick or Init.
  • The GGN_GEN_REGISTER_FUNCTION() macro automatically creates a register function for registering the node with the visual scripting system. Otherwise, you need to write this function yourself.
  • The #include "example_graph_nodes.inl" will include the autogenerated graph node implementations. It needs to be somewhere in the file. (before the tm_load_plugin function)
  • The generate-graph-nodes.exe auto-generates the my_graph_nodes.inl for you. You should not edit this file by hand; otherwise, the generator will overwrite your changes next time.

For full documentation of all GGN_* macros see plugins/graph_interpreter/graph_node_macros.h. The code we write is relatively small. We need to ensure that we include some header files, among them graph_node_macros.h, which we use to tell the generator to generate code for us.

The following includes make sure that the nodes work on their own:

#include <plugins/editor_views/graph.h>
#include <plugins/graph_interpreter/graph_node_helpers.inl>
#include <plugins/graph_interpreter/graph_node_macros.h>

If we now think 'yeah, we can compile', we are wrong; we need some other header files to ensure that the generated magic in the my_graph_nodes.inl file works.

We need to include the following files as well:

static struct tm_graph_interpreter_api *tm_graph_interpreter_api;
#include <foundation/api_registry.h> //Is needed for `GGN_GEN_REGISTER_FUNCTION()`
#include <foundation/localizer.h> // it automatically localizes your category name
#include <foundation/macros.h>
#include <foundation/the_truth_types.h> // is needed for the description of the wire input types

The next question is: are we done now? The answer is: nearly. What's left is registering our nodes in the plugin load function. It may then look like this:

#include "example_graph_nodes.inl"

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) {
  tm_graph_interpreter_api = tm_get_api(reg, tm_graph_interpreter_api);
  // This is auto generated from the graph node generator, you just need to call
  // it.
  generated__register_example_graph_nodes(reg, load);
}

Here we just call the generated__register_example_graph_nodes function, defined in my_graph_nodes.inl and auto-generated. You can find the complete example here samples/plugins/graph_nodes.

  3. Run the node generator & tmbuild

The last step before we can compile is to make sure that we run the generate-graph-nodes.exe application. This helper utility processes all GGN_* macros. This program generates glue code that ties your function into the graph system -- creating connectors that correspond to your function parameters, etc. The glue code is stored in an .inl file with the same name as the .c file that contains the graph nodes. Do not forget that if you have used some new TM_STATIC_HASH values, run hash.exe to make sure that those get hashed. Then you are ready to run tmbuild. It will compile your plugin, and then when you start the Engine or have it run in hot-reload mode it will show you your new nodes in the editor.

Note: You can also run tmbuild with an argument: tmbuild --gen-nodes. This will make sure that tmbuild runs generate-graph-nodes.exe before it builds.

Created nodes in the entity graph view

Full sample code:

static struct tm_graph_interpreter_api *tm_graph_interpreter_api;
#include <foundation/api_registry.h> //Is needed for `GGN_GEN_REGISTER_FUNCTION()`
#include <foundation/localizer.h>    // it automatically localizes your category name
#include <foundation/macros.h>
#include <foundation/the_truth_types.h> // is needed for the description of the wire input types
#include <plugins/editor_views/graph.h>
#include <plugins/graph_interpreter/graph_node_helpers.inl>
#include <plugins/graph_interpreter/graph_node_macros.h>
GGN_BEGIN("Sample/Math/Float");
GGN_GEN_REGISTER_FUNCTION();
GGN_NODE_QUERY();
static inline void sample_float_square(float a, float *res)
{
    *res = a * a;
}
GGN_END();
#include "example_graph_nodes.inl"

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_graph_interpreter_api = tm_get_api(reg, tm_graph_interpreter_api);
    // This is auto generated from the graph node generator, you just need to call it.
    generated__register_example_graph_nodes(reg, load);
}

Provide a Custom Datatype to the Entity Graph

This walkthrough shows you how to extend the Entity Graph with a custom data type. You will learn about:

  • How to provide a custom compile step for your own data type.

Data Type Overview of the Entity Graph

The following overview lists all types that are supported out of the box by the Entity Graph Component. If you want to support your own data type, you need to implement the tm_graph_component_compile_data_i interface manually.

Truth Type
TM_TT_TYPE_HASH__BOOL
TM_TT_TYPE_HASH__UINT32_T
TM_TT_TYPE_HASH__UINT64_T
TM_TT_TYPE_HASH__FLOAT
TM_TT_TYPE_HASH__DOUBLE
TM_TT_TYPE_HASH__VEC2
TM_TT_TYPE_HASH__VEC3
TM_TT_TYPE_HASH__VEC4
TM_TT_TYPE_HASH__POSITION
TM_TT_TYPE_HASH__ROTATION
TM_TT_TYPE_HASH__SCALE
TM_TT_TYPE_HASH__COLOR_RGB
TM_TT_TYPE_HASH__RECT
TM_TT_TYPE_HASH__STRING
TM_TT_TYPE_HASH__STRING_HASH
TM_TT_TYPE_HASH__KEYBOARD_ITEM
TM_TT_TYPE_HASH__MOUSE_BUTTON
TM_TT_TYPE_HASH__ENTITY_ASSET_REFERENCE
TM_TT_TYPE_HASH__LOCAL_ENTITY_ASSET_REFERENCE
TM_TT_TYPE_HASH__CREATION_GRAPH_ASSET_REFERENCE
TM_TT_TYPE_HASH__EASING
TM_TT_TYPE_HASH__LOOP_TYPE
TM_TT_TYPE_HASH__COMPONENT_PROPERTY_FLOAT
TM_TT_TYPE_HASH__COMPONENT_PROPERTY_VEC2
TM_TT_TYPE_HASH__COMPONENT_PROPERTY_VEC3
TM_TT_TYPE_HASH__COMPONENT_PROPERTY_VEC4
TM_TT_TYPE_HASH__COMPONENT_PROPERTY_ANY

Implement the tm_graph_component_compile_data_i

When you want to support your own type, you need to implement the tm_graph_component_compile_data_i interface. This interface lives in the graph_component.h header file, which is part of the graph_interpreter plugin.

The interface is just a function typedef of the following signature:

{{$include {TM_SDK_DIR}/plugins/graph_interpreter/graph_component.h:66}}

How will the graph interpreter use this function?

The Graph Interpreter will call this function whenever it compiles data to a graph, before the graph is initialized for the first time. The function should return true if it compiled data to a wire; otherwise it should return false and the graph interpreter will keep looking for the correct compile function.

The function is provided with multiple arguments, but the following two are the most important ones:

Name - Description

tm_tt_id_t data_id - The data object the interpreter tries to compile to a wire but does not know how to.
tm_strhash_t to_type_hash - The type of data the wire expects.

For example, in the Animation State Machine, TM_TT_TYPE_HASH__ASM_EVENT_REFERENCE objects contain a string hash (TM_TT_PROP__ASM_EVENT__NAME) that the animation state machine needs in order to execute a certain event. The list above shows that there is no built-in translation from a TM_TT_TYPE_HASH__ASM_EVENT_REFERENCE to a wire. Therefore the animation state machine plugin provides its own compilation function for this.

The plugin knows that a TM_TT_TYPE_HASH__ASM_EVENT_REFERENCE is, in the end, nothing other than a TM_TT_TYPE_HASH__STRING_HASH. This means we need to do the following steps:

  1. Figure out what type data_id has.
  2. Check that the type of data_id is TM_TT_TYPE_HASH__ASM_EVENT_REFERENCE and that to_type_hash is TM_TT_TYPE_HASH__STRING_HASH.
  3. Perform our actual translation.

Let us begin with figuring out the data type of data_id:

const tm_tt_type_t type = tm_tt_type(data_id);
const tm_strhash_t type_hash = tm_the_truth_api->type_name_hash(tt, type);
const tm_the_truth_object_o *data_r = tm_tt_read(tt, data_id);

We also need to get a read object of the data_id to read the data from it.

The next step is to compare the given data:

if (TM_STRHASH_EQUAL(type_hash, TM_TT_TYPE_HASH__ASM_EVENT_REFERENCE) &&
    TM_STRHASH_EQUAL(to_type_hash, TM_TT_TYPE_HASH__STRING_HASH))

If this is true, we move on with our data compilation. Now that we know our object is of type TM_TT_TYPE_HASH__ASM_EVENT_REFERENCE, we can use The Truth to extract the actual value we care about: TM_TT_PROP__ASM_EVENT__NAME.

const tm_tt_id_t event_id = tm_the_truth_api->get_reference(tt, data_r, 0);
const tm_strhash_t event_name_hash = tm_the_truth_api->get_string_hash(
    tt, tm_tt_read(tt, event_id), TM_TT_PROP__ASM_EVENT__NAME);

After this it's time to compile, or rather write, the data to the provided wire. For this we need to use the tm_graph_interpreter_api API and its write_wire() function. The function expects the current interpreter and the wire we write to (we get this one as the uint32_t wire parameter of the tm_graph_component_compile_data_i interface) as well as the number of items and their size. Passing all this information to the write function enables it to allocate memory internally on the interpreter's memory stack.

tm_strhash_t *v = (tm_strhash_t *)tm_graph_interpreter_api->write_wire(
    gr, wire, 1, sizeof(*v));

Keep in mind the function name is a bit misleading: it says write, but what it actually does is just allocate memory for you (if needed) and give you back a writable pointer. At the end we write our data to the pointer and return true.

All together the translation looks like this:

if (TM_STRHASH_EQUAL(type_hash, TM_TT_TYPE_HASH__ASM_EVENT_REFERENCE) &&
    TM_STRHASH_EQUAL(to_type_hash, TM_TT_TYPE_HASH__STRING_HASH)) {
  const tm_tt_id_t event_id = tm_the_truth_api->get_reference(tt, data_r, 0);
  const tm_strhash_t event_name_hash = tm_the_truth_api->get_string_hash(
      tt, tm_tt_read(tt, event_id), TM_TT_PROP__ASM_EVENT__NAME);
  tm_strhash_t *v = (tm_strhash_t *)tm_graph_interpreter_api->write_wire(
      gr, wire, 1, sizeof(*v));
  *v = event_name_hash;
  return true;
}

Source Code

The entire sample source code:

static struct tm_api_registry_api *tm_global_api_registry;
static struct tm_the_truth_api *tm_the_truth_api;
static struct tm_graph_interpreter_api *tm_graph_interpreter_api;

#include <foundation/api_registry.h>
#include <foundation/api_types.h>
#include <foundation/the_truth.h>
#include <foundation/the_truth_types.h>

#include <plugins/editor_views/graph.h>
#include <plugins/graph_interpreter/graph_component_node_type.h>
#include <plugins/graph_interpreter/graph_component.h>
#include <plugins/graph_interpreter/graph_interpreter.h>
#include <plugins/animation/animation_state_machine.h>

#define TM_TT_TYPE__ASM_EVENT_REFERENCE "tm_asm_event_reference"
#define TM_TT_TYPE_HASH__ASM_EVENT_REFERENCE TM_STATIC_HASH("tm_asm_event_reference", 0x60cbc051e2b37c38ULL)

static bool compile_data_to_wire(tm_graph_interpreter_o *gr, uint32_t wire, const tm_the_truth_o *tt, tm_tt_id_t data_id, tm_strhash_t to_type_hash)
{
    const tm_tt_type_t type = tm_tt_type(data_id);
    const tm_strhash_t type_hash = tm_the_truth_api->type_name_hash(tt, type);
    const tm_the_truth_object_o *data_r = tm_tt_read(tt, data_id);
    if (TM_STRHASH_EQUAL(type_hash, TM_TT_TYPE_HASH__ASM_EVENT_REFERENCE) && TM_STRHASH_EQUAL(to_type_hash, TM_TT_TYPE_HASH__STRING_HASH))
    {
        const tm_tt_id_t event_id = tm_the_truth_api->get_reference(tt, data_r, 0);
        const tm_strhash_t event_name_hash = tm_the_truth_api->get_string_hash(tt, tm_tt_read(tt, event_id), TM_TT_PROP__ASM_EVENT__NAME);
        tm_strhash_t *v = (tm_strhash_t *)tm_graph_interpreter_api->write_wire(gr, wire, 1, sizeof(*v));
        *v = event_name_hash;
        return true;
    }
    //...
    return false;
}

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_global_api_registry = reg;
    tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
    tm_graph_interpreter_api = tm_get_api(reg, tm_graph_interpreter_api);
    tm_add_or_remove_implementation(reg, load, tm_graph_component_compile_data_i, compile_data_to_wire);
}

Entity Component System

This section of the book shall help you understand how to make use of our Entity Component System. Designing games with an ECS can be overwhelming, especially if you come from a more traditional, object-oriented background. This chapter will provide you with a basic understanding of what an ECS is and how you can use it!

Let us begin with the purpose of the entity system! Its purpose is to provide a flexible model for objects in a simulation that allows us to compose complex objects from simpler components in a flexible and performant way. An entity is a game object composed of components. Entities live in an entity context — an isolated world of entities. In The Machinery, each of the following tabs has its own Entity Context: Simulate Tab, Scene Tab, Preview Tab. Entities within those tabs only exist within these contexts! Each new instance of a tab has a different context! Entities are composed of components, which hold the needed data, while Engines/Systems provide the behaviour. Each context (entity context) can have a number of engines or systems registered. (ECS) Engine updates run on the subset of entities that possess some set of components.

Note: in some entity systems, these are referred to as systems instead, but we choose engine, because it is less ambiguous.

Systems, on the other hand, are just an update with access to the entity context. When we refer to a context in this chapter, we mean the entity context.

Table of Contents

What is an Entity?

An entity is the fundamental part of the Entity Component System. An entity is a handle to your data. The entity itself does not store any data or behavior. The data is stored in components, which are associated with the Entity. The behavior is defined in Systems and Engines which process those components. Therefore an entity acts as an identifier or key to the data stored in components.

Note: In this example both entities have the same set of components, but they do not own the data they just refer to it!

Entities are managed by the Entity API and exist within an Entity Context. An Entity struct refers to an entity, but is not a real reference. Rather the Entity struct contains an index used to access entity data.

What is an Entity Context?

The Entity Context is the simulation world. It contains all the Entities and Systems/Engines and owns all the Component Data. There can be multiple Entity Contexts in the Editor. For example, the Simulate Tab and the Preview Tab both have their own Entity Context. When you register a System/Engine you can decide in which contexts it shall run. The default is all contexts.

Where do Entities live? (Lifecycle)

  • Entities do not live in The Truth. The truth is for assets, not for simulation.
  • Entity data is owned by the entity context and thrown away when the entity context is destroyed.
  • Entities can be spawned from entity assets in The Truth. Multiple entities can be spawned from the same asset.
  • Changes to entity assets can be propagated into a context where those assets are spawned. This is the main way in which we will provide a “preview” of assets in a simulation context.
  • An entity always belongs to a specific entity context and entity IDs are only unique within the entity contexts. Entities can be created and deleted dynamically. When entities are deleted, the existing handles to that entity are no longer valid. Entity IDs act as weak references. If you have an ID you can ask the context whether that entity is still alive or not. tm_entity_api.is_alive()
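
As a rough illustration of this weak-reference behaviour, gameplay code can guard against stale handles before touching component data. This is a minimal sketch: the header path and the exact signature of tm_entity_api.is_alive() are assumptions and should be checked against the shipped entity.h.

#include <plugins/entity/entity.h>

static struct tm_entity_api *tm_entity_api;

// `target` is a handle stored in an earlier frame. Because entity IDs act as
// weak references, we ask the context whether the entity is still alive
// before touching any of its component data.
static void follow_target(tm_entity_context_o *ctx, tm_entity_t target)
{
    if (!tm_entity_api->is_alive(ctx, target))
        return; // The entity was deleted; the handle is stale, so do nothing.

    // ... safe to look up and modify the target's components here ...
}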

How is the data stored?

  • An entity is a 64-bit value divided into a 32-bit index and a 32-bit generation.
  • The index points to a slot where entity data is stored.
  • The generation is increased every time we recycle a slot. This allows us to detect stale entity IDs (i.e., weak referencing through is_alive()).
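
To make the index/generation split concrete, here is a purely illustrative sketch; the types and function below are hypothetical and not the actual tm_entity_t declaration.

#include <stdbool.h>
#include <stdint.h>

// Illustrative only: a 64-bit handle split into a 32-bit index and a 32-bit generation.
typedef union example_entity_t {
    uint64_t u64;
    struct {
        uint32_t index;      // Slot where the entity's data is stored.
        uint32_t generation; // Bumped every time the slot is recycled.
    };
} example_entity_t;

// A handle is alive only if the slot's current generation still matches the
// generation recorded in the handle -- otherwise the slot has been recycled.
static inline bool example_is_alive(const uint32_t *generation_by_slot, example_entity_t e)
{
    return generation_by_slot[e.index] == e.generation;
}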

What is an Entity type / Archetype?

An entity type is a unique combination of component types. The Entity API uses the entity type to group all entities that have the same set of components.

Note: In this example Entities A-B are of the same entity type while C has a different entity type!

  • An entity type is shared by all entities with a certain component mask.
  • When components are added to or removed from an entity, its entity type changes, thus its data must be copied over to the new type.
  • Pointers to component data are thus not permanent.

What are Components?

They are data, that is all they are. Designing them is the most important task you will find yourself doing in an ECS-driven game. The reason is that if you change a component, you have to update all systems that use it. This data, composed together, makes up an Entity. It can be changed at runtime, in whatever way required. This data is transformed in Systems/Engines, and therefore Systems/Engines provide the behaviour of our game based on the input/output of other Systems/Engines.

Note: Keep in mind they do not need a Truth Representation. If they do not have one, the Engine cannot display them in the Entity Tree View. This is useful for runtime only components.

  • A component is defined by tm_component_i — it consists of a fixed-size piece of POD data.
  • This data is stored in a huge buffer for each entity type, and indexed by the index.
  • In addition, a component can have a manager.
  • The manager can store additional data for the component that doesn’t fit in the POD data — such as lists, strings, buffers, etc.

You can add callbacks to the component interface which allow you to perform actions on add and remove. The general lifetime of a component is bound to the Entity Context.
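
For example, a minimal gameplay component could be nothing more than a small POD struct. This is a hypothetical example, not a type shipped with the engine:

#include <stdint.h>

// Hypothetical runtime-only component: just plain data, no behaviour.
// The behaviour lives in the engines/systems that read and write this data.
typedef struct my_health_component_t {
    float health;     // Current hit points.
    float max_health; // Upper bound used when healing.
    uint32_t team;    // Team identifier used by damage rules.
} my_health_component_t;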

It's all about the data

Data is all we have. Data is what we need to transform in order to create a user experience. Data is what we load when we open a document. Data is the graphics on the screen and the pulses from the buttons on your gamepad and the cause of your speakers and headphones producing waves in the air and the method by which you level up and how the bad guy knew where you were to shoot at you and how long the dynamite took to explode and how many rings you dropped when you fell on the spikes and the current velocity of every particle in the beautiful scene that ended the game, that was loaded off the disc and into your life. Any application is nothing without its data. Photoshop without the images is nothing. Word is nothing without the characters. Cubase is worthless without the events. All the applications that have ever been written have been written to output data based on some input data. The form of that data can be extremely complex, or so simple it requires no documentation at all, but all applications produce and need data. (Source)

Best Practice

  • Component Size: Keep them small and atomic. The main reason for this is that it improves caching performance. Besides, having a lot of small components allows for more reusability and composability! If they are atomic units of data, they are easier to reuse across projects and can be combined in more ways. The biggest disadvantage is that the larger your project gets, the harder it becomes to find the right component among many small ones.
  • Complex component data: Generally speaking you want to avoid storing complex data such as arrays or heap allocated data in a component. It is possible and sometimes not possible to avoid, but it is always good to ask yourself if it is needed.
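As a purely hypothetical illustration of the first point, a monolithic component might be split into small, atomic ones like this:

#include <stdint.h>

// Hypothetical "before": one monolithic component that mixes unrelated data.
typedef struct big_character_component_t {
    float position[3];
    float velocity[3];
    float health, max_health;
    uint32_t inventory[32];
} big_character_component_t;

// Hypothetical "after": small, atomic components that can be reused and combined.
typedef struct velocity_component_t { float velocity[3]; } velocity_component_t;
typedef struct health_component_t { float health, max_health; } health_component_t;

// Complex data such as the inventory would live in a component manager or an
// external system rather than in the POD component itself.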

What is a Component Manager?

  • Can store persistent data from the creation of the Entity Context until its destruction.

  • Can provide a way to allocate extra data when a component is added or removed (see the sketch below for how a manager is fetched).
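For example, once a component has a manager, you can fetch it from the entity context; the Tag Component chapter later in this book does exactly this. A hedged sketch, where the my_* names and the type hash are placeholders:

// Sketch: fetching a component's manager from the entity context.
// TM_TT_TYPE_HASH__MY_COMPONENT and my_component_manager_o are placeholder names.
tm_component_type_t my_component =
    tm_entity_api->lookup_component_type(ctx, TM_TT_TYPE_HASH__MY_COMPONENT);
my_component_manager_o *manager =
    (my_component_manager_o *)tm_entity_api->component_manager(ctx, my_component);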

Game Logic

The behavior is defined in Systems and Engines, which process those components. Systems and Engines can be seen as data transformation actions: they take some input (components) and process it into some output (changed component data, different rendering), and a chain of small systems together makes up your game!

What are Engines?

Note: in some entity systems, these are referred to as systems instead, but we choose engine, because it is less ambiguous.

  • An engine is an update that runs for all components matching a certain component mask.

  • Engines registered with the context run automatically on update, in parallel.

  • Parallelization is done automatically, by looking at the components that each engine reads or writes. Before running, an engine waits for the previous engines that wrote to the components that the engine is interested in.

    The following image shows what a time-based movement System might look like:

What are Systems?

  • General Update loop that has access to the Entity Context.

  • Can be used for non-component-specific interactions.

  • Can be used for serial interactions that do not touch the entity system directly (such as input).

How are Entity Assets translated to ECS Entities?

Since the Truth is an editor concept and our main data model, your scene is stored in the Truth. When you start the simulation, your assets get translated to the ECS via the load_asset() function. You can provide this function in your tm_component_i if you want your component to be translated into the ECS world. Inside it you have access to the Truth; afterwards you do not. You can also provide other callbacks for different stages of the translation process.

Important: A component's representation in The Truth does not have to match its runtime ECS representation. You can use this to split a Truth representation into smaller pieces for gameplay programming's sake, while keeping things simple for the front-end user.

Example:

You have a Movement Controller Component that is edited via the UI to determine an entity's movement speed. The actual movement system interacts with a Movement Component, which keeps track of the current speed and can be influenced by other systems, while the Movement Controller only holds the fixed, constant state (though it could still be changed by, say, a skill-update system).
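As a purely illustrative sketch (these structs are made up for this example), the split might look like this:

// Authored in the Editor and stored in The Truth: fixed configuration data.
typedef struct movement_controller_component_t {
    float max_speed; // edited by designers, constant at runtime
} movement_controller_component_t;

// Runtime-only state; it does not need a Truth representation.
typedef struct movement_component_t {
    float current_speed; // written by the movement engine and other systems
    float direction[3];
} movement_component_t;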

Child entities

  • Child entities are entities that are spawned and destroyed together with their parent.

    Note that we only store child pointers, not parent pointers. Deleting a child entity does not automatically remove it from its parent; it will remain in the parent as a dead pointer.

How can we implement interaction between entities?

There are two problems in an ECS (Entity Component System) regarding the interaction between Entities: The read and the write access.

The truth about the interaction between Entities is that interactions do not genuinely exist. They are hidden beneath the implementation of the underlying relationship. A relationship is then nothing else than the transformation of data.

To choose the right tool for creating those transformations, we need to reason about our code (and what we want to achieve) and ask ourselves the following questions:

  • On what data do we operate?
  • What is our domain?
  • What is the possible input for our transformation?
  • What is the usage frequency of the data?
  • What are we actually transforming?
  • What could our algorithm look like?
  • How often do we perform our transformation?

For infrequent read access we can simply use tm_entity_api.get_component(). It gives us direct access to the underlying data for a given entity. It is not recommended for frequent read access, because it is quite slow: it performs a random memory access. Again, the frequency of the operation and the number of target entities are what matter when choosing the right tool.

Here, a System is a better fit than an Engine, since a System does not run in parallel and provides access to the Entity Context.
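As a rough, hedged sketch of that kind of infrequent read access: the quest component and its flag are made up for this example, and get_component()'s exact signature is assumed here.

// Sketch: infrequent read access, e.g. a quest system asking whether a specific
// entity has finished all of its tasks. quest_component_t and all_tasks_completed
// are hypothetical; get_component()'s signature is assumed.
static bool quest_completed(struct tm_entity_context_o *ctx, tm_entity_t quest_giver,
    tm_component_type_t quest_component)
{
    const quest_component_t *q =
        tm_entity_api->get_component(ctx, quest_giver, quest_component);
    return q && q->all_tasks_completed;
}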

The problem

When creating interactions between entities, we mainly face two types of problems:

  1. Read Access: It means we have to read specific properties from a particular entity (object) and react based on this. In terms of games: An Actor needs to query/know some information from another part of the game. For example, within a Quest System: Have all tasks been completed?
  2. Write access: It means we have to write specific properties to a particular entity (object).

The transformation from Interaction towards Relationships

To start this transformation, we should have a quick look at the first principle of Data-Oriented Design:

Data is not the problem domain. For some, it would seem that data-oriented design is the antithesis of most other programming paradigms because data-oriented design is a technique that does not readily allow the problem domain to enter into the software so readily. It does not recognize the concept of an object in any way, as data is consistent without meaning […] The data-oriented design approach doesn’t build the real-world problem into the code. This could be seen as a failure of the data-oriented approach by veteran object-oriented developers, as many examples of the success of object-oriented design come from being able to bring human concepts to the machine. In this middle ground, a solution can be written in this language that is understandable by both humans and computers. The data-oriented approach gives up some of the human readability by leaving the problem domain in the design document but stops the machine from having to handle human concepts at any level by just that same action — Data Oriented Design Book Chapter 1.2

This principle helps us recognize that interactions do not truly exist. They hide the implementation of the underlying relationship. A relationship is nothing else than a transformation of data. In the case of an ECS, the Entity Manager (In our case, Entity Context) can be seen as a database and the Entity as a Lookup table key that indexes relationships between components.

The systems (or engines) are just there to interpret those relationships and give them meaning. Therefore, a system or engine should only do one job and do it well.

Systems/Engines perform transformations of data. This understanding allows us to create generic systems which are decoupled and easy to reuse, and as such, we should keep the following in mind:

One of the main design goals for Data-Oriented Design-driven applications is to focus on reusability through decoupling whenever possible.

Thus, the Unix philosophy, "Write programs that do one thing and do it well. Write programs to work together" (McIlroy), is a good way of expressing what a system/engine should do.

Most ECSs are built with the idea of relationships in mind. When writing systems/engines, we transform data from one state to another to give the data meaning; systems/engines therefore define the purpose of the data relationships. This decoupling provides the flexibility we need to design complex software such as video games.

With such a design, we can modify behavior later on without breaking any dependencies.

For example:

You have a movement engine that was initially designed for the Player. Later on, you want to reuse it for all entities that have a Movement Controller component. That component contains the data provided by the Input System, such as which keys have been pressed, so an AI system can just as well feed the same data for any other unit that has a Movement Controller component (but is not the Player). The Movement Engine does not care where the data comes from or who owns it, as long as it is present together with the other needed components (e.g. the Physics Mover or Transform).

How do we design Systems?

To implement the before-mentioned relationships, we have to undertake a couple of steps.

These steps are also interesting for programmers who design gameplay systems: having them fleshed out while designing game mechanics can speed up your work.

We have to ask the following questions:

1. What data transformations are we going to do and on which data?

This question should lead to “what components do we need to create this relationship?” We should always be able to give a reason why we need this data.

2. What is our possible domain? (What kind of inputs do we have?)

When we figure this out, we can make the right decision later. Also, we can reason about our code and how to implement these relationships.

3. How often does the data change?

To determine how often we change the data, we go through component by component and discuss how often we change it. This process is vital to pick the right tool. Knowing those numbers or tendencies is great for reasoning about possible performance bottlenecks and where we could apply optimizations.

4. What are we actually transforming?

Writing down the algorithm (in code or on paper), or the constraints of what we are actually doing with our data, is a great help. To pick the right tool based on the planned algorithm, we need to consider the cost of our algorithm.

What does cost mean? It can mean anything from runtime costs to implementation costs. It is essential to first establish what the proper criteria are; the costs, in the end, enable us to reason about the code.

To pick the right tool, we need to reason about what an algorithm costs us. If we take runtime performance as a measurement, it is okay to have a slow algorithm as long as we do not execute it frequently. If that is not the case, you should consider another solution.

5. How often do we execute the algorithm/transformation?

Based on the information we already have about the data we need for the transformation, it is fairly easy to determine the execution frequency. The total number of entities/objects is known at this point (it may be an estimate), so we can guess how often this might run. Keep in mind that we previously discussed how often we expect the data to change. This leads to transparency, which gives a good idea of the cost of this code.

Keep in mind that the main goal is to keep things simple: a System/Engine should do one job. The combination of components defines the data type of the entity, and the combination of Systems/Engines defines the actual game behavior. You do not need to write diagrams, blueprints or pseudo-code; you may be able to just write the engine in one go. It is, however, recommended to go through these steps, even if only in your head, before you write your system.

IMPORTANT: When the data changes, the problem changes. Therefore, we have to re-evaluate the possible outcome with the steps described above and maybe change the implementation.

How to design Systems & Engines

In the Machinery, you provide the behavior of your gameplay code via Engines and Systems. The difference between them is that Engines operate on an explicitly defined subset of components, while Systems only give you access to the Entity Context.

Note: Unsure what a System or an Engine is? Please read here

This separation means that Engines are better suited for high-frequency operations on many entities, while Systems are better suited for broader operations, such as handling input, that only touch a few or single entities.

Documentation: The difference between engines and systems is that engines are fed component data, whereas systems are not. Thus, systems are useful when the data is stored externally from the components (for example to update a physics simulation), whereas engines are more efficient when the data is stored in the components. (You could use a system to update data in components, but it would be inefficient, because you would have to perform a lot of lookups to access the component data.)

These are a couple of questions you should ask yourself in advance.

  • On what data do we operate?
  • What is our domain?
  • What is the possible input for our transformation?
  • What is the usage frequency of the data?
  • What are we actually transforming?
  • What could our algorithm look like?
  • How often do we perform our transformation?

For more details about those questions, see How entities can interact.

At the end of this, you should be able to answer the following questions:

  • What kind of data am I going to read?
  • What kind of data am I going to write?
  • Should my operation be exclusive? Hence not to be executed in parallel?
  • In which phase does it run?
  • What dependencies do I have?

Those answers are important for the automatic scheduling of the Systems/Engines. Based on all those inputs, the Entity System can determine when and how to schedule what.

Best Practice

  • System/Engine Scope: Systems should be designed to have one job only. This can be difficult at times, especially when designing new features, so it is fine to first create a bigger system and then make it smaller over time. If you find that your engine/system does a lot of things, don't worry: in an ECS things are decoupled from everything else, so it is generally pretty easy to split them up into smaller units. This increases the reusability of your systems.
  • System/Engine Scheduling: Always provide a write list and a list of components your Engine/System is operating on. This is important so the scheduler can do its best! Also do not forget to make use of .before_me, .after_me and .phase; more about this in the next chapter!

Example

const tm_engine_i movement_engine = {
    .ui_name = "movement_engine",
    .hash = TM_STATIC_HASH("movement_engine", 0x336880a23d06646dULL),
    .num_components = 4,
    .components = {keyboard_component, movement_component, transform_component,
                   mover_component},
    .writes = {false, false, true, true},
    .update = movement_update,
    .inst = (tm_engine_o *)ctx,
};
tm_entity_api->register_engine(ctx, &movement_engine);

This movement engine will operate on:

  • keyboard_component
  • movement_component
  • transform_component
  • mover_component

components. The scheduler can now look for those components in other engines and determine, based on the .writes field, how to schedule it efficiently.

In this example, the scheduler can run another engine at the same time as this one, as long as that engine only reads the keyboard and movement components and does not touch the transform and mover components.
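For instance, a hypothetical engine like the following could run in parallel with movement_engine, because it only reads the keyboard and movement components and never writes to transform or mover (names and the hash value are placeholders):

// Hypothetical engine, shown only to illustrate the scheduling rule above.
const tm_engine_i debug_hud_engine = {
    .ui_name = "debug_hud_engine",
    .hash = TM_STATIC_HASH("debug_hud_engine", 0x1111111111111111ULL), // placeholder hash
    .num_components = 2,
    .components = {keyboard_component, movement_component},
    .writes = {false, false},
    .update = debug_hud_update,
    .inst = (tm_engine_o *)ctx,
};
tm_entity_api->register_engine(ctx, &debug_hud_engine);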

What is next?

More details on writing your own system or engine are explained in the next chapter.

Defining Systems and Engines

You have to pass a tm_entity_system_i or tm_engine_i instance to your register function.


Ask yourself those questions before you design an Engine / System

The following questions are better explained in the chapter: How entities can interact.

  • On what data do we operate?
  • What is our domain?
  • What is the possible input for our transformation?
  • What is the usage frequency of the data?
  • What are we actually transforming?
  • What could our algorithm look like?
  • How often do we perform our transformation?

Then answer the following questions:

  • What kind of data am I going to read?
  • What kind of data am I going to write?
  • What kind of data do I want to ignore? (only important for engines)
  • Should my operation be exclusive? Hence not to be executed in parallel?
  • In which phase does it run?
  • What dependencies do I have?

Now it is time to define the dependencies / important items for scheduling.

How do those questions translate?

What kind of data am I going to read? && What kind of data am I going to write?

They translate to .writes and .components. With those fields, we tell the scheduler which components this system operates on: which components it intends to read from and which ones it writes to.

What kind of data do I want to ignore? (only important for engines)

In the tm_engine_i you can provide a way to filter your components and thus decide on which entity types the engine shall run. The .excluded field is used for this: in it you define which components an entity type shall not have. When the engine is scheduled, all entity types containing those components are ignored.

For more information see Tagging Entities and Filtering Entities

Should my operation be exclusive? Hence not to be executed in parallel?

If we are sure that our system/engine should not run parallel, we need to tell the scheduler by setting the .exclusive flag to true. It will not run in parallel with any other systems or engines in the entity context. If it is false then the components and writes will be used to determine parallelism.

In which phase does it run?

We can set the .phase field to tell the system in which phase we want our operation to run.

What dependencies do I have?

We define dependencies by setting .before_me and .after_me. We just pass the string hash of the other engine/system, and the scheduler does the rest.
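Putting the scheduling fields together, a hypothetical engine declaration might look like the sketch below. All names and hash values are placeholders, the use of .phase and .after_me on tm_engine_i is assumed from the descriptions in this chapter, and the remaining fields are filled in the same way as in the earlier examples.

// Sketch: scheduling hints on an engine definition (placeholder names and hashes).
const tm_engine_i my_engine = {
    .ui_name = "my_engine",
    .hash = TM_STATIC_HASH("my_engine", 0xAAAAAAAAAAAAAAAAULL),
    .num_components = 1,
    .components = {my_component},
    .writes = {true},
    .update = my_engine_update,
    .inst = (tm_engine_o *)ctx,
    // Scheduling hints:
    .phase = TM_PHASE__PHYSICS, // run as part of the physics phase
    .after_me[0] = TM_STATIC_HASH("other_engine", 0xBBBBBBBBBBBBBBBBULL),
};
tm_entity_api->register_engine(ctx, &my_engine);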

What is next?

In the next chapter we translate this to actual code!

Registering a System or an Engine

To register Systems/Engines, you need to provide a register function to the tm_entity_register_engines_simulation_i interface. This function has the signature:

static void entity_register_engines_i(struct tm_entity_context_o *ctx)

For more information check the tm_entity_register_engines_simulation_i .

Whenever the Machinery creates an Entity Context, it calls this function and registers all your Systems / Engines to this context.

The Entity context is the world in which all your entities exist.

For Systems, you pass an instance of the tm_entity_system_i to the register function.

// example:
const tm_entity_system_i winning_system = {
    .ui_name = "winning_system_update",
    .hash = TM_STATIC_HASH("winning_system_update", 0x8f8676e599ca5c7aULL),
    .update = winning_system_update,
    .before_me[0] =
        TM_STATIC_HASH("maze_generation_system", 0x7f1fcbd9ee85c3cfULL),
    .exclusive = true,
};
tm_entity_api->register_system(ctx, &winning_system);

For Engines, you pass an instance of the tm_engine_i to the register function.

static void entity_register_engines_i(struct tm_entity_context_o *ctx) {

  tm_component_type_t keyboard_component = tm_entity_api->lookup_component_type(
      ctx, TM_TT_TYPE_HASH__KEYBOARD_COMPONENT);
  tm_component_type_t movement_component = tm_entity_api->lookup_component_type(
      ctx, TM_TT_TYPE_HASH__MOVEMENT_COMPONENT);
  tm_component_type_t transform_component =
      tm_entity_api->lookup_component_type(
          ctx, TM_TT_TYPE_HASH__TRANSFORM_COMPONENT);
  tm_component_type_t mover_component = tm_entity_api->lookup_component_type(
      ctx, TM_TT_TYPE_HASH__MOVER_COMPONENT);
  const tm_engine_i movement_engine = {
      .ui_name = "movement_engine",
      .hash = TM_STATIC_HASH("movement_engine", 0x336880a23d06646dULL),
      .num_components = 4,
      .components = {keyboard_component, movement_component,
                     transform_component, mover_component},
      .writes = {false, false, true, true},
      .update = movement_update,
      .inst = (tm_engine_o *)ctx,
  };
  tm_entity_api->register_engine(ctx, &movement_engine);
}

In the system example above, the scheduler will schedule the winning_system after the maze_generation_system. Since we did not provide any information in .writes or .components, the scheduler has nothing else to work with; in that case it is best not to write to any component data from the system (or, as in the example, to mark it as .exclusive).

Example load function:

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) {
  tm_entity_api = tm_get_api(reg, tm_entity_api);
  tm_add_or_remove_implementation(reg, load,
                                  tm_entity_register_engines_simulation_i,
                                  &entity_register_engines_i);
  tm_add_or_remove_implementation(reg, load,
                                  tm_entity_register_engines_simulation_i,
                                  &register_or_system_engine);
}

Register your system or engine to the Editor

You can use the tm_entity_register_engines_editor_i interface to register your engine or system to an entity context that runs only in the Editor. This can be useful for components that shall only be used in the Editor.

The function signature is the same as for the other interface!

Register systems & engines outside of the load function

You also can register your System/Engine outside of the load function wherever you have access to the correct Entity Context.

Write a custom component

This walkthrough shows you how to add a custom component to the Engine. During this walkthrough, we will cover the following topics:

  • How to create a component from scratch.
  • Where and how do we register a component.

You should have basic knowledge about how to write a custom plugin. If not, you might want to check this Guide and the Write a plugin guide. The goal of this walkthrough is to dissect the component plugin provided by the Engine.


Where do we start?

In this example, we want to create a new plugin which contains our component. We open the Engine and go to File -> New Plugin -> Entity Component. A file dialog will pop up and ask where we want to save our file. Pick a location that suits you.

Tip: Maybe store your plugin in a folder next to your game project.

After this, we see that the Engine has created some files for us. Now we need to make sure that we can build our project. In the root folder (the folder with the premake file), we run tmbuild; if there is no issue, it will build our project once and generate the .sln file (on Windows). If there is an issue, we should make sure we have set up the environment variables correctly and installed all the needed dependencies. For more information, please read this guide.

Now we can open the .c file with our favourite IDE. The file will contain the following content:

static struct tm_entity_api *tm_entity_api;
static struct tm_transform_component_api *tm_transform_component_api;
static struct tm_temp_allocator_api *tm_temp_allocator_api;
static struct tm_the_truth_api *tm_the_truth_api;
static struct tm_localizer_api *tm_localizer_api;

#include <plugins/entity/entity.h>
#include <plugins/entity/transform_component.h>
#include <plugins/the_machinery_shared/component_interfaces/editor_ui_interface.h>

#include <foundation/api_registry.h>
#include <foundation/carray.inl>
#include <foundation/localizer.h>
#include <foundation/math.inl>
#include <foundation/the_truth.h>
#define TM_TT_TYPE__CUSTOM_COMPONENT "tm_custom_component"
#define TM_TT_TYPE_HASH__CUSTOM_COMPONENT TM_STATIC_HASH("tm_custom_component", 0x355309758b21930cULL)

enum
{
    TM_TT_PROP__CUSTOM_COMPONENT__FREQUENCY, // float
    TM_TT_PROP__CUSTOM_COMPONENT__AMPLITUDE, // float
};

struct tm_custom_component_t
{
    float y0;
    float frequency;
    float amplitude;
};
static const char *component__category(void)
{
    return TM_LOCALIZE("Samples");
}

static tm_ci_editor_ui_i *editor_aspect = &(tm_ci_editor_ui_i){
    .category = component__category};
static void truth__create_types(struct tm_the_truth_o *tt)
{
    tm_the_truth_property_definition_t custom_component_properties[] = {
        [TM_TT_PROP__CUSTOM_COMPONENT__FREQUENCY] = {"frequency", TM_THE_TRUTH_PROPERTY_TYPE_FLOAT},
        [TM_TT_PROP__CUSTOM_COMPONENT__AMPLITUDE] = {"amplitude", TM_THE_TRUTH_PROPERTY_TYPE_FLOAT},
    };

    const tm_tt_type_t custom_component_type = tm_the_truth_api->create_object_type(tt, TM_TT_TYPE__CUSTOM_COMPONENT, custom_component_properties, TM_ARRAY_COUNT(custom_component_properties));
    const tm_tt_id_t default_object = tm_the_truth_api->quick_create_object(tt, TM_TT_NO_UNDO_SCOPE, TM_TT_TYPE_HASH__CUSTOM_COMPONENT, TM_TT_PROP__CUSTOM_COMPONENT__FREQUENCY, 1.0f, TM_TT_PROP__CUSTOM_COMPONENT__AMPLITUDE, 1.0f, -1);
    tm_the_truth_api->set_default_object(tt, custom_component_type, default_object);
    tm_tt_set_aspect(tt, custom_component_type, tm_ci_editor_ui_i, editor_aspect);
}
static bool component__load_asset(tm_component_manager_o *man, struct tm_entity_commands_o *commands, tm_entity_t e, void *c_vp, const tm_the_truth_o *tt, tm_tt_id_t asset)
{
    struct tm_custom_component_t *c = c_vp;
    const tm_the_truth_object_o *asset_r = tm_tt_read(tt, asset);
    c->y0 = 0;
    c->frequency = tm_the_truth_api->get_float(tt, asset_r, TM_TT_PROP__CUSTOM_COMPONENT__FREQUENCY);
    c->amplitude = tm_the_truth_api->get_float(tt, asset_r, TM_TT_PROP__CUSTOM_COMPONENT__AMPLITUDE);
    return true;
}
static void component__create(struct tm_entity_context_o *ctx)
{
    tm_component_i component = {
        .name = TM_TT_TYPE__CUSTOM_COMPONENT,
        .bytes = sizeof(struct tm_custom_component_t),
        .load_asset = component__load_asset,
    };

    tm_entity_api->register_component(ctx, &component);
}
// Runs on (custom_component, transform_component)
static void engine_update__custom_component(tm_engine_o *inst, tm_engine_update_set_t *data, struct tm_entity_commands_o *commands)
{
    TM_INIT_TEMP_ALLOCATOR(ta);

    tm_entity_t *mod_transform = 0;

    struct tm_entity_context_o *ctx = (struct tm_entity_context_o *)inst;
    double t = 0;
    for (const tm_entity_blackboard_value_t *bb = data->blackboard_start; bb != data->blackboard_end; ++bb)
    {
        if (TM_STRHASH_EQUAL(bb->id, TM_ENTITY_BB__TIME))
            t = bb->double_value;
    }
    for (tm_engine_update_array_t *a = data->arrays; a < data->arrays + data->num_arrays; ++a)
    {
        struct tm_custom_component_t *custom_component = a->components[0];
        tm_transform_component_t *transform = a->components[1];

        for (uint32_t i = 0; i < a->n; ++i)
        {
            if (!custom_component[i].y0)
                custom_component[i].y0 = transform[i].world.pos.y;
            const float y = custom_component[i].y0 + custom_component[i].amplitude * sinf((float)t * custom_component[i].frequency);

            transform[i].world.pos.y = y;
            ++transform[i].version;
            tm_carray_temp_push(mod_transform, a->entities[i], ta);
        }
    }
    tm_entity_api->notify(ctx, data->engine->components[1], mod_transform, (uint32_t)tm_carray_size(mod_transform));
    TM_SHUTDOWN_TEMP_ALLOCATOR(ta);
}
static bool engine_filter__custom_component(tm_engine_o *inst, const tm_component_type_t *components, uint32_t num_components, const tm_component_mask_t *mask)
{
    return tm_entity_mask_has_component(mask, components[0]) && tm_entity_mask_has_component(mask, components[1]);
}
static void component__register_engine(struct tm_entity_context_o *ctx)
{
    const tm_component_type_t custom_component = tm_entity_api->lookup_component_type(ctx, TM_TT_TYPE_HASH__CUSTOM_COMPONENT);
    const tm_component_type_t transform_component = tm_entity_api->lookup_component_type(ctx, TM_TT_TYPE_HASH__TRANSFORM_COMPONENT);

    const tm_engine_i custom_component_engine = {
        .ui_name = "Custom Component",
        .hash = TM_STATIC_HASH("CUSTOM_COMPONENT", 0xe093a8316a6c2d29ULL),
        .num_components = 2,
        .components = {custom_component, transform_component},
        .writes = {false, true},
        .update = engine_update__custom_component,
        .filter = engine_filter__custom_component,
        .inst = (tm_engine_o *)ctx,
    };
    tm_entity_api->register_engine(ctx, &custom_component_engine);
}

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_entity_api = tm_get_api(reg, tm_entity_api);
    tm_transform_component_api = tm_get_api(reg, tm_transform_component_api);
    tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
    tm_temp_allocator_api = tm_get_api(reg, tm_temp_allocator_api);
    tm_localizer_api = tm_get_api(reg, tm_localizer_api);

    tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i, truth__create_types);
    tm_add_or_remove_implementation(reg, load, tm_entity_create_component_i, component__create);
    tm_add_or_remove_implementation(reg, load, tm_entity_register_engines_simulation_i, component__register_engine);
}

Code structure

Let us dissect the code structure and discuss all the points of interest.

API and include region

The file begins with all includes and API definitions:

static struct tm_entity_api *tm_entity_api;
static struct tm_transform_component_api *tm_transform_component_api;
static struct tm_temp_allocator_api *tm_temp_allocator_api;
static struct tm_the_truth_api *tm_the_truth_api;
static struct tm_localizer_api *tm_localizer_api;

#include <plugins/entity/entity.h>
#include <plugins/entity/transform_component.h>
#include <plugins/the_machinery_shared/component_interfaces/editor_ui_interface.h>

#include <foundation/api_registry.h>
#include <foundation/carray.inl>
#include <foundation/localizer.h>
#include <foundation/math.inl>
#include <foundation/the_truth.h>

The tm_load_plugin function at the bottom of the file will fill in these API pointers.

Define your Data

The next part contains the Truth definition of the component and the plain old data (POD) struct. In production, we should separate those aspects into a header file!

Note: All components should be plain old data types.

#define TM_TT_TYPE__CUSTOM_COMPONENT "tm_custom_component"
#define TM_TT_TYPE_HASH__CUSTOM_COMPONENT                                      \
  TM_STATIC_HASH("tm_custom_component", 0x355309758b21930cULL)

enum {
  TM_TT_PROP__CUSTOM_COMPONENT__FREQUENCY, // float
  TM_TT_PROP__CUSTOM_COMPONENT__AMPLITUDE, // float
};

struct tm_custom_component_t {
  float y0;
  float frequency;
  float amplitude;
};

Add your component to the Truth

After this, we have the region in which we define the category of our component. The Editor calls this function to sort the component into the correct section.

We need to define a tm_ci_editor_ui_i object which uses this function. Later, we register this object as the tm_ci_editor_ui_i aspect of our Truth type. If you do not add this aspect to your Truth type, the Editor will not know that this component type exists, and you will not be able to add it via the Editor, only via C.

Note: You can read more about aspects in the aspects guide.

static const char *component__category(void) { return TM_LOCALIZE("Samples"); }

static tm_ci_editor_ui_i *editor_aspect =
    &(tm_ci_editor_ui_i){.category = component__category};

In this region, we create our component's Truth type. It is important to remember that the Truth does not reflect the runtime data, just the data you can edit in the Editor. The Entity Context, on the other hand, stores your runtime data: the plain old data struct you defined above. More about how this works later in this section.

Let us take this code apart one more time:

static void truth__create_types(struct tm_the_truth_o *tt) {
  tm_the_truth_property_definition_t custom_component_properties[] = {
      [TM_TT_PROP__CUSTOM_COMPONENT__FREQUENCY] =
          {"frequency", TM_THE_TRUTH_PROPERTY_TYPE_FLOAT},
      [TM_TT_PROP__CUSTOM_COMPONENT__AMPLITUDE] =
          {"amplitude", TM_THE_TRUTH_PROPERTY_TYPE_FLOAT},
  };

  const tm_tt_type_t custom_component_type =
      tm_the_truth_api->create_object_type(
          tt, TM_TT_TYPE__CUSTOM_COMPONENT, custom_component_properties,
          TM_ARRAY_COUNT(custom_component_properties));
  const tm_tt_id_t default_object = tm_the_truth_api->quick_create_object(
      tt, TM_TT_NO_UNDO_SCOPE, TM_TT_TYPE_HASH__CUSTOM_COMPONENT,
      TM_TT_PROP__CUSTOM_COMPONENT__FREQUENCY, 1.0f,
      TM_TT_PROP__CUSTOM_COMPONENT__AMPLITUDE, 1.0f, -1);
  tm_the_truth_api->set_default_object(tt, custom_component_type,
                                       default_object);
  tm_tt_set_aspect(tt, custom_component_type, tm_ci_editor_ui_i, editor_aspect);
}
  1. We define the component's properties.
  2. We create the actual type in the Truth.
  3. We create an object of our type with quick_create_object and set it as the default object of our component type. This makes sure that when you add the component to an entity, you get the expected default values. It is not strictly needed, just a nice thing to have.
  4. We add our tm_ci_editor_ui_i aspect to the type. It tells the Editor that the component can be added via the Editor. If you do not provide it, the Editor will not suggest this component to you and it cannot be stored in the Truth via the Editor; you can still add the component via C.

Define your component

You can register a component to the tm_entity_create_component_i in your plugin load function. This interface expects a function pointer to a create component function of the signature: void tm_entity_create_component_i(struct tm_entity_context_o *ctx).

The Engine will call this function whenever it creates a new Entity Context to populate the context with all the known components. It usually happens at the beginning of the Simulation.

Within this function, you can define your component and register it to the context. The tm_entity_api provides a function tm_entity_api.register_component() which expects the current context and an instance of the tm_component_i. We define one in our function and give it the needed information:

  • A name should be the same as the Truth Type
  • The size of the component struct
  • A load_asset function

static void component__create(struct tm_entity_context_o *ctx) {
  tm_component_i component = {
      .name = TM_TT_TYPE__CUSTOM_COMPONENT,
      .bytes = sizeof(struct tm_custom_component_t),
      .load_asset = component__load_asset,
  };

  tm_entity_api->register_component(ctx, &component);
}

As mentioned before, the Truth does not reflect the runtime data and only holds the data you can edit in the Editor. This is why there needs to be some translation between The Truth and the ECS. This magic is happening in the tm_component_i.load_asset(). This function allows you to translate a tm_tt_id_t asset to the plain old data of the component.

static bool component__load_asset(tm_component_manager_o *man,
                                  struct tm_entity_commands_o *commands,
                                  tm_entity_t e, void *c_vp,
                                  const tm_the_truth_o *tt, tm_tt_id_t asset) {
  struct tm_custom_component_t *c = c_vp;
  const tm_the_truth_object_o *asset_r = tm_tt_read(tt, asset);
  c->y0 = 0;
  c->frequency = tm_the_truth_api->get_float(
      tt, asset_r, TM_TT_PROP__CUSTOM_COMPONENT__FREQUENCY);
  c->amplitude = tm_the_truth_api->get_float(
      tt, asset_r, TM_TT_PROP__CUSTOM_COMPONENT__AMPLITUDE);
  return true;
}

The first step is to cast the given void * of the component data (c_vp) to the correct data type. After that, we load the data from the Truth and store it in the component. At the end, we return true because no error occurred.

Define your engine update

In the Machinery, gameplay code is mainly driven by Systems and Engines: they define the behaviour, while the components describe the data.

Note: in some entity systems, these are referred to as systems instead, but we choose Engine because it is less ambiguous.

This next section of the code is about defining an Engine.

// Runs on (custom_component, transform_component)
static void
engine_update__custom_component(tm_engine_o *inst, tm_engine_update_set_t *data,
                                struct tm_entity_commands_o *commands) {
  TM_INIT_TEMP_ALLOCATOR(ta);

  tm_entity_t *mod_transform = 0;

  struct tm_entity_context_o *ctx = (struct tm_entity_context_o *)inst;
  double t = 0;
  for (const tm_entity_blackboard_value_t *bb = data->blackboard_start;
       bb != data->blackboard_end; ++bb) {
    if (TM_STRHASH_EQUAL(bb->id, TM_ENTITY_BB__TIME))
      t = bb->double_value;
  }
  for (tm_engine_update_array_t *a = data->arrays;
       a < data->arrays + data->num_arrays; ++a) {
    struct tm_custom_component_t *custom_component = a->components[0];
    tm_transform_component_t *transform = a->components[1];

    for (uint32_t i = 0; i < a->n; ++i) {
      if (!custom_component[i].y0)
        custom_component[i].y0 = transform[i].world.pos.y;
      const float y = custom_component[i].y0 +
                      custom_component[i].amplitude *
                          sinf((float)t * custom_component[i].frequency);

      transform[i].world.pos.y = y;
      ++transform[i].version;
      tm_carray_temp_push(mod_transform, a->entities[i], ta);
    }
  }
  tm_entity_api->notify(ctx, data->engine->components[1], mod_transform,
                        (uint32_t)tm_carray_size(mod_transform));
  TM_SHUTDOWN_TEMP_ALLOCATOR(ta);
}

The first thing we do is set up a temp allocator for any allocation that will not outlive this function. After that, we cast the tm_engine_o *inst to a tm_entity_context_o * so that we have access to the entity context later on.

The next step is to get the time from the Blackboard Values.

double t = 0;
for (const tm_entity_blackboard_value_t *bb = data->blackboard_start;
     bb != data->blackboard_end; ++bb) {
  if (TM_STRHASH_EQUAL(bb->id, TM_ENTITY_BB__TIME))
    t = bb->double_value;
}

The Engine provides a bunch of useful Blackboard values. They are defined in the plugins/entity/entity.h.

  • TM_ENTITY_BB__SIMULATION_SPEED - Speed that the simulation is running at. Defaults to 1.0 for normal speed.
  • TM_ENTITY_BB__DELTA_TIME - Blackboard item representing the simulation delta time of the current frame. (See the sketch after this list.)
  • TM_ENTITY_BB__TIME - Blackboard item representing the total elapsed time in the Simulation.
  • TM_ENTITY_BB__WALL_DELTA_TIME - Blackboard item representing the wall delta time of the current frame. (Wall delta time is not affected by the Simulation being paused or run in slow motion.)
  • TM_ENTITY_BB__WALL_TIME - Blackboard item representing the total elapsed wall time in the Simulation.
  • TM_ENTITY_BB__CAMERA - Blackboard items for the current camera.
  • TM_ENTITY_BB__EDITOR - Blackboard item that indicates that we are running in Editor mode. This may disable some components and/or simulation engines.
  • TM_ENTITY_BB__SIMULATING_IN_EDITOR - Set to non-zero if the Simulation runs from within the Editor, such as running a game in the simulation tab. It will be zero when we run a game from the Runner. Note the distinction from TM_ENTITY_BB__EDITOR.

The tm_engine_update_set_t gives us access to the needed data so that we can modify our components. The first important piece of information is the number of entity types (also known as archetypes), stored in data->num_arrays. Knowing this, we can iterate over the arrays and access the components per entity type: tm_engine_update_array_t *a = data->arrays gives us the current entity type's component arrays, and a->n is the number of matching entities of that entity type.

for (tm_engine_update_array_t *a = data->arrays;
     a < data->arrays + data->num_arrays; ++a) {
  struct tm_custom_component_t *custom_component = a->components[0];
  tm_transform_component_t *transform = a->components[1];

  for (uint32_t i = 0; i < a->n; ++i) {
    if (!custom_component[i].y0)
      custom_component[i].y0 = transform[i].world.pos.y;
    const float y = custom_component[i].y0 +
                    custom_component[i].amplitude *
                        sinf((float)t * custom_component[i].frequency);

    transform[i].world.pos.y = y;
    ++transform[i].version;
    tm_carray_temp_push(mod_transform, a->entities[i], ta);
  }
}

Note: In case you are not that familiar with C, this loop:

 for (tm_engine_update_array_t* a = data->arrays; a < data->arrays + data->num_arrays; ++a) {

is roughly the C equivalent of C++'s range-based for loop: for (auto a : data->arrays).

As the last step, we call tm_entity_api->notify() to signal that the transform components of these entities have changed.

tm_entity_api->notify(ctx, data->engine->components[1], mod_transform,
                      (uint32_t)tm_carray_size(mod_transform));

Register your Engine to the system

You register your engine to the tm_entity_register_engines_simulation_i interface in your plugin load function. This interface expects a function pointer to a register-engines function with the signature: void tm_entity_register_engines_i(struct tm_entity_context_o *ctx).

The function itself looks as follows:

static void component__register_engine(struct tm_entity_context_o *ctx) {
  const tm_component_type_t custom_component =
      tm_entity_api->lookup_component_type(ctx,
                                           TM_TT_TYPE_HASH__CUSTOM_COMPONENT);
  const tm_component_type_t transform_component =
      tm_entity_api->lookup_component_type(
          ctx, TM_TT_TYPE_HASH__TRANSFORM_COMPONENT);

  const tm_engine_i custom_component_engine = {
      .ui_name = "Custom Component",
      .hash = TM_STATIC_HASH("CUSTOM_COMPONENT", 0xe093a8316a6c2d29ULL),
      .num_components = 2,
      .components = {custom_component, transform_component},
      .writes = {false, true},
      .update = engine_update__custom_component,
      .filter = engine_filter__custom_component,
      .inst = (tm_engine_o *)ctx,
  };
  tm_entity_api->register_engine(ctx, &custom_component_engine);
}

The first thing we do is look up the component type. Did we register the type? If not, we will not get the correct type back. Here we use the name we defined earlier in our component create function.

Then we look up the transform component, because our Engine shall run on those two components.

After this, we define the actual instance of our engine struct, tm_engine_i.

We provide a .ui_name, used in the Profiler to identify our Engine. Moreover, we add a unique string hash identifying this engine/system. It is used for scheduling the engine/system relative to other engines and systems, via the before_me and after_me fields.

Then we tell the system how many components the Engine shall operate on and which ones we will modify. This is used for scheduling the engines later on.

At last, we provide the needed update function, which we have discussed earlier, and a filter function.

static bool engine_filter__custom_component(
    tm_engine_o *inst, const tm_component_type_t *components,
    uint32_t num_components, const tm_component_mask_t *mask) {
  return tm_entity_mask_has_component(mask, components[0]) &&
         tm_entity_mask_has_component(mask, components[1]);
}

The filter function is called for each entity type (as represented by its component mask) to determine whether the Engine should run on that entity type. Providing this function is optional. If no tm_engine_i.filter() function is supplied and no excludes[] flags are set, the update will run on all entity types that have all the components in the components array. If some excludes[] flags are set, the Engine will run on all entity types that do not have any of the components whose excludes[] flags are set, but have all the other components in the components array.

Note: For more information, check the documentation.

The last thing the register function needs to do is register the Engine to the Entity Context.

 tm_entity_api->register_engine(ctx, &custom_component_engine);

The plugin load function

The most important lines here are the ones in which we register our Truth types, the component, and the engine.

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) {
  tm_entity_api = tm_get_api(reg, tm_entity_api);
  tm_transform_component_api = tm_get_api(reg, tm_transform_component_api);
  tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
  tm_temp_allocator_api = tm_get_api(reg, tm_temp_allocator_api);
  tm_localizer_api = tm_get_api(reg, tm_localizer_api);

  tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i,
                                  truth__create_types);
  tm_add_or_remove_implementation(reg, load, tm_entity_create_component_i,
                                  component__create);
  tm_add_or_remove_implementation(reg, load,
                                  tm_entity_register_engines_simulation_i,
                                  component__register_engine);
}

Tagging Entities

The Machinery knows two ways to tag entities:

  1. using the Tag Component
  2. using a Tag Component to filter the Entity Type


Using the Tag Component

The first solution works via the tag_component_api: you can add tags via the Editor to any entity that has a Tag Component, and later access the tagged entity from your System or Engine.

Note: This is not the most performant solution, but it is an easy way to handle entities that only exist a few times in the world. It is a nice way to identify one or two specific entities for some specific logic.

Adding them via The Editor

You need to select an Entity and Right Click -> Add Component

This will add the Entity Tag Component. When selected you have the chance to add Tags to the Entity by using a simple autocomplete textbox.

Beware: The Engine will create an entity tag folder in your root folder. This is also the place where the Entity Tag API will search for the assets.

Adding and Accessing Tags via C

You can also add tags via the tag_component_api, but you need access to the Tag Component Manager. In your System, or in the Simulate Entry start() function:

tm_tag_component_manager_o *tag_mgr = (tm_tag_component_manager_o *)tm_entity_api->component_manager(ctx, tag_component);
tm_tag_component_api->add_tag(tag_mgr, my_to_tagged_entity, TM_STATIC_HASH("player", 0xafff68de8a0598dfULL));

You can also receive entities like this:

tm_tag_component_manager_o *tag_mgr = (tm_tag_component_manager_o *)tm_entity_api->component_manager(ctx, tag_component);
tm_entity_t upper_bounding_box = tm_tag_component_api->find_first(tag_mgr, TM_STATIC_HASH("upper_bounding_box", 0x1afc9d34ecb740ecULL));

And then you can read the data from the entity via get_component. This performs a random memory lookup and might be slow, so it is mostly recommended for simple interactions where performance is not critical.
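A hedged sketch of that read, continuing from the find_first() call above: the bounds component type and variable are hypothetical, and get_component()'s exact signature is assumed here.

// Sketch: reading component data from the tagged entity found above.
// bounds_component_t and bounds_component are placeholder names.
const bounds_component_t *bounds =
    tm_entity_api->get_component(ctx, upper_bounding_box, bounds_component);
if (bounds) {
    // use the data (a random memory access, fine for infrequent lookups)
}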

Note: Tags do not need to exist in the Asset Browser, therefore you can add any label to the entity. Keep in mind that they will not be created in the Asset Browser!

Tag Components - Entity Type Filter

On the other hand, you can define a tag component, which should not be confused with the previously explained Tag Component. A tag component is a simple typedef of a uint64_t (or something else), or an empty struct in C++: a component without any properties. A tag is a component that does not have any data. The purpose of such a component is to modify the Entity Type / Archetype, grouping entities together.

Example:

You have the following components:

  • Component A
  • Component B

And 2 systems :

  • System A
  • System B

They both shall operate on Component A & B but have different logic based on what the components represent. To achieve this, you just add a tag component to each entity:

#1 Entity:
- Component A
- Component B
- My Tag For System A
#2 Entity:
- Component A
- Component B
- My Tag for System A

In this example, System B would not operate on either entity if we use the .excluded filter to exclude My Tag For System A from it.

Filtering

To see a real-world application of tag components used to filter entity types, check out the next chapter: Filtering.

Filtering Entities

The Machinery knows two ways to tag entities:

  1. using the Tag Component
  2. using a Tag Component to filter the Entity Types


Filtering Entities

In an Engine (tm_engine_i) you can define the .excluded field. This tells the scheduler that this engine shall not run on any entity type that contains these components.

Let us assume we have the following entities:

#1 Entity:
- Component A
- Component B
- Component C
#2 Entity:
- Component A
- Component B
- Component D

Now say we have an Engine that shall operate on (Component A, Component B), but we do not want it to run on entities that also have Component D. We could register the engine on Component A and B and check for Component D manually in our update loop:

static void entity_register_engines(struct tm_entity_context_o *ctx) {

  tm_component_type_t component_a =
      tm_entity_api->lookup_component_type(ctx, TM_TT_TYPE_HASH__A_COMPONENT);
  tm_component_type_t component_b =
      tm_entity_api->lookup_component_type(ctx, TM_TT_TYPE_HASH__B_COMPONENT);
  tm_component_type_t component_d =
      tm_entity_api->lookup_component_type(ctx, TM_TT_TYPE_HASH__D_COMPONENT);
  const tm_engine_i movement_engine = {
      .ui_name = "movement_engine",
      .hash = TM_STATIC_HASH("movement_engine", 0x336880a23d06646dULL),
      .num_components = 2,
      .components = {component_a, component_b},
      .writes = {false, true},
      // (Component D would then have to be checked manually inside movement_update.)
      .update = movement_update,
      .inst = (tm_engine_o *)ctx,
  };
  tm_entity_api->register_engine(ctx, &movement_engine);
}

or we could define a component mask and use it to filter, but both methods are slow. This is because get_component_by_hash or get_component has to look up the entity and its components internally and search for them. It is, in other words, a random memory access!

To avoid all of this, we can simply tell the engine to ignore all entity types that contain component_d, via the .excluded field in the tm_engine_i:

const tm_engine_i movement_engine = {
    .ui_name = "movement_engine",
    .hash = TM_STATIC_HASH("movement_engine", 0x336880a23d06646dULL),
    .num_components = 2,
    .components = {component_a, component_b},
    .writes = {false, true},
    .excluded = {component_d},
    .num_excluded = 1,
    .update = movement_update,
    .inst = (tm_engine_o *)ctx,
};

Filtering Entities by using Tag Components

Note: You can define a tag component, which should not be confused with the Tag Component described before. A tag component is a simple typedef of a uint64_t (or something else), or an empty struct in C++: a component without any properties. The purpose of such a component is to modify the Entity Type / Archetype, grouping entities together. For more information see the Tagging Entities chapter.

Say you have a Movement / Input System which should always work, but at some point you do not want an entity to receive any input anymore.

Solution 1

To solve this issue you could remove the Movement Component, but that would be annoying because you would lose its state, which might be important.

Better Solution

First you define the component:

#define TM_TT_TYPE__PLAYER_NO_MOVE_TAG_COMPONENT "tm_player_no_move_t"
#define TM_TT_TYPE_HASH__PLAYER_NO_MOVE_TAG_COMPONENT                          \
  TM_STATIC_HASH("tm_player_no_move_t", 0xc58cb6ade683ca88ULL)
static void component__create(struct tm_entity_context_o *ctx) {
  tm_component_i component = (tm_component_i){
      .name = TM_TT_TYPE__PLAYER_NO_MOVE_TAG_COMPONENT,
      .bytes = sizeof(uint64_t), // since we do not care about its content we can
                                 // just pick any 8 byte type
  };
  tm_entity_api->register_component(ctx, &component);
}

Then you filter out, for your Input Engine / Movement Engine, any entity that has the No Movement tag:

tm_component_type_t transform_component = tm_entity_api->lookup_component_type(
    ctx, TM_TT_TYPE_HASH__TRANSFORM_COMPONENT);
tm_component_type_t mover_component =
    tm_entity_api->lookup_component_type(ctx, TM_TT_TYPE_HASH__MOVER_COMPONENT);
tm_component_type_t movement_component = tm_entity_api->lookup_component_type(
    ctx, TM_TT_TYPE_HASH__MOVEMENT_COMPONENT);
tm_component_type_t no_movement_tag_component =
    tm_entity_api->lookup_component_type(
        ctx, TM_TT_TYPE_HASH__PLAYER_NO_MOVE_TAG_COMPONENT);

const tm_engine_i movement_engine = {
    .ui_name = "movement_engine",
    .hash = TM_STATIC_HASH("movement_engine", 0x336880a23d06646dULL),
    .num_components = 3,
    .components = {movement_component, transform_component, mover_component},
    .writes = {false, true, true},
    .excluded = {no_movement_tag_component},
    .num_excluded = 1,
    .update = movement_update,
    .inst = (tm_engine_o *)ctx,
};
tm_entity_api->register_engine(ctx, &movement_engine);

Whenever another engine/system decides that an entity should not move anymore it just adds a no_movement_tag_component to the entity.

static void my_other_system(tm_engine_o *inst, tm_engine_update_set_t *data,
                            struct tm_entity_commands_o *commands) {
  struct tm_entity_context_o *ctx = (struct tm_entity_context_o *)inst;
  tm_component_type_t no_movement_tag_component =
      tm_entity_api->lookup_component_type(
          ctx, TM_TT_TYPE_HASH__PLAYER_NO_MOVE_TAG_COMPONENT);
  // code ..
  for (tm_engine_update_array_t *a = data->arrays;
       a < data->arrays + data->num_arrays; ++a) {
    // code...
    for (uint32_t x = 0; x < a->n; ++x) {
      // code...
      if (player_should_not_walk_anymore) {
        tm_entity_commands_api->add_component(commands, a->entities[x],
                                              no_movement_tag_component);
      }
    }
  }
}

As you can see, the Movement Engine will now only update the entities in the game that do not have the No Movement tag.

Overview of the Entity Context Lifecycle

This page describes the lifecycle of the entity context / the simulation and all its stages.

Update Phases

In your Engines / Systems you can define in which phase of the update loop your Engine / System shall run. This is managed via the .before_me, .after_me and .phase fields of your engine or system definition.

Please keep in mind that the scheduler will also order your system based on which components it reads and writes. This is why it is always recommended to declare which components your system/engine operates on and whether it writes to them. Based on these dependencies, the scheduler decides whether your engine/system can run in parallel with others.

The Engine has default phases:

| Name | When |
| ---- | ---- |
| TM_PHASE__ANIMATION | Phase for animation jobs. |
| TM_PHASE__PHYSICS | Phase for physics jobs. |
| TM_PHASE__CAMERA | Phase for camera jobs. |
| TM_PHASE__GRAPH | Phase for the visual scripting graph update. |
| TM_PHASE__RENDER | Phase for render jobs. |

Note: Phases are just string hashes, and you can extend the system with more phases if desired.
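As a rough sketch of how these fields fit together, an engine could be pinned to one of the phases above and ordered relative to the movement_engine registered earlier in this chapter. This is illustrative only: the .phase, .before_me and .after_me fields are the ones described above, the new engine's hash is a placeholder you would generate with hash.exe, and look_at_update is a placeholder update function.

// Illustrative only: run this engine in the animation phase and declare an
// ordering constraint against the movement_engine registered earlier.
const tm_engine_i look_at_engine = {
    .ui_name = "look_at_engine",
    .hash = TM_STATIC_HASH("look_at_engine", 0x0ULL), // placeholder hash
    .phase = TM_PHASE__ANIMATION,
    .before_me = {TM_STATIC_HASH("movement_engine", 0x336880a23d06646dULL)},
    .num_components = 1,
    .components = {transform_component},
    .writes = {true},
    .update = look_at_update, // placeholder update function
    .inst = (tm_engine_o *)ctx,
};
tm_entity_api->register_engine(ctx, &look_at_engine);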

Gameplay Entry Point Comparison

In The Machinery you have multiple entry points where your gameplay code can live. You can make use of Simulation Entries, Entity Component System Systems or Engines, an Entity Graph, or a custom scripting language component. The question is when to use which of these tools, and the answer depends on your game's needs. In short, the Engine offers about four built-in entry points for your gameplay code; you can use all of them at the same time, and which one to pick depends on your use case.

The following table will give a brief overview of the different types and their properties:

| Type | Parallel Execution | Lifetime based on entity | Random Memory Access by default | Runs only on a Subset of Entities | Runs per entity | Execution order can be set |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Simulation Entry | No | Yes | Yes | No | No | Yes* |
| ECS System | Maybe | No | Yes | No | No | Yes* |
| ECS Engine | Maybe | No | No | Yes | No | Yes* |
| Entity Graph (Graph Component) | No | Yes | Yes | No | Yes | No |

* via .phase or .before_me and .after_me when defining the interface

Recommendation

Note: These recommendations are not strict guidelines. You do not have to follow them; they are just here to give a more example-driven overview of the different types of gameplay entry points.

System vs Engine

Systems

It is recommended to use a System over an Engine when you write a complex gameplay system that handles a few different entity types simultaneously, but only a relatively small number of entities. The number of entities matters because a System uses random memory access through get_component(), which may lead to cache misses.

Moreover, a System is the preferred way of updating when the data in the component is just a pointer into some external system (this is the case, for example, for PhysX components). In the case of PhysX, the external system is assumed to store its data in a cache-friendly order, which means we do not want to iterate over the entities in the order they are stored in the entity system, since this would cause pointer chasing in the external system. Instead, we just want to send a single update to the external system, which will process the entities in its own (cache-friendly) order.

Another reason to use a System over an Engine is that you can use it to execute code on initialization and on the game's shutdown, since only Systems have an init() and a shutdown() function; Engines do not.
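A minimal sketch of what such a System might look like is shown below. This assumes the system interface is tm_entity_system_i with init/update/shutdown callbacks of roughly the shape (ctx, inst, commands); consult plugins/entity/entity.h for the exact names and signatures, and note that the hash value is a placeholder.

// Illustrative only -- not production code.
static void physics_sync__init(tm_entity_context_o *ctx, tm_entity_system_o *inst,
                               struct tm_entity_commands_o *commands)
{
    // Set up external resources, look up component types, etc.
}

static void physics_sync__update(tm_entity_context_o *ctx, tm_entity_system_o *inst,
                                 struct tm_entity_commands_o *commands)
{
    // Random access into individual components (e.g. via get_component()) or a
    // single batched update call into an external system such as PhysX.
}

static void physics_sync__shutdown(tm_entity_context_o *ctx, tm_entity_system_o *inst,
                                   struct tm_entity_commands_o *commands)
{
    // Release whatever init() set up.
}

const tm_entity_system_i physics_sync_system = {
    .ui_name = "physics_sync_system",
    .hash = TM_STATIC_HASH("physics_sync_system", 0x0ULL), // placeholder hash
    .init = physics_sync__init,
    .update = physics_sync__update,
    .shutdown = physics_sync__shutdown,
};
tm_entity_api->register_system(ctx, &physics_sync_system);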

Engines

It is recommended to use an Engine over a System when you try to write a complex gameplay system that will handle a lot of entities with the same set of components simultaneously. The ECS scheduler will gather all entities with the set of components and enable you to iterate over them in a cache-friendly manner. This allows you to write an engine that can manipulate a lot of entities simultaneously without any loss of performance.

Simulation Entry vs System

It is recommended to use a Simulation Entry when you want to tie the lifetime of the underlying System to an Entity lifetime. This is a very similar concept to the "GameObject" Script concept in Unity. Suppose the entity that hosts the Simulation Entry Component is destroyed. In that case, the Update function will not be ticked anymore, and the System is destroyed. This can be a useful concept for level-specific Gameplay moments. A Simulation Entry also has a start() and stop() function. They are executed when the Simulation Entry is added or removed.

It is not recommended to use a Simulation Entry to handle a large number of entities, for the same reason a System is not used for this purpose: a Simulation Entry does not run in parallel and uses random memory access.
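For reference, a Simulation Entry is typically just an implementation of the simulation entry interface with start/stop/tick callbacks, registered like any other API implementation. The following is a minimal sketch based on the public samples; the struct and argument type names (tm_simulation_entry_i, tm_simulation_start_args_t, tm_simulation_frame_args_t) and the callback signatures should be checked against the simulation entry header, and the hash is a placeholder.

// Illustrative only -- not production code.
static tm_simulation_state_o *level_logic__start(tm_simulation_start_args_t *args)
{
    // Allocate and return the state for this Simulation Entry.
    return 0;
}

static void level_logic__stop(tm_simulation_state_o *state,
                              struct tm_entity_commands_o *commands)
{
    // Free the state allocated in start().
}

static void level_logic__tick(tm_simulation_state_o *state,
                              tm_simulation_frame_args_t *args)
{
    // Per-frame, level-specific gameplay logic.
}

static tm_simulation_entry_i level_logic_simulation_entry = {
    .id = TM_STATIC_HASH("tm_level_logic_simulation_entry", 0x0ULL), // placeholder hash
    .display_name = "Level Logic",
    .start = level_logic__start,
    .stop = level_logic__stop,
    .tick = level_logic__tick,
};

// In the plugin load function:
// tm_add_or_remove_implementation(reg, load, tm_simulation_entry_i,
//                                 &level_logic_simulation_entry);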

Entity Graph

It is recommended to use an Entity Graph when you want to tie the lifetime to an Entity, or when performance is not critical, since the Entity Graph is an interpreted visual scripting language and is therefore naturally slower. The Entity Graph is also a good fit for UI/UX logic or for quick prototyping when performance is not important. Keep in mind that an Entity Graph is not executed in parallel and only accesses memory via random access.

Tutorials

Creation Graph

In The Machinery, we provide full control over how data enters the engine and what data-processing steps get executed, allowing technical artists to better optimize content and set up custom, game-specific asset pipelines.

This is handled through Creation Graphs. A Creation Graph is essentially a generic framework for processing arbitrary data on the CPUs and GPUs, exposed through a graph front-end view. While Creation Graphs are mostly used in the context of rendering, they can be used for any type of data processing.

The following section will guide you from basic use cases to more advanced use cases.

We have a couple of blog posts which you may also find interesting:

Creation Graph Introduction

This walkthrough shows you some basics of the Creation Graph. During this walkthrough series you will familiarize yourself with the following concepts:

  • What Are Creation Graphs?
  • Simple Texture Compression
  • Graph Prototypes
  • Custom Import Settings
  • Materials
  • DCC Mesh Graph & Render Component

Video

Introduction

This walkthrough series makes use of the following free assets:

During this series we will modify the assets. If you are interested in how to import assets into the Engine, you can watch the video Import and rigging tutorial or follow the guide Import assets. Besides this, this series has one main goal: to familiarize you with the "Core Creation Graphs" that we ship with the Engine in the core folder. Most creation graphs that the engine uses are built on top of those.

Texture Compression

This walkthrough shows you some basics of the Creation Graph. In this part we discuss texture compression.

This tutorial will teach you:

  • How to compress a texture
  • What differentiates one creation graph from another.

Note: This walkthrough series makes use of the following free assets: KhronosGroup/glTF-Sample-Models.

Setup

Download the flight helmet asset from the git repo (Download Now). After we have downloaded and extracted all parts of the DCC asset, as described in the Import and rigging tutorial or the Import assets guide, we can follow the rest of the tutorial.

How do we identify uncompressed images?

We select a texture in the Asset Browser; let's select the leather of the helmet.

{image}

In the Preview tab we can see that this texture is uncompressed, as indicated by the text line at the bottom of the preview.

{image}

How do we compress the texture?

We double-click the selected texture to open its creation graph. This will open the instanced version of the core dcc_texture creation graph. This creation graph looks as follows:

{image}

Dissection

Let us dissect the graph step by step:

Input Node
DCC Asset Image
Image Settings

The Input node takes a DCC asset image; this is a data container that holds the raw image data within the DCC asset. This raw data needs to be translated into a GPU image by the next node.

DCC Images
DCC Asset Image
GPU Image

After this translation we can make use of the Import Settings to filter the image with the Filter Image node. This node filters the image, for example its mipmaps.

Filter Image
Settings
Image
GPU Image

The output is a modified GPU Image which we pass to the Image Output node.

How to distinguish between different types of creation graphs

The output nodes define the type of the creation graph. In this particular case, the creation graph now represents a texture.

Add the compression node

We are using the crunch library for our image compression node. We can just press Space, search for the "Compression Node", and add it to the graph. Since we are working in an instance of the core graph, we can modify this graph and the result will not change the prototype.

Compression Node
Image
GPU Image
Input Colour Space
Output Format
Release GPU Input

This node outputs a compressed image, which we can then connect to the Image Output node. It is important to remember to remove the original connection from the Filter Image node to the Image Output node.

{image}

If we now investigate the image in the asset browser we can see that the texture is compressed.

{image}

Creation Graph Prototypes

This walkthrough shows you some basics of the Creation Graph. In this part we discuss creation graph prototypes. To read more about the prototype system in general, please check out the following guide: Prototypes.

Note: The walkthrough series makes use of the following free assets: KhronosGroup/glTF-Sample-Models.

This tutorial will teach you:

  • How to create Creation Graph prototypes
  • How to apply them to multiple assets
  • How to add extra input

Setup

We want to add texture compression to all textures in the project. We could do this by opening every texture and following the steps described in the Simple Texture Compression walkthrough, but that would be a time-consuming and error-prone job. It is easier to create one prototype for all of them.

Adding texture compression to all textures in the project

Creating a Creation Graph Prototype

There are 2 effective ways of doing this:

  1. Change the prototype that all DCC images use: core/creation_graphs/dcc-image

We open this prototype and apply all our changes described in Simple Texture Compression Walkthrough. Changes to the prototype will propagate to all creation graph instances of this prototype.

  2. Create a new creation graph and base it on core/creation_graphs/dcc-image

In this alternative approach we create a new creation graph via Right Click in the Asset Browser and then New -> Creation Graph. This graph is an empty graph. When we select the newly created asset, we can choose the prototype of the asset in the Properties tab.

{image}

In this selection we search for dcc-image and base (inherit) our new creation graph on the existing creation graph. After this we can modify this graph as described in the Simple Texture Compression Walkthrough.

Applying the new prototype to all texture assets

Now we select all texture assets in the Asset Browser and change their creation graph to point to our newly created creation graph.

{image}

Expose compression settings to the outside world

One problem is left: some of these textures are normal maps, and for those you want different compression settings. The fix is to expose the Compression Node settings to the outside world. We open our newly created creation graph and connect the compression settings of the Compression Node to the input connector of the Input node. The original default value stays the default value of our graph. Do not forget to mark the settings as public in the Input node properties.

{image}

When this is done, you can see the exposed settings whenever you select any of these assets in the Asset Browser.

{image}

Whatever you change here will be passed to the Compression Node. This makes sure that normal maps can be treated the way they are supposed to be.

Custom Import Settings

This walkthrough shows you some basics of the Creation Graph. In this part we discuss how to use custom import settings. This part builds on top of the walkthroughs Creation Graph Prototypes and Texture Compression.

Note: This walkthrough series makes use of the following free assets: KhronosGroup/glTF-Sample-Models.

This tutorial will teach you:

  • How to add a custom Import Setting
  • How creation graphs imported from a DCC asset make use of these settings

Setup

Before you can follow this tutorial you need to follow the following steps:

Create an Import Settings asset

Import settings are just an asset in the Asset Browser. To create one, you open the context menu with Right Click and then select New. There you select Import Settings. This will create a new Import Settings asset in the root folder.

Note: This Import Settings asset will be used from now on as the default one for any asset that is imported into this folder. This allows you to create different import settings for different folders. Remember that if the engine cannot find an import settings asset in the current folder, it will check the parent folder, and so on. If it cannot find anything, it will use the default settings.

When you select the asset you can see many different things:

This tutorial will only handle the DCC Asset - Creation Graph - Images and the Import - Creation Graphs - Images settings.

DCC Image Creation Graph

Now we make use of the previously created dcc-image-compress. This creation graph enables image compression and exposes the compression settings as a public input.

{image}

Now we can change, in the Import Settings, the creation graph used for DCC Asset Creation Graphs: Images.

We change this to our dcc-image-compress creation graph.

Import Asset

Now, when we import the DCC asset, we can see that the textures are automatically compressed when we extract them from the DCC asset.

What about importing just a texture file?

Another way of importing a texture is to simply drag and drop a PNG or other texture file into the Asset Browser. It is important to note that images that are imported directly, and are not part of a DCC asset, do not use the same creation graph. Those imports make use of the Import - Creation Graphs. The major difference between the graphs is that the previously used creation graph:

{image}

Extracts its texture from a DCC asset, while in the case of importing a texture directly we read the data from disk.

{image}

We just apply the same technique as we already did in the dcc-image-compress.

Create the Import Image Compress creation graph

  1. We create a new creation graph in the Asset Browser
  2. We base it on the image-import creation graph
  3. We add the Compress Image node between the Input Image Archive node and the Image->GPU Image node
  4. We expose the compression settings as a new input and set it to public via the property panel

The graph should look like this:

Apply the correct settings to the Import Settings

What is left to do is change the creation graph for Import Images to our new import-image-compress creation graph in the Import Settings.

Custom GPU nodes

The Creation Graph is a powerful visual scripting language that can generate shader code through its GPU nodes. Extending this with custom nodes allows for more complex algorithms, custom material types and much more. In this tutorial we will demonstrate how to create some basic GPU nodes. To learn the difference between CPU and GPU nodes, check out Node Types.

A creation graph GPU node needs to be in a .tmsl file; these files are compiled by the shader system. Note that there can only be one creation graph node per .tmsl file; additional definitions will be ignored. If these shaders are placed in the bin/data/shaders/ directory, they will be loaded automatically. .tmsl files are written in a simplified JSON format with less strict punctuation requirements. For a full reference on the shader files, check out the Shader System Reference.

Cube Node

function: [[
	output.res = x * x * x;
]]

creation_graph_node: {
	name: "tm_cube_node"
	display_name: "Cube"
	category: "Shader/Math"

	inputs: [ 
		{ name: "x" display_name: "X" } 
	]
	outputs: [
		{ name: "res" display_name: "Result" type: { type_of: "x" } }
	]
}

This node shows you the absolute basics of making a creation graph GPU node. All GPU nodes require two blocks. The function block is where you put the actual shader code. The creation_graph_node block is metadata that defines node I/O and general information.

In this example, the creation_graph_node has several fields, but more can be defined:

  • name must be a unique identifier for the node. It’s a good idea to prefix this with your namespace to make sure it doesn't inadvertently collide with nodes created by other people.
  • display_name is optional and specifies the node name to show in the UI. If this is empty, a display name will be generated from the name field.
  • category is an optional path-type string that allows you to group related nodes.
  • inputs is an array of input parameters for the node. A type can be specified for each parameter but it is not required. If you don't specify a type, the type will be generic.
  • outputs is an array of output values for the node.

Note that we didn’t specify a type parameter for our input field. This makes it a fuzzy input and anything that supports the multiplication operator can be passed. Our output parameter does have a type field, but instead of defining a fixed type, it uses a generic syntax that sets the output type to whatever the input type was. For more information about this syntax see the Shader System Reference.

Depth Output Node

Output nodes are more complex than function nodes. Instead of a single function block, these nodes take the form of a render pass that can have variations based on the systems used with it and the connected inputs. The example below creates a very simple material node that displays a gray-scale interpretation of the object’s distance to the viewing camera.

depth_stencil_states: {
	depth_test_enable: true
	depth_write_enable: true
	depth_compare_op: "greater_equal"
}

raster_states: {
	front_face: "ccw"
}

imports: [
	{ name: "tm" type: "float4x4" }
]

vertex_shader: {
	import_system_semantics: [ "vertex_id" ]

	code: [[
		tm_vertex_loader_context ctx;
		init_vertex_loader_context(ctx);
		float4 vp = load_position(ctx, vertex_id, 0);

		float4 wp = mul(vp, load_tm());
		output.position = mul(wp, load_camera_view_projection());
		return output;
	]]
}

pixel_shader: {
	code: [[
		float2 near_far = load_camera_near_far();
		float depth = linearize_depth(input.position.z, near_far.x, near_far.y) * 0.01f;

		output.buffer0 = float4(linear_to_gamma2(depth), 1); // Base color, alpha
		output.buffer1 = float4(1, 1, 0, 1); // Normal (encoded in signed oct)
		output.buffer2 = float4(0, 0, 0, 1); // Specular, Roughness
		output.velocity = float2(0, 0);
		return output;
	]]
}

The creation_graph_node block for this node is very small. If no outputs are specified, the output will be a Shader Instance. These can be passed to other nodes for rendering, like the Draw Call and Shader Instance output nodes.

creation_graph_node: {
	name: "depth_output"
	display_name: "Depth"
	category: "Shader/Output"
}

In this example the compile block has the following fields:

  • includes specifies which common shaders this shader is dependent on. In this example, that is the common.tmsl shader because we use the linear_to_gamma2() function from that shader.
  • contexts specifies how this pass should be executed depending on the context. In this example, we only support one context, the viewport. In this context, we want to run during the gbuffer phase so we specify that as our layer. We also want to enable the gbuffer_system as we will be writing to it. Finally we specify that in this context we will enable the gbuffer configuration.
  • configurations are groups of settings. In this example we have one configuration group: gbuffer. This configuration requests three systems, if these systems are not present then we cannot run:
    • The viewer_system is needed to query the camera information.
    • The gbuffer_system allows us to render to the G-Buffer in the opaque pass of the default render pipeline.
    • The vertex_buffer_system allows us to query vertex information from the mesh.

compile: {
	includes: [ "common" ]

	configurations: {
		gbuffer: [{ 
			variations: [{ 
				systems: [ "viewer_system", "gbuffer_system", "vertex_buffer_system" ]
			}]
		}]
	}

	contexts: {
		viewport: [
			{ layer: "gbuffer" enable_systems: [ "gbuffer_system" ] configuration: "gbuffer" }
		]
	}
}

Note that the available contexts are defined by the application. Some examples of these in The Machinery editor are viewport, shadow_caster and ray_trace_material.

Note that the layers are defined by the render pipeline used. Some examples from the default render pipeline are: gbuffer, skydome, hdr-transparency, ui.

Creating custom CPU nodes

In this tutorial we will create a simple CPU node for the Creation Graph. The definition for these nodes is based on the Entity Graph Nodes, so there is some overlap. For this example we will create a node that generates a random uint32_t value with a settable maximum. To learn the difference between CPU and GPU nodes, check out Node Types.

Let’s first create the code for this node. This function will be called by the creation graph every time it needs to evaluate the node. Our only input to this function is the context of the creation graph. The first thing we do is read our input from the context. We can query wires from the tm_creation_graph_interpreter_api using the read_wire() function. If this wire is not connected (or set directly), we early out with an error. After this, we start writing to our output wire. Note that this uses a very similar syntax, except that we write to a pre-allocated pointer.

Note that the indices of these wires are relative to the order in which they are defined. Our input wire is defined first, so its index is 0. The output wire is defined second, so it gets index 1.

static void random_node__run(tm_creation_graph_interpreter_context_t *ctx) {
  tm_creation_graph_interpreter_wire_content_t max_wire =
      tm_creation_graph_interpreter_api->read_wire(ctx->instance,
                                                   ctx->wires[0]);
  if (!TM_ASSERT(max_wire.n, "Max wire was not connected to random node!"))
    return;

  uint32_t *res = (uint32_t *)tm_creation_graph_interpreter_api->write_wire(
      ctx->instance, ctx->wires[1], TM_TT_TYPE_HASH__UINT32_T, 1,
      sizeof(uint32_t));
  *res =
      tm_random_to_uint32_t(tm_random_api->next()) % *(uint32_t *)max_wire.data;
}

We need to register this node with the creation graph API. This is done through the creation graph node interface. We define the general information about the node, like its name, display_name and I/O connectors (wires), and the actual function to run:

static tm_creation_graph_node_type_i random_node = {
    .name = "tm_random",
    .display_name = "Random Uint",
    .static_connectors.in =
        {
            {.name = "max",
             .display_name = "Max",
             .type_hash = TM_TT_TYPE_HASH__UINT32_T},
        },
    .static_connectors.num_in = 1,
    .static_connectors.out =
        {
            {.name = "res",
             .display_name = "Result",
             .type_hash = TM_TT_TYPE_HASH__UINT32_T},
        },
    .static_connectors.num_out = 1,
    .run = random_node__run,
};
// register in the load function
tm_add_or_remove_implementation(reg, load, tm_creation_graph_node_type_i,
                                &random_node);

This is the full code to define this creation graph CPU node:

static struct tm_error_api *tm_error_api;
static struct tm_random_api *tm_random_api;
static struct tm_creation_graph_interpreter_api *tm_creation_graph_interpreter_api;

#include <foundation/api_registry.h>
#include <foundation/error.h>
#include <foundation/random.h>
#include <foundation/the_truth_types.h>

#include <plugins/creation_graph/creation_graph.h>
#include <plugins/creation_graph/creation_graph_interpreter.h>
#include <plugins/creation_graph/creation_graph_node_type.h>

static void random_node__run(tm_creation_graph_interpreter_context_t *ctx)
{
    tm_creation_graph_interpreter_wire_content_t max_wire = tm_creation_graph_interpreter_api->read_wire(ctx->instance, ctx->wires[0]);
    if (!TM_ASSERT(max_wire.n, "Max wire was not connected to random node!"))
        return;

    uint32_t *res = (uint32_t *)tm_creation_graph_interpreter_api->write_wire(ctx->instance, ctx->wires[1], TM_TT_TYPE_HASH__UINT32_T, 1, sizeof(uint32_t));
    *res = tm_random_to_uint32_t(tm_random_api->next()) % *(uint32_t *)max_wire.data;
}

static tm_creation_graph_node_type_i random_node = {
    .name = "tm_random",
    .display_name = "Random Uint",
    .static_connectors.in = {
        {.name = "max", .display_name = "Max", .type_hash = TM_TT_TYPE_HASH__UINT32_T},
    },
    .static_connectors.num_in = 1,
    .static_connectors.out = {
        {.name = "res", .display_name = "Result", .type_hash = TM_TT_TYPE_HASH__UINT32_T},
    },
    .static_connectors.num_out = 1,
    .run = random_node__run,
};

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_error_api = tm_get_api(reg, tm_error_api);
    tm_random_api = tm_get_api(reg, tm_random_api);
    tm_creation_graph_interpreter_api = tm_get_api(reg, tm_creation_graph_interpreter_api);
    tm_add_or_remove_implementation(reg, load, tm_creation_graph_node_type_i, &random_node);
}

Creating Custom Geometry Nodes

In this tutorial we will be creating a CPU node for the Creation Graph that creates a mesh which can be used by rendering nodes. This tutorial expects some basic knowledge of the creation graph and node creation; it is recommended to read Creating custom CPU nodes before reading this.

The main output of this node will be a tm_gpu_geometry_t and a tm_renderer_draw_call_info_t. Together these will make up our GPU Geometry output. Additionally, we will output a bounding box for the triangle that can be used for culling and other calculations. But before we can populate those, we need to consider the vertex format of our mesh. For this example, it will be a simple position, normal, and color per vertex:

typedef struct tm_triangle_vertex_t {
  tm_vec3_t pos;
  tm_vec3_t normal;
  tm_vec3_t color;
} tm_triangle_vertex_t;

The tm_renderer_draw_call_info_t is constant for our example, so we can populate it as follows (note that this node doesn’t create an index buffer and thus uses TM_RENDERER_DRAW_TYPE_NON_INDEXED):

const uint32_t geometry_wire_size =
    sizeof(tm_gpu_geometry_t) + sizeof(tm_renderer_draw_call_info_t);
uint8_t *geometry_wire_data = tm_creation_graph_interpreter_api->write_wire(
    ctx->instance, ctx->wires[0], TM_TYPE_HASH__GPU_GEOMETRY, 1,
    geometry_wire_size);
memset(geometry_wire_data, 0, geometry_wire_size);

tm_renderer_draw_call_info_t *draw_call =
    (tm_renderer_draw_call_info_t *)(geometry_wire_data +
                                     sizeof(tm_gpu_geometry_t));
*draw_call = (tm_renderer_draw_call_info_t){
    .primitive_type = TM_RENDERER_PRIMITIVE_TYPE_TRIANGLE_LIST,
    .draw_type = TM_RENDERER_DRAW_TYPE_NON_INDEXED,
    .non_indexed.num_vertices = 3,
    .non_indexed.num_instances = 1};

Creating the geometry for this node requires us to take several things into consideration. First, we need to store the vertex buffer, constant buffer, and resource binder somewhere. Thankfully, the creation graph has a resource caching system that will handle this storage for us. Second, we need to define the system required to query our mesh primitives. For most use cases, the default vertex_buffer_system is the best option. Third, we need to ask ourselves what this geometry will be used for, for instance whether it should be visible to the ray tracing pipeline. This is a design choice that should be made by the node creator. In this example, we will take ray tracing into account.

First, let us query the default vertex_buffer_system. If it is not available, our node will not work, so we can early out:

gpu_geometry->vfetch_system = tm_shader_repository_api->lookup_system(
    context->shader_repository,
    TM_STATIC_HASH("vertex_buffer_system", 0x6289889fc7c40280ULL));

Next we will be creating the resources needed for our node. This will be a tm_shader_constant_buffer_instance_t, a tm_shader_resource_binder_instance_t, and a GPU buffer:

tm_creation_graph_node_cache_t *node_cache =
    tm_creation_graph_api->lock_resource_cache(context->tt, ctx->graph_id,
                                               ctx->node_id);
tm_shader_io_o *io = tm_shader_api->system_io(gpu_geometry->vfetch_system);

tm_shader_constant_buffer_instance_t *cbuffer =
    (tm_shader_constant_buffer_instance_t *)node_cache->scratch_pad;
tm_shader_resource_binder_instance_t *rbinder =
    (tm_shader_resource_binder_instance_t *)((uint8_t *)node_cache->scratch_pad +
                                             sizeof(tm_shader_constant_buffer_instance_t));

if (!cbuffer->instance_id)
  tm_shader_api->create_constant_buffer_instances(io, 1, cbuffer);
if (!rbinder->instance_id)
  tm_shader_api->create_resource_binder_instances(io, 1, rbinder);

gpu_geometry->vfetch_system_cbuffer = cbuffer->instance_id;
gpu_geometry->vfetch_system_rbinder = rbinder->instance_id;

if (!node_cache->handles[0].resource) {
  const tm_renderer_buffer_desc_t vbuf_desc = {
      .size = 3 * sizeof(tm_triangle_vertex_t),
      .usage_flags = TM_RENDERER_BUFFER_USAGE_STORAGE |
                     TM_RENDERER_BUFFER_USAGE_ACCELERATION_STRUCTURE,
      .debug_tag = "geometry__triangle_vbuf"};

  tm_triangle_vertex_t *vbuf_data;
  node_cache->handles[0] =
      tm_renderer_api->tm_renderer_resource_command_buffer_api
          ->map_create_buffer(res_buf, &vbuf_desc,
                              TM_RENDERER_DEVICE_AFFINITY_MASK_ALL, 0,
                              (void **)&vbuf_data);

Now that our buffer has been created, we can start populating it with our vertex data:

vbuf_data[0] = (tm_triangle_vertex_t){.pos = (tm_vec3_t){0.0f, 1.0f, 0.0f},
                                      .normal = (tm_vec3_t){0.0f, 0.0f, 1.0f},
                                      .color = (tm_vec3_t){1.0f, 0.0f, 0.0f}};
vbuf_data[2] = (tm_triangle_vertex_t){.pos = (tm_vec3_t){1.0f, -1.0f, 0.0f},
                                      .normal = (tm_vec3_t){0.0f, 0.0f, 1.0f},
                                      .color = (tm_vec3_t){0.0f, 1.0f, 0.0f}};
vbuf_data[1] = (tm_triangle_vertex_t){.pos = (tm_vec3_t){-1.0f, -1.0f, 0.0f},
                                      .normal = (tm_vec3_t){0.0f, 0.0f, 1.0f},
                                      .color = (tm_vec3_t){0.0f, 0.0f, 1.0f}};

Finally, we need to tell the vertex_buffer_system which primitives are available in our mesh and how it should access them. This is what the constant buffer and resource binder are for. Note that the layout for the vertex buffer system can be included from the vertex_buffer_system.inl file:

tm_shader_vertex_buffer_system_t constants = {0};
constants.vertex_buffer_header[0] |= (1 << TM_VERTEX_SEMANTIC_POSITION) |
                                     (1 << TM_VERTEX_SEMANTIC_NORMAL) |
                                     (1 << TM_VERTEX_SEMANTIC_COLOR0);

uint32_t *offsets = (uint32_t *)&constants.vertex_buffer_offsets;
offsets[TM_VERTEX_SEMANTIC_POSITION] = tm_offset_of(tm_triangle_vertex_t, pos);
offsets[TM_VERTEX_SEMANTIC_NORMAL] = tm_offset_of(tm_triangle_vertex_t, normal);
offsets[TM_VERTEX_SEMANTIC_COLOR0] = tm_offset_of(tm_triangle_vertex_t, color);

uint32_t *strides = (uint32_t *)&constants.vertex_buffer_strides;
strides[TM_VERTEX_SEMANTIC_POSITION] = sizeof(tm_triangle_vertex_t);
strides[TM_VERTEX_SEMANTIC_NORMAL] = sizeof(tm_triangle_vertex_t);
strides[TM_VERTEX_SEMANTIC_COLOR0] = sizeof(tm_triangle_vertex_t);

const void *cbuf = (const void *)&constants;
tm_shader_api->update_constants_raw(io, res_buf, &cbuffer->instance_id, &cbuf,
                                    0, sizeof(tm_shader_vertex_buffer_system_t),
                                    1);

uint32_t pos_buffer_slot, normal_buffer_slot, color_buffer_slot;
tm_shader_api->lookup_resource(io,
                               TM_STATIC_HASH("vertex_buffer_position_buffer",
                                              0x1ef08bede3820d69ULL),
                               NULL, &pos_buffer_slot);
tm_shader_api->lookup_resource(io,
                               TM_STATIC_HASH("vertex_buffer_normal_buffer",
                                              0x781ed2624b12ebbcULL),
                               NULL, &normal_buffer_slot);
tm_shader_api->lookup_resource(io,
                               TM_STATIC_HASH("vertex_buffer_color0_buffer",
                                              0xb808f20e2f260026ULL),
                               NULL, &color_buffer_slot);

const tm_shader_resource_update_t res_updates[] = {
    {.instance_id = rbinder->instance_id,
     .resource_slot = pos_buffer_slot,
     .num_resources = 1,
     .resources = node_cache->handles},
    {.instance_id = rbinder->instance_id,
     .resource_slot = normal_buffer_slot,
     .num_resources = 1,
     .resources = node_cache->handles},
    {.instance_id = rbinder->instance_id,
     .resource_slot = color_buffer_slot,
     .num_resources = 1,
     .resources = node_cache->handles}};

tm_shader_api->update_resources(io, res_buf, res_updates,
                                TM_ARRAY_COUNT(res_updates));

And now we have our triangle, we just have to unlock the resource cache again and set the bounding volume outputs:

tm_creation_graph_api->unlock_resource_cache(node_cache);
}

tm_vec3_t *bounds_min = tm_creation_graph_interpreter_api->write_wire(
    ctx->instance, ctx->wires[1], TM_TT_TYPE_HASH__VEC3, 1, sizeof(tm_vec3_t));
tm_vec3_t *bounds_max = tm_creation_graph_interpreter_api->write_wire(
    ctx->instance, ctx->wires[2], TM_TT_TYPE_HASH__VEC3, 1, sizeof(tm_vec3_t));

*bounds_min = (tm_vec3_t){-1.0f, -1.0f, 0.0f};
*bounds_max = (tm_vec3_t){1.0f, 1.0f, 0.0f};

This is the full source code to define this creation graph CPU node:

static struct tm_creation_graph_api *tm_creation_graph_api;
static struct tm_creation_graph_interpreter_api *tm_creation_graph_interpreter_api;
static struct tm_shader_api *tm_shader_api;
static struct tm_shader_repository_api *tm_shader_repository_api;
static struct tm_renderer_api *tm_renderer_api;

#include <foundation/api_registry.h>
#include <foundation/the_truth_types.h>
#include <foundation/macros.h>
#include <foundation/atomics.inl>

#include <plugins/creation_graph/creation_graph.h>
#include <plugins/creation_graph/geometry_nodes.h>
#include <plugins/creation_graph/creation_graph_node_type.h>
#include <plugins/creation_graph/creation_graph_interpreter.h>
#include <plugins/renderer/renderer.h>
#include <plugins/renderer/commands.h>
#include <plugins/renderer/resources.h>
#include <plugins/renderer/render_command_buffer.h>
#include <plugins/shader_system/shader_system.h>
#include <plugins/renderer/render_backend.h>

#include <string.h>

#include <plugins/creation_graph/resource_cache.inl>
typedef struct tm_triangle_vertex_t
{
    tm_vec3_t pos;
    tm_vec3_t normal;
    tm_vec3_t color;
} tm_triangle_vertex_t;

static void triangle_node__compile(tm_creation_graph_interpreter_context_t *ctx, tm_creation_graph_compile_context_t *compile_ctx)
{
    tm_creation_graph_context_t *context = *(tm_creation_graph_context_t **)tm_creation_graph_interpreter_api->read_wire(ctx->instance, TM_CREATION_GRAPH__STATIC_WIRE__CONTEXT).data;
    if (!context)
        return;
    const uint32_t geometry_wire_size = sizeof(tm_gpu_geometry_t) + sizeof(tm_renderer_draw_call_info_t);
    uint8_t *geometry_wire_data = tm_creation_graph_interpreter_api->write_wire(ctx->instance, ctx->wires[0], TM_TYPE_HASH__GPU_GEOMETRY, 1, geometry_wire_size);
    memset(geometry_wire_data, 0, geometry_wire_size);

    tm_renderer_draw_call_info_t *draw_call = (tm_renderer_draw_call_info_t *)(geometry_wire_data + sizeof(tm_gpu_geometry_t));
    *draw_call = (tm_renderer_draw_call_info_t){
        .primitive_type = TM_RENDERER_PRIMITIVE_TYPE_TRIANGLE_LIST,
        .draw_type = TM_RENDERER_DRAW_TYPE_NON_INDEXED,
        .non_indexed.num_vertices = 3,
        .non_indexed.num_instances = 1};

    tm_gpu_geometry_t *gpu_geometry = (tm_gpu_geometry_t *)geometry_wire_data;
    gpu_geometry->vfetch_system = tm_shader_repository_api->lookup_system(context->shader_repository, TM_STATIC_HASH("vertex_buffer_system", 0x6289889fc7c40280ULL));
    if (gpu_geometry->vfetch_system)
    {
#include <the_machinery/shaders/vertex_buffer_system.inl>

        tm_renderer_resource_command_buffer_o *res_buf = context->res_buf[TM_CREATION_GRAPH_RESOURCE_BUFFERS__PRE_CMD];
        tm_creation_graph_node_cache_t *node_cache = tm_creation_graph_api->lock_resource_cache(context->tt, ctx->graph_id, ctx->node_id);
        tm_shader_io_o *io = tm_shader_api->system_io(gpu_geometry->vfetch_system);

        tm_shader_constant_buffer_instance_t *cbuffer = (tm_shader_constant_buffer_instance_t *)node_cache->scratch_pad;
        tm_shader_resource_binder_instance_t *rbinder = (tm_shader_resource_binder_instance_t *)((uint8_t *)node_cache->scratch_pad + sizeof(tm_shader_constant_buffer_instance_t));

        if (!cbuffer->instance_id)
            tm_shader_api->create_constant_buffer_instances(io, 1, cbuffer);
        if (!rbinder->instance_id)
            tm_shader_api->create_resource_binder_instances(io, 1, rbinder);

        gpu_geometry->vfetch_system_cbuffer = cbuffer->instance_id;
        gpu_geometry->vfetch_system_rbinder = rbinder->instance_id;

        if (!node_cache->handles[0].resource)
        {
            const tm_renderer_buffer_desc_t vbuf_desc = {
                .size = 3 * sizeof(tm_triangle_vertex_t),
                .usage_flags = TM_RENDERER_BUFFER_USAGE_STORAGE | TM_RENDERER_BUFFER_USAGE_ACCELERATION_STRUCTURE,
                .debug_tag = "geometry__triangle_vbuf"};

            tm_triangle_vertex_t *vbuf_data;
            node_cache->handles[0] = tm_renderer_api->tm_renderer_resource_command_buffer_api->map_create_buffer(res_buf, &vbuf_desc, TM_RENDERER_DEVICE_AFFINITY_MASK_ALL, 0, (void **)&vbuf_data);
            vbuf_data[0] = (tm_triangle_vertex_t){.pos = (tm_vec3_t){0.0f, 1.0f, 0.0f}, .normal = (tm_vec3_t){0.0f, 0.0f, 1.0f}, .color = (tm_vec3_t){1.0f, 0.0f, 0.0f}};
            vbuf_data[2] = (tm_triangle_vertex_t){.pos = (tm_vec3_t){1.0f, -1.0f, 0.0f}, .normal = (tm_vec3_t){0.0f, 0.0f, 1.0f}, .color = (tm_vec3_t){0.0f, 1.0f, 0.0f}};
            vbuf_data[1] = (tm_triangle_vertex_t){.pos = (tm_vec3_t){-1.0f, -1.0f, 0.0f}, .normal = (tm_vec3_t){0.0f, 0.0f, 1.0f}, .color = (tm_vec3_t){0.0f, 0.0f, 1.0f}};
            tm_shader_vertex_buffer_system_t constants = {0};
            constants.vertex_buffer_header[0] |= (1 << TM_VERTEX_SEMANTIC_POSITION) | (1 << TM_VERTEX_SEMANTIC_NORMAL) | (1 << TM_VERTEX_SEMANTIC_COLOR0);

            uint32_t *offsets = (uint32_t *)&constants.vertex_buffer_offsets;
            offsets[TM_VERTEX_SEMANTIC_POSITION] = tm_offset_of(tm_triangle_vertex_t, pos);
            offsets[TM_VERTEX_SEMANTIC_NORMAL] = tm_offset_of(tm_triangle_vertex_t, normal);
            offsets[TM_VERTEX_SEMANTIC_COLOR0] = tm_offset_of(tm_triangle_vertex_t, color);

            uint32_t *strides = (uint32_t *)&constants.vertex_buffer_strides;
            strides[TM_VERTEX_SEMANTIC_POSITION] = sizeof(tm_triangle_vertex_t);
            strides[TM_VERTEX_SEMANTIC_NORMAL] = sizeof(tm_triangle_vertex_t);
            strides[TM_VERTEX_SEMANTIC_COLOR0] = sizeof(tm_triangle_vertex_t);

            const void *cbuf = (const void *)&constants;
            tm_shader_api->update_constants_raw(io, res_buf, &cbuffer->instance_id, &cbuf, 0, sizeof(tm_shader_vertex_buffer_system_t), 1);

            uint32_t pos_buffer_slot, normal_buffer_slot, color_buffer_slot;
            tm_shader_api->lookup_resource(io, TM_STATIC_HASH("vertex_buffer_position_buffer", 0x1ef08bede3820d69ULL), NULL, &pos_buffer_slot);
            tm_shader_api->lookup_resource(io, TM_STATIC_HASH("vertex_buffer_normal_buffer", 0x781ed2624b12ebbcULL), NULL, &normal_buffer_slot);
            tm_shader_api->lookup_resource(io, TM_STATIC_HASH("vertex_buffer_color0_buffer", 0xb808f20e2f260026ULL), NULL, &color_buffer_slot);

            const tm_shader_resource_update_t res_updates[] = {
                {.instance_id = rbinder->instance_id,
                 .resource_slot = pos_buffer_slot,
                 .num_resources = 1,
                 .resources = node_cache->handles},
                {.instance_id = rbinder->instance_id,
                 .resource_slot = normal_buffer_slot,
                 .num_resources = 1,
                 .resources = node_cache->handles},
                {.instance_id = rbinder->instance_id,
                 .resource_slot = color_buffer_slot,
                 .num_resources = 1,
                 .resources = node_cache->handles}};

            tm_shader_api->update_resources(io, res_buf, res_updates, TM_ARRAY_COUNT(res_updates));
        }
        tm_creation_graph_api->unlock_resource_cache(node_cache);
    }

    tm_vec3_t *bounds_min = tm_creation_graph_interpreter_api->write_wire(ctx->instance, ctx->wires[1], TM_TT_TYPE_HASH__VEC3, 1, sizeof(tm_vec3_t));
    tm_vec3_t *bounds_max = tm_creation_graph_interpreter_api->write_wire(ctx->instance, ctx->wires[2], TM_TT_TYPE_HASH__VEC3, 1, sizeof(tm_vec3_t));

    *bounds_min = (tm_vec3_t){-1.0f, -1.0f, 0.0f};
    *bounds_max = (tm_vec3_t){1.0f, 1.0f, 0.0f};
}

static tm_creation_graph_node_type_i triangle_node = {
    .name = "tm_geometry_triangle",
    .display_name = "Triangle",
    .category = "Geometry",
    .static_connectors.num_out = 3,
    .static_connectors.out = {
        {.name = "gpu_geometry", .display_name = "GPU Geometry", .type_hash = TM_TYPE_HASH__GPU_GEOMETRY},
        {.name = "bounds_min", .display_name = "Bounds Min", .type_hash = TM_TT_TYPE_HASH__VEC3, .optional = true},
        {.name = "bounds_max", .display_name = "Bounds Max", .type_hash = TM_TT_TYPE_HASH__VEC3, .optional = true}},
    .compile = triangle_node__compile};

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_creation_graph_api = tm_get_api(reg, tm_creation_graph_api);
    tm_creation_graph_interpreter_api = tm_get_api(reg, tm_creation_graph_interpreter_api);
    tm_shader_api = tm_get_api(reg, tm_shader_api);
    tm_shader_repository_api = tm_get_api(reg, tm_shader_repository_api);
    tm_renderer_api = tm_get_api(reg, tm_renderer_api);

    tm_add_or_remove_implementation(reg, load, tm_creation_graph_node_type_i, &triangle_node);
}

Calling Creation Graphs from Code

In this tutorial we will create a very simple component that uses a Creation Graph to render to the viewport. The Creation Graph used for this example can be seen in the image below.

The goal of this Creation Graph is to create an image output that we can copy to the viewport. In this example, the image is created by the creation graph and the viewport UV is rendered onto it using an unlit pass. Notice that no geometry has to be defined, as we use the Construct Quad node in clip space. This will procedurally encompass the entire viewport.


Using the Creation Graph API

The component itself is very simple, it only has a single property which is our creation graph asset:

static const tm_the_truth_property_definition_t properties[] = {
    [TM_TT_PROP__CREATION_GRAPH_TEST_COMPONENT__CREATION_GRAPH] = {
        "creation_graph", TM_THE_TRUTH_PROPERTY_TYPE_SUBOBJECT,
        .type_hash = TM_TT_TYPE_HASH__CREATION_GRAPH}};

However, multiple fields are defined in the runtime component struct; all of these are dependent on our creation graph:

typedef struct tm_component_t {
  // The truth ID of the creation graph subobject.
  tm_tt_id_t creation_graph;
  // An instance of the `creation_graph` (created in `shader_ci__init`).
  tm_creation_graph_instance_t instance;

  // The handle to the output image.
  tm_renderer_handle_t image_handle;
  // The resource state of the output image.
  uint32_t resource_state;
  // The description of the output image.
  tm_renderer_image_desc_t desc;
  // The name of the output image.
  tm_strhash_t name;
} tm_component_t;

In the example, we only call the creation graph once (during the initialization phase). The workflow is as follows. The creation graph subobject is added by The Truth, so we don’t have to write any UI or linking code for it. In the initialize function, we instantiate this creation graph asset with a default context. This updates our image output node and all the nodes it depends on:

// Create the context for the creation graph, only the bare minimum is defined
// for this tutorial. This is not production-level code.
tm_creation_graph_context_t ctx = {
    .rb = manager->rb,
    .device_affinity_mask = TM_RENDERER_DEVICE_AFFINITY_MASK_ALL,
    .entity_ctx = manager->ctx,
    .tt = tm_entity_api->the_truth(manager->ctx)};

for (uint32_t i = 0; i < num_components; ++i) {

  // Skip any component that don't have a creation graph defined.
  tm_component_t *cur = cdata[i];
  if (!cur->creation_graph.u64)
    continue;

  // Instantiate the creation graph if this is the first time.
  if (!cur->instance.graph.u64)
    cur->instance = tm_creation_graph_api->create_instance(
        ctx.tt, cur->creation_graph, &ctx);
}

Next we query all the image output nodes from the graph and pick the first one. The information we get from the output node is enough to copy our image to the viewport:

// Query the creation graph for image outputs; if none are defined then we skip
// the update step.
tm_creation_graph_output_t image_outputs = tm_creation_graph_api->output(
    &cur->instance, TM_CREATION_GRAPH__IMAGE__OUTPUT_NODE_HASH, &ctx, NULL);
if (image_outputs.num_output_objects > 0) {
  const tm_creation_graph_image_data_t *image_data =
      (const tm_creation_graph_image_data_t *)image_outputs.output;

  cur->image_handle = image_data->handle;
  cur->resource_state = image_data->resource_state;
  cur->desc = image_data->desc;
  cur->name = image_data->resource_name;
}

To do this we register it to the viewport's render graph using register_gpu_image() and then pass it to the debug_visualization_resources for easy rendering to the screen:

// Loop through all components until we find one that has a valid image output.
uint32_t i;
const tm_component_t **cdata = (const tm_component_t **)data;
for (i = 0; i < num_components; ++i) {
  const tm_component_t *cur = cdata[i];
  if (!cur->image_handle.resource)
    continue;

  tm_render_graph_api->register_gpu_image(args->render_graph, cur->name,
                                          cur->image_handle,
                                          cur->resource_state, &cur->desc);
  break;
}

// None of the components had a valid image output, so skip the copy step.
if (i == num_components)
  return;

// Instead of making our own copy call, the debug visualization pass is used to
// copy to the viewport. This is not a proper copy, but it's good enough for
// this tutorial.
tm_render_graph_blackboard_value value;
tm_render_graph_api->read_blackboard(
    args->render_graph,
    TM_STATIC_HASH("debug_visualization_resources", 0xd0d50436a0f3fcb9ULL),
    &value);
tm_debug_visualization_resources_t *resources =
    (tm_debug_visualization_resources_t *)value.data;

const uint32_t slot = resources->num_resources;
resources->resources[slot].name = cdata[i]->name,
    resources->resources[slot].contents = CONTENT_COLOR_RGB;
++resources->num_resources;

Remarks

Note that this is a very simple example of the creation graph. We don’t update it every frame, so it will only render once. This makes the Time node useless in this example. Note as well that we are not triggering any wires, which means that the Init event node will never be called by the component.

Also, all destruction code has been omitted from the code sample to shorten it. In a production implementation, the creation graph instance and the component should be destroyed.

Full Code

static struct tm_allocator_api *tm_allocator_api;
static struct tm_api_registry_api *tm_api_registry_api;
static struct tm_creation_graph_api *tm_creation_graph_api;
static struct tm_entity_api *tm_entity_api;
static struct tm_render_graph_api *tm_render_graph_api;
static struct tm_shader_system_api *tm_shader_system_api;
static struct tm_the_truth_api *tm_the_truth_api;

#include <foundation/allocator.h>
#include <foundation/api_registry.h>
#include <foundation/macros.h>
#include <foundation/the_truth.h>

#include <plugins/creation_graph/creation_graph.h>
#include <plugins/creation_graph/creation_graph_output.inl>
#include <plugins/creation_graph/image_nodes.h>
#include <plugins/editor_views/graph.h>
#include <plugins/entity/entity.h>
#include <plugins/render_graph/render_graph.h>
#include <plugins/render_graph_toolbox/toolbox_common.h>
#include <plugins/renderer/render_backend.h>
#include <plugins/renderer/render_command_buffer.h>
#include <plugins/shader_system/shader_system.h>
#include <plugins/the_machinery_shared/component_interfaces/editor_ui_interface.h>
#include <plugins/the_machinery_shared/component_interfaces/shader_interface.h>
#include <plugins/the_machinery_shared/render_context.h>

#include <string.h>

#define TM_TT_TYPE__CREATION_GRAPH_TEST_COMPONENT "tm_creation_graph_test_component"

enum
{
    TM_TT_PROP__CREATION_GRAPH_TEST_COMPONENT__CREATION_GRAPH
};
typedef struct tm_component_t
{
    // The truth ID of the creation graph subobject.
    tm_tt_id_t creation_graph;
    // An instance of the `creation_graph` (created in `shader_ci__init`).
    tm_creation_graph_instance_t instance;

    // The handle to the output image.
    tm_renderer_handle_t image_handle;
    // The resource state of the output image.
    uint32_t resource_state;
    // The description of the output image.
    tm_renderer_image_desc_t desc;
    // The name of the output image.
    tm_strhash_t name;
} tm_component_t;

typedef struct tm_component_manager_o
{
    tm_allocator_i allocator;
    tm_entity_context_o *ctx;
    tm_renderer_backend_i *rb;
} tm_component_manager_o;

// This function is called when the component is initialized,
// this happens at engine startup for the scene tab,
// at the start of the simulation for the simulate tab,
// or once an entity is selected for the preview tab.
static void shader_ci__init(tm_component_manager_o *manager, const tm_entity_t *entities, const uint32_t *entity_indices, void **data, uint32_t num_components)
{
    tm_component_t **cdata = (tm_component_t **)data;
    // Create the context for the creation graph, only the bare minimum is defined for this tutorial.
    // This is not production-level code.
    tm_creation_graph_context_t ctx = {
        .rb = manager->rb,
        .device_affinity_mask = TM_RENDERER_DEVICE_AFFINITY_MASK_ALL,
        .entity_ctx = manager->ctx,
        .tt = tm_entity_api->the_truth(manager->ctx)};

    for (uint32_t i = 0; i < num_components; ++i)
    {

        // Skip any component that don't have a creation graph defined.
        tm_component_t *cur = cdata[i];
        if (!cur->creation_graph.u64)
            continue;

        // Instantiate the creation graph if this is the first time.
        if (!cur->instance.graph.u64)
            cur->instance = tm_creation_graph_api->create_instance(ctx.tt, cur->creation_graph, &ctx);
        // Query the creation graph for image outputs; if none are defined then we skip the update step.
        tm_creation_graph_output_t image_outputs = tm_creation_graph_api->output(&cur->instance, TM_CREATION_GRAPH__IMAGE__OUTPUT_NODE_HASH, &ctx, NULL);
        if (image_outputs.num_output_objects > 0)
        {
            const tm_creation_graph_image_data_t *image_data = (const tm_creation_graph_image_data_t *)image_outputs.output;

            cur->image_handle = image_data->handle;
            cur->resource_state = image_data->resource_state;
            cur->desc = image_data->desc;
            cur->name = image_data->resource_name;
        }
    }
}

// This function is called every frame and allows us to update our shader variables.
static void shader_ci__update(tm_component_manager_o *manager, tm_render_args_t *args, const tm_entity_t *entities,
                              const struct tm_transform_component_t *transforms, const uint32_t *entity_indices, void **data,
                              uint32_t num_components, const uint8_t *frustum_visibilty)
{
    // Loop through all components until we find one that has a valid image output.
    uint32_t i;
    const tm_component_t **cdata = (const tm_component_t **)data;
    for (i = 0; i < num_components; ++i)
    {
        const tm_component_t *cur = cdata[i];
        if (!cur->image_handle.resource)
            continue;

        tm_render_graph_api->register_gpu_image(args->render_graph, cur->name, cur->image_handle, cur->resource_state, &cur->desc);
        break;
    }

    // None of the components had a valid image output, so skip the copy step.
    if (i == num_components)
        return;

    // Instead of making our own copy call, the debug visualization pass is used to copy to the viewport.
    // This is not a proper copy, but it's good enough for this tutorial.
    tm_render_graph_blackboard_value value;
    tm_render_graph_api->read_blackboard(args->render_graph, TM_STATIC_HASH("debug_visualization_resources", 0xd0d50436a0f3fcb9ULL), &value);
    tm_debug_visualization_resources_t *resources = (tm_debug_visualization_resources_t *)value.data;

    const uint32_t slot = resources->num_resources;
    resources->resources[slot].name = cdata[i]->name,
    resources->resources[slot].contents = CONTENT_COLOR_RGB;
    ++resources->num_resources;
}

static void create_truth_types(struct tm_the_truth_o *tt)
{
    static tm_ci_editor_ui_i editor_aspect = {0};

    static tm_ci_shader_i shader_aspect = {
        .init = shader_ci__init,
        .update = shader_ci__update};
    static const tm_the_truth_property_definition_t properties[] = {
        [TM_TT_PROP__CREATION_GRAPH_TEST_COMPONENT__CREATION_GRAPH] = {"creation_graph", TM_THE_TRUTH_PROPERTY_TYPE_SUBOBJECT, .type_hash = TM_TT_TYPE_HASH__CREATION_GRAPH}};
    const tm_tt_type_t component_type = tm_the_truth_api->create_object_type(tt, TM_TT_TYPE__CREATION_GRAPH_TEST_COMPONENT, properties, TM_ARRAY_COUNT(properties));
    tm_creation_graph_api->create_truth_types(tt);
    tm_the_truth_api->set_default_object_to_create_subobjects(tt, component_type);

    // The editor aspect has to be defined if we want our component to be usable in the editor.
    // The shader aspect is used to update the creation graph and our final output.
    tm_tt_set_aspect(tt, component_type, tm_ci_editor_ui_i, &editor_aspect);
    tm_tt_set_aspect(tt, component_type, tm_ci_shader_i, &shader_aspect);
}

static bool component__load_asset(tm_component_manager_o *manager, struct tm_entity_commands_o *commands, tm_entity_t e, void *data, const tm_the_truth_o *tt, tm_tt_id_t asset)
{
    tm_component_t *c = data;
    tm_tt_id_t creation_graph = tm_the_truth_api->get_subobject(tt, tm_tt_read(tt, asset), TM_TT_PROP__CREATION_GRAPH_TEST_COMPONENT__CREATION_GRAPH);

    // We only want update if the creation graph has changed,
    // Note that we set the entire component to zero if this happens,
    // this is because all fields are dependent on the creation graph.
    if (c->creation_graph.u64 != creation_graph.u64)
    {
        memset(c, 0, sizeof(tm_component_t));
        c->creation_graph = creation_graph;
        return true;
    }

    return false;
}

static void component__create_manager(tm_entity_context_o *ctx)
{
    tm_allocator_i a;
    tm_entity_api->create_child_allocator(ctx, TM_TT_TYPE__CREATION_GRAPH_TEST_COMPONENT, &a);

    tm_renderer_backend_i *backend = tm_single_implementation(tm_api_registry_api, tm_renderer_backend_i);

    tm_component_manager_o *manager = tm_alloc(&a, sizeof(tm_component_manager_o));
    *manager = (tm_component_manager_o){
        .allocator = a,
        .ctx = ctx,
        .rb = backend,
    };

    const tm_component_i component = {
        .name = TM_TT_TYPE__CREATION_GRAPH_TEST_COMPONENT,
        .bytes = sizeof(tm_component_t),
        .manager = manager,
        .load_asset = component__load_asset,
    };

    tm_entity_api->register_component(ctx, &component);
}

TM_DLL_EXPORT void load_plugin(struct tm_api_registry_api *reg, bool load)
{

    tm_allocator_api = tm_get_api(reg, tm_allocator_api);
    tm_api_registry_api = tm_get_api(reg, tm_api_registry_api);
    tm_creation_graph_api = tm_get_api(reg, tm_creation_graph_api);
    tm_entity_api = tm_get_api(reg, tm_entity_api);
    tm_render_graph_api = tm_get_api(reg, tm_render_graph_api);
    tm_shader_system_api = tm_get_api(reg, tm_shader_system_api);
    tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);

    tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i, create_truth_types);
    tm_add_or_remove_implementation(reg, load, tm_entity_create_component_i, component__create_manager);
}

Creating a Raymarching Creation Graph Output Node

In this tutorial, we'll learn a little bit more about the Creation Graph system by creating a custom raymarching output node.

Note: There are many resources about raymarching on the internet, so we'll focus only on integrating it into The Machinery.

Table of Contents

Introduction

Note that in this tutorial, we don't use any geometric data; our node only queries the signed distance field. Therefore you can create any kind of geometric figure using the other creation graph nodes. You can also extend it to play with volumetric effects or other kinds of nice effects.

When you create a shader for The Machinery, you'll need to put it in bin/data/shaders.

Note: We will improve this workflow, but for now, you can use a custom rule in premake5 to copy your custom shaders to the correct directory.

The source code and an example project with an SDF plane and sphere can be found at tm_raymarch_tutorial, and you can look at shader_system_reference for a complete overview of the concepts used in this tutorial.

What Are Our Goals?

Ok, so what are our requirements?

  • To use the scene camera as our starting point for raymarching;
  • To blend our results with objects in the viewport;
  • And to be able to query the signed distance in each loop iteration.

Let's forget for a moment that we're creating an output node. The shader language is basically HLSL inside JSON-like blocks. We start by defining some basic blocks.

Enable Alpha Blending:

blend_states : {
   logical_operation_enable: false
   render_target_0 : {
       blend_enable: true
       write_mask : "red|green|blue|alpha"
       source_blend_factor_color : "source_alpha"
       destination_blend_factor_color : "one_minus_source_alpha"
   }
} 

Disable Face Culling:

raster_states : {
    polygon_mode: "fill"
    cull_mode: "none" 
    front_face : "ccw"
}

Getting the Entity's World Transform

We want to access the entity's world transform. Later we'll export it to shader function nodes, so our SDF can take it into account:

imports : [    
    { name: "tm" type: "float4x4" }
]

common : [[
#define MAX_STEPS 1000
#define MAX_DIST 1000.0
#define SURF_DIST 0.01
]]

The Vertex Shader

Now we can look at the vertex shader. Our viewport quad is constructed from a single triangle that will be clipped later. You could explicitly create a quad with four vertices instead, but we are doing it this way because it has a small performance gain and is consistent with other shaders in the engine.

The shader consists of the following parts:

  • An import_system_semantics block that we use to query vertex_id, which is translated to SV_VertexID;
  • An exports block used to export the camera ray and the world position. Looking at the shader below, we see the channel concept for the first time. By adding channel_requested: true, the value can be requested by other nodes, and a TM_CHANNEL_AVAILABLE_* define will be generated that other nodes can check for. If some node requests a channel, TM_CHANNEL_REQUESTED_* will be defined as well. The generated tm_graph_io_t struct will have a world_position field, which we use to expose the entity's world position to the graph; at the end we call tm_graph_write(), which writes world_position to the shader output.
vertex_shader : {
   import_system_semantics : [ "vertex_id" ]
   exports : [
       { name: "camera_ray" type: "float3"}
       { name: "world_position" type: "float3" channel_requested: false }
   ]

   code : [[
       tm_graph_io_t graph;
       #if defined(TM_CHANNEL_REQUESTED_world_position)
           graph.world_position = load_tm()._m30_m31_m32;
       #endif

       static const float4 pos[3] = {
           { -1,  1,  0, 1 },
           {  3,  1,  0, 1 },
           { -1, -3,  0, 1 },
       };
   
       output.position = pos[vertex_id];


       float4x4 inverse_view = load_camera_inverse_view();
       float4 cp = float4(pos[vertex_id].xy, 0, 1);
       float4 p = mul(cp, load_camera_inverse_projection());
       output.camera_ray = mul(p.xyz, (float3x3)inverse_view);        

       tm_graph_write(output, graph);
       return output;
   ]]
}

The Generic Output Node

As mentioned before, our goal is to have a generic output node: the signed distance used for raymarching will be supplied by the graph, so now is a good moment to define our creation graph node block:

creation_graph_node : {
   name: "raymarch_output"
   display_name: "Raymarch"
   category: "Shader/Output"

   inputs : [
       { name: "distance" display_name: "Distance" type: "float" evaluation_stage: ["pixel_shader"] evaluation_contexts : ["distance"] optional: false }
       { name: "color" display_name: "Color (3)" type: "float3" evaluation_stage: ["pixel_shader"] evaluation_contexts : ["default", "color"] optional: false }
       { name: "light_position" display_name: "Light Pos (3)" type: "float3" evaluation_stage: ["pixel_shader"] optional: false }
   ]
}

You can see that we can define the evaluation stage for inputs. In our case, we only need these inputs in the pixel shader. A shader accesses these inputs through the appropriate field in tm_graph_io_t. Before a shader can access them, we have to call the evaluate function. If we do not specify an evaluation context, the input will be added to the default evaluation context. The shader system will generate a tm_graph_evaluate() function for the default context and tm_graph_evaluate_<context_name>() functions (for example, tm_graph_evaluate_distance() for our distance context) for the remaining contexts.

Note that an input can be in more than one evaluation context.

All this will be useful because we need to query the signed distance field in every loop iteration. By using an evaluation context, the process is cheaper, because only the functions related to that input will be called.

Pixel Shader

Below you can see the final pixel shader. As we need to evaluate the graph, we can't put the raymarching code in the common block, as tm_graph_io_t for the pixel shader isn't defined at this point:

pixel_shader : {
   exports : [
       { name : "color" type: "float4" }
       { name: "sample_position" type: "float3" channel_requested: true }
   ]

   code : [[
       float3 world_pos = load_camera_position();
       float3 world_dir = normalize(input.camera_ray);

       tm_graph_io_t graph;
       tm_graph_read(graph, input);
       tm_graph_evaluate(graph);

       // Get distance
       float d = 0.0;
       float amb = 0.0;
       float alpha = 1.0;
       for (int i = 0; i < MAX_STEPS; i++) {
           float3 p = world_pos + world_dir * d;
           #if defined(TM_CHANNEL_REQUESTED_sample_position)
               graph.sample_position = p;
           #endif        
           tm_graph_evaluate_distance(graph);
           float ds = graph.distance;
           d += ds;

           if (ds < SURF_DIST) {
               amb = 0.01;
               break;
           }
           if (d > MAX_DIST) {
               alpha = 0.0;
               break;
           }
       }
       
       float3 p = world_pos + world_dir * d;

       // Normal calculation
       #if defined(TM_CHANNEL_REQUESTED_sample_position)
           graph.sample_position = p;
       #endif        
       tm_graph_evaluate_distance(graph);
       d = graph.distance;

       float2 e = float2(0.01, 0);

       #if defined(TM_CHANNEL_REQUESTED_sample_position)
           graph.sample_position = p - e.xyy;
       #endif        
       tm_graph_evaluate_distance(graph);
       float n1 = graph.distance;

       #if defined(TM_CHANNEL_REQUESTED_sample_position)
           graph.sample_position = p - e.yxy;
       #endif        
       tm_graph_evaluate_distance(graph);
       float n2 = graph.distance;

       #if defined(TM_CHANNEL_REQUESTED_sample_position)
           graph.sample_position = p - e.yyx;
       #endif        
       tm_graph_evaluate_distance(graph);
       float n3 = graph.distance;

       float3 n = float3(d, d, d) - float3(n1, n2, n3);
       n = normalize(n);
       
       // Light calculation
       float3 light_pos = graph.light_position;
       float3 l = normalize(light_pos - p);
       float dif = saturate(dot(n, l));

       d = 0.f;
       for (int j = 0; j < MAX_STEPS; j++) {
           float3 pos = (p + n * SURF_DIST * 2.0) + l * d;
           #if defined(TM_CHANNEL_REQUESTED_sample_position)
               graph.sample_position = pos;
           #endif        
           tm_graph_evaluate_distance(graph);
           float ds = graph.distance;
           d += ds;

           if (d > MAX_DIST || ds < SURF_DIST) 
               break;
       }

       if (d < length(light_pos))
           dif *= 0.1;

       float3 col = graph.color;
       col = col * dif + amb;

       output.color = float4(col, alpha);

       return output;
   ]]
}

The compile block

Finally we define the compile block. The compile block allows you to specify the compilation environment for the shader. This block has to be added for the shader to be compiled on its own, although the shader can still be included by other shaders without it. There are two things you can do in this block:

  • Include additional shader files to be appended to this shader file.
  • Enable systems or configurations based on the shader conditional language or whether a system is active.

The contexts defined in the contexts block define the compilation environment(s) for the shader. There will always be one "default" instance of the shader compiled if no context is specified, so the context is optional.

We only need the viewer_system to access camera-related constants, and we render our result in the hdr-transparency layer:

compile : {
   configurations: {
       default: [
           { 
               variations : [
                   { systems: [ "viewer_system" ] }
               ]
           }
       ]

   }

   contexts: {
       viewport: [
           { layer: "hdr-transparency" configuration: "default" }
       ]
   }
}

Note: The configuration block can become very complex because of its recursive nature. A configuration can have several systems that need to be enabled for the configuration to run, but it might also have variations on those systems. This can continue recursively.

Physics

You should read the Physics Introduction before you continue here.

In this section we will discuss some of the Samples in more detail.

You can download the Physics Sample projects from the Download Tab: Help → Download Sample Projects

Where to find the samples

After you have downloaded the Physics Sample Project, you can open it:

Where to find the scene

The Scenes folder contains all sample scenes (entities). Double click on a sample scene of your choice to load it.

Each of the Sample Scenes is composed of multiple Entity Prototypes. You can find them in the Special Objects or in the Shapes folder:

Special Objects folder

Special Objects are entities with Graph Components attached that have associated logic.

Shapes are reused Entities that demonstrate different kinds of Physics Bodies.

Note: If you change the Prototypes, all the instances will change as well. Keep this in mind when playing around with the samples. If you mess things up, you can always re-download a fresh sample project.

Triggers

This walkthrough shows you how to create a trigger with the Graph Component. You can find the "source code" in our Physics Samples.

Table of Contents

Assemble a Trigger

What is a trigger?

Something that reacts when another object intersects or touches it, either constantly or just the first/last time.

Note: In Unreal Engine, this might be called Trigger Actors / Trigger Box.

In the Machinery, we have two types of triggers we can use:

  • PhysX Trigger - a physics-based trigger.
  • Volume Component - a trigger based on the Volume Component.

In this walkthrough, we are focused on PhysX's Trigger Event.

In this walkthrough we will create a Trigger that adds velocity to a ball shot from the camera. Therefore we need to make the following Entities:

  • The Trigger
  • A world (plane)
  • A Ball

Create the Trigger Entity

Let us create a folder in the Project root and call it "Special Objects". It will be the folder in which we keep all our Special Objects for now and for what might come.

In this folder, we create an Entity with the name "Trigger". We add two extra components:

  • A Graph Component for some logic
  • A Physics Shape to make sure the Physics World can see it

When adding the Physics Shape, we need to consider the Type. By default, the Type is Sphere, but that would not suit our needs since we want it to be a red box. We change the Type to Box and tick the "Is Trigger" checkbox to make sure it is a Trigger. We can also change the Half Extent value if we like.

If you look now into the Scene Tab, you see nothing. To change that, you can turn on the Debug Visualization:

Having a Trigger that cannot be seen might be applicable for some games. In our case, we choose to make the Trigger Visible with a box.

Luckily the core provides a Box Entity for us: core/geometry/box.entity.

This location is something we keep in mind!

Let us also add the Box (core/geometry/box.entity) from the core to our Entity as a child. This Box makes it easier for us to test it later because we can see it in the Scene Window.

Add the logic to the graph

We double-click the Graph Component to open the Graph Editor. The Graph Editor is empty. We want to add an Init Event and then a "Physx On Trigger Event" node. We need to connect its Start Listening connector with the Init Event connector.

To get the current Entity, we add the node "Scene Entity" and connect the outgoing connector with the "Physx On Trigger Event" Entity connector.

The goal was to apply velocity to any entity that touches the Trigger for the first time. That is why we add a connection from "First Touch" to a newly added "Physx Set Velocity" node.

We connect the Entity connector of the "Physx Set Velocity" node to the Touched Entity connector of the "Physx On Trigger Event" node.

We need to get the current velocity of this Entity. We can do this by using the "Physx Get Velocity" node. We then modify the result with, let us say, -1 and apply it at the end. (The lower the value, the stronger the ball will bounce off.)

This is what our Trigger Entity looks like:

Note: The Box Entity will be displayed yellow because it is a prototype instance of the Entity within the core/geometry/ folder. Any changes to this prototype will apply to this instance as well.

Create the ball

The Trigger is quite useless unless it can interact with something! That is why we want to let the player shoot a ball from the Camera.

Again the core comes to our rescue and provides us with a Sphere in the core/geometry/ folder! We will use this for our ball.

We open the "Special Objects" folder and add a new Entity called "Ball". With a double-click, we open it and add a "Physics Shape" and "Physics Body" Component. In the "Physics Shape Component," we leave the Type to Sphere.

Note: We can also visualize the Sphere Physics Shape in the Scene the same way we visualized them for the Box.

After this, we need to ensure that our ball has Continuous Collision Detection (CCD) enabled. Also, the Inertia Tensor should be set to 0.4, and Angular Damping should be set to 0.05.

Now that we have adjusted all the components let us add the actual ball. Again we can drag and drop the sphere.entity from the core/geometry/ folder onto our Entity.

Creating the Scene

Now that we have nearly all the components to create our little Scene, all that is missing is the playground. The playground can be defined as just a plane with a trigger on it.

We can create a new Folder in the Asset Browser root and call it "Scenes". In there, we create a new Entity and call it "Triggers". We open this Entity.

The first thing we do is add a new Empty Child entity. We call it Floor or Plane.

Note: Right-click on the Main Entity "Add Child Entity."

We add a Physics Shape Component to this Entity and change its Type to Plane.

If we do not use the Physics Visualization mode, we see nothing in the Scene Tab. We can change this by adding a new Child Entity to our floor Entity. We right-click on the Plane / Floor Entity -> Add Child Entity -> From Asset and search for the Plane Entity. It is also located in the core.

When we look at the Scene Tab now, we see our new floor entity! Let us drag in our Trigger. We need to drag and drop the Trigger Entity from the Asset Browser into the Scene and adjust it with the tools within the Scene Tab.

The result could look like this:

Note: We should add a Light Entity. Otherwise, it might be quite dark. Luckily the core has our back also here. We can just right-click the main Entity and Add Child Entity -> From Asset -> Light.

Spawn balls

The Scene itself is not what we want because we cannot spawn balls yet. To do this, we add a graph to the Scene itself.

In there, we add a "Tick Event" we need to poll every tick if the space key has was pressed. If you pressed space, we would spawn the ball from the camera direction.

We push the ball via "Physx Push" with a calculated velocity.

Conclusion

All of the "code" described above can be found when you download the Physics Sample projects from the Download Tab: Help -> Download Sample Projects.

The Arkanoid scene shows you a physically accurate version of the classic Arkanoid brick-breaking game. The goal is to bounce the ball around and hit as many bricks as possible; the Bat can be controlled by moving the mouse left/right.

Arkanoid Scene breakdown

Bat (Prototype location: Special Objects/Bat.entity)

The Bat is the Entity that the player controls to hit the ball. Relevant components:

  • Physics Shape, to make the Bat collide with the ball and bounce it back.

    By examining the Physics Shape properties we can see that both the Material and the Collision properties are set.

The Bouncy Material (Physics Materials/Bouncy.physics_material) has a Restitution Coefficient of 1. This is what makes the ball "bounce".


The Bat Collision (Physics Materials/Bat.physics_collision) specifies that shapes with this collision should only collide with entities that have the Default Collision type (which can itself be found in the Physics Materials folder). As you can imagine, the Ball will have the Default collision set on its Shape.


  • Physics Body, to make the Bat movement around the world Physically accurate.

    Examining the Physics Body properties, we notice that the Kinematic checkbox is ticked: this means that the position of the Bat Entity won't be driven by the Physics simulation itself, but by the position in the Transform Component instead. In this case we'll alter the Transform Component position via the Entity Graph of the Bat; the position of the Transform Component will then be reflected automatically in the Physics simulation.


  • Entity Graph, which is used to:

    a) move the Bat via the mouse

    b) Push the ball in the opposite direction when the bat and the ball collide.


You may be wondering why it's not enough to just push the ball in the correct direction to make it bounce, and why we also have to set the restitution coefficient to 1. The answer is that if we leave the restitution coefficient at 0, the ball will lose all of its "energy" when it collides with the Bat, so even if we later push it in the correct direction it will still move very slowly.

Walls (Prototype location: Shapes/Wall.entity)

The walls in the scene are just static Entities with a Physics Shape component and a scaled Box child Entity. They're practically very similar to the Bat entity, except that they don't have:

  • The Physics body component (we don't want walls to be able to move around).
  • The Entity Graph component, as we don't need any logic applied to them: walls are just static entities.


Bricks (Prototype location: Special Objects/brick)

The "standard" brick (the one without a fancy blue ball on it) Is nothing more than a static Shape like walls are, with a small addition: we want bricks to be destroyed when they collide with the ball.

To accomplish that, they:

  • Have the Notify Touch collision type, so that collisions with the ball are notified.
  • Have an Entity Graph component that implements this very simple logic: they register the Physx On Contact event and, once it triggers, they just delete themselves from the game by calling Destroy Entity.


Special Bricks (Prototype location: Special Objects/Multiball Brick.entity)

This special kind of brick works exactly like a standard brick, with a simple addition: its Entity Graph is made so that when it is hit, it will spawn an additional ball in the game.


To see how it works, let's dive into its Entity Graph component, which is pretty similar to the standard brick's.

In there we can see that, just after destroying itself, it will execute the Spawn a new Ball subgraph, which simply spawns a new ball and pushes it.


Notice how we're using a vec3 variable to store the position at which we want to spawn the additional ball in the Save Position for New Ball subgraph.

Lost ball trigger

The Lost ball trigger is the Entity responsible for re-spawning the ball when it goes out of bounds. It works by having a Physics Shape (you can see it in yellow by clicking on it in the Entity Tree) with the Notify Touch Collision type and with the is_trigger checkbox ticked:


Physics Shapes flagged with the is_trigger checkbox will still exist in the physical world, but when a collision with them happens, the collision will be notified and the objects will be allowed to "interpenetrate" each other.

This means that it won't physically block the Ball (which has the Default collision, remember), and once the collision happens the Physx On Trigger event will be triggered (we can find it in the Entity Graph):


Here we can see that once it's triggered, this event will:

  • destroy the ball (which is the Touched entity; notice the difference between this trigger and the one in the Brick's graph: there we are deleting the brick itself)
  • call the Lost Ball event on the parent entity (the parent entity is found via the Entity From Path node, specifying .. as the path, which in this case is the Arkanoid Entity itself).

Arkanoid

In addition to containing all the other entities as children, the Arkanoid entity also has the Entity Graph component that is used for Spawning a ball when the game starts (or when the ball goes out of bounds).


You can see that the Spawn Ball event is called both at the beginning of the Game (When the Init event will be called) but also when a Lost ball event is triggered.

This concludes the tour of the Arkanoid scene. Try to experiment with it a bit and don't be scared of breaking stuff; you can always re-download the sample if you screw things up.

Contacts scene breakdown


Plane

The Plane Entity is just a simple Entity with a Shape attached to it.

Spawn Pile

The Spawn Pile doesn't have any Physics component attached to it, but its Entity Graph will spawn a new Box every three seconds and push it down towards the ground.


Ball thrower (Prototype location: Special Objects/Ball Thrower)

The Ball Thrower doesn't have any Physics component either, but it will spawn a new ball in the viewing direction if the spacebar is pressed. The Logic is pretty similar to the one of the Spawn Pile: Spawn a new entity and use the Physx Push Node to push it in a specific direction, in this case the camera viewing direction.


That's it for the Contacts scene, it's a pretty simple one.

You will notice that the Boxes only collide with the blue spheres and the Ground plane: as an exercise, try to make it so that the boxes also collide with each other.

Kinematic Scene breakdown


Plane

Just static geometry like we saw in previous scenes.

Walls

Just static geometry. (Physics Shape component)

Sweeper

The Sweeper is a simple rigid body with a rectangular shape that perpetually rotates on its own Y axis.

The Rigid body component has the "Kinematic" flag set so that Physx knows that the position of the entity will be driven by its transform component.

The Velocity component is used to apply (in this case) a constant angular velocity to the entity to make it rotate on its own axis.


Spawners

There are four different spawners in the scene, placed at the four corners of the plane, and each one of them will spawn a physics object once every second.

Notice that the spawned entities won't have any force applied to them, so they will just fall to the ground (until they get swept away by the Sweeper, that is).

You can find the objects that will be spawned for each of the spawners under the Shapes folder.


Ball Thrower (Prototype location: Special Objects/Ball Thrower)

The same ball thrower that we saw in the Contacts scene: press Space to throw a ball in the scene.

Stack Scene Breakdown


Plane

Simple static geometry.

Stack

Simple static geometry.

Ball Thrower

The same Ball Thrower that we already saw.

Sniper

The sniper will cast a ray into the scene using the Physx Raycast node if the Left mouse button is pressed and, if something is hit, it will push the hit entity in the camera direction.


Tutorials

This section introduces you to some more complex topics such as How to Create your Own Asset Type.

For more information, check out the documentation and these blog posts.

Creating a Custom Asset Type

This walkthrough series shows you how to add a custom asset type to the Engine. You should have basic knowledge about how to write a custom plugin. If not, you might want to check this Guide. The goal for this walkthrough is to create a text file asset type.

We will cover the following topics:

Create a Custom Asset Type: Part 1

This part will cover the following topics:

  • What to think of in advance?
  • Creating a The Truth type
  • Exposing the asset to the asset browser
    • Via the context menu
    • Via code

The next part will explore how to store more complex data in an asset and how to get this data back into the Engine.

You can find the whole source code in its git repo: example-text-file-asset

Table of Contents

First Step: What Kind of Asset Do We Want to Create?

The Machinery comes with some predefined asset types, but these types might not cover all types of data you want to represent in the engine. Luckily, you can extend the asset types supported by using plugins.

In this example we will extend the engine with support for text file assets. The steps shown below will be similar regardless of what kind of data representation you are creating.

Creating a Type in The Truth

In The Machinery, all data is represented in The Truth. To add a new kind of data, we need to register a new Truth Type.

Note that not all Truth types are Asset types. An Asset type is a Truth type that can exist as an independent object in the Asset Browser. For example, Vector3 is a Truth type representing an (x, y, z) vector, but it is not an asset type, because we can't create a Vector3 asset in the Asset Browser. A Vector3 is always a subobject of some other asset (such as an Entity).

A Truth type is defined by a name (which must be unique) and a list of properties. Properties are identified by their indices and each property has a specific type.
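For illustration only, a hypothetical type with two properties might declare its property indices and definitions like this. The names below are made up, and the float property-type constant is assumed from the foundation headers:

enum {
    TM_TT_PROP__MY_TYPE__NAME,   // property index 0
    TM_TT_PROP__MY_TYPE__WEIGHT, // property index 1
};

static tm_the_truth_property_definition_t my_type_properties[] = {
    [TM_TT_PROP__MY_TYPE__NAME] = {"name", TM_THE_TRUTH_PROPERTY_TYPE_STRING},
    [TM_TT_PROP__MY_TYPE__WEIGHT] = {"weight", TM_THE_TRUTH_PROPERTY_TYPE_FLOAT},
};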

Typically, we put the type name, the hashed name and the list of properties in the header file, so that they can be accessed by other parts of the code. (Though if you have a Truth type that will only be used internally, in your own code, you could put it in the .c file.)

Example header file my_asset.h:

#pragma once
#include <foundation/api_types.h>
//... more code
#define TM_TT_TYPE__MY_ASSET "tm_my_asset"
#define TM_TT_TYPE_HASH__MY_ASSET TM_STATIC_HASH("tm_my_asset", 0x1e12ba1f91b99960ULL)

Do not forget to run hash.exe whenever you use TM_STATIC_HASH() in your code. This will ensure that the correct value for the hash is cached in the macro.

If you are creating a complicated type it may have subobjects that themselves have custom types.

To make The Truth aware of this custom type, we must register it with The Truth. This is done with a callback function that is typically called create_truth_types() or truth__create_types(), but the name doesn't really matter. We register this callback function under the tm_the_truth_create_types_i interface. That way, The Truth knows to call this function to register all the types whenever a new Truth object is created (this happens, for example, when you open a project using File → Open).

Note: Interfaces and APIs are the main mechanisms for extending The Machinery. The difference is that an API only has a single implementation (there is only one tm_the_truth_api, for instance), whereas there can be many implementations of an interface. Each plugin that creates new Truth types will implement the tm_the_truth_create_types_i interface.

Example tm_load_plugin function for my_asset.c:

// -- load plugin
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) {
  tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
  tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i,
                                  create_truth_types);
  tm_add_or_remove_implementation(reg, load, tm_asset_browser_create_asset_i,
                                  &asset_browser_create_my_asset);
}

Now we can implement the actual create_truth_types() function. We use tm_the_truth_api->create_object_type() to create a Truth type with a specified name and properties.
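A minimal sketch of this function for our type, with no properties yet, could look like this (the full version, which also registers the file extension, is shown a little further down):

static void create_truth_types(struct tm_the_truth_o *tt)
{
    // No properties yet, so we pass 0 for both the property array and the count.
    const tm_tt_type_t type = tm_the_truth_api->create_object_type(tt, TM_TT_TYPE__MY_ASSET, 0, 0);
    (void)type; // `type` will be used later when we attach aspects to it.
}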

At this point, we have a new The Truth type. But it's not yet an asset!

What Is the Difference between a Truth Type and an Asset?

An Asset in The Machinery is just a Truth object of the type TM_TT_TYPE__ASSET.

It looks like this:

enum {
    // Name of the asset.
    TM_TT_PROP__ASSET__NAME, // string

    // Directory where the asset resides. For top-level assets, this is `NULL`.
    TM_TT_PROP__ASSET__DIRECTORY, // reference [[TM_TT_TYPE__ASSET_DIRECTORY]]

    // Labels applied to this asset.
    TM_TT_PROP__ASSET__UUID_LABELS, // subobject_set(UINT64_T) storing the UUID of the associated label.

    // Subobject with the actual data of the asset. The type of this subobject depends on the type
    // of data stored in this asset.
    TM_TT_PROP__ASSET__OBJECT, // subobject(*)

    // Thumbnail image associated with asset
    TM_TT_PROP__ASSET__THUMBNAIL, // subobject(TM_TT_TYPE__ASSET_THUMBNAIL)
};

The most important part here is TM_TT_PROP__ASSET__OBJECT. This is the actual object that the asset contains. For an Entity asset, this will be an object of type TM_TT_TYPE__ENTITY, etc.

So the Asset object is just a wrapper that adds some metadata to the actual data object (found in TM_TT_PROP__ASSET__OBJECT). This is where our new TM_TT_TYPE__MY_ASSET will be found.

Truth types that are used as assets need to define an extension. This will be shown in the Asset Browser. For example, entities have the extension "entity", so an entity named world is shown in the Asset Browser as world.entity. The extension is also used when the project is saved to disk, but in this case it is automatically prefixed with tm_. So if you look at the project on disk, world.entity will be saved as world.tm_entity. The reason for this is to be able to easily tell The Machinery files apart from other disk files.

We set the extension by adding a Truth aspect of type TM_TT_ASPECT__FILE_EXTENSION to our type.

Here's the full code for creating the type and registering the extension:

static void create_truth_types(struct tm_the_truth_o *tt) {
  // we have no properties yet, which is why the last two arguments are 0, 0
  const tm_tt_type_t type =
      tm_the_truth_api->create_object_type(tt, TM_TT_TYPE__MY_ASSET, 0, 0);
  tm_tt_set_aspect(tt, type, tm_tt_assets_file_extension_aspect_i, "my_asset");
}

Exposing the Type to the Asset Browser

Even though we now have a Truth Type as well as an Extension, we still don't have any way of creating objects of this type in the Asset Browser. To enable that, there's another interface we have to implement: tm_asset_browser_create_asset_i:

// Interface that can be implemented to make it possible to create assets in the Asset Browser,
// using the **New** context menu.
typedef struct tm_asset_browser_create_asset_i
{
    struct tm_asset_browser_create_asset_o *inst;

    // [[TM_LOCALIZE_LATER()]] name of menu option to display for creating the asset (e.g. "New
    // Entity").
    const char *menu_name;

    // [[TM_LOCALIZE_LATER()]] name of the newly created asset (e.g. "New Entity");
    const char *asset_name;

    // Create callback, should return The Truth ID for the newly created asset.
    tm_tt_id_t (*create)(struct tm_asset_browser_create_asset_o *inst, struct tm_the_truth_o *tt,
        tm_tt_undo_scope_t undo_scope);
} tm_asset_browser_create_asset_i;

Source: plugins/editor_views/asset_browser.h

If you implement this interface, your Truth type will appear in the New → context menu of the asset browser and you can create new objects of the type from there.

For our basic type, this interface can be defined as follows:

// -- asset browser register interface
static tm_tt_id_t
asset_browser_create(struct tm_asset_browser_create_asset_o *inst,
                     tm_the_truth_o *tt, tm_tt_undo_scope_t undo_scope) {
  const tm_tt_type_t type = tm_the_truth_api->object_type_from_name_hash(
      tt, TM_TT_TYPE_HASH__MY_ASSET);
  return tm_the_truth_api->create_object_of_type(tt, type, undo_scope);
}
static tm_asset_browser_create_asset_i asset_browser_create_my_asset = {
    .menu_name = TM_LOCALIZE_LATER("New My Asset"),
    .asset_name = TM_LOCALIZE_LATER("New My Asset"),
    .create = asset_browser_create,
};
  • The menu_name specified in the interface is the name that will appear in the New → menu.
  • The asset_name is the name that will be given to the newly created asset.
  • The asset_browser_create() function creates the object of our type. If we wanted to, we could do more advanced things here to set up the asset.

This interface is registered by the tm_load_plugin() function, just as all the other interfaces:

Example tm_load_plugin() function for my_asset.c

// -- load plugin
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) {
  tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
  tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i,
                                  create_truth_types);
  tm_add_or_remove_implementation(reg, load, tm_asset_browser_create_asset_i,
                                  &asset_browser_create_my_asset);
}

The asset can now be created from the Asset Browser:

So far things are not that exciting. But we are getting there!

What is next?

In the next part we will refactor code and show how to make the asset more useful by adding some actual data to it.

Part 2

Appendix: Creating an Asset from Code

The Asset Browser lets you create new assets using the UI, but you may also want to create assets from code. You can do this by using the tm_asset_browser_add_asset_api provided by the Asset Browser plugin. It lets you create new assets and adds them to the current project.

To create an asset:

  1. Create a Truth object of the desired type and add it to the project using tm_asset_browser_add_asset_api->add().
  2. If you want the action to be undoable, you need to create an undo scope for it and add it to the undo stack.
  3. If you want the asset to be selected in the asset browser, you need to pass true for the should_select parameter.

The following code example demonstrates how to add an asset of the TM_TT_TYPE__MY_ASSET type to the project.

// ... other includes
#include <foundation/the_truth.h>
#include <foundation/undo.h>

#include <plugins/editor_views/asset_browser.h>

#include "my_asset.h"
//... other code

static void add_my_asset_to_project(tm_the_truth_o *tt, struct tm_ui_o *ui, const char *asset_name, tm_tt_id_t target_dir)
{
    const tm_tt_type_t my_asset_type_id = tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__MY_ASSET);
    const tm_tt_id_t asset_id = tm_the_truth_api->create_object_of_type(tt, my_asset_type_id, TM_TT_NO_UNDO_SCOPE);
    struct tm_asset_browser_add_asset_api *add_asset = tm_get_api(tm_global_api_registry, tm_asset_browser_add_asset_api);
    const tm_tt_undo_scope_t undo_scope = tm_the_truth_api->create_undo_scope(tt, TM_LOCALIZE("Add My Asset to Project"));
    bool should_select = true;
    // We do not have any asset labels, therefore we do not need to pass them. That's why
    // the last two arguments are 0 and 0.
    add_asset->add(add_asset->inst, target_dir, asset_id, asset_name, undo_scope, should_select, ui, 0, 0);
}

Full Example of Basic Asset

my_asset.h

#pragma once
#include <foundation/api_types.h>
//... more code
#define TM_TT_TYPE__MY_ASSET "tm_my_asset"
#define TM_TT_TYPE_HASH__MY_ASSET TM_STATIC_HASH("tm_my_asset", 0x1e12ba1f91b99960ULL)

(Do not forget to run hash.exe when you create a new TM_STATIC_HASH())

my_asset.c

// -- api's
static struct tm_the_truth_api *tm_the_truth_api;
// -- includes
#include <foundation/api_registry.h>
#include <foundation/localizer.h>
#include <foundation/the_truth.h>
#include <foundation/the_truth_assets.h>
#include <foundation/undo.h>

#include <plugins/editor_views/asset_browser.h>

#include "txt.h"

// -- create truth type
static void create_truth_types(struct tm_the_truth_o *tt)
{
    // we have no properties yet, which is why the last two arguments are 0, 0
    const tm_tt_type_t type = tm_the_truth_api->create_object_type(tt, TM_TT_TYPE__MY_ASSET, 0, 0);
    tm_tt_set_aspect(tt, type, tm_tt_assets_file_extension_aspect_i, "my_asset");
}
// -- asset browser register interface
static tm_tt_id_t asset_browser_create(struct tm_asset_browser_create_asset_o *inst, tm_the_truth_o *tt, tm_tt_undo_scope_t undo_scope)
{
    const tm_tt_type_t type = tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__MY_ASSET);
    return tm_the_truth_api->create_object_of_type(tt, type, undo_scope);
}
static tm_asset_browser_create_asset_i asset_browser_create_my_asset = {
    .menu_name = TM_LOCALIZE_LATER("New My Asset"),
    .asset_name = TM_LOCALIZE_LATER("New My Asset"),
    .create = asset_browser_create,
};
// -- load plugin
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
    tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i, create_truth_types);
    tm_add_or_remove_implementation(reg, load, tm_asset_browser_create_asset_i, &asset_browser_create_my_asset);
}

Create a Custom Asset Type: Part 2

This part will cover the following topics:

  • How to store data in a buffer that is associated with the asset file.
  • How to give the asset a custom UI in the Property View.

The next part shows how to write an importer for the asset.

You can find the whole source code in its git repo: example-text-file-asset

Table of Contents

Adding More Properties to the The Truth Type

The Truth type we created in Part 1 cannot do much, because it doesn't have any properties:

static void create_truth_types(struct tm_the_truth_o *tt) {
  // we have no properties yet, which is why the last two arguments are 0, 0
  const tm_tt_type_t type =
      tm_the_truth_api->create_object_type(tt, TM_TT_TYPE__MY_ASSET, 0, 0);
  tm_tt_set_aspect(tt, type, tm_tt_assets_file_extension_aspect_i, "my_asset");
}

To actually store some data in the objects, we want to add some properties to the Truth type. Note that we pass in an array of properties when we create the type with tm_the_truth_api->create_object_type().

For our text file objects, there are two pieces of data that we want to store:

  1. The text data itself.
  2. The path on disk (if any) that the text file was imported from.

Storing the import path is not strictly necessary, but we'll use it to implement a "reimport" feature. This lets our data type work nicely with text files that are edited in external programs.

Here's how we can define these properties:

static tm_the_truth_property_definition_t my_asset_properties[] = {
    {"import_path", TM_THE_TRUTH_PROPERTY_TYPE_STRING},
    {"data", TM_THE_TRUTH_PROPERTY_TYPE_BUFFER},
};

Note: The type tm_the_truth_property_definition_t has a lot more options. For example, it is possible to hide properties from the editor, etc. For more information, read the documentation here.

In this case we decided to store the text as a buffer instead of a string. Buffers can be streamed in and out of memory easily, so if we expect the text files to be large, using a buffer makes more sense than using a string.

We can now create the Truth type with these properties:

static void create_truth_types(struct tm_the_truth_o *tt) {
  const tm_tt_type_t type = tm_the_truth_api->create_object_type(
      tt, TM_TT_TYPE__MY_ASSET, my_asset_properties,
      TM_ARRAY_COUNT(my_asset_properties));
  tm_tt_set_aspect(tt, type, tm_tt_assets_file_extension_aspect_i, "my_asset");
}

Let's also change the asset name to something more meaningful than my_asset. We'll call it txt. We need to update this new name in four places:

  • Asset Name
  • Menu Name
  • File extension
  • The source file: my_asset.c/h -> txt.c/h

This will change the code as follows:

static void create_truth_types(struct tm_the_truth_o *tt) {
  const tm_tt_type_t type = tm_the_truth_api->create_object_type(
      tt, TM_TT_TYPE__MY_ASSET, my_asset_properties,
      TM_ARRAY_COUNT(my_asset_properties));
  tm_tt_set_aspect(tt, type, tm_tt_assets_file_extension_aspect_i, "txt");
}
// .. other code
static tm_asset_browser_create_asset_i asset_browser_create_my_asset = {
    .menu_name = TM_LOCALIZE_LATER("New Text File"),
    .asset_name = TM_LOCALIZE_LATER("New Text File"),
    .create = asset_browser_create,
};

Let's have a look at how it looks in the editor:

creating a new asset

If we create a new Text file and select it, this is what we will see in the Properties View:

The Data property is nil because we haven't loaded any data into the file yet. Let's add a UI that lets us import text files from disk.

(Another option would be to add a Text Editor UI that would let us edit the text data directly in the editor. However, writing a good text editor is a big task, so for this tutorial, let's use an import workflow instead.)

Custom UI

To show an Import button in the Properties View, we need to customize the Properties View UI of our type. We can do this by adding a TM_TT_ASPECT__PROPERTIES aspect to the Truth type.

The TM_TT_ASPECT__PROPERTIES aspect is implemented with a tm_properties_aspect_i struct. This struct has a lot of fields that can be used to customize various parts of the Properties View (for more information on them, check out the documentation). For our purposes, we are interested in the custom_ui() field that lets us use a custom callback for drawing the type in the Properties View.

custom_ui() expects a function pointer of the type float (*custom_ui)(struct tm_properties_ui_args_t *args, tm_rect_t item_rect, tm_tt_id_t object).

Let us quickly go over this:

  • args (tm_properties_ui_args_t): A struct with information from the Properties View that can be used when drawing the UI. For example, this has the ui instance as well as the uistyle which you will need in any tm_ui_api calls. For more information check the documentation.
  • item_rect (tm_rect_t): The rect in the Properties View UI where the item should be drawn. Note that the height in this struct (item_rect.h) is the height of a standard property field. You can use more or less height to draw your type as long as you return the right y value (see below).
  • object (tm_tt_id_t): The ID of the Truth object that the Properties View wants to draw.
  • Return value (float): The y coordinate where the next item in the Properties View should be drawn. This should be item_rect.y + however much vertical space your controls are using.

To implement the custom_ui() function we can make use of the functions for drawing property UIs found in tm_properties_view_api, or we can draw the UI directly using tm_ui_api. Once we've implemented custom_ui(), we need an instance of tm_properties_aspect_i to register. This instance must have global lifetime so it doesn't get destroyed:

//.. other code
static float properties__custom_ui(struct tm_properties_ui_args_t *args,
                                   tm_rect_t item_rect, tm_tt_id_t object) {
  return item_rect.y;
}
static tm_properties_aspect_i properties_aspect = {
    .custom_ui = properties__custom_ui,
};
// .. other code

Now we can register this aspect with tm_the_truth_api:

//.. other code

static void create_truth_types(struct tm_the_truth_o *tt) {
  // ... create `type` with create_object_type() and set the file extension
  // aspect as before ...
  static tm_properties_aspect_i properties_aspect = {
      .custom_ui = properties__custom_ui,
  };
  tm_tt_set_aspect(tt, type, tm_properties_aspect_i, &properties_aspect);
}
//... the other code

In the editor, the change is immediately visible. The UI is gone, because the editor is now using our custom_ui() function, but our custom_ui() function isn't drawing anything yet.

Let's add the Imported Path property back to the UI. We can look at the Properties View API for a suitable function to draw this property (if we can't find anything, we may have to write a custom drawer ourselves).

We could use tm_properties_view_api->ui_property_default(). This would use the default editor based on the property type. For a STRING property, this is just a text edit field, the same thing that we saw before implementing our custom_ui() function. (If we don't have a custom UI, the default UI for each property will be used.)

We could also use tm_properties_view_api->ui_string(). This is just another way of drawing the default STRING UI.

But for our purposes, tm_properties_view_api->ui_open_path() is better. This is a property UI specifically for file system paths. It draws a button, and if you click the button, a system file dialog is shown that lets you pick a path.

Note that in order to use tm_properties_view_api we need to load it in our tm_load_plugin() function:

static struct tm_properties_view_api *tm_properties_view_api;
#include <plugins/editor_views/properties.h>
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) {
  tm_properties_view_api = tm_get_api(reg, tm_properties_view_api);
  // ... other API loads and interface registrations as before ...
}

Now we can call ui_open_path(). Let's start by looking at its signature:

float (*ui_open_path)(struct tm_properties_ui_args_t *args, tm_rect_t item_rect, const char *name, const char *tooltip, tm_tt_id_t object, uint32_t property, const char *extensions, const char *description, bool *picked)

  • args (tm_properties_ui_args_t): For this argument, we should pass along the args pointer we got in our custom_ui() function.
  • item_rect (tm_rect_t): The rect where we want the UI of the control to be drawn (including the label).
  • name (const char *): The label that the Properties View UI will display in front of the button.
  • tooltip (const char *): Tooltip that will be shown if the mouse is hovered over the label.
  • object (tm_tt_id_t): The Truth object that holds the path STRING that should be edited.
  • property (uint32_t): The index of the STRING property that should be edited.
  • extensions (const char *): List of file extensions that the open file dialog should show (separated by space).
  • description (const char *): Description of the file to open shown in the open file dialog.
  • picked (bool *): Optional out pointer that is set to true if a new file was picked in the file dialog.
  • Return value (float): The y coordinate where the next property should be drawn.

We can now implement the function:

bool picked = false;
item_rect.y = tm_properties_view_api->ui_open_path(
    args, item_rect, TM_LOCALIZE_LATER("Import Path"),
    TM_LOCALIZE_LATER("Path that the text file was imported from."), object,
    TM_TT_PROP__MY_ASSET__FILE, "txt", "text files", &picked);
if (picked) {
  // The newly picked file will be read and stored in the DATA buffer here (see below).
}

Note that we are using the property index TM_TT_PROP__MY_ASSET__FILE that we defined in the header file earlier:

#pragma once
#include <foundation/api_types.h>
//... more code
#define TM_TT_TYPE__MY_ASSET "tm_my_asset"
#define TM_TT_TYPE_HASH__MY_ASSET TM_STATIC_HASH("tm_my_asset", 0x1e12ba1f91b99960ULL)

enum
{
    TM_TT_PROP__MY_ASSET__FILE,
    TM_TT_PROP__MY_ASSET__DATA,
};

We can now test this in the engine. We see an Import Path label with a button and when we click it, we get asked to import a file.

Next, we want to make sure that when the user picks a file using this method, we load the file and store it in our DATA buffer.

To load files we can use the tm_os_api, which gives us access to OS functionality. tm_os_api has a lot of sub-APIs for different purposes (files, memory, threading, etc). In our case, what we need is tm_os_api->file_io, which provides access to file I/O functionality. We add the corresponding includes to the top of the file:

//other includes
#include <foundation/os.h>
#include <foundation/buffer.h>
//.. other code

When a new file is picked in the UI (checked with the picked variable) we get the file path from The Truth, read the file data and store it in The Truth.

To manage buffers, we make use of the interface in buffers.h. Creating a buffer is a three-step process:

  • Allocating the memory for the buffer (based on the file size).
  • Filling the buffer with content (in this case, from the text file).
  • Adding the buffer to the tm_buffers_i object.

Once we have created the buffer, we need to set the BUFFER data item in the Truth object to this buffer. Changing a value in The Truth is another three-step process (both processes are sketched in the code below):

  • Ask the Truth for a write pointer to the object using write().
  • Set the buffer for the write pointer using set_buffer().
  • Commit the changes to the Truth using commit().
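
Putting these steps together, the body of the if (picked) branch ends up roughly as follows. This is condensed from the full listing at the end of this part, so every call below also appears there:

const char *file = tm_the_truth_api->get_string(tt, tm_tt_read(tt, object), TM_TT_PROP__MY_ASSET__FILE);

// 1. Allocate a buffer big enough for the whole file.
tm_file_stat_t stat = tm_os_api->file_system->stat(file);
tm_buffers_i *buffers = tm_the_truth_api->buffers(tt);
void *buffer = buffers->allocate(buffers->inst, stat.size, false);

// 2. Fill the buffer with the contents of the text file.
tm_file_o f = tm_os_api->file_io->open_input(file);
tm_os_api->file_io->read(f, buffer, stat.size);
tm_os_api->file_io->close(f);

// 3. Add the buffer to the buffer manager.
const uint32_t buffer_id = buffers->add(buffers->inst, buffer, stat.size, 0);

// Set the DATA property using the write()/set_buffer()/commit() protocol.
tm_the_truth_object_o *asset_obj = tm_the_truth_api->write(tt, object);
tm_the_truth_api->set_buffer(tt, asset_obj, TM_TT_PROP__MY_ASSET__DATA, buffer_id);
tm_the_truth_api->commit(tt, asset_obj, TM_TT_NO_UNDO_SCOPE);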

We need this somewhat complicated procedure because objects in The Truth are immutable by default. This ensures that The Truth can be used from multiple threads simultaneously. When you change a Truth object using the write()/commit() protocol, the changes are applied atomically. I.e., other threads will either see the old Truth object or the new one, never a half-old, half-new object.

If you want the change to go into the undo stack so that you can revert it with Edit → Undo, you need some additional steps:

  • Create an undo scope for the action using create_undo_scope().
  • Pass that undo scope into commit().
  • Register the undo scope with the application's undo stack (found in args->undo_stack).

To simplify this example, we've skipped that step and instead we use TM_TT_NO_UNDO_SCOPE for the commit() action which means the action will not be undoable.
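
If you do want the import to be undoable, a rough sketch of the change could look like the following. Note that the exact signature of the undo stack's add() function is an assumption here; check foundation/undo.h for the actual interface:

// Create an undo scope so the import shows up under Edit → Undo.
const tm_tt_undo_scope_t undo_scope = tm_the_truth_api->create_undo_scope(tt, TM_LOCALIZE("Import Text File"));

tm_the_truth_object_o *asset_obj = tm_the_truth_api->write(tt, object);
tm_the_truth_api->set_buffer(tt, asset_obj, TM_TT_PROP__MY_ASSET__DATA, buffer_id);
tm_the_truth_api->commit(tt, asset_obj, undo_scope);

// Register the scope with the application's undo stack (assumed signature).
args->undo_stack->add(args->undo_stack->inst, tt, undo_scope);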

What Is Next?

In the next part we'll show how to add an Importer for our asset type. This will let us drag and drop text files from the explorer into the asset browser.

Part 3

Full Example of Basic Asset

txt.h

#pragma once
#include <foundation/api_types.h>
//... more code
#define TM_TT_TYPE__MY_ASSET "tm_my_asset"
#define TM_TT_TYPE_HASH__MY_ASSET TM_STATIC_HASH("tm_my_asset", 0x1e12ba1f91b99960ULL)

enum
{
    TM_TT_PROP__MY_ASSET__FILE,
    TM_TT_PROP__MY_ASSET__DATA,
};

(Do not forget to run hash.exe when you create a TM_STATIC_HASH)

txt.c

// -- api's
static struct tm_the_truth_api *tm_the_truth_api;
static struct tm_properties_view_api *tm_properties_view_api;
static struct tm_os_api *tm_os_api;
// -- includes
#include <foundation/api_registry.h>
#include <foundation/buffer.h>
#include <foundation/localizer.h>
#include <foundation/macros.h>
#include <foundation/os.h>
#include <foundation/the_truth.h>
#include <foundation/the_truth_assets.h>
#include <foundation/undo.h>

#include <plugins/editor_views/asset_browser.h>
#include <plugins/editor_views/properties.h>

#include "txt.h"

//custom ui
static float properties__custom_ui(struct tm_properties_ui_args_t *args, tm_rect_t item_rect, tm_tt_id_t object)
{
    tm_the_truth_o *tt = args->tt;
    bool picked = false;
    item_rect.y = tm_properties_view_api->ui_open_path(args, item_rect, TM_LOCALIZE_LATER("Import Path"), TM_LOCALIZE_LATER("Path that the text file was imported from."), object, TM_TT_PROP__MY_ASSET__FILE, "txt", "text files", &picked);
    if (picked)
    {
        const char *file = tm_the_truth_api->get_string(tt, tm_tt_read(tt, object), TM_TT_PROP__MY_ASSET__FILE);
        tm_file_stat_t stat = tm_os_api->file_system->stat(file);
        tm_buffers_i *buffers = tm_the_truth_api->buffers(tt);
        void *buffer = buffers->allocate(buffers->inst, stat.size, false);
        tm_file_o f = tm_os_api->file_io->open_input(file);
        tm_os_api->file_io->read(f, buffer, stat.size);
        tm_os_api->file_io->close(f);
        const uint32_t buffer_id = buffers->add(buffers->inst, buffer, stat.size, 0);
        tm_the_truth_object_o *asset_obj = tm_the_truth_api->write(tt, object);
        tm_the_truth_api->set_buffer(tt, asset_obj, TM_TT_PROP__MY_ASSET__DATA, buffer_id);
        tm_the_truth_api->commit(tt, asset_obj, TM_TT_NO_UNDO_SCOPE);
    }
    return item_rect.y;
}
// -- create truth type
static void create_truth_types(struct tm_the_truth_o *tt)
{
    static tm_the_truth_property_definition_t my_asset_properties[] = {
        {"import_path", TM_THE_TRUTH_PROPERTY_TYPE_STRING},
        {"data", TM_THE_TRUTH_PROPERTY_TYPE_BUFFER},
    };
    const tm_tt_type_t type = tm_the_truth_api->create_object_type(tt, TM_TT_TYPE__MY_ASSET, my_asset_properties, TM_ARRAY_COUNT(my_asset_properties));
    tm_tt_set_aspect(tt, type, tm_tt_assets_file_extension_aspect_i, "txt");
    static tm_properties_aspect_i properties_aspect = {
        .custom_ui = properties__custom_ui,
    };
    tm_tt_set_aspect(tt, type, tm_properties_aspect_i, &properties_aspect);
}

// -- asset browser regsiter interface
static tm_tt_id_t asset_browser_create(struct tm_asset_browser_create_asset_o *inst, tm_the_truth_o *tt, tm_tt_undo_scope_t undo_scope)
{
    const tm_tt_type_t type = tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__MY_ASSET);
    return tm_the_truth_api->create_object_of_type(tt, type, undo_scope);
}
static tm_asset_browser_create_asset_i asset_browser_create_my_asset = {
    .menu_name = TM_LOCALIZE_LATER("New Text File"),
    .asset_name = TM_LOCALIZE_LATER("New Text File"),
    .create = asset_browser_create,
};
// -- load plugin
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
    tm_properties_view_api = tm_get_api(reg, tm_properties_view_api);
    tm_os_api = tm_get_api(reg, tm_os_api);
    tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i, create_truth_types);
    tm_add_or_remove_implementation(reg, load, tm_asset_browser_create_asset_i, &asset_browser_create_my_asset);
}

Create a Custom Asset Type: Part 3

This part will cover the following topics:

  • How to write an importer

You can find the whole source code in its git repo: example-text-file-asset

Table of Contents

Custom importer for text files

In this part, we will add the ability to import a text file into the Engine. To implement an importer, we need the following APIs:

  • tm_asset_io_api (foundation/asset_io.h): This API has the importer interface.
  • tm_temp_allocator_api (foundation/temp_allocator.h): We will use this to allocate temporary memory.
  • tm_allocator_api (foundation/allocator.h): We will use this for permanent memory allocations. tm_allocator_api supports a number of different allocators, for example the system allocator. We need this one later when we rewrite our reimport.
  • tm_path_api (foundation/path.h): Used for splitting and joining file system paths.
  • tm_api_registry_api (foundation/api_registry.h): We use this to get access to APIs from the API registry.
  • tm_task_system_api (foundation/task_system.h): Allows us to spawn tasks.

We include these header files and retrieve the APIs from the API registry.

Note: tm_api_registry_api can be retrieved from the reg parameter in the tm_load_plugin() function: tm_global_api_registry = reg;

The Machinery has a generic interface for asset importers. It requires a bunch of functions to be able to work as intended. The struct we need to implement is called tm_asset_io_i. It requires us to set the following members:

| Member | Description |
| --- | --- |
| enabled() | Should return true if the importer is active. |
| can_import() | Optional. Should return true for the file extensions that can be imported by this interface. |
| can_reimport() | Optional. Should return true for Truth assets that can be reimported. |
| importer_extensions_string() | Optional. Extensions that can be imported by this interface. |
| importer_description_string() | Optional. Descriptions for the extensions in importer_extensions_string(). |
| import_asset() | Implements the import. Since imports can be slow, they are typically implemented as background tasks and this function should return the ID of the background task from tm_task_system_api. |

All of these members are function pointers, so we need to provide the corresponding functions ourselves.

Let us start by implementing the first few of them:

//... other includes
#include <foundation/carray_print.inl>
#include <foundation/string.inl>
#include <foundation/localizer.h>
//... other code
static bool asset_io__enabled(struct tm_asset_io_o *inst) { return true; }
static bool asset_io__can_import(struct tm_asset_io_o *inst,
                                 const char *extension) {
  return tm_strcmp_ignore_case(extension, "txt") == 0;
}
static bool asset_io__can_reimport(struct tm_asset_io_o *inst,
                                   struct tm_the_truth_o *tt,
                                   tm_tt_id_t asset) {
  const tm_tt_id_t object = tm_the_truth_api->get_subobject(
      tt, tm_tt_read(tt, asset), TM_TT_PROP__ASSET__OBJECT);
  return object.type ==
         tm_the_truth_api
             ->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__MY_ASSET)
             .u64;
}
static void asset_io__importer_extensions_string(struct tm_asset_io_o *inst,
                                                 char **output,
                                                 struct tm_temp_allocator_i *ta,
                                                 const char *separator) {
  tm_carray_temp_printf(output, ta, "txt");
}
static void
asset_io__importer_description_string(struct tm_asset_io_o *inst, char **output,
                                      struct tm_temp_allocator_i *ta,
                                      const char *separator) {
  tm_carray_temp_printf(output, ta, ".txt");
}

Let us go through them:

  • enabled() returns true because we want the importer to work.
  • asset_io__can_import() compares the extension with the one we want to support.

Note: string.inl, which we include for tm_strcmp_ignore_case(), uses tm_localizer_api for some of its functionality; that is why we also include localizer.h.

  • asset_io__can_reimport() checks if the object type matches our type.

TM_TT_PROP__ASSET__OBJECT is the property of the TM_TT_TYPE__ASSET type which holds the object associated with the asset.

The last two functions append our extension ("txt") and its description (".txt") to the output. Note that the output argument is a carray; we can use tm_carray_temp_printf() to append to it.

Note: carray_print.inl requires tm_sprintf_api, so we also need to include the corresponding header (foundation/sprintf.h).
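
As a side note, if our importer supported more than one extension, we could presumably join them using the separator argument. A minimal sketch (the second extension "md" is made up for illustration and not part of the sample):

static void asset_io__importer_extensions_string(struct tm_asset_io_o *inst,
                                                 char **output,
                                                 struct tm_temp_allocator_i *ta,
                                                 const char *separator) {
  // Hypothetical: join multiple supported extensions with the provided separator.
  tm_carray_temp_printf(output, ta, "txt%smd", separator);
}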

Import Task Setup

To run the import as a background task, we need to queue a task using tm_task_system_api from our asset_io__import_asset() function. Task functions take a single void *userdata argument. Since we typically want to pass more than one thing to the task, we put everything the task needs in a struct and pass a pointer to that struct as the userdata. The task function casts this void * to the desired type and can then make use of the data.

The task needs to know the location of the file that is to be imported. It also needs access to some semi-global objects, such as the Truth that the file should be imported to, and an allocator to use for memory allocations. The struct could look like this:

struct task__import_txt {
  uint64_t bytes;
  struct tm_asset_io_import args;
  char file[8];
};

The tm_asset_io_import field will be copied from the parameter passed to asset_io__import_asset() into the struct.

The function itself looks like this:

static uint64_t asset_io__import_asset(struct tm_asset_io_o *inst,
                                       const char *file,
                                       const struct tm_asset_io_import *args) {
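  // Note: task__import_txt ends with `char file[8]`, so allocating
  // sizeof(struct) + strlen(file) leaves enough room to copy the file name
  // including its null terminator.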
  const uint64_t bytes = sizeof(struct task__import_txt) + strlen(file);
  struct task__import_txt *task = tm_alloc(args->allocator, bytes);
  *task = (struct task__import_txt){
      .bytes = bytes,
      .args = *args,
  };
  strcpy(task->file, file);
  return task_system->run_task(task__import_txt, task, "Import Text File",
                               tm_tt_task_owner(args->tt), false);
}

Important: The task owns this memory and needs to free it at the end of its execution!

The call task_system->run_task(task__import_txt, task, "Import Text File", ...) queues the task task__import_txt() with the data task and returns its ID. The ID can be used to query whether the background task has completed.

Info: For more information on the task system check the documentation.

Import Task Implementation

The import task should import the data and clean up afterwards.

The function signature is:

static void task__import_txt(void *data, uint64_t task_id) {}

We need to cast data to our previously defined type struct task__import_txt. The task_id can be used by the task callback function to provide task progress updates. In this example, we do not use it.

For more information on how to update the status of a task so that it is shown in the editor, see the documentation.

To implement the import we retrieve the data passed in the struct and then implement the import as in the previous chapter. The reimport works the same as the import, except we add the buffer to an existing object instead of creating a new one:

static void task__import_txt(void *data, uint64_t task_id) {
  struct task__import_txt *task = (struct task__import_txt *)data;
  const struct tm_asset_io_import *args = &task->args;
  const char *txt_file = task->file;
  tm_the_truth_o *tt = args->tt;
}

Another thing we should consider is error checking:

  • Does the file exist?
  • Can we read the expected number of bytes from the file?

Since we are running as a background task, we will report any errors through the logging API: tm_logger_api. Errors reported that way will appear in the Console tab of the UI:

static void task__import_txt(void *data, uint64_t task_id) {
  struct task__import_txt *task = (struct task__import_txt *)data;
  const struct tm_asset_io_import *args = &task->args;
  const char *txt_file = task->file;
  tm_the_truth_o *tt = args->tt;
  tm_file_stat_t stat = tm_os_api->file_system->stat(txt_file);
  if (stat.exists) {
  } else {
    tm_logger_api->printf(TM_LOG_TYPE_INFO, "import txt:could not find %s \n",
                          txt_file);
  }
  tm_free(args->allocator, task, task->bytes);
}

Now we combine all the knowledge from this chapter and the previous chapter. We need to create a new asset via code for the import, and for the reimport, we need to update an existing file. Before we do all of this, let us first read the file and create the buffer.

static void task__import_txt(void *data, uint64_t task_id) {
  struct task__import_txt *task = (struct task__import_txt *)data;
  const struct tm_asset_io_import *args = &task->args;
  const char *txt_file = task->file;
  tm_the_truth_o *tt = args->tt;
  tm_file_stat_t stat = tm_os_api->file_system->stat(txt_file);
  if (stat.exists) {
    tm_buffers_i *buffers = tm_the_truth_api->buffers(tt);
    void *buffer = buffers->allocate(buffers->inst, stat.size, false);
    tm_file_o f = tm_os_api->file_io->open_input(txt_file);
    const int64_t read = tm_os_api->file_io->read(f, buffer, stat.size);
    tm_os_api->file_io->close(f);
  } else {
    tm_logger_api->printf(TM_LOG_TYPE_INFO, "import txt:could not find %s \n",
                          txt_file);
  }
  tm_free(args->allocator, task, task->bytes);
}

After this, we should ensure that the file size matches the size of the read data.

static void task__import_txt(void *data, uint64_t task_id) {
  struct task__import_txt *task = (struct task__import_txt *)data;
  const struct tm_asset_io_import *args = &task->args;
  const char *txt_file = task->file;
  tm_the_truth_o *tt = args->tt;
  tm_file_stat_t stat = tm_os_api->file_system->stat(txt_file);
  if (stat.exists) {
    tm_buffers_i *buffers = tm_the_truth_api->buffers(tt);
    void *buffer = buffers->allocate(buffers->inst, stat.size, false);
    tm_file_o f = tm_os_api->file_io->open_input(txt_file);
    const int64_t read = tm_os_api->file_io->read(f, buffer, stat.size);
    tm_os_api->file_io->close(f);

    if (read == (int64_t)stat.size) {
    } else {
      tm_logger_api->printf(TM_LOG_TYPE_INFO, "import txt:could not read %s\n",
                            txt_file);
    }
  } else {
    tm_logger_api->printf(TM_LOG_TYPE_INFO, "import txt:could not find %s \n",
                          txt_file);
  }
  tm_free(args->allocator, task, task->bytes);
}

With this out of the way, we can use our knowledge from the last part.

  • How to add an asset via code.

The first step was to create the new object and add the data to it.

const uint32_t buffer_id = buffers->add(buffers->inst, buffer, stat.size, 0);
const tm_tt_type_t plugin_asset_type =
    tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__MY_ASSET);
const tm_tt_id_t asset_id = tm_the_truth_api->create_object_of_type(
    tt, plugin_asset_type, TM_TT_NO_UNDO_SCOPE);
tm_the_truth_object_o *asset_obj = tm_the_truth_api->write(tt, asset_id);
tm_the_truth_api->set_buffer(tt, asset_obj, TM_TT_PROP__MY_ASSET__DATA,
                             buffer_id);
tm_the_truth_api->set_string(tt, asset_obj, TM_TT_PROP__MY_ASSET__FILE,
                             txt_file);
tm_the_truth_api->commit(tt, asset_obj, args->undo_scope);

After that, we can use the tm_asset_browser_add_asset_api to add the asset to the asset browser.

struct tm_asset_browser_add_asset_api *add_asset =
    tm_get_api(tm_global_api_registry, tm_asset_browser_add_asset_api);

We are getting the API here because we do not need it anywhere else. Then we need to extract the file name of the imported file. You can do this with the path API's tm_path_api->base() function. Be aware that this function requires a tm_str_t, which you can create from a normal C string (const char *) via tm_str(). To access the underlying C string again, just call .data on the tm_str_t.

tm_str_t represents strings with a char * and a size, instead of just a NULL-terminated char *. This lets you reason about parts of a string, which you cannot do with standard NULL-terminated strings. See the documentation for more details.
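
For example, extracting the asset name from the import path could look like this (equivalent to the one-liner used in the full listing below):

// txt_file is the path we received in the task, e.g. "C:/stories/intro.txt".
const tm_str_t base = tm_path_api->base(tm_str(txt_file)); // "intro.txt" as (pointer, size)
const char *asset_name = base.data;                        // back to a plain C string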

We want to add the asset to the folder that is currently open in the asset browser. We can ask the tm_asset_browser_add_asset_api what the current folder is. Then we decide whether we want to select the file. At the end, we call tm_asset_browser_add_asset_api->add():

Note: If we wanted to, we could add asset labels to the asset and pass them as the last two arguments of the add() function instead of 0, 0.

const tm_tt_id_t current_dir =
    add_asset->current_directory(add_asset->inst, args->ui);
const bool should_select =
    args->asset_browser.u64 &&
    tm_the_truth_api->version(tt, args->asset_browser) ==
        args->asset_browser_version_at_start;
add_asset->add(add_asset->inst, current_dir, asset_id, asset_name,
               args->undo_scope, should_select, args->ui, 0, 0);

That's it for the import. Before we move on, we need to clean up! No allocation without deallocation!

    tm_free(args->allocator, task, task->bytes);

Info: If you forget to do this, the Engine will report a memory leak in the Console log.

Now let's bring it all together:

static void task__import_txt(void *data, uint64_t task_id) {
  struct task__import_txt *task = (struct task__import_txt *)data;
  const struct tm_asset_io_import *args = &task->args;
  const char *txt_file = task->file;
  tm_the_truth_o *tt = args->tt;
  tm_file_stat_t stat = tm_os_api->file_system->stat(txt_file);
  if (stat.exists) {
    tm_buffers_i *buffers = tm_the_truth_api->buffers(tt);
    void *buffer = buffers->allocate(buffers->inst, stat.size, false);
    tm_file_o f = tm_os_api->file_io->open_input(txt_file);
    const int64_t read = tm_os_api->file_io->read(f, buffer, stat.size);
    tm_os_api->file_io->close(f);

    if (read == (int64_t)stat.size) {
      const uint32_t buffer_id =
          buffers->add(buffers->inst, buffer, stat.size, 0);
      const tm_tt_type_t plugin_asset_type =
          tm_the_truth_api->object_type_from_name_hash(
              tt, TM_TT_TYPE_HASH__MY_ASSET);
      const tm_tt_id_t asset_id = tm_the_truth_api->create_object_of_type(
          tt, plugin_asset_type, TM_TT_NO_UNDO_SCOPE);
      tm_the_truth_object_o *asset_obj = tm_the_truth_api->write(tt, asset_id);
      tm_the_truth_api->set_buffer(tt, asset_obj, TM_TT_PROP__MY_ASSET__DATA,
                                   buffer_id);
      tm_the_truth_api->set_string(tt, asset_obj, TM_TT_PROP__MY_ASSET__FILE,
                                   txt_file);
      tm_the_truth_api->commit(tt, asset_obj, args->undo_scope);
      const char *asset_name = tm_path_api->base(tm_str(txt_file)).data;
      struct tm_asset_browser_add_asset_api *add_asset =
          tm_get_api(tm_global_api_registry, tm_asset_browser_add_asset_api);
      const tm_tt_id_t current_dir =
          add_asset->current_directory(add_asset->inst, args->ui);
      const bool should_select =
          args->asset_browser.u64 &&
          tm_the_truth_api->version(tt, args->asset_browser) ==
              args->asset_browser_version_at_start;
      add_asset->add(add_asset->inst, current_dir, asset_id, asset_name,
                     args->undo_scope, should_select, args->ui, 0, 0);
    } else {
      tm_logger_api->printf(TM_LOG_TYPE_INFO, "import txt:could not read %s\n",
                            txt_file);
    }
  } else {
    tm_logger_api->printf(TM_LOG_TYPE_INFO, "import txt:could not find %s \n",
                          txt_file);
  }
  tm_free(args->allocator, task, task->bytes);
}

Enabling Reimport

Our implementation does not yet support reimports. Let us fix this quickly!

tm_asset_io_import has a field called reimport_into of type tm_tt_id_t. When doing a regular import, the value of this field will be (tm_tt_id_t){0}. When reimporting, it will be the ID of the Truth object that we should import into.

To change an existing object instead of creating a new one, we can use the function tm_the_truth_api->retarget_write(). It makes the commit() operation write the changes to an existing object instead of to the new one we just created. After committing, we can destroy the new (temporary) object:

if (args->reimport_into.u64) {
  tm_the_truth_api->retarget_write(tt, asset_obj, args->reimport_into);
  tm_the_truth_api->commit(tt, asset_obj, args->undo_scope);
  tm_the_truth_api->destroy_object(tt, asset_id, args->undo_scope);
} else {
  tm_the_truth_api->commit(tt, asset_obj, args->undo_scope);
  const char *asset_name = tm_path_api->base(tm_str(txt_file)).data;
  struct tm_asset_browser_add_asset_api *add_asset =
      tm_get_api(tm_global_api_registry, tm_asset_browser_add_asset_api);
  const tm_tt_id_t current_dir =
      add_asset->current_directory(add_asset->inst, args->ui);
  const bool should_select =
      args->asset_browser.u64 &&
      tm_the_truth_api->version(tt, args->asset_browser) ==
          args->asset_browser_version_at_start;
  add_asset->add(add_asset->inst, current_dir, asset_id, asset_name,
                 args->undo_scope, should_select, args->ui, 0, 0);
}

With these changes, the final version of task__import_txt() matches the listing in the Full Example of Basic Asset section below.

Refactor the Custom UI Import Functionality

The last step in this part of the tutorial is to update what happens when the user picks a new file in the Properties View of the asset. We want this workflow to make use of the asynchronous import functionality we just added, to make the user experience smoother. This will also remove some code duplication.

Let's reuse our import task. We just need to make sure it has all the data it needs. We can check the documentation of tm_asset_io_import to ensure we do not forget anything important.

Besides the name of the file we're importing, we also need:

  • an allocator
  • the Truth to import into
  • the object to reimport into

Now we can write our reimport task code:

const char *file = tm_the_truth_api->get_string(tt, tm_tt_read(tt, object),
                                                TM_TT_PROP__MY_ASSET__FILE);
{
  tm_allocator_i *allocator = tm_allocator_api->system;
  const uint64_t bytes = sizeof(struct task__import_txt) + strlen(file);
  struct task__import_txt *task = tm_alloc(allocator, bytes);
  *task = (struct task__import_txt){
      .bytes = bytes,
      .args = {.allocator = allocator, .tt = tt, .reimport_into = object}};
  strcpy(task->file, file);
  task_system->run_task(task__import_txt, task, "Import Text File",
                        tm_tt_task_owner(args->tt), false);
}

We'll use the system allocator (a global allocator with the same lifetime as the program) to allocate our task, including the bytes needed for the file name string. Remember the layout of our struct:

// -- struct definitions
struct task__import_txt
{
    uint64_t bytes;
    struct tm_asset_io_import args;
    char file[8];
};
// .. other code

We fill out the struct with the needed data, copy the file name, and then ask the task system to run the task:

static float properties__custom_ui(struct tm_properties_ui_args_t *args,
                                   tm_rect_t item_rect, tm_tt_id_t object) {
  tm_the_truth_o *tt = args->tt;
  bool picked = false;
  item_rect.y = tm_properties_view_api->ui_open_path(
      args, item_rect, TM_LOCALIZE_LATER("Import Path"),
      TM_LOCALIZE_LATER("Path that the text file was imported from."), object,
      TM_TT_PROP__MY_ASSET__FILE, "txt", "text files", &picked);
  if (picked) {
    const char *file = tm_the_truth_api->get_string(tt, tm_tt_read(tt, object),
                                                    TM_TT_PROP__MY_ASSET__FILE);
    {
      tm_allocator_i *allocator = tm_allocator_api->system;
      const uint64_t bytes = sizeof(struct task__import_txt) + strlen(file);
      struct task__import_txt *task = tm_alloc(allocator, bytes);
      *task = (struct task__import_txt){
          .bytes = bytes,
          .args = {.allocator = allocator, .tt = tt, .reimport_into = object}};
      strcpy(task->file, file);
      task_system->run_task(task__import_txt, task, "Import Text File",
                            tm_tt_task_owner(args->tt), false);
    }
  }
  return item_rect.y;
}

(For more information on the structure of these functions, please check the previous part)

The End

This is the final part of this walkthrough. By now, you should have a better idea of:

  • How to work with The Truth
  • How to create an asset
  • How to import assets into the Engine
  • How to create a custom UI.

If you want to see a more complex example of an importer, look at the assimp importer example: samples\plugins\assimp.

Full Example of Basic Asset

my_asset.h

#pragma once
#include <foundation/api_types.h>
//... more code
#define TM_TT_TYPE__MY_ASSET "tm_my_asset"
#define TM_TT_TYPE_HASH__MY_ASSET TM_STATIC_HASH("tm_my_asset", 0x1e12ba1f91b99960ULL)

enum
{
    TM_TT_PROP__MY_ASSET__FILE,
    TM_TT_PROP__MY_ASSET__DATA,
};

(Do not forget to run hash.exe when you create a TM_STATIC_HASH)

my_asset.c

// -- api's
static struct tm_api_registry_api *tm_global_api_registry;
static struct tm_the_truth_api *tm_the_truth_api;
static struct tm_properties_view_api *tm_properties_view_api;
static struct tm_os_api *tm_os_api;
static struct tm_path_api *tm_path_api;
static struct tm_temp_allocator_api *tm_temp_allocator_api;
static struct tm_logger_api *tm_logger_api;
static struct tm_localizer_api *tm_localizer_api;
static struct tm_asset_io_api *tm_asset_io_api;
static struct tm_task_system_api *task_system;
static struct tm_allocator_api *tm_allocator_api;
static struct tm_sprintf_api *tm_sprintf_api;

// -- includes

#include <foundation/api_registry.h>
#include <foundation/asset_io.h>
#include <foundation/buffer.h>
#include <foundation/carray_print.inl>
#include <foundation/localizer.h>
#include <foundation/log.h>
#include <foundation/macros.h>
#include <foundation/os.h>
#include <foundation/path.h>
#include <foundation/sprintf.h>
#include <foundation/string.inl>
#include <foundation/task_system.h>
#include <foundation/temp_allocator.h>
#include <foundation/the_truth.h>
#include <foundation/the_truth_assets.h>
#include <foundation/undo.h>

#include <plugins/editor_views/asset_browser.h>
#include <plugins/editor_views/properties.h>

#include "txt.h"
struct task__import_txt
{
    uint64_t bytes;
    struct tm_asset_io_import args;
    char file[8];
};
/////
// -- functions:
////
// --- importer
static void task__import_txt(void *data, uint64_t task_id)
{
    struct task__import_txt *task = (struct task__import_txt *)data;
    const struct tm_asset_io_import *args = &task->args;
    const char *txt_file = task->file;
    tm_the_truth_o *tt = args->tt;
    tm_file_stat_t stat = tm_os_api->file_system->stat(txt_file);
    if (stat.exists)
    {
        tm_buffers_i *buffers = tm_the_truth_api->buffers(tt);
        void *buffer = buffers->allocate(buffers->inst, stat.size, false);
        tm_file_o f = tm_os_api->file_io->open_input(txt_file);
        const int64_t read = tm_os_api->file_io->read(f, buffer, stat.size);
        tm_os_api->file_io->close(f);

        if (read == (int64_t)stat.size)
        {
            const uint32_t buffer_id = buffers->add(buffers->inst, buffer, stat.size, 0);
            const tm_tt_type_t plugin_asset_type = tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__MY_ASSET);
            const tm_tt_id_t asset_id = tm_the_truth_api->create_object_of_type(tt, plugin_asset_type, TM_TT_NO_UNDO_SCOPE);
            tm_the_truth_object_o *asset_obj = tm_the_truth_api->write(tt, asset_id);
            tm_the_truth_api->set_buffer(tt, asset_obj, TM_TT_PROP__MY_ASSET__DATA, buffer_id);
            tm_the_truth_api->set_string(tt, asset_obj, TM_TT_PROP__MY_ASSET__FILE, txt_file);
            if (args->reimport_into.u64)
            {
                tm_the_truth_api->retarget_write(tt, asset_obj, args->reimport_into);
                tm_the_truth_api->commit(tt, asset_obj, args->undo_scope);
                tm_the_truth_api->destroy_object(tt, asset_id, args->undo_scope);
            }
            else
            {
                tm_the_truth_api->commit(tt, asset_obj, args->undo_scope);
                const char *asset_name = tm_path_api->base(tm_str(txt_file)).data;
                struct tm_asset_browser_add_asset_api *add_asset = tm_get_api(tm_global_api_registry, tm_asset_browser_add_asset_api);
                const tm_tt_id_t current_dir = add_asset->current_directory(add_asset->inst, args->ui);
                const bool should_select = args->asset_browser.u64 && tm_the_truth_api->version(tt, args->asset_browser) == args->asset_browser_version_at_start;
                add_asset->add(add_asset->inst, current_dir, asset_id, asset_name, args->undo_scope, should_select, args->ui, 0, 0);
            }
        }
        else
        {
            tm_logger_api->printf(TM_LOG_TYPE_INFO, "import txt:could not read %s\n", txt_file);
        }
    }
    else
    {
        tm_logger_api->printf(TM_LOG_TYPE_INFO, "import txt:could not find %s \n", txt_file);
    }
    tm_free(args->allocator, task, task->bytes);
}

static bool asset_io__enabled(struct tm_asset_io_o *inst)
{
    return true;
}
static bool asset_io__can_import(struct tm_asset_io_o *inst, const char *extension)
{
    return tm_strcmp_ignore_case(extension, "txt") == 0;
}
static bool asset_io__can_reimport(struct tm_asset_io_o *inst, struct tm_the_truth_o *tt, tm_tt_id_t asset)
{
    const tm_tt_id_t object = tm_the_truth_api->get_subobject(tt, tm_tt_read(tt, asset), TM_TT_PROP__ASSET__OBJECT);
    return object.type == tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__MY_ASSET).u64;
}
static void asset_io__importer_extensions_string(struct tm_asset_io_o *inst, char **output, struct tm_temp_allocator_i *ta, const char *separator)
{
    tm_carray_temp_printf(output, ta, "txt");
}
static void asset_io__importer_description_string(struct tm_asset_io_o *inst, char **output, struct tm_temp_allocator_i *ta, const char *separator)
{
    tm_carray_temp_printf(output, ta, ".txt");
}
static uint64_t asset_io__import_asset(struct tm_asset_io_o *inst, const char *file, const struct tm_asset_io_import *args)
{
    const uint64_t bytes = sizeof(struct task__import_txt) + strlen(file);
    struct task__import_txt *task = tm_alloc(args->allocator, bytes);
    *task = (struct task__import_txt){
        .bytes = bytes,
        .args = *args,
    };
    strcpy(task->file, file);
    return task_system->run_task(task__import_txt, task, "Import Text File", tm_tt_task_owner(args->tt), false);
}
static struct tm_asset_io_i txt_asset_io = {
    .enabled = asset_io__enabled,
    .can_import = asset_io__can_import,
    .can_reimport = asset_io__can_reimport,
    .importer_extensions_string = asset_io__importer_extensions_string,
    .importer_description_string = asset_io__importer_description_string,
    .import_asset = asset_io__import_asset};

// -- asset on its own

//custom ui
static float properties__custom_ui(struct tm_properties_ui_args_t *args, tm_rect_t item_rect, tm_tt_id_t object)
{
    tm_the_truth_o *tt = args->tt;
    bool picked = false;
    item_rect.y = tm_properties_view_api->ui_open_path(args, item_rect, TM_LOCALIZE_LATER("Import Path"), TM_LOCALIZE_LATER("Path that the text file was imported from."), object, TM_TT_PROP__MY_ASSET__FILE, "txt", "text files", &picked);
    if (picked)
    {
        const char *file = tm_the_truth_api->get_string(tt, tm_tt_read(tt, object), TM_TT_PROP__MY_ASSET__FILE);
        {
            tm_allocator_i *allocator = tm_allocator_api->system;
            const uint64_t bytes = sizeof(struct task__import_txt) + strlen(file);
            struct task__import_txt *task = tm_alloc(allocator, bytes);
            *task = (struct task__import_txt){
                .bytes = bytes,
                .args = {
                    .allocator = allocator,
                    .tt = tt,
                    .reimport_into = object}};
            strcpy(task->file, file);
            task_system->run_task(task__import_txt, task, "Import Text File", tm_tt_task_owner(args->tt), false);
        }
    }
    return item_rect.y;
}
// -- create truth type
static void create_truth_types(struct tm_the_truth_o *tt)
{
    static tm_the_truth_property_definition_t my_asset_properties[] = {
        {"import_path", TM_THE_TRUTH_PROPERTY_TYPE_STRING},
        {"data", TM_THE_TRUTH_PROPERTY_TYPE_BUFFER},
    };
    const tm_tt_type_t type = tm_the_truth_api->create_object_type(tt, TM_TT_TYPE__MY_ASSET, my_asset_properties, TM_ARRAY_COUNT(my_asset_properties));
    tm_tt_set_aspect(tt, type, tm_tt_assets_file_extension_aspect_i, "txt");
    static tm_properties_aspect_i properties_aspect = {
        .custom_ui = properties__custom_ui,
    };
    tm_tt_set_aspect(tt, type, tm_properties_aspect_i, &properties_aspect);
}

// -- asset browser register interface
static tm_tt_id_t asset_browser_create(struct tm_asset_browser_create_asset_o *inst, tm_the_truth_o *tt, tm_tt_undo_scope_t undo_scope)
{
    const tm_tt_type_t type = tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__MY_ASSET);
    return tm_the_truth_api->create_object_of_type(tt, type, undo_scope);
}
static tm_asset_browser_create_asset_i asset_browser_create_my_asset = {
    .menu_name = TM_LOCALIZE_LATER("New Text File"),
    .asset_name = TM_LOCALIZE_LATER("New Text File"),
    .create = asset_browser_create,
};

// -- load plugin
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
    tm_properties_view_api = tm_get_api(reg, tm_properties_view_api);
    tm_os_api = tm_get_api(reg, tm_os_api);
    tm_path_api = tm_get_api(reg, tm_path_api);
    tm_temp_allocator_api = tm_get_api(reg, tm_temp_allocator_api);
    tm_allocator_api = tm_get_api(reg, tm_allocator_api);
    tm_logger_api = tm_get_api(reg, tm_logger_api);
    tm_localizer_api = tm_get_api(reg, tm_localizer_api);
    tm_asset_io_api = tm_get_api(reg, tm_asset_io_api);
    task_system = tm_get_api(reg, tm_task_system_api);
    tm_sprintf_api = tm_get_api(reg, tm_sprintf_api);
    tm_global_api_registry = reg;
    if (load)
        tm_asset_io_api->add_asset_io(&txt_asset_io);
    else
        tm_asset_io_api->remove_asset_io(&txt_asset_io);
    tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i, create_truth_types);
    tm_add_or_remove_implementation(reg, load, tm_asset_browser_create_asset_i, &asset_browser_create_my_asset);
}

Adding Drag and Drop to Assets

This walkthrough shows you how to enable an asset to be dragged and dropped. You should have basic knowledge about how to write a custom plugin. If not, you might want to check this Guide. The goal of this walkthrough is to enable you to drag and drop your asset into the Scene!

You will learn:

  • How to use an aspect in practice.
  • How to extend an already existing asset.
  • How to use an entity-component manager.

This walkthrough will refer to the text asset example as the asset we want to extend! If you have not followed it, here is the link: Custom Asset.


Adding Drag and Drop to our asset

In this example, we are going back to our text asset sample. In that sample, we have the following function to register the asset to the Truth:

static void create_truth_types(struct tm_the_truth_o *tt) {
  static tm_the_truth_property_definition_t my_asset_properties[] = {
      {"import_path", TM_THE_TRUTH_PROPERTY_TYPE_STRING},
      {"data", TM_THE_TRUTH_PROPERTY_TYPE_BUFFER},
  };
  const tm_tt_type_t type = tm_the_truth_api->create_object_type(
      tt, TM_TT_TYPE__MY_ASSET, my_asset_properties,
      TM_ARRAY_COUNT(my_asset_properties));
  tm_tt_set_aspect(tt, type, tm_tt_assets_file_extension_aspect_i, "txt");
  static tm_properties_aspect_i properties_aspect = {
      .custom_ui = properties__custom_ui,
  };

  tm_tt_set_aspect(tt, type, tm_properties_aspect_i, &properties_aspect);
  tm_the_truth_property_definition_t story_component_properties[] = {
      [TM_TT_PROP__STORY_COMPONENT__ASSET] = {
          "story_asset", .type = TM_THE_TRUTH_PROPERTY_TYPE_REFERENCE,
          .type_hash = TM_TT_TYPE_HASH__MY_ASSET}};
}

We need to make use of the tm_asset_scene_api aspect. This aspect allows the associated Truth type to be dragged and dropped into the Scene if wanted! We can find it in the plugins/the_machinery_shared/asset_aspects.h header.

Asset Scene Aspect

What is an aspect? See the Custom Asset chapter.

This aspect expects an instance of tm_asset_scene_api. By providing the callback

bool (*droppable)(struct tm_asset_scene_o *inst, struct tm_the_truth_o *tt, tm_tt_id_t asset);

and returning true from it, we tell the scene that the asset can be dragged out of the asset browser. In that case, you also need to provide the create_entity() function. When you drag your asset into the Scene, the Engine calls this function. In it, you can create a new entity and attach it to the parent entity, which might be the world. If you drop the asset on top of another entity, you can attach the newly created entity as a child of that one.

Before we can make use of this, we need to create a component for our text asset! Let's call it the Story Component.

The story component

Let us create a use case for our text file. Let us assume we want to make a very simple story-based game, and all our text files are the basis for our stories. This means we first need to create a story component.

Note: For more details on how to create a component, follow this guide.

Here we have the whole source code for the story component:

// more apis
static struct tm_entity_api *tm_entity_api;
// more includes
#include <plugins/entity/entity.h>
#include <plugins/the_machinery_shared/component_interfaces/editor_ui_interface.h>
// more code

static const char *component__category(void) { return TM_LOCALIZE("Story"); }

static tm_ci_editor_ui_i *editor_aspect =
    &(tm_ci_editor_ui_i){.category = component__category};

// create_truth_types() is omitted here; see the version shown in the
// "Drag and drop a Text Asset into the Scene" section below.

struct tm_component_manager_o {
  tm_entity_context_o *ctx;
  tm_allocator_i allocator;
};
static bool component__load_asset(tm_component_manager_o *man,
                                  struct tm_entity_commands_o *commands,
                                  tm_entity_t e, void *c_vp,
                                  const tm_the_truth_o *tt, tm_tt_id_t asset) {
  struct tm_story_component_t *c = c_vp;
  const tm_the_truth_object_o *asset_r = tm_tt_read(tt, asset);
  tm_tt_id_t id = tm_the_truth_api->get_reference(
      tt, asset_r, TM_TT_PROP__STORY_COMPONENT__ASSET);
  if (id.u64) {
    tm_tt_buffer_t buffer = tm_the_truth_api->get_buffer(
        tt, tm_tt_read(tt, id), TM_TT_PROP__MY_ASSET__DATA);
    c->text = tm_alloc(&man->allocator, buffer.size);
    c->size = buffer.size;
    memcpy(c->text, buffer.data, buffer.size);
  }
  return true;
}
static void component__remove(tm_component_manager_o *manager,
                              struct tm_entity_commands_o *commands,
                              tm_entity_t e, void *data) {
  tm_story_component_t *sc = (tm_story_component_t *)data;
  tm_free(&manager->allocator, sc->text, sc->size);
}

static void component__destroy(tm_component_manager_o *manager) {
  // Free the actual manager struct and the allocator used to allocate it.
  tm_entity_context_o *ctx = manager->ctx;
  tm_allocator_i allocator = manager->allocator;
  tm_free(&allocator, manager, sizeof(tm_component_manager_o));
  tm_entity_api->destroy_child_allocator(ctx, &allocator);
}
static void component__create(struct tm_entity_context_o *ctx) {
  // Allocate a new manager for this component type (freed in
  // component__destroy).
  tm_allocator_i allocator;
  tm_entity_api->create_child_allocator(ctx, TM_TT_TYPE__STORY_COMPONENT,
                                        &allocator);
  tm_component_manager_o *story_manager =
      tm_alloc(&allocator, sizeof(tm_component_manager_o));

  *story_manager = (tm_component_manager_o){.ctx = ctx, .allocator = allocator};

  tm_component_i component = {.name = TM_TT_TYPE__STORY_COMPONENT,
                              .bytes = sizeof(struct tm_story_component_t),
                              .load_asset = component__load_asset,
                              .destroy = component__destroy,
                              .remove = component__remove,
                              .manager =
                                  (tm_component_manager_o *)story_manager};
  tm_entity_api->register_component(ctx, &component);
};

// -- load plugin
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) {
  tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
  tm_properties_view_api = tm_get_api(reg, tm_properties_view_api);
  tm_os_api = tm_get_api(reg, tm_os_api);
  tm_path_api = tm_get_api(reg, tm_path_api);
  tm_temp_allocator_api = tm_get_api(reg, tm_temp_allocator_api);
  tm_allocator_api = tm_get_api(reg, tm_allocator_api);
  tm_logger_api = tm_get_api(reg, tm_logger_api);
  tm_localizer_api = tm_get_api(reg, tm_localizer_api);
  tm_asset_io_api = tm_get_api(reg, tm_asset_io_api);
  task_system = tm_get_api(reg, tm_task_system_api);
  tm_sprintf_api = tm_get_api(reg, tm_sprintf_api);
  tm_entity_api = tm_get_api(reg, tm_entity_api);
  tm_scene_common_api = tm_get_api(reg, tm_scene_common_api);
  tm_global_api_registry = reg;
  if (load)
    tm_asset_io_api->add_asset_io(&txt_asset_io);
  else
    tm_asset_io_api->remove_asset_io(&txt_asset_io);

  tm_add_or_remove_implementation(reg, load, tm_entity_create_component_i,
                                  component__create);
  tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i,
                                  create_truth_types);
  tm_add_or_remove_implementation(reg, load, tm_asset_browser_create_asset_i,
                                  &asset_browser_create_my_asset);
}

First, we create our component. We need a way to guarantee that our Truth data is also available at runtime. Therefore, we use the component manager's allocator to allocate our story data, and the component stores just a pointer to the allocated data and its size.

struct tm_component_manager_o {
  tm_entity_context_o *ctx;
  tm_allocator_i allocator;
};
static void component__create(struct tm_entity_context_o *ctx) {
  // Allocate a new manager for this component type (freed in
  // component__destroy).
  tm_allocator_i allocator;
  tm_entity_api->create_child_allocator(ctx, TM_TT_TYPE__STORY_COMPONENT,
                                        &allocator);
  tm_component_manager_o *story_manager =
      tm_alloc(&allocator, sizeof(tm_component_manager_o));

  *story_manager = (tm_component_manager_o){.ctx = ctx, .allocator = allocator};

  tm_component_i component = {.name = TM_TT_TYPE__STORY_COMPONENT,
                              .bytes = sizeof(struct tm_story_component_t),
                              .load_asset = component__load_asset,
                              .destroy = component__destroy,
                              .remove = component__remove,
                              .manager =
                                  (tm_component_manager_o *)story_manager};
  tm_entity_api->register_component(ctx, &component);
};

The most important function here is component__load_asset(), in which we translate the Truth representation into an ECS representation. We load the text buffer, allocate memory for it with the manager's allocator, and store a pointer to it in our component. Building on this, we could create a reference-counted system in which multiple components point to the same story data and the data is only deallocated when the last component is removed. Another alternative would be to skip the load when the story data has already been allocated.

static bool component__load_asset(tm_component_manager_o *man,
                                  struct tm_entity_commands_o *commands,
                                  tm_entity_t e, void *c_vp,
                                  const tm_the_truth_o *tt, tm_tt_id_t asset) {
  struct tm_story_component_t *c = c_vp;
  const tm_the_truth_object_o *asset_r = tm_tt_read(tt, asset);
  tm_tt_id_t id = tm_the_truth_api->get_reference(
      tt, asset_r, TM_TT_PROP__STORY_COMPONENT__ASSET);
  if (id.u64) {
    tm_tt_buffer_t buffer = tm_the_truth_api->get_buffer(
        tt, tm_tt_read(tt, id), TM_TT_PROP__MY_ASSET__DATA);
    c->text = tm_alloc(&man->allocator, buffer.size);
    c->size = buffer.size;
    memcpy(c->text, buffer.data, buffer.size);
  }
  return true;
}
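
As a sketch of the reference-counted alternative mentioned above: the cached_story_t type, the story_cache array, and the acquire_story()/release_story() helpers below are hypothetical and not part of the sample (a real implementation would probably use a hash map instead of a linear search). It assumes the same includes as the surrounding file.

#define MAX_CACHED_STORIES 64

typedef struct cached_story_t {
  tm_tt_id_t asset;   // Text asset this entry was loaded from.
  char *text;         // Shared text buffer.
  uint64_t size;      // Size of the buffer in bytes.
  uint32_t ref_count; // Number of components currently using `text`.
} cached_story_t;

static cached_story_t story_cache[MAX_CACHED_STORIES];

// Returns a shared buffer for `id`, loading it on first use.
static char *acquire_story(tm_component_manager_o *man, const tm_the_truth_o *tt,
                           tm_tt_id_t id, uint64_t *size) {
  for (uint32_t i = 0; i < MAX_CACHED_STORIES; ++i) {
    if (story_cache[i].ref_count && story_cache[i].asset.u64 == id.u64) {
      ++story_cache[i].ref_count;
      *size = story_cache[i].size;
      return story_cache[i].text;
    }
  }
  const tm_tt_buffer_t buffer = tm_the_truth_api->get_buffer(
      tt, tm_tt_read(tt, id), TM_TT_PROP__MY_ASSET__DATA);
  for (uint32_t i = 0; i < MAX_CACHED_STORIES; ++i) {
    if (!story_cache[i].ref_count) {
      story_cache[i] = (cached_story_t){.asset = id, .size = buffer.size, .ref_count = 1};
      story_cache[i].text = tm_alloc(&man->allocator, buffer.size);
      memcpy(story_cache[i].text, buffer.data, buffer.size);
      *size = buffer.size;
      return story_cache[i].text;
    }
  }
  return 0; // Cache full -- a real implementation would grow it.
}

// Drops one reference and frees the buffer when the last user is removed.
static void release_story(tm_component_manager_o *man, const char *text) {
  for (uint32_t i = 0; i < MAX_CACHED_STORIES; ++i) {
    if (story_cache[i].ref_count && story_cache[i].text == text) {
      if (--story_cache[i].ref_count == 0) {
        tm_free(&man->allocator, story_cache[i].text, story_cache[i].size);
        story_cache[i] = (cached_story_t){0};
      }
      return;
    }
  }
}

In such a variant, component__load_asset() would call acquire_story() instead of allocating its own copy, and component__remove() would call release_story().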

When the Entity context gets destroyed, we need to clean up and destroy our manager. It is important that call_remove_on_all_entities is called, to make sure all instances of the component are gone first.

static void component__remove(tm_component_manager_o *manager,
                              struct tm_entity_commands_o *commands,
                              tm_entity_t e, void *data) {
  tm_story_component_t *sc = (tm_story_component_t *)data;
  tm_free(&manager->allocator, sc->text, sc->size);
}

static void component__destroy(tm_component_manager_o *manager) {
  // Free the actual manager struct and the allocator used to allocate it.
  tm_entity_context_o *ctx = manager->ctx;
  tm_allocator_i allocator = manager->allocator;
  tm_free(&allocator, manager, sizeof(tm_component_manager_o));
  tm_entity_api->destroy_child_allocator(ctx, &allocator);
}

Custom UI

This part is optional! See the next section, where we use TM_TT_PROP_ASPECT__PROPERTIES__ASSET_PICKER instead.

We can also provide a custom UI for our reference to the asset! When we define our Truth type, we tell the system that this property may only hold a reference of type TM_TT_TYPE_HASH__MY_ASSET:

tm_the_truth_property_definition_t story_component_properties[] = {
    [TM_TT_PROP__STORY_COMPONENT__ASSET] = {
        "story_asset", .type = TM_THE_TRUTH_PROPERTY_TYPE_REFERENCE,
        .type_hash = TM_TT_TYPE_HASH__MY_ASSET}};

This makes sure that the user cannot store any other Truth type in this property in the Editor; the Truth will check for it.

We need to add the TM_TT_ASPECT__PROPERTIES aspect to our type to make sure it has a custom UI.

static tm_properties_aspect_i properties_component_aspect = {
    .custom_ui = properties__component_custom_ui,
};

const tm_tt_type_t story_component_type = tm_the_truth_api->create_object_type(
    tt, TM_TT_TYPE__STORY_COMPONENT, story_component_properties,
    TM_ARRAY_COUNT(story_component_properties));
tm_tt_set_aspect(tt, story_component_type, tm_ci_editor_ui_i, editor_aspect);
tm_tt_set_aspect(tt, story_component_type, tm_properties_aspect_i,
                 &properties_component_aspect);

And then we need to define our custom UI:

static float
properties__component_custom_ui(struct tm_properties_ui_args_t *args,
                                tm_rect_t item_rect, tm_tt_id_t object) {
  TM_INIT_TEMP_ALLOCATOR(ta);
  tm_tt_type_t asset_type = tm_the_truth_api->object_type_from_name_hash(
      args->tt, TM_TT_TYPE_HASH__ASSET);
  tm_tt_id_t *ids = tm_the_truth_api->all_objects_of_type(
      args->tt,
      tm_the_truth_api->object_type_from_name_hash(args->tt,
                                                   TM_TT_TYPE_HASH__MY_ASSET),
      ta);
  tm_tt_id_t *items = 0;
  const char **names = 0;
  tm_carray_temp_push(names, "Select", ta);
  tm_carray_temp_push(items, (tm_tt_id_t){0}, ta);
  for (uint32_t i = 0; i < tm_carray_size(ids); ++i) {
    tm_tt_id_t owner = tm_the_truth_api->owner(args->tt, ids[i]);
    if (tm_tt_type(owner).u64 == asset_type.u64) {
      tm_carray_temp_push(
          names,
          tm_the_truth_api->get_string(args->tt, tm_tt_read(args->tt, owner),
                                       TM_TT_PROP__ASSET__NAME),
          ta);
      tm_carray_temp_push(items, ids[i], ta);
    }
  }
  item_rect.y = tm_properties_view_api->ui_reference_popup_picker(
      args, item_rect, "Asset", NULL, object,
      TM_TT_PROP__STORY_COMPONENT__ASSET, names, items,
      (uint32_t)tm_carray_size(items));
  TM_SHUTDOWN_TEMP_ALLOCATOR(ta);
  return item_rect.y;
}

In there, we get all objects of type TM_TT_TYPE_HASH__MY_ASSET. We know that this type can only exist as a subobject of the Asset Truth type:

The type can only be created as part of an asset at this point!

This matters because we need a name for each asset. Therefore, we iterate over all our text assets and check whether they are owned by a Truth Asset object. If so, we read the name from the owner and store it in a names array. At the end, we add the IDs of all the objects to an items array, so the user can select them in the drop-down menu of tm_properties_view_api->ui_reference_popup_picker().

Using the Asset Picker Property Aspect

Instead of implementing our own UI, which can be full of boilerplate code, we can also use the following property aspect on our Truth type: TM_TT_PROP_ASPECT__PROPERTIES__ASSET_PICKER, or, if we want to reference an entity from the scene, TM_TT_PROP_ASPECT__PROPERTIES__USE_LOCAL_ENTITY_PICKER. The previous code only needs a small adjustment: in create_truth_types(), we add this property aspect to our type's property like this:

static tm_properties_aspect_i properties_component_aspect = {
    .custom_ui = properties__component_custom_ui,
};

const tm_tt_type_t story_component_type = tm_the_truth_api->create_object_type(
    tt, TM_TT_TYPE__STORY_COMPONENT, story_component_properties,
    TM_ARRAY_COUNT(story_component_properties));
tm_tt_set_aspect(tt, story_component_type, tm_ci_editor_ui_i, editor_aspect);
tm_tt_set_aspect(tt, story_component_type, tm_properties_aspect_i,
                 &properties_component_aspect);
tm_tt_set_property_aspect(tt, story_component_type, TM_TT_PROP__STORY_COMPONENT__ASSET, tm_tt_prop_aspect__properties__asset_picker, TM_TT_TYPE__MY_ASSET);
}

Drag and drop a Text Asset into the Scene and create an entity

Finally, we can do what we came here to do: Make our Asset drag and droppable! We make use of the TM_TT_ASPECT__ASSET_SCENE!

We define the aspect as described above:

#include <plugins/the_machinery_shared/asset_aspects.h>
tm_asset_scene_api scene_api = {
    .droppable = droppable,
    .create_entity = create_entity,
};

In the Create Truth Type function you need to add the aspect TM_TT_ASPECT__ASSET_SCENE:

static void create_truth_types(struct tm_the_truth_o *tt) {
  static tm_the_truth_property_definition_t my_asset_properties[] = {
      {"import_path", TM_THE_TRUTH_PROPERTY_TYPE_STRING},
      {"data", TM_THE_TRUTH_PROPERTY_TYPE_BUFFER},
  };
  const tm_tt_type_t type = tm_the_truth_api->create_object_type(
      tt, TM_TT_TYPE__MY_ASSET, my_asset_properties,
      TM_ARRAY_COUNT(my_asset_properties));
  tm_tt_set_aspect(tt, type, tm_tt_assets_file_extension_aspect_i, "txt");
  static tm_properties_aspect_i properties_aspect = {
      .custom_ui = properties__custom_ui,
  };

  tm_tt_set_aspect(tt, type, tm_properties_aspect_i, &properties_aspect);
  tm_the_truth_property_definition_t story_component_properties[] = {
      [TM_TT_PROP__STORY_COMPONENT__ASSET] = {
          "story_asset", .type = TM_THE_TRUTH_PROPERTY_TYPE_REFERENCE,
          .type_hash = TM_TT_TYPE_HASH__MY_ASSET}};
}
//..
}
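
Note that the snippet above still elides the actual registration of the scene aspect (the //.. part). The missing call, using the scene_api instance defined earlier, is the same tm_tt_set_aspect() call we use later when modifying an existing asset:

// Inside create_truth_types(), after `type` has been created:
tm_tt_set_aspect(tt, type, tm_asset_scene_api, &scene_api);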

Then, we provide a droppable() function:

bool droppable(struct tm_asset_scene_o *inst, struct tm_the_truth_o *tt,
               tm_tt_id_t asset) {
  return true;
}

After this comes the more important function:

#include <plugins/entity/transform_component.h>
#include <plugins/the_machinery_shared/scene_common.h>
// ... more code
tm_tt_id_t create_entity(struct tm_asset_scene_o *inst,
                         struct tm_the_truth_o *tt, tm_tt_id_t asset,
                         const char *name,
                         const tm_transform_t *local_transform,
                         tm_tt_id_t parent_entity, tm_tt_id_t asset_root,
                         struct tm_undo_stack_i *undo_stack,
                         tm_tt_undo_scope_t parent_undo_scope) {
  const tm_tt_undo_scope_t undo_scope = tm_the_truth_api->create_undo_scope(
      tt, TM_LOCALIZE("Create Entity From Creation Graph"));
  const tm_tt_type_t entity_type =
      tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__ENTITY);
  const tm_tt_id_t entity =
      tm_the_truth_api->create_object_of_type(tt, entity_type, undo_scope);
  tm_the_truth_object_o *entity_w = tm_the_truth_api->write(tt, entity);
  tm_the_truth_api->set_string(tt, entity_w, TM_TT_PROP__ENTITY__NAME, name);
  // add transform:
  {
    const tm_tt_type_t transform_component_type =
        tm_the_truth_api->object_type_from_name_hash(
            tt, TM_TT_TYPE_HASH__TRANSFORM_COMPONENT);
    const tm_tt_id_t component = tm_the_truth_api->create_object_of_type(
        tt, transform_component_type, undo_scope);
    tm_the_truth_object_o *component_w = tm_the_truth_api->write(tt, component);
    tm_the_truth_api->add_to_subobject_set(
        tt, entity_w, TM_TT_PROP__ENTITY__COMPONENTS, &component_w, 1);
    tm_the_truth_api->commit(tt, component_w, undo_scope);
  }
  // add story:
  {
    tm_tt_type_t asset_type = tm_the_truth_api->object_type_from_name_hash(
        tt, TM_TT_TYPE_HASH__STORY_COMPONENT);
    const tm_tt_id_t component =
        tm_the_truth_api->create_object_of_type(tt, asset_type, undo_scope);
    tm_the_truth_object_o *component_w = tm_the_truth_api->write(tt, component);
    tm_the_truth_api->set_reference(tt, component_w,
                                    TM_TT_PROP__STORY_COMPONENT__ASSET, asset);
    tm_the_truth_api->add_to_subobject_set(
        tt, entity_w, TM_TT_PROP__ENTITY__COMPONENTS, &component_w, 1);
    tm_the_truth_api->commit(tt, component_w, undo_scope);
  }

  tm_the_truth_api->commit(tt, entity_w, undo_scope);

  tm_scene_common_api->place_entity(tt, entity, local_transform, parent_entity,
                                    undo_scope);

  undo_stack->add(undo_stack->inst, tt, undo_scope);

  return entity;
}

First, we create an entity:

tm_tt_id_t create_entity(struct tm_asset_scene_o *inst,
                         struct tm_the_truth_o *tt, tm_tt_id_t asset,
                         const char *name,
                         const tm_transform_t *local_transform,
                         tm_tt_id_t parent_entity, tm_tt_id_t asset_root,
                         struct tm_undo_stack_i *undo_stack,
                         tm_tt_undo_scope_t parent_undo_scope) {
  const tm_tt_undo_scope_t undo_scope = tm_the_truth_api->create_undo_scope(
      tt, TM_LOCALIZE("Create Entity From Creation Graph"));
  const tm_tt_type_t entity_type =
      tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__ENTITY);
  const tm_tt_id_t entity =
      tm_the_truth_api->create_object_of_type(tt, entity_type, undo_scope);
  tm_the_truth_object_o *entity_w = tm_the_truth_api->write(tt, entity);
  tm_the_truth_api->set_string(tt, entity_w, TM_TT_PROP__ENTITY__NAME, name);
//...
}

Our entity may or may not need a transform; if it does not, we can skip this step. In this case, we add a transform to the entity, just to demonstrate how:

tm_tt_id_t create_entity(struct tm_asset_scene_o *inst,
                         struct tm_the_truth_o *tt, tm_tt_id_t asset,
                         const char *name,
                         const tm_transform_t *local_transform,
                         tm_tt_id_t parent_entity, tm_tt_id_t asset_root,
                         struct tm_undo_stack_i *undo_stack,
                         tm_tt_undo_scope_t parent_undo_scope) {
  const tm_tt_undo_scope_t undo_scope = tm_the_truth_api->create_undo_scope(
      tt, TM_LOCALIZE("Create Entity From Creation Graph"));
  const tm_tt_type_t entity_type =
      tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__ENTITY);
  const tm_tt_id_t entity =
      tm_the_truth_api->create_object_of_type(tt, entity_type, undo_scope);
  tm_the_truth_object_o *entity_w = tm_the_truth_api->write(tt, entity);
  tm_the_truth_api->set_string(tt, entity_w, TM_TT_PROP__ENTITY__NAME, name);
  // add transform:
  {
    const tm_tt_type_t transform_component_type =
        tm_the_truth_api->object_type_from_name_hash(
            tt, TM_TT_TYPE_HASH__TRANSFORM_COMPONENT);
    const tm_tt_id_t component = tm_the_truth_api->create_object_of_type(
        tt, transform_component_type, undo_scope);
    tm_the_truth_object_o *component_w = tm_the_truth_api->write(tt, component);
    tm_the_truth_api->add_to_subobject_set(
        tt, entity_w, TM_TT_PROP__ENTITY__COMPONENTS, &component_w, 1);
    tm_the_truth_api->commit(tt, component_w, undo_scope);
  }
    // ...
}

Then, we add the story component to the entity, following the same steps as before.

tm_tt_id_t create_entity(struct tm_asset_scene_o *inst,
                         struct tm_the_truth_o *tt, tm_tt_id_t asset,
                         const char *name,
                         const tm_transform_t *local_transform,
                         tm_tt_id_t parent_entity, tm_tt_id_t asset_root,
                         struct tm_undo_stack_i *undo_stack,
                         tm_tt_undo_scope_t parent_undo_scope) {
  const tm_tt_undo_scope_t undo_scope = tm_the_truth_api->create_undo_scope(
      tt, TM_LOCALIZE("Create Entity From Creation Graph"));
  const tm_tt_type_t entity_type =
      tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__ENTITY);
  const tm_tt_id_t entity =
      tm_the_truth_api->create_object_of_type(tt, entity_type, undo_scope);
  tm_the_truth_object_o *entity_w = tm_the_truth_api->write(tt, entity);
  tm_the_truth_api->set_string(tt, entity_w, TM_TT_PROP__ENTITY__NAME, name);
  // add transform:
  {
    const tm_tt_type_t transform_component_type =
        tm_the_truth_api->object_type_from_name_hash(
            tt, TM_TT_TYPE_HASH__TRANSFORM_COMPONENT);
    const tm_tt_id_t component = tm_the_truth_api->create_object_of_type(
        tt, transform_component_type, undo_scope);
    tm_the_truth_object_o *component_w = tm_the_truth_api->write(tt, component);
    tm_the_truth_api->add_to_subobject_set(
        tt, entity_w, TM_TT_PROP__ENTITY__COMPONENTS, &component_w, 1);
    tm_the_truth_api->commit(tt, component_w, undo_scope);
  }
  // add story:
  {
    tm_tt_type_t asset_type = tm_the_truth_api->object_type_from_name_hash(
        tt, TM_TT_TYPE_HASH__STORY_COMPONENT);
    const tm_tt_id_t component =
        tm_the_truth_api->create_object_of_type(tt, asset_type, undo_scope);
    tm_the_truth_object_o *component_w = tm_the_truth_api->write(tt, component);
    tm_the_truth_api->set_reference(tt, component_w,
                                    TM_TT_PROP__STORY_COMPONENT__ASSET, asset);
    tm_the_truth_api->add_to_subobject_set(
        tt, entity_w, TM_TT_PROP__ENTITY__COMPONENTS, &component_w, 1);
    tm_the_truth_api->commit(tt, component_w, undo_scope);
  }
}

After all this, we commit our changes to the Truth. We can then place the entity in the scene with tm_scene_common_api->place_entity(). This step is not strictly needed, but it is nice to do!

Do not forget to add the undo scope to the undo stack, so the action can be undone, and to return the created entity!

tm_tt_id_t create_entity(struct tm_asset_scene_o *inst,
                         struct tm_the_truth_o *tt, tm_tt_id_t asset,
                         const char *name,
                         const tm_transform_t *local_transform,
                         tm_tt_id_t parent_entity, tm_tt_id_t asset_root,
                         struct tm_undo_stack_i *undo_stack,
                         tm_tt_undo_scope_t parent_undo_scope) {
  const tm_tt_undo_scope_t undo_scope = tm_the_truth_api->create_undo_scope(
      tt, TM_LOCALIZE("Create Entity From Creation Graph"));
  const tm_tt_type_t entity_type =
      tm_the_truth_api->object_type_from_name_hash(tt, TM_TT_TYPE_HASH__ENTITY);
  const tm_tt_id_t entity =
      tm_the_truth_api->create_object_of_type(tt, entity_type, undo_scope);
  tm_the_truth_object_o *entity_w = tm_the_truth_api->write(tt, entity);
  tm_the_truth_api->set_string(tt, entity_w, TM_TT_PROP__ENTITY__NAME, name);
  // add transform:
  {
    const tm_tt_type_t transform_component_type =
        tm_the_truth_api->object_type_from_name_hash(
            tt, TM_TT_TYPE_HASH__TRANSFORM_COMPONENT);
    const tm_tt_id_t component = tm_the_truth_api->create_object_of_type(
        tt, transform_component_type, undo_scope);
    tm_the_truth_object_o *component_w = tm_the_truth_api->write(tt, component);
    tm_the_truth_api->add_to_subobject_set(
        tt, entity_w, TM_TT_PROP__ENTITY__COMPONENTS, &component_w, 1);
    tm_the_truth_api->commit(tt, component_w, undo_scope);
  }
  // add story:
  {
    tm_tt_type_t asset_type = tm_the_truth_api->object_type_from_name_hash(
        tt, TM_TT_TYPE_HASH__STORY_COMPONENT);
    const tm_tt_id_t component =
        tm_the_truth_api->create_object_of_type(tt, asset_type, undo_scope);
    tm_the_truth_object_o *component_w = tm_the_truth_api->write(tt, component);
    tm_the_truth_api->set_reference(tt, component_w,
                                    TM_TT_PROP__STORY_COMPONENT__ASSET, asset);
    tm_the_truth_api->add_to_subobject_set(
        tt, entity_w, TM_TT_PROP__ENTITY__COMPONENTS, &component_w, 1);
    tm_the_truth_api->commit(tt, component_w, undo_scope);
  }

  tm_the_truth_api->commit(tt, entity_w, undo_scope);

  tm_scene_common_api->place_entity(tt, entity, local_transform, parent_entity,
                                    undo_scope);

  undo_stack->add(undo_stack->inst, tt, undo_scope);

  return entity;
}

Now our asset can be dragged and dropped into the Scene!

Note: None of this happens at runtime. Since we are dealing with the Truth here and not with the ECS, our changes are only applied to the ECS when we simulate the game!

Modify an already existing asset

It is important to understand that you can add this aspect to any Truth type, even ones you do not define in your own plugin. Let's assume you have created a new plugin that uses the text file asset. You do not own that plugin or its source code, so you cannot modify it like we did above. What you can do, however, is add the aspect to the Truth type:

// -- load plugin
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) {
  tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);
  tm_properties_view_api = tm_get_api(reg, tm_properties_view_api);
  tm_os_api = tm_get_api(reg, tm_os_api);
  tm_path_api = tm_get_api(reg, tm_path_api);
  tm_temp_allocator_api = tm_get_api(reg, tm_temp_allocator_api);
  tm_allocator_api = tm_get_api(reg, tm_allocator_api);
  tm_logger_api = tm_get_api(reg, tm_logger_api);
  tm_localizer_api = tm_get_api(reg, tm_localizer_api);
  tm_asset_io_api = tm_get_api(reg, tm_asset_io_api);
  task_system = tm_get_api(reg, tm_task_system_api);
  tm_sprintf_api = tm_get_api(reg, tm_sprintf_api);
  tm_entity_api = tm_get_api(reg, tm_entity_api);
  tm_scene_common_api = tm_get_api(reg, tm_scene_common_api);
  tm_global_api_registry = reg;
  if (load)
    tm_asset_io_api->add_asset_io(&txt_asset_io);
  else
    tm_asset_io_api->remove_asset_io(&txt_asset_io);

  tm_add_or_remove_implementation(reg, load, tm_entity_create_component_i,
                                  component__create);
  tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i,
                                  create_truth_types);
  tm_add_or_remove_implementation(reg, load, tm_asset_browser_create_asset_i,
                                  &asset_browser_create_my_asset);
}

and then, in the create_truth_types function, you can add the aspect to the Truth type:

// -- create truth type
static void create_truth_types_modify(struct tm_the_truth_o *tt) {
  tm_tt_type_t asset_type = tm_the_truth_api->object_type_from_name_hash(
      tt, TM_TT_TYPE_HASH__MY_ASSET);
  if (asset_type.u64) {
    // Only add the aspect if the type does not already have it.
    if (!tm_tt_get_aspect(tt, asset_type, tm_asset_scene_api))
      tm_tt_set_aspect(tt, asset_type, tm_asset_scene_api, &scene_api);
  }
}

Here we look up the asset_type from the Truth. It is important to make sure the type already exists; if it does not, it makes no sense to add the aspect to it. Moreover, we need to make sure the type does not already have this aspect, since an object type can only have one aspect of a given kind at a time.
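Finally, the modifying function needs to run when Truth types are created. A minimal sketch, assuming it is registered the same way as the other tm_the_truth_create_types_i implementations in this book:

// In tm_load_plugin(), register our modifier alongside the other implementations
// (sketch; create_truth_types_modify is the function defined above):
tm_add_or_remove_implementation(reg, load, tm_the_truth_create_types_i,
                                create_truth_types_modify);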

Open a special tab for an Asset

This walkthrough shows you how to make double-clicking an asset open it in a specific tab. You should have basic knowledge about how to write a custom plugin; if not, you might want to check this Guide.

You will learn:

  • Create a basic tab
  • Open an asset
  • How to use a temporary allocator

This walkthrough will refer to the text asset example as the asset we want to extend! If you have not followed it, here is the link: Custom Asset.

Table of Contents

New Tab

The first course of action is to create the Tab we want to open.

Note: We can use the basic tab Template from the Engine: File -> New Plugin -> New Tab.

In this walkthrough, we aim to have a simple tab that shows the content of our text file.

The steps for creating a tab are similar to those of a standard plugin:

  1. We create a new file. For example: txt_tab.c
  2. We make use of the default tab template provided by the engine:
static struct tm_api_registry_api *tm_global_api_registry;

static struct tm_draw2d_api *tm_draw2d_api;
static struct tm_ui_api *tm_ui_api;
static struct tm_allocator_api *tm_allocator_api;

#include <foundation/allocator.h>
#include <foundation/api_registry.h>

#include <plugins/ui/docking.h>
#include <plugins/ui/draw2d.h>
#include <plugins/ui/ui.h>
#include <plugins/ui/ui_custom.h>

#include <the_machinery/the_machinery_tab.h>

#include <stdio.h>
#define TM_CUSTOM_TAB_VT_NAME "tm_custom_tab"
#define TM_CUSTOM_TAB_VT_NAME_HASH TM_STATIC_HASH("tm_custom_tab", 0xbc4e3e47fbf1cdc1ULL)
struct tm_tab_o
{
    tm_tab_i tm_tab_i;
    tm_allocator_i allocator;
};
static void tab__ui(tm_tab_o *tab, tm_ui_o *ui, const tm_ui_style_t *uistyle_in, tm_rect_t rect)
{
    tm_ui_buffers_t uib = tm_ui_api->buffers(ui);
    tm_ui_style_t *uistyle = (tm_ui_style_t[]){*uistyle_in};
    tm_draw2d_style_t *style = &(tm_draw2d_style_t){0};
    tm_ui_api->to_draw_style(ui, style, uistyle);
    style->color = (tm_color_srgb_t){.a = 255, .r = 255};
    tm_draw2d_api->fill_rect(uib.vbuffer, *uib.ibuffers, style, rect);
}
static const char *tab__create_menu_name(void)
{
    return "Custom Tab";
}

static const char *tab__title(tm_tab_o *tab, struct tm_ui_o *ui)
{
    return "Custom Tab";
}
static tm_tab_vt *custom_tab_vt;

static tm_tab_i *tab__create(tm_tab_create_context_t *context, tm_ui_o *ui)
{
    tm_allocator_i allocator = tm_allocator_api->create_child(context->allocator, "Custom Tab");
    uint64_t *id = context->id;
    tm_tab_o *tab = tm_alloc(&allocator, sizeof(tm_tab_o));
    *tab = (tm_tab_o){
        .tm_tab_i = {
            .vt = custom_tab_vt,
            .inst = (tm_tab_o *)tab,
            .root_id = *id,
        },
        .allocator = allocator,
    };
    *id += 1000000;
    return &tab->tm_tab_i;
}
static void tab__destroy(tm_tab_o *tab)
{
    tm_allocator_i a = tab->allocator;
    tm_free(&a, tab, sizeof(*tab));
    tm_allocator_api->destroy_child(&a);
}
static tm_tab_vt *custom_tab_vt = &(tm_tab_vt){
    .name = TM_CUSTOM_TAB_VT_NAME,
    .name_hash = TM_CUSTOM_TAB_VT_NAME_HASH,
    .create_menu_name = tab__create_menu_name,
    .create = tab__create,
    .destroy = tab__destroy,
    .title = tab__title,
    .ui = tab__ui};
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_global_api_registry = reg;

    tm_draw2d_api = tm_get_api(reg, tm_draw2d_api);
    tm_ui_api = tm_get_api(reg, tm_ui_api);
    tm_allocator_api = tm_get_api(reg, tm_allocator_api);

    tm_add_or_remove_implementation(reg, load, tm_tab_vt, custom_tab_vt);
}

We modify the following parts of the sample:

  • tab__create_menu_name & tab__title they shall return: "Text Tab"
  • We remove the TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) function since we replace it in the next step.
  • To the tm_tab_o we add a pointer to the Truth and an asset entry:
struct tm_tab_o {
  tm_tab_i tm_tab_i;
  tm_allocator_i *allocator;

  tm_tt_id_t asset;
  tm_the_truth_o *tt;
};

The most important fields here are asset and tt. They store the currently used asset and Truth, and allow us to access both in the various tab functions.

  • We also change the defines to:

    #define TM_TXT_TAB_VT_NAME "tm_txt_tab"
    #define TM_TXT_TAB_VT_NAME_HASH TM_STATIC_HASH("tm_txt_tab", 0x2cd261be98a99bc3ULL)
    

After those adjustments, we continue by creating a new load function at the bottom of the file:

void load_txt_tab(struct tm_api_registry_api *reg, bool load)
{
    tm_global_api_registry = reg;

    tm_ui_api = tm_get_api(reg, tm_ui_api);
    tm_temp_allocator_api = tm_get_api(reg, tm_temp_allocator_api);

    tm_add_or_remove_implementation(reg, load, tm_tab_vt, tab_vt);
}

In our main file, txt.c, we need to declare this function and call it from tm_load_plugin:

extern void load_txt_tab(struct tm_api_registry_api *reg, bool load);

// -- load plugin
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) {
  load_txt_tab(reg, load);
}

Implementing the functions

Now that we have done all the boilerplate code, let us focus on the three functions that count:

  • The create function
  • The UI Update
  • The Set Root function

The Create Function

static const char *tab__create_menu_name(void) { return "Text Tab"; }

static const char *tab__title(tm_tab_o *tab, struct tm_ui_o *ui) {
  return "Text Tab";
}

static tm_tab_vt *tab_vt;

static tm_tab_i *tab__create(tm_tab_create_context_t *context, tm_ui_o *ui) {
  tm_allocator_i *allocator = context->allocator;
  uint64_t *id = context->id;
  tm_tab_o *tab = tm_alloc(allocator, sizeof(tm_tab_o));
  *tab = (tm_tab_o){
      .tm_tab_i =
          {
              .vt = (tm_tab_vt *)tab_vt,
              .inst = (tm_tab_o *)tab,
              .root_id = *id,
          },
      .allocator = allocator,
  };

  *id += 1000000;
  return &tab->tm_tab_i;
}

In this function, we store the allocator first.

Tip: You could also create a child allocator for your Tab with tm_allocator_api.create_child(). A child allocator can be very useful when doing many allocations.
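As a rough sketch of that variant, mirroring the engine's tab template shown earlier (note that tm_tab_o would then store the allocator by value, tm_allocator_i allocator, and that tm_allocator_api must be fetched from the registry):

static tm_tab_i *tab__create(tm_tab_create_context_t *context, tm_ui_o *ui)
{
    // Create a child allocator for everything this tab allocates.
    tm_allocator_i allocator = tm_allocator_api->create_child(context->allocator, "Text Tab");
    uint64_t *id = context->id;

    tm_tab_o *tab = tm_alloc(&allocator, sizeof(tm_tab_o));
    *tab = (tm_tab_o){
        .tm_tab_i = {
            .vt = tab_vt,
            .inst = tab,
            .root_id = *id,
        },
        .allocator = allocator,
    };
    *id += 1000000;
    return &tab->tm_tab_i;
}

static void tab__destroy(tm_tab_o *tab)
{
    // Free the tab itself, then destroy the child allocator.
    tm_allocator_i a = tab->allocator;
    tm_free(&a, tab, sizeof(*tab));
    tm_allocator_api->destroy_child(&a);
}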

We need to initialize the tab interface so other engine parts can communicate with a generic interface to the Tab. We store a pointer to our Tab within this interface so other callers of the standard generic interface can access this instance and pass it along to the functions. After this, we allocate the Tab itself.

tm_tab_o *tab = tm_alloc(allocator, sizeof(tm_tab_o));
*tab = (tm_tab_o){
    .tm_tab_i =
        {
            .vt = (tm_tab_vt *)tab_vt,
            .inst = (tm_tab_o *)tab,
            .root_id = *id,
        },
    .allocator = allocator,
};

In the end, we return a pointer to the interface so the docking system can use it.

The Update Function

static void tab__ui(tm_tab_o *tab, tm_ui_o *ui, const tm_ui_style_t *uistyle_in,
                    tm_rect_t rect) {
  tm_ui_style_t *uistyle = (tm_ui_style_t[]){*uistyle_in};
  if (tab->asset.u64) {
    TM_INIT_TEMP_ALLOCATOR(ta);
    tm_tt_buffer_t buffer = tm_the_truth_api->get_buffer(
        tab->tt, tm_tt_read(tab->tt, tab->asset), TM_TT_PROP__MY_ASSET__DATA);
    char *content = tm_temp_alloc(ta, buffer.size + 1);
    tm_strncpy_safe(content, buffer.data, buffer.size);
    tm_ui_text_t *text = &(tm_ui_text_t){.text = content, .rect = rect};
    tm_ui_api->wrapped_text(ui, uistyle, text);
    TM_SHUTDOWN_TEMP_ALLOCATOR(ta);
  } else {
    rect.h = 20;
    tm_ui_text_t *text = &(tm_ui_text_t){.align = TM_UI_ALIGN_CENTER,
                                         .text = "Please open a .txt asset.",
                                         .rect = rect};
    tm_ui_api->text(ui, uistyle, text);
  }
}

At first, we create a copy of the UI Style. With this copy, we can do what we want since the input style is a const pointer. After this, we check if the asset is present. If not, we print a message on the Tab that the user must select a txt asset.

static void tab__ui(tm_tab_o *tab, tm_ui_o *ui, const tm_ui_style_t *uistyle_in,
                    tm_rect_t rect) {
  tm_ui_style_t *uistyle = (tm_ui_style_t[]){*uistyle_in};
  if (tab->asset.u64) {
  } else {
    rect.h = 20;
    tm_ui_text_t *text = &(tm_ui_text_t){.align = TM_UI_ALIGN_CENTER,
                                         .text = "Please open a .txt asset.",
                                         .rect = rect};
    tm_ui_api->text(ui, uistyle, text);
  }
}

We need an allocator to copy the buffer data into a proper string, since we never added a null terminator to the end of the string when we saved it into the buffer.

In this case, we need a temporary allocator. Since we do not want to keep the memory forever, we need to initialize a temp allocator with TM_INIT_TEMP_ALLOCATOR and provide a name. Do not forget to free the memory at the end.

static void tab__ui(tm_tab_o *tab, tm_ui_o *ui, const tm_ui_style_t *uistyle_in,
                    tm_rect_t rect) {
  tm_ui_style_t *uistyle = (tm_ui_style_t[]){*uistyle_in};
  if (tab->asset.u64) {
    TM_INIT_TEMP_ALLOCATOR(ta);
    TM_SHUTDOWN_TEMP_ALLOCATOR(ta);
  } else {
    rect.h = 20;
    tm_ui_text_t *text = &(tm_ui_text_t){.align = TM_UI_ALIGN_CENTER,
                                         .text = "Please open a .txt asset.",
                                         .rect = rect};
    tm_ui_api->text(ui, uistyle, text);
  }
}

Note: If the allocation is smaller than 1024 bytes, the memory comes from a buffer on the stack. An alternative is the frame allocator, in which memory is freed at the end of every frame.

The next step is to ask the Truth for the buffer and allocate the right amount of memory. Since we want to copy the data into a null-terminated string, we add 1 to the buffer size.

tm_tt_buffer_t buffer = tm_the_truth_api->get_buffer(
    tab->tt, tm_tt_read(tab->tt, tab->asset), TM_TT_PROP__MY_ASSET__DATA);
char *content = tm_temp_alloc(ta, buffer.size + 1);
tm_strncpy_safe(content, buffer.data, buffer.size);

To ensure our string is properly null-terminated, we use the inline string header (hence we add foundation/string.inl to our includes). It provides tm_strncpy_safe, which copies the data and terminates the string with a null byte.

tm_strncpy_safe(content, buffer.data, buffer.size);

The last step is to actually make the text appear on the screen. We make use of tm_ui_api.wrapped_text(), since this function will wrap the string across multiple lines if need be.

tm_ui_text_t *text = &(tm_ui_text_t){.text = content, .rect = rect};
tm_ui_api->wrapped_text(ui, uistyle, text);

The set root / root function

The last function which we need is the set root function. This function allows us to set a root object of the Tab from the outside. Its code is quite straightforward:

void tab__set_root(tm_tab_o *inst, struct tm_the_truth_o *tt, tm_tt_id_t root) {
  inst->asset = root;
  inst->tt = tt;
}

static tm_tab_vt_root_t tab__root(tm_tab_o *tab) {
  return (tm_tab_vt_root_t){tab->tt, tab->asset};
}

Note: The root function is also called whenever the current "main" Truth changes. This can be used to swap the Truth.

Let us test the Tab

Let us open the Tab from the Tabs menu!

[image]

Source Code

static struct tm_api_registry_api *tm_global_api_registry;

static struct tm_ui_api *tm_ui_api;
static struct tm_temp_allocator_api *tm_temp_allocator_api;

extern struct tm_the_truth_api *tm_the_truth_api;

#include <foundation/allocator.h>
#include <foundation/api_registry.h>
#include <foundation/string.inl>
#include <foundation/temp_allocator.h>
#include <foundation/the_truth.h>

#include <plugins/ui/docking.h>
#include <plugins/ui/ui.h>

#include <the_machinery/the_machinery_tab.h>

#include "txt.h"

struct tm_tab_o
{
    tm_tab_i tm_tab_i;
    tm_allocator_i *allocator;

    tm_tt_id_t asset;
    tm_the_truth_o *tt;
};
static void tab__ui(tm_tab_o *tab, tm_ui_o *ui, const tm_ui_style_t *uistyle_in, tm_rect_t rect)
{
    tm_ui_style_t *uistyle = (tm_ui_style_t[]){*uistyle_in};
    if (tab->asset.u64)
    {
        TM_INIT_TEMP_ALLOCATOR(ta);
        tm_tt_buffer_t buffer = tm_the_truth_api->get_buffer(tab->tt, tm_tt_read(tab->tt, tab->asset), TM_TT_PROP__MY_ASSET__DATA);
        char *content = tm_temp_alloc(ta, buffer.size + 1);
        tm_strncpy_safe(content, buffer.data, buffer.size);
        tm_ui_text_t *text = &(tm_ui_text_t){.text = content, .rect = rect};
        tm_ui_api->wrapped_text(ui, uistyle, text);
        TM_SHUTDOWN_TEMP_ALLOCATOR(ta);
    }
    else
    {
        rect.h = 20;
        tm_ui_text_t *text = &(tm_ui_text_t){.align = TM_UI_ALIGN_CENTER, .text = "Please open a .txt asset.", .rect = rect};
        tm_ui_api->text(ui, uistyle, text);
    }
}
static const char *tab__create_menu_name(void)
{
    return "Text Tab";
}

static const char *tab__title(tm_tab_o *tab, struct tm_ui_o *ui)
{
    return "Text Tab";
}

static tm_tab_vt *tab_vt;

static tm_tab_i *tab__create(tm_tab_create_context_t *context, tm_ui_o *ui)
{
    tm_allocator_i *allocator = context->allocator;
    uint64_t *id = context->id;
    tm_tab_o *tab = tm_alloc(allocator, sizeof(tm_tab_o));
    *tab = (tm_tab_o){
        .tm_tab_i = {
            .vt = (tm_tab_vt *)tab_vt,
            .inst = (tm_tab_o *)tab,
            .root_id = *id,
        },
        .allocator = allocator,
    };

    *id += 1000000;
    return &tab->tm_tab_i;
}

static void tab__destroy(tm_tab_o *tab)
{
    tm_free(tab->allocator, tab, sizeof(*tab));
}
void tab__set_root(tm_tab_o *inst, struct tm_the_truth_o *tt, tm_tt_id_t root)
{
    inst->asset = root;
    inst->tt = tt;
}

static tm_tab_vt_root_t tab__root(tm_tab_o *tab)
{
    return (tm_tab_vt_root_t){tab->tt, tab->asset};
}

static tm_tab_vt *tab_vt = &(tm_tab_vt){
    .name = TM_TXT_TAB_VT_NAME,
    .name_hash = TM_TXT_TAB_VT_NAME_HASH,
    .create_menu_name = tab__create_menu_name,
    .create = tab__create,
    .destroy = tab__destroy,
    .title = tab__title,
    .ui = tab__ui,
    .set_root = tab__set_root,
    .root = tab__root,
};

void load_txt_tab(struct tm_api_registry_api *reg, bool load)
{
    tm_global_api_registry = reg;

    tm_ui_api = tm_get_api(reg, tm_ui_api);
    tm_temp_allocator_api = tm_get_api(reg, tm_temp_allocator_api);

    tm_add_or_remove_implementation(reg, load, tm_tab_vt, tab_vt);
}

Open the Tab

After all the previous steps, we can finally make our Text Asset open this Tab!

First, we need to remove the static keyword from our truth API variable in txt.c, since we also require it in txt_tab.c.

struct tm_the_truth_api *tm_the_truth_api;

When that is done, we need to include two more files and get two more APIs.

// open asset
static struct tm_the_machinery_api* tm_the_machinery_api;
static struct tm_docking_api* tm_docking_api;
//...

extern void load_txt_tab(struct tm_api_registry_api *reg, bool load);

// -- load plugin
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load) {
  // ... existing tm_get_api calls ...
  tm_the_machinery_api = tm_get_api(reg, tm_the_machinery_api);
  tm_docking_api = tm_get_api(reg, tm_docking_api);
  load_txt_tab(reg, load);
}

To open an asset, we need to add another aspect to our type: the TM_TT_ASPECT__ASSET_OPEN aspect. This aspect requires an implementation of the tm_asset_open_aspect_i.

// -- create truth type
static void create_truth_types(struct tm_the_truth_o *tt) {
  // ... existing code that creates our asset type (`type`) ...
  tm_tt_set_aspect(tt, type, tm_asset_open_aspect_i, open_i);
}

We add this implementation of the tm_asset_open_aspect_i above the create_truth_types function.

void open_asset(struct tm_application_o *app, struct tm_ui_o *ui,
                struct tm_tab_i *from_tab, tm_the_truth_o *tt, tm_tt_id_t asset,
                enum tm_asset_open_mode open_mode) {
  const tm_docking_find_tab_opt_t opt = {
      .from_tab = from_tab,
      .in_ui = ui,
      .exclude_pinned = true,
  };
  const bool pin = open_mode == TM_ASSET_OPEN_MODE_CREATE_TAB_AND_PIN;
  tm_tab_i *tab = tm_the_machinery_api->create_or_select_tab(
      app, ui, TM_TXT_TAB_VT_NAME, &opt);
  if (pin)
    tm_docking_api->pin_object(tab, tt, asset);
  else
    tab->vt->set_root(tab->inst, tt, asset);
}

static tm_asset_open_aspect_i *open_i = &(tm_asset_open_aspect_i){
    .open = open_asset,
};

The open function gives us all the important information:

  1. The App data
  2. The UI
  3. Which Tab requested this action
  4. The current Truth
  5. The asset
  6. How we will open the Tab.

First, we define the search criteria for create_or_select_tab() of the Machinery API. In this case we want to exclude pinned tabs, since the user might have a reason for pinning them.

const tm_docking_find_tab_opt_t opt = {
    .from_tab = from_tab,
    .in_ui = ui,
    .exclude_pinned = true,
};

Now we can create or select a tab by calling the create_or_select_tab function.

tm_tab_i *tab = tm_the_machinery_api->create_or_select_tab(
    app, ui, TM_TXT_TAB_VT_NAME, &opt);

The last step is to actually pass some data along. We check whether the open_mode asks for pinning; if it does, we ask the docking API to pin our Tab, otherwise we set the asset as the tab's root.

const bool pin = open_mode == TM_ASSET_OPEN_MODE_CREATE_TAB_AND_PIN;
if (pin)
  tm_docking_api->pin_object(tab, tt, asset);
else
  tab->vt->set_root(tab->inst, tt, asset);

Networking

Animation Sample

This tutorial will transform the Animation Sample project into a networked version:

Part 1: Network Assets

In this tutorial you’ll learn how to create new Network Node Assets.

Note: Before starting, make sure that you have the Networking feature flag enabled. You can do that in the Tools→Feature Flags menu.

Video

Tutorial

In The Machinery, every simulation instance is separate from the others: the simulation that runs in the Simulate Tab, for example, is completely different from the simulation that runs in the Preview Tab, and they cannot talk to each other. The goal of the Networking layer in The Machinery is to allow different simulations to send data to each other.

To do that, we introduced a specific type of asset that defines how a specific kind of simulation (Client, Server, etc.) should behave with regard to the other nodes on the network: the Network Node.

To define a Network Node Asset, go to the asset browser, right-click, and select New→New Network Node.

Let’s define a Server network Node and a Client Network Node so that we can use them in our project:

Server

We want the server to be able to receive packets from other nodes, so let’s bind the default Simulation Receiver interface in the Properties view.

We also want our Server to Accept incoming connections from other Nodes. For now we’ll bind the “Accept From everyone” Accept interface, meaning that the server will accept connections from everyone.

img

Client

Our Client will need to accept connections from the Server (in The Machinery the concept of a "connection" is unilateral: the Client opens a connection to the Server, and the Server will in turn open a connection in the opposite direction), so let's bind the "Accept from everyone" accept interface here as well.

The Client will also need to receive packets (the updates to the Gamestate that come from the Server), so make sure to bind the default simulation receiver to the Client as well.

We want our Client to immediately connect to the Server when it’s started: let’s bind the “Connect to local Server” bootstrap interface. (It will run immediately after the Client instance is created)

We also know that our Server will send Gamestate updates to our Client: so we want the Client to start with an empty world, assuming that all the necessary updates will later come from the Server. For this reason, make sure to toggle the “passive Gamestate” flag on the Client asset.

img

Part 2: Running Multiple Network Instances

In this tutorial you’ll learn how to run multiple simulation instances at the same time. This tutorial builds on the learnings of the previous tutorial: Part 1

Note: Before starting, make sure that you have the Networking feature flag enabled. You can do that in the Tools→Feature Flags menu.

Video

Tutorial

Now that we have set up our two Network Assets, we want to make sure that when we start our simulation, both a Client and a Server instance are created, each in its own Simulate Tab.

We can do that by changing the Network Settings (File→Settings→Network Settings): let’s add a Server Instance and a Client Instance.

img

If we run the simulation now, you will see an empty world on the Client. The reason is that the Client Asset has been set up to start with an empty world, and no entities in the Server world are currently being replicated to the Client. Let’s fix this by making the World Entity Replicated via the Entity Tree.

img

If we run the simulation again, we’ll now see that the Client is correctly receiving updates from the Server about the World Entity: we can move the Xbot Entity in both windows exactly like in the single-player game, but now when we move it in the Server window you’ll notice that the updates are sent to the Client as well. That is because we automatically check for changes in the components of all the entities that have been flagged as replicated (once per second by default).

If we instead move the Player from the Client window, you’ll see that the Player entity on the Server doesn’t get updated: we told the Client to have a passive Gamestate, so even if the Client is simulating the Player in its own simulation instance, it doesn’t send the updates to the Server.

img

Part 3: Entity Control

In this tutorial you’ll learn how to set the control of a specific entity and remap its input source. This tutorial builds on the learnings of the previous tutorial: Part 2

Note: Before starting, make sure that you have the Networking feature flag enabled. You can do that in the Tools→Feature Flags menu.

Video

Tutorial

We now want to make sure that the movement and facing direction of our player on the Server are controlled by the Keyboard and mouse input that is detected on the Client.

There’s a special-purpose node that we can use to do exactly that: let’s add a “Set Entity Control” node to the Entity Graph of the World Entity. We want to bind the control of the xbot entity to the Client that connects to our server, so we will take the output from the Connection Accepted node and chain it to the Set Entity Control node.

img

We also need to tell the server that it should use the input that comes from the Client for the xbot Entity: we can use the Remap Input node to do so.

img

We also need to tell the Client that the input for the Xbot entity has to be taken from its own keyboard/mouse input: we’ll add the Remap input node as well once the Acquired Entity Control event is triggered on the client.

img

Now that we’ve remapped the input to come from the correct source, let’s convert all of the Poll Key and Poll Mouse Motion nodes into Poll Key for Entity and Poll Mouse Motion for Entity in the graph of the xbot entity. This will make sure that instead of blindly using the local input to drive this entity, we’ll be a bit smarter and use the correct input: either the local one (Client) or the remote one that comes from the Client (Server). You can do this by using the “Convert” feature: right-click on a node and click “Convert” to see all the nodes that you can convert it into.

img

So the Client is now transmitting the input to the Server (while using the keyboard/mouse input to drive its own player entity), and the Server is using that input to drive its own simulation.

Part 4: Supporting Multiple players

In this tutorial you’ll learn how to spawn an entity every time a client connects. This tutorial builds on the learnings of the previous tutorial: Part 3

Note: Before starting, make sure that you have the Networking feature flag enabled. You can do that in the Tools→Feature Flags menu.

Video

Tutorial

Let’s add support for multiple Clients to join the same world.

Instead of referencing a static Entity in the scene, we now want to spawn a new xbot entity every time a client connects to the server: we can do that simply by converting the Scene Entity node into a Spawn Entity node.

img

The other problem we need to solve to effectively support multiple players is that earlier the Camera entity was itself available in the scene. Now the camera entity is part of the dynamic entity that we’ll spawn once a client connects, so the Set Camera node has to be executed inside the graph of the xbot entity asset itself, once the Acquired Entity Control event is triggered.

img

And now our server fully supports multiple players joining: every time a Client connects, it will spawn a new xbot entity, set its control, and remap its input. Once the Client is notified that it has acquired control of that entity, it will set the camera and remap the input as well.

img

Part 5: Basic Graph Variable replication

In this tutorial you’ll learn how to replicate a graph variable across the network. This tutorial builds on the learnings of the previous tutorial: Part 4

Note: Before starting, make sure that you have the Networking feature flag enabled. You can do that in the Tools→Feature Flags menu.

Video

Tutorial

Even if we’re supporting multiple players, when they move and look around the animation is not smooth everywhere: if clients A and B are connected, nobody is telling client B where client A is moving or looking, so client B can only rely on the state updates that come every second from the server to update the position and orientation of client A in its own simulation.

Let’s first tackle the problem of broadcasting the facing direction of a client to all the other clients.

If you take a look at the Pan subgraph of the xbot entity, you’ll see that we are computing a small angle offset every frame and adding that to the current entity rotation via a quaternion multiplication. But Client B is not receiving the input for Client A and so this computation will always result in a null rotation.

img

So we need to do three things:

  1. make sure that each client accumulates the correct angle for its own player entity in a graph variable and uses that directly to drive the orientation
  2. replicate this variable from each client to the server
  3. broadcast the variable from the server to all the other clients

-To accomplish 1, we can use the Set and Get float variable nodes and reorganize our graph a bit.

img

-To solve 2, we simply convert the Set float variable node into a Float variable network replication node, specifying that only clients should set and replicate the variable by using the Network is of type node.

img

Note: when you pass a null connection to the network is of type node you are implicitly asking the type of the “local” simulation.

-Step 3 is automatically done by the server: the moment it receives the variable update from the client, it will automatically replicate the change to all the connected nodes, so we don’t have to do anything for this.

And now each client has the correct information about where every other client is looking, and can animate the orientation of other players smoothly.

Part 6: Smooth Animation

In this tutorial you’ll learn how to synchronize an animation across multiple simulation instances. This tutorial builds on the learnings of the previous tutorial: Part 5

Note: Before starting, make sure that you have the Networking feature flag enabled. You can do that in the Tools→Feature Flags menu.

Video

Tutorial

Even if the orientation of each player is now correctly broadcast, movement is not: we’d like each client to receive the correct information about where every other client is moving, so that it can play the animation correctly.

Take a look at the WASD subgraph section of the xbot entity:

img

We want to inject some float variable network replication nodes on the float (green) connections to make it so that only the client that controls a specific player entity sets and replicates the movement variables: the value will be transmitted to the server, which will in turn broadcast it to all the other clients. (Exactly the same strategy we used for the facing direction). On the other hand, all the simulations running should Get the variable value (either from their own computation or from the network) and pass that value to the Animation Set Variable nodes.

img

Important note: even if Set if is true in the float variable network replication node, the variable won’t actually be set if you don’t have control over that particular entity. Otherwise, each simulation instance would override the variables of every other client as well as its own.

Now all the player entities move smoothly on all the clients, and we added multiplayer support to the player movement code by just “hijacking” those four connections and injecting some network replication nodes in the middle of them.

img

Part 7: Spawning Entities

In this tutorial you’ll learn how to spawn an entity with prediction. This tutorial builds on the learnings of the previous tutorial: Part 6

Note: Before starting, make sure that you have the Networking feature flag enabled. You can do that in the Tools→Feature Flags menu.

Video

Tutorial

Let’s see how we can trigger the spawning of an entity on the server by pressing a button on the client.

First of all, let’s make sure that the single-player version of the spawning works: set up a single spawn entity node that is triggered when the P button is pressed.

img

Then go back to single player mode by just removing all the instances in the network settings and starting the simulation again, verifying that the entity is correctly spawned as the button is pressed.

Now run a client and a server instance at the same time and try to press the button while the focus is on each window in turn:

-if you press the button while the client has focus, nothing will happen as the client has a passive gamestate and so even if the event is triggered the client won’t actually spawn any entity.

-if the button is pressed while the server window is in focus, the entity will actually be spawned in the server simulation, and from then on its changes will be propagated to the other connected nodes.

To fix this, let’s convert the Poll key node into a Poll key for Entity node:

-If we now try to press the button while the focus is on the server nothing will happen, as the server is ignoring the local input for the player

-pressing the button on the client will instead trigger the spawning event on the server (as the input is being replicated)

img

With the current setup, the client has to wait for its command to get to the server, be executed, and the entity state changes to come back, before it can actually see the entity. If you were using this mechanism to spawn a projectile, this would mean waiting potentially half a second or more before the player gets any feedback… definitely unacceptable.

Let’s use the spawn entity with prediction node to let the client create a local copy of the entity that has to be spawned, so that while the packets travel across the internet, the client has already “predicted” the new entity creation locally.

But before we can use that node we have to make sure that the spawning is done as a consequence of an event (as the event information is what’s used to do the “matching” between the local fake entity and the entity that will later come from the server).

img

Also, we’ll trigger the event only on the client and replicate the event itself via the trigger event network replication node:

img

We can now finally just convert our spawn entity node into a spawn entity with prediction node, and the client will correctly spawn (and later match) a fake entity in its own local simulation to give immediate feedback to the player about what happened.

img

UI

The Machinery's UI system is an Immediate Mode GUI (IMGUI). Besides the information you can find here or in our API Documentation, there are several blog posts you should check out:

Build Custom UI Controls

These walkthroughs will teach you how to extend The Machinery's UI system.

In the following parts 1 - 3 we will cover these topics:

Build Custom UI Controls, Part I

Hi, we are starting a 3-part tutorial series about The Machinery UI system. In Part I, we’ll talk about the basics and create a custom circular button. In Part II, we’ll create a custom button with textures support. To show the results of the first two parts, we’ll be using a simple custom tab, so in Part III, we’ll see how to set up your UI and render it on screen.

During this tutorial, you’ll implement the tm_ui_custom_controls plugin, which will contain the tm_ui_custom_controls_api for drawing custom UI controls, and the tm_ui_custom_controls_tab custom tab to visualize the results in a separate tab.

Table of Contents

Environment setup

During this tutorial, you’ll build the tm_ui_custom_controls plugin, which contains the tm_ui_custom_controls_api for drawing custom controls and the tm_ui_custom_controls_tab custom tab to visualize the results in a separate tab.

The source code is hosted at https://github.com/raphael-ourmachinery/tm-custom-control-tutorial. Copy the contents of the skel/ folder to a separate directory; the final result is available in the part1/ folder.

Below is a list of the files in our project:

  • skel/libs.json: specify premake5 binaries that will be downloaded from The Machinery server;
  • skel/(premake5.lua/build.bat/build.sh): build scripts that use tmbuild.exe to build our shared library. Note that we are targeting TM_SDK_DIR/bin/plugins, so our plugin will be automatically loaded by the engine. You’ll need to set the TM_SDK_DIR environment variable pointing to The Machinery directory.
  • skel/src/custom_tab.(c/h): this is a minimal version of the custom tab sample, which makes it easier to see our custom button;
  • skel/src/ui_custom_controls_loader.(c/h): loads the necessary APIs; it contains the definition of tm_load_plugin(), which is needed for our plugin to be loaded by the plugin system;
  • skel/src/ui_custom_controls.(c/h): implementation of our circular button; later you can extend the API with your own custom controls too.

Circular Custom Button:

The Machinery uses an immediate-mode UI. You can read more about it in the One Draw Call UI blog post. To draw 2D shapes, we’ll be using the [tm_draw2d_api](https://ourmachinery.com/apidoc/plugins/ui/draw2d.h.html#structtm_draw2d_api) implemented in [draw2d.h](https://ourmachinery.com/apidoc/plugins/ui/draw2d.h.html), which supplies functions to draw basic 2D shapes. As we are implementing a circular button, we’ll need to draw a circle using the following function:

tm_draw2d_api->fill_circle(tm_draw2d_vbuffer_t *vbuffer, tm_draw2d_ibuffer_t *ibuffer, const tm_draw2d_style_t *style, tm_vec2_t pos, float radius)

Note that this function takes a vertex and an index buffer as arguments. In the following tutorials we’ll learn more about them; for now, all you need to know is that [tm_draw2d_api](https://ourmachinery.com/apidoc/plugins/ui/draw2d.h.html#structtm_draw2d_api) will fill them, and that we need to call tm_ui_api->buffers() to get the buffers. Later the engine will use tm_ui_renderer_api to draw the UI with one draw call.

Let’s add some more information to tm_ui_circular_button_t and use it in circular_button():

  • ui_custom_controls.h:
    ...
    
    typedef struct tm_ui_circular_button_t
    {
        uint64_t id;
    
        tm_vec2_t center;
        float radius;
        tm_color_srgb_t background_color;
    } tm_ui_circular_button_t;
    
    ...
  • ui_custom_controls.c:
    ...
    
    bool circular_button(struct tm_ui_o *ui, const struct tm_ui_style_t *uistyle, const tm_ui_circular_button_t *c)
    {
        // tm_ui_buffer_t contains information needed when creating a custom control
        tm_ui_buffers_t uib = tm_ui_api->buffers(ui);
    
        // convert tm_ui_style_t to tm_draw2d_style_t
        tm_draw2d_style_t style;
        tm_ui_api->to_draw_style(ui, &style, uistyle);
        style.color = c->background_color;
    
        tm_draw2d_api->fill_circle(uib.vbuffer, uib.ibuffers[uistyle->buffer], &style, c->center, c->radius);
    
        return false;
    }
    
    ...

For the control's interaction logic, we'll need interfaces from ui_custom.h; in fact, all of the editor's UI is implemented using them. The tm_ui_buffers_t that we got earlier has two important members: tm_ui_activation_t, which keeps information about the activation and hovering state of UI controls, and tm_ui_input_state_t, which maintains the input state. The table below lists some important concepts of our UI system. You can read it at once, or skip it for now and return when necessary:

ID: Each control in the UI has a unique 64-bit identifier. Since controls are not explicitly created and destroyed, the ID is the only thing that identifies a control from one frame to the next.

You create a new ID by calling tm_ui_api->make_id(). IDs are assigned sequentially by the UI. You have to be a bit careful with this if you have controls that sometimes are visible and sometimes not, such as context menus. If you only generate the ID for the context menu when it is visible, it will change the numbering of the subsequent controls depending on whether the menu is visible or not. Since controls are identified by their IDs, this can lead to controls being misidentified.

A good strategy is to generate the IDs for all the controls that you might show upfront, so that the ID assignment is stable.

Note: We may change this in the future if we can find a more stable way of assigning IDs.
Hover: The UI system keeps track of which control the mouse pointer is hovering over, by storing its ID in a hover variable.

You never set the hover variable directly. Instead, in your control’s update, you check if the mouse is over your control with tm_ui_api->is_hovering(), and if it is you set next_hover to its ID. At the end of the frame, the UI assigns the value of next_hover to the hover variable.

The reason for this two-step process is that multiple controls or objects might be drawn on top of each other in the same area of the UI. The last object drawn will be on top and we want the hover variable to reflect whatever the user sees on the screen.
Overlay: The UI is actually drawn in two layers, one Base and one Overlay layer. The controls in the overlay layer are drawn on top of the controls in the Base layer, even if they are drawn earlier in the draw order. We use the Overlay layer for things like drop-down menus that should appear on top of other controls.

If an earlier control set next_hover to a control in the Overlay layer, this shouldn’t be changed by a later control in the base layer, because the Overlay layer control will appear on top of that one. We use a variable next_hover_in_overlay to keep track of if the current next_hover value represents an ID in the Overlay layer. In this case, it shouldn’t be changed by base layer controls.

In practice, the Overlay layer is implemented by keeping track of two index buffers in the drawing system, one for the base layer and one for the overlay layer. (Note that the two layers still share a single vertex buffer.) At the end of drawing, we merge the two buffers into one, by simply concatenating the Overlay buffer at the end of the base Buffer, thus making sure the overlay controls are drawn later, on top of the base control. With this approach, we can still draw everything with a single draw call.

Note that as a consequence of how we render our UI — we only have a single Vulkan context and everything is drawn with the same draw call — drop-down menus and other pop-up controls cannot extrude past the edges of the system window — everything is drawn with the system window rect.
Active: Similar to Hover, Active is a variable that keeps track of the currently active control, i.e. the control the user is currently interacting with.

We need to keep track of the active control for two reasons. First, we often want to draw the active control in a special way, such as showing a highlight and a caret in an active text box.

Second, the active control typically needs to keep track of some extra state. For example, an active slider needs to keep track of the slider’s initial position so that it can pop back to that if the user drags the mouse outside the slider.

The UI system uses a single large char[] buffer to keep track of the current active control’s state. This buffer is shared by all controls. Since there can only be one active control at a time, only one control will be using this buffer at a time. When a new control becomes active the buffer is zeroed (this should be a valid initial state for the active data).

Typically a control becomes active if the user presses the left mouse button while the control is being hovered. In this case, the control will call tm_ui_api->set_active(). Though there are other ways a control can become active too, such as by tabbing. To implement tab focus, you need to call tm_ui_api->focus_on_tab() in the control’s code.
Clipping: The drawing system has support for Clipping Rects. This is mostly useful when you need to clip text to a control’s rect. You create a new clipping rect by calling tm_draw2d_api->add_clip_rect() or tm_draw2d_api->add_sub_clip_rect(). This gives you a clipping ID that can be passed as part of the Draw or UI style.
Responder scopes: Responder Scopes are used to control which controls can respond to keyboard input. Typically, when a control is Active, it, and all its parent controls can respond to keyboard input. For example, if the control is inside a scrollview, the scrollview will respond to scroll keypresses, while the tab that hosts the scrollview may respond to commands such as Ctrl+F.

Being an immediate GUI system, The Machinery doesn’t have an explicit concept of “child” and “parent” controls. Instead we use the concept of Responder Scopes. A parent control first calls begin_responder_scope(), then draws all its child controls and finally calls end_responder_scope(). This establishes a parent-child relationship for the purpose of keyboard interaction.

When a control becomes Active, the current set of Responder Scopes is saved as the Responder Chain. This is the list of controls that can respond to a keyboard action. To test if your control should act on keyboard input, you can call in_responder_chain().

Note: We currently don’t have any mechanism to check if other controls in the Responder Chain have “consumed” keyboard input, so if you have multiple controls in the same chain that respond to the same keyboard command, you may run into trouble.

Below is a higher-level view of the steps needed to implement our interaction logic:

  1. Create an id with tm_ui_api->make_id();
  2. Check if the button is already active with tm_ui_api->is_active(). It returns a pointer to a 16 KB buffer that you can use to keep custom data needed while the button is active;
  3. Check if the mouse is hovering over the button, and set the activation's next_hover variable accordingly. At the end of the frame, the UI system will set hover to our control id in case no other control changed next_hover after us;
  4. In case the hover variable contains our control id and the mouse is pressed, set it as the active control with tm_ui_api->set_active(). A pointer to the 16 KB buffer is returned so you can cast it to the control's custom data; note that we need to pass a hash to the function identifying this data;
  5. In case our button is active and the mouse was released, the control is considered clicked, and we call tm_ui_api->clear_active() to deactivate it;
  6. Now we can check if the mouse is hovering over our control and use either the active or hovering color, depending on whether we are the active control or not;

With this in mind, the complete code will be the following:

  • ui_custom_controls.h:
    ...
    typedef struct tm_ui_circular_button_data_t {
        const char *name;
        uint32_t frames_active;
    } tm_ui_circular_button_data_t;
    
    typedef struct tm_ui_circular_button_t
    {
        uint64_t id;
    
        tm_vec2_t center;
        float radius;
        tm_color_srgb_t background_color;
        tm_color_srgb_t hover_color;
        tm_color_srgb_t clicked_color;
    
        const char *text;
        const struct tm_color_srgb_t text_color;
    } tm_ui_circular_button_t;
    ...
  • ui_custom_controls.c:
    ...
    
    bool circular_button(struct tm_ui_o *ui, const struct tm_ui_style_t *uistyle, const tm_ui_circular_button_t *c)
    {
        // Step 1
        // tm_ui_buffer_t contains information needed when creating a custom control
        tm_ui_buffers_t uib = tm_ui_api->buffers(ui);
        const uint64_t id = c->id ? c->id : tm_ui_api->make_id(ui);
        
        // Step 2
        // is_active will return a pointer for user defined data up to 16KB
        tm_ui_circular_button_data_t *active = (tm_ui_circular_button_data_t *)tm_ui_api->is_active(ui, id, TM_UI_ACTIVE_DATA__CIRCULAR_BUTTON);
        if (active) {
            TM_LOG("active data -> name: %s, frames_active: %u\n", active->name, active->frames_active);
            active->frames_active++;
        }
    
        // convert tm_ui_style_t to tm_draw2d_style_t
        tm_draw2d_style_t style;
        tm_ui_api->to_draw_style(ui, &style, uistyle);
        style.color = c->background_color;
        
        // Step 3
        bool clicked = false;
        bool inside = tm_vec2_in_circle(uib.input->mouse_pos, c->center, c->radius);
        if (inside)
            uib.activation->next_hover = id;
        
        // Step 4
        if (uib.activation->hover == id && uib.input->left_mouse_pressed) {
            active = tm_ui_api->set_active(ui, id, TM_UI_ACTIVE_DATA__CIRCULAR_BUTTON);
            if (active)
                *active = (tm_ui_circular_button_data_t){ .name = "circular_button", .frames_active = 0 };
            tm_ui_api->set_responder_chain(ui, 0);
        }
        
        // Step 5
        if (active && uib.input->left_mouse_released) {
            clicked = inside;
            tm_ui_api->clear_active(ui);
        }
        
        // Step 6
        if (inside) {
            if (active)
                style.color = c->clicked_color;
            else if (uib.activation->hover == id)
                style.color = c->hover_color;
        }
    
        tm_ui_api->reserve_draw_memory(ui);
        tm_draw2d_api->fill_circle(uib.vbuffer, uib.ibuffers[uistyle->buffer], &style, c->center, c->radius);
    
        return clicked;
    }
    
    ...

Drawing text

The last thing we need is to draw some text inside our button. You'll need to call tm_draw2d_api->draw_glyphs() to fill the UI buffers with text information. It takes as one of its arguments an array of glyph indices that point to the corresponding tm_font_glyph_t glyphs inside the tm_font_t structure. To get this information, we first need to convert the desired text to an array of codepoints using tm_unicode_api->utf8_decode_n() and pass them to tm_font_api->glyphs(). Thus, add the following lines to the source code:

With this in mind, the complete code will be the following:

  • ui_custom_controls.h:
    ...
    
    typedef struct tm_ui_circular_button_t
    {
       ...
        uint32_t icon;
        const char *text;
        const struct tm_color_srgb_t text_color;
    } tm_ui_circular_button_t;
    ...
  • ui_custom_controls.c:
    ...
    
    bool circular_button(struct tm_ui_o *ui, const struct tm_ui_style_t *uistyle, const tm_ui_circular_button_t *c)
    {
        ...
        // Inscribe a quad in button circle
        const float side = c->radius * sqrtf(2);
        tm_rect_t text_rect = tm_rect_center_dim(c->center, (tm_vec2_t){ side, side });
    
        tm_ui_api->reserve_draw_memory(ui);
        style.clip = tm_draw2d_api->add_sub_clip_rect(uib.vbuffer, style.clip, text_rect);
    
        // Get glyphs from our text
        uint16_t glyphs[128];
        uint32_t n = 0;
        {
            uint32_t codepoints[128];
            n = tm_unicode_api->utf8_decode_n(codepoints, 128, tm_or(c->text, ""));
            tm_font_api->glyphs(style.font->info, glyphs, codepoints, n); // convert only the decoded codepoints
        }
        tm_vec2_t text_pos = {
            .x = c->center.x - side / 2.f,
            .y = middle_baseline(text_rect.y, text_rect.h, style.font->info, 1.f),
        };
        style.color = c->text_color;
        tm_draw2d_api->draw_glyphs(uib.vbuffer, uib.ibuffers[uistyle->buffer], &style, text_pos, glyphs, n);
    
        return clicked;
    }
    
    ...

We now have a custom button implementation that can be used across your projects. Please extend it and show us your results.
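As a quick illustration of how the finished control might be called, for example from a tab's ui() function (the api member name, the colors, and the rect used here are assumptions for this sketch):

// Hypothetical usage from some ui callback; `rect` is assumed to be the rect passed to that callback.
tm_ui_circular_button_t b = {
    .center = { rect.x + rect.w * 0.5f, rect.y + rect.h * 0.5f },
    .radius = 40.0f,
    .background_color = { .r = 80, .g = 80, .b = 80, .a = 255 },
    .hover_color = { .r = 120, .g = 120, .b = 120, .a = 255 },
    .clicked_color = { .r = 200, .g = 200, .b = 200, .a = 255 },
    .text = "Click me",
    .text_color = { .r = 255, .g = 255, .b = 255, .a = 255 },
};

if (tm_ui_custom_controls_api->circular_button(ui, uistyle, &b))
    TM_LOG("Circular button clicked");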

Toolbars and Overlays

In the Machinery, tabs can provide toolbars. If you wish to use custom toolbars within The Machinery tabs, you do not need to use anything within tm_toolbar_api. The docking system will ask your tab for a list of toolbars to draw each frame. See tm_tab_vt->toolbars().

In this walkthrough, we will learn how to write a little toolbar for our newly added tab. It requires you to know how our plugin system works.

Table of Contents

Implement a Toolbar in a Tab

To begin with, we need to create a new tab plugin. We go to File -> New Plugin -> Tab.

A file dialog pops up, and we can decide where to store our plugin.

We open the custom_tab.c file (or whatever we called it) with our favorite editor.

We search for the line in which we define the tab itself:

static tm_tab_vt *custom_tab_vt = &(tm_tab_vt){
    .name = TM_CUSTOM_TAB_VT_NAME,
    .name_hash = TM_CUSTOM_TAB_VT_NAME_HASH,
    .create_menu_name = tab__create_menu_name,
    .create = tab__create,
    .destroy = tab__destroy,
    .title = tab__title,
    .ui = tab__ui,
};

To our definition, we add a toolbars() function. This function returns a C array of toolbar definitions. The array is allocated with the passed-in temporary allocator.

Note: A temporary allocator (tm_temp_allocator_api) Provides a system for temporary memory allocations. I.e., short-lived memory allocations that are automatically freed when the allocator is destroyed. Temp allocators typically use a pointer bump allocator to allocate memory from one or more big memory blocks and then free the entire block when the allocator is destroyed.

Important: You need to include the API header first (#include <foundation/temp_allocator.h>) and get the API from the registry!
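For example, following the same pattern used for the other APIs in this book:

// At the top of the file:
static struct tm_temp_allocator_api *tm_temp_allocator_api;

#include <foundation/temp_allocator.h>

// In tm_load_plugin():
tm_temp_allocator_api = tm_get_api(reg, tm_temp_allocator_api);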

static tm_tab_vt *custom_tab_vt_toolbars = &(tm_tab_vt){
    .name = TM_CUSTOM_TAB_VT_NAME,
    .name_hash = TM_CUSTOM_TAB_VT_NAME_HASH,
    .create_menu_name = tab__create_menu_name,
    .create = tab__create,
    .destroy = tab__destroy,
    .title = tab__title,
    .ui = tab__ui,
    .toolbars = tab__toolbars, // we added this line
};

After we have added this, we actually need to define the function itself:

static struct tm_toolbar_i *tab__toolbars(tm_tab_o *tab,
                                          tm_temp_allocator_i *ta) {}

Within this function, we define our toolbars. Our toolbar will have one essential job: it will have a button that logs "Hello World".

To make this work, we need to create a C array of tm_toolbar_i objects and add our toolbar to it. This interface expects the following fields:

id: An application-wide unique ID for the toolbar. Cannot be zero.
owner: A pointer that can be accessed through the toolbar argument to the functions of this struct. Often used to store state for the toolbar; for example, if you are drawing toolbars inside a tab, you might want to store a pointer to that tab here.
ui: Called when ui() of [tm_toolbar_api](https://ourmachinery.com/apidoc/plugins/ui/toolbar.h.html#structtm_toolbar_api) wants to draw the toolbar. Make sure to respect draw_mode and return the rect that encompasses all the drawn controls. For toolbars inside horizontal and vertical containers, you can use [tm_toolbar_rect_split_off()](https://ourmachinery.com/apidoc/plugins/ui/toolbar.h.html#tm_toolbar_rect_split_off()) and [tm_toolbar_rect_advance()](https://ourmachinery.com/apidoc/plugins/ui/toolbar.h.html#tm_toolbar_rect_advance()) to easily manage the rect sizes while drawing your toolbar. If you need to store state, then make sure to set owner when you create the [tm_toolbar_i](https://ourmachinery.com/apidoc/plugins/ui/toolbar.h.html#structtm_toolbar_i) object and get it from the passed toolbar pointer.
draw_mode_mask: A combination of supported draw modes, ORed-together values of [enum tm_toolbar_draw_mode](https://ourmachinery.com/apidoc/plugins/ui/toolbar.h.html#enumtm_toolbar_draw_mode). The ui function will be passed the currently used draw mode and is expected to handle it.

Note: For a complete list please check the documentation

TM_TOOLBAR_DRAW_MODE_HORIZONTAL: You can draw the toolbar horizontally.
TM_TOOLBAR_DRAW_MODE_VERTICAL: You can draw the toolbar vertically.
TM_TOOLBAR_DRAW_MODE_WIDGET: The toolbar is an overlay.

Let us provide the essential things (a sketch of a possible implementation follows below):

  1. The id, to be able to identify the toolbar.
  2. The ui function, to be able to draw something.
  3. The draw_mode_mask, to indicate where we want the toolbar to be drawn.
static struct tm_toolbar_i *tab__toolbars(tm_tab_o *tab,
                                          tm_temp_allocator_i *ta) {}
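A minimal sketch of how the function body could look, assuming tm_carray_temp_push() from foundation/carray.inl is used to build the array, and that toolbar__ui (defined below) has been declared above this function. The toolbar id of 1 is arbitrary, but it must be non-zero:

static struct tm_toolbar_i *tab__toolbars(tm_tab_o *tab, tm_temp_allocator_i *ta)
{
    // Build a carray of toolbars in temp memory; the docking system reads it this frame.
    tm_toolbar_i *toolbars = 0;
    tm_toolbar_i t = {
        .id = 1,
        .owner = tab,
        .ui = toolbar__ui,
        .draw_mode_mask = TM_TOOLBAR_DRAW_MODE_HORIZONTAL | TM_TOOLBAR_DRAW_MODE_VERTICAL,
    };
    tm_carray_temp_push(toolbars, t, ta);
    return toolbars;
}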

In our UI function, we can add the button via the [tm_ui_api](https://ourmachinery.com/apidoc/plugins/ui/ui.h.html#structtm_ui_api) and log the string "Hello World" with the logger API when it is clicked.

Note: You need to include plugins/ui/ui.h and foundation/log.h, as well as get the APIs from the registry first!

static tm_rect_t toolbar__ui(tm_toolbar_i *toolbar, struct tm_ui_o *ui,
                             const struct tm_ui_style_t *uistyle,
                             tm_rect_t toolbar_r,
                             enum tm_toolbar_draw_mode dm) {
  // ui code...
  return toolbar_r;
}
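A rough sketch of what the ui code could look like; the tm_ui_button_t fields used here (rect, text) and the fixed button width are assumptions for this sketch, and dm is ignored for simplicity:

static tm_rect_t toolbar__ui(tm_toolbar_i *toolbar, struct tm_ui_o *ui,
                             const struct tm_ui_style_t *uistyle,
                             tm_rect_t toolbar_r,
                             enum tm_toolbar_draw_mode dm) {
  // Carve out a small rect at the start of the toolbar for our button.
  const tm_rect_t button_r = { toolbar_r.x, toolbar_r.y, 80, toolbar_r.h };

  // Draw the button and log when it is clicked.
  if (tm_ui_api->button(ui, uistyle, &(tm_ui_button_t){ .rect = button_r, .text = "Hello" }))
    TM_LOG("Hello World");

  // Return the rect that encompasses all the controls we drew.
  return button_r;
}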

Tab Overlays

An image with visualization modes enabled using the Visualize overlay, as well as a Renderer Statistics overlay. The Visualize overlay is found in Top right toolbar → Render → Lighting Module → Show as overlay. The Statistics overlay is found in Top right toolbar → Statistics.

As an extension to the dockable toolbars, the engine supports overlays that hover on top of the tabs. Also, any toolbar can be pulled off and made into a hovering overlay.

An overlay is just a toolbar that does not belong to any of the four toolbar containers that run along the edge of the tab. Toolbars have three rendering modes: horizontal, vertical, and widget. The widget mode is new; it is the richer, window-like mode seen in the picture above.

In the scene and simulate tabs, we’ve added:

  • A rendering visualization overlay. Found in Render → Lighting Module → Show as overlay in the top right toolbar.
  • A Statistics button (also top right toolbar) that makes it possible to popup statistics overlays, previously found within the Statistics tab.

The tab should return all toolbars it wishes to draw each frame; see tm_tab_vt->toolbars(). If you wish to support widget mode drawing, make sure to set the bitmask tm_toolbar_i->draw_mode_mask to a value that contains [TM_TOOLBAR_DRAW_MODE_WIDGET](https://ourmachinery.com/apidoc/plugins/ui/toolbar.h.html#enumtm_toolbar_draw_mode).
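
For example, when filling in the toolbar (here toolbar is the tm_toolbar_i you are setting up), the mask might look like this:

toolbar.draw_mode_mask = TM_TOOLBAR_DRAW_MODE_HORIZONTAL | TM_TOOLBAR_DRAW_MODE_VERTICAL | TM_TOOLBAR_DRAW_MODE_WIDGET;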

Toolbars are generalized and they are not coupled to the docking system and the tabs, so you could use them within other contexts if you wish.

How to use toolbars outside of The Machinery tabs

See the documentation under How to use toolbars outside of The Machinery tabs in toolbar.h.

Creating tab layouts through code

The Machinery allows you to fully customize the editor layout. For personal layouts this system can be used without writing any code (see Interface Customizations for more information), but you might want to create custom default layouts that are defined procedurally; this gives you more control when loading and saving the layout.

Creating layouts in code can be done through the tm_tab_layout_api, where various functions are available for tab management. In this tutorial we'll go over how the default workspace is created using the save_layout function.

In order to create a layout for The Machinery editor we need access to the editor settings. This is done through tm_the_machinery_api. This tutorial uses the tm_the_machinery_create_layout_i interface in order to gain access to the settings. In these settings we have access to the window layouts, which is the subobject we want to append our layout to. The first thing we should do, however, is check whether our layout already exists so we don't create a new one on every startup.

static void create_layout(tm_application_o *app) {
  TM_INIT_TEMP_ALLOCATOR(ta);

  // Query the settings object and Truth from The Machinery API.
  tm_tt_id_t app_settings_id;
  tm_the_truth_o *tt = tm_the_machinery_api->settings(app, &app_settings_id);
  const tm_tt_id_t window_layouts_id = tm_the_truth_api->get_subobject(
      tt, tm_tt_read(tt, app_settings_id),
      TM_TT_PROP__APPLICATION_SETTINGS__WINDOW_LAYOUTS);

  // Check whether our layout already exists.
  const tm_tt_id_t *window_layouts = tm_the_truth_api->get_subobject_set(
      tt, tm_tt_read(tt, window_layouts_id),
      TM_TT_PROP__WINDOW_LAYOUTS__LAYOUTS, ta);
  const uint32_t num_window_layouts = (uint32_t)tm_carray_size(window_layouts);
  for (uint32_t i = 0; i < num_window_layouts; ++i) {
    const tm_strhash_t name_hash = tm_the_truth_api->get_string_hash(
        tt, tm_tt_read(tt, window_layouts[i]), TM_TT_PROP__WINDOW_LAYOUT__NAME);
    if (TM_STRHASH_EQUAL(name_hash, TM_LAYOUT_NAME_HASH)) {
      TM_SHUTDOWN_TEMP_ALLOCATOR(ta);
      return;
    }
  }
  TM_SHUTDOWN_TEMP_ALLOCATOR(ta);
}

After this we can start to define our actual tab layout. This is done through the tm_tab_layout_t struct, in which we can recursively define our tab layout with three distinct options per tabwell.

  • We can split the tabwell horizontally, creating top and bottom child tabwells.
  • We can split the tabwell vertically, creating left and right child tabwells.
  • We can define (up to 3) tabs that should be in this tabwell.
tm_tab_layout_t layout = {
    .split = TM_TAB_LAYOUT_SPLIT_TYPE__HORIZONTAL,
    .bias = 0.25f,
    .top =
        &(tm_tab_layout_t){
            .split = TM_TAB_LAYOUT_SPLIT_TYPE__VERTICAL,
            .bias = 0.67f,
            .left =
                &(tm_tab_layout_t){
                    .split = TM_TAB_LAYOUT_SPLIT_TYPE__VERTICAL,
                    .bias = -0.67f,
                    .right =
                        &(tm_tab_layout_t){.tab = {TM_SCENE_TAB_VT_NAME_HASH}},
                    .left =
                        &(tm_tab_layout_t){.tab = {TM_TREE_TAB_VT_NAME_HASH}},
                },
            .right =
                &(tm_tab_layout_t){.tab = {TM_PROPERTIES_TAB_VT_NAME_HASH}},
        },
    .bottom =
        &(tm_tab_layout_t){
            .split = TM_TAB_LAYOUT_SPLIT_TYPE__VERTICAL,
            .bias = 0.5f,
            .left =
                &(tm_tab_layout_t){
                    .split = TM_TAB_LAYOUT_SPLIT_TYPE__VERTICAL,
                    .bias = -0.5f,
                    .right =
                        &(tm_tab_layout_t){
                            .tab = {TM_ASSET_BROWSER_TAB_VT_NAME_HASH}},
                    .left = &(
                        tm_tab_layout_t){.tab = {TM_CONSOLE_TAB_VT_NAME_HASH}},
                },
            .right = &(tm_tab_layout_t){.tab = {TM_PREVIEW_TAB_VT_NAME_HASH}},
        },
};

Defining the tabs is relatively straightforward: you define them using their name hash. Splitting a tabwell horizontally or vertically, however, requires a bias parameter. This defines the size ratio of the two children. Zero means both tabs are of equal size, whereas 1 means that the primary tab (left or top) fully encompasses the tabwell whilst the secondary tab (right or bottom) is hidden. Negative values allow you to use the secondary tab as if it were the primary tab.

const tm_tt_id_t layout_id = tm_the_truth_api->create_object_of_hash(
    tt, TM_TT_TYPE_HASH__WINDOW_LAYOUT, TM_TT_NO_UNDO_SCOPE);
tm_the_truth_object_o *layout_w = tm_the_truth_api->write(tt, layout_id);

tm_the_truth_object_o *layouts_w =
    tm_the_truth_api->write(tt, window_layouts_id);
tm_the_truth_api->add_to_subobject_set(tt, layouts_w,
                                       TM_TT_PROP__WINDOW_LAYOUTS__LAYOUTS,
                                       &layout_w, 1);
tm_the_truth_api->commit(tt, layouts_w, TM_TT_NO_UNDO_SCOPE);

tm_the_truth_api->set_string(tt, layout_w, TM_TT_PROP__WINDOW_LAYOUT__NAME,
                             TM_LAYOUT_NAME);
tm_the_truth_api->set_uint32_t(tt, layout_w, TM_TT_PROP__WINDOW_LAYOUT__ICON,
                               TM_UI_ICON__COLOR_WAND);
tm_the_truth_api->set_float(tt, layout_w, TM_TT_PROP__WINDOW_LAYOUT__WINDOW_X,
                            0.0f);
tm_the_truth_api->set_float(tt, layout_w, TM_TT_PROP__WINDOW_LAYOUT__WINDOW_Y,
                            0.0f);
tm_the_truth_api->set_float(tt, layout_w,
                            TM_TT_PROP__WINDOW_LAYOUT__WINDOW_WIDTH, 1920.0f);
tm_the_truth_api->set_float(tt, layout_w,
                            TM_TT_PROP__WINDOW_LAYOUT__WINDOW_HEIGHT, 1080.0f);

const tm_tt_id_t tabwell_id =
    tm_tab_layout_api->save_layout(tt, &layout, false, TM_TT_NO_UNDO_SCOPE);
tm_the_truth_api->set_subobject_id(tt, layout_w,
                                   TM_TT_PROP__WINDOW_LAYOUT__TABWELL,
                                   tabwell_id, TM_TT_NO_UNDO_SCOPE);

tm_the_truth_api->commit(tt, layout_w, TM_TT_NO_UNDO_SCOPE);

Finally, we can store our layout in the application settings. The top-level object for this is the window layout. It specifies the default position and size of the window if the user decides to instantiate the layout in a new window rather than as a workspace.

Entire Sample

static struct tm_the_machinery_api *tm_the_machinery_api;
static struct tm_temp_allocator_api *tm_temp_allocator_api;
static struct tm_tab_layout_api *tm_tab_layout_api;
static struct tm_the_truth_api *tm_the_truth_api;
#include <foundation/api_registry.h>
#include <foundation/application.h>
#include <foundation/temp_allocator.h>
#include <foundation/the_truth.h>
#include <foundation/carray.inl>

#include <plugins/ui/layouts.h>
#include <plugins/ui/ui_icon.h>
#include <plugins/ui/docking.h>

// tabs:
#include <the_machinery/the_machinery.h>
#include <the_machinery/asset_browser_tab.h>
#include <the_machinery/scene_tab.h>
#include <the_machinery/preview_tab.h>
#include <the_machinery/properties_tab.h>
#include <the_machinery/console_tab.h>
#include <the_machinery/entity_tree_tab.h>

#define TM_LAYOUT_NAME_HASH TM_STATIC_HASH("my_layout", 0xc1b38d6389074e53ULL)
#define TM_LAYOUT_NAME "my_layout"
static void create_layout(tm_application_o *app)
{
    TM_INIT_TEMP_ALLOCATOR(ta);

    // Query the settings object and Truth from The Machinery API.
    tm_tt_id_t app_settings_id;
    tm_the_truth_o *tt = tm_the_machinery_api->settings(app, &app_settings_id);
    const tm_tt_id_t window_layouts_id = tm_the_truth_api->get_subobject(tt, tm_tt_read(tt, app_settings_id), TM_TT_PROP__APPLICATION_SETTINGS__WINDOW_LAYOUTS);

    // Check whether our layout already exists.
    const tm_tt_id_t *window_layouts = tm_the_truth_api->get_subobject_set(tt, tm_tt_read(tt, window_layouts_id), TM_TT_PROP__WINDOW_LAYOUTS__LAYOUTS, ta);
    const uint32_t num_window_layouts = (uint32_t)tm_carray_size(window_layouts);
    for (uint32_t i = 0; i < num_window_layouts; ++i)
    {
        const tm_strhash_t name_hash = tm_the_truth_api->get_string_hash(tt, tm_tt_read(tt, window_layouts[i]), TM_TT_PROP__WINDOW_LAYOUT__NAME);
        if (TM_STRHASH_EQUAL(name_hash, TM_LAYOUT_NAME_HASH))
        {
            TM_SHUTDOWN_TEMP_ALLOCATOR(ta);
            return;
        }
    }
    tm_tab_layout_t layout = {
        .split = TM_TAB_LAYOUT_SPLIT_TYPE__HORIZONTAL,
        .bias = 0.25f,
        .top = &(tm_tab_layout_t){
            .split = TM_TAB_LAYOUT_SPLIT_TYPE__VERTICAL,
            .bias = 0.67f,
            .left = &(tm_tab_layout_t){
                .split = TM_TAB_LAYOUT_SPLIT_TYPE__VERTICAL,
                .bias = -0.67f,
                .right = &(tm_tab_layout_t){.tab = {TM_SCENE_TAB_VT_NAME_HASH}},
                .left = &(tm_tab_layout_t){.tab = {TM_TREE_TAB_VT_NAME_HASH}},
            },
            .right = &(tm_tab_layout_t){.tab = {TM_PROPERTIES_TAB_VT_NAME_HASH}},
        },
        .bottom = &(tm_tab_layout_t){
            .split = TM_TAB_LAYOUT_SPLIT_TYPE__VERTICAL,
            .bias = 0.5f,
            .left = &(tm_tab_layout_t){
                .split = TM_TAB_LAYOUT_SPLIT_TYPE__VERTICAL,
                .bias = -0.5f,
                .right = &(tm_tab_layout_t){.tab = {TM_ASSET_BROWSER_TAB_VT_NAME_HASH}},
                .left = &(tm_tab_layout_t){.tab = {TM_CONSOLE_TAB_VT_NAME_HASH}},
            },
            .right = &(tm_tab_layout_t){.tab = {TM_PREVIEW_TAB_VT_NAME_HASH}},
        },
    };
    const tm_tt_id_t layout_id = tm_the_truth_api->create_object_of_hash(tt, TM_TT_TYPE_HASH__WINDOW_LAYOUT, TM_TT_NO_UNDO_SCOPE);
    tm_the_truth_object_o *layout_w = tm_the_truth_api->write(tt, layout_id);

    tm_the_truth_object_o *layouts_w = tm_the_truth_api->write(tt, window_layouts_id);
    tm_the_truth_api->add_to_subobject_set(tt, layouts_w, TM_TT_PROP__WINDOW_LAYOUTS__LAYOUTS, &layout_w, 1);
    tm_the_truth_api->commit(tt, layouts_w, TM_TT_NO_UNDO_SCOPE);

    tm_the_truth_api->set_string(tt, layout_w, TM_TT_PROP__WINDOW_LAYOUT__NAME, TM_LAYOUT_NAME);
    tm_the_truth_api->set_uint32_t(tt, layout_w, TM_TT_PROP__WINDOW_LAYOUT__ICON, TM_UI_ICON__COLOR_WAND);
    tm_the_truth_api->set_float(tt, layout_w, TM_TT_PROP__WINDOW_LAYOUT__WINDOW_X, 0.0f);
    tm_the_truth_api->set_float(tt, layout_w, TM_TT_PROP__WINDOW_LAYOUT__WINDOW_Y, 0.0f);
    tm_the_truth_api->set_float(tt, layout_w, TM_TT_PROP__WINDOW_LAYOUT__WINDOW_WIDTH, 1920.0f);
    tm_the_truth_api->set_float(tt, layout_w, TM_TT_PROP__WINDOW_LAYOUT__WINDOW_HEIGHT, 1080.0f);

    const tm_tt_id_t tabwell_id = tm_tab_layout_api->save_layout(tt, &layout, false, TM_TT_NO_UNDO_SCOPE);
    tm_the_truth_api->set_subobject_id(tt, layout_w, TM_TT_PROP__WINDOW_LAYOUT__TABWELL, tabwell_id, TM_TT_NO_UNDO_SCOPE);

    tm_the_truth_api->commit(tt, layout_w, TM_TT_NO_UNDO_SCOPE);
    TM_SHUTDOWN_TEMP_ALLOCATOR(ta);
}

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{

    tm_temp_allocator_api = tm_get_api(reg, tm_temp_allocator_api);
    tm_the_machinery_api = tm_get_api(reg, tm_the_machinery_api);
    tm_tab_layout_api = tm_get_api(reg, tm_tab_layout_api);
    tm_the_truth_api = tm_get_api(reg, tm_the_truth_api);

    tm_add_or_remove_implementation(reg, load, tm_the_machinery_create_layout_i, create_layout);
}

Collaboration

The editor has built-in support for real-time collaboration, allowing multiple people to work together in the same project. All user actions — importing assets, creating and editing entities, etc. — are supported in the collaborative workflow.

If you just want to try out collaboration on your own, you can run the client and the host on the same machine (just start two instances of the_machinery.exe) and connect using the LAN option.


Whose project is edited?

In our collaboration model, one of the users always acts as a host. The host invites others to join her in editing her project. All the changes made by the participants in the session end up in the host’s project and it’s the host’s responsibility to save the project, check in the changes into version control, or do whatever else is needed to make the changes permanent.

WARNING: Please only connect to people you trust. Be aware that Plugin Assets will be sent via a collaboration session as well. The Engine will warn you every time a plugin asset is sent.

Host or Join Sessions

You have three options to host or join a collaboration session:

Host a LAN Server

Host a server on your LAN. The system will choose a free port on your machine for hosting. Other users on the same LAN can join your session by choosing Join LAN Server and selecting your machine.

Host locally

  1. Select "Host LAN Server" from the dropdown
  2. Your Handle. When hosting, this will be both the name of the session and your username.
  3. When you press Host the session starts.

Join a local Server

When you open the collaboration tab the default view is the "Join LAN Server" view. In this view you can join a local server.

The default view of the collab tab is the join local option

  1. Select "Join LAN Server" from the dropdown
  2. You can select a collaboration session. If there is no session available this field is disabled.
  3. Your Handle, the name the other user can see on their side.
  4. If you selected a session you can press this button to join. When you join the Engine will download the host's Project.

Host Internet Server

Host a server that can be accessed over the internet on a specified port.

Host Internet Server

  1. You can select the port of your session. If your router does not support UPnP you might have to port forward your selected port.
  2. Your Handle, the name the other user can see on their side.
  3. If you check the "Use UPnP" checkbox the system will attempt to use UPnP to open the port in your router, so that external users can access your server.
  4. When you press Host, the session starts.

Note: There is no guarantee that UPnP works with your particular router. If Internet hosting is not working for you, you may have to manually forward the hosting port in your router.

Join an internet Server

To connect, an external user would choose Join Internet Server and specify your external IP address.

Join an Internet Server

  1. You need the online host's IP address, in the format 89.89.89.89:1234.
  2. Your Handle, the name the other user can see on their side.
  3. When you press this button, the Engine will try to connect to the other user.

WARNING: Please only connect to people you trust. Be aware that Plugin Assets will be sent via a collaboration session as well. The Engine will warn you every time a plugin asset is sent.

Host a Discord based Session

The Machinery allows you to connect via Discord with your co-workers, teammates, or friends. For this to work, both parties (host and clients) must have the following option enabled in their Discord settings: Discord Settings -> Activity Status.

Note: Host and client cannot be invisible, otherwise invites won't work!

After this setting is enabled, you need to add The Machinery as a game so others can see that you are playing it. You can then invite your friends into your session via the Engine, and vice versa.

Connected

When you are connected to a collaboration session you will see this view. In the connected view you can chat with the other participants of the session. The session will not terminate or disconnect if you close the tab.

QA Pipeline

The Machinery comes with some built-in tools to support you in building games.

Statistic Tab

Allows you to visualize different statistics from different sources.

The Statistic tab consists of a Property View in which you can define your desired method of display and source. You can choose between Table, Line, or no visualization method. As sources, the engine will offer you any of the profiler scopes.

Statistics Overlay

During the simulation in the simulate tab you have the ability to open different statistic overlays.

![Statistics overlays in the simulate tab](https://www.dropbox.com/s/cmk3u9lt4d8l3n0/tm_guide_statistics_in_simulate.png?dl=1)

Profiler Tab

The profiler tab displays all scopes that have been added via the profiler API. With the tab, you can record all scopes for a few moments and then analyze them afterward.

You can use the profiler API, defined in foundation/profiler.h, in your own projects after you have loaded the [tm_profiler_api](https://ourmachinery.com/apidoc/foundation/profiler.h.html#structtm_profiler_api) in your plugin load function.

Profiler Macros

| Macro | Description |
| --- | --- |
| TM_PROFILER_BEGIN_FUNC_SCOPE() / TM_PROFILER_END_FUNC_SCOPE() | Starts and ends a profiling scope for the current function. The scope in the profiler will have the function's name. |
| TM_PROFILER_BEGIN_LOCAL_SCOPE(tag) / TM_PROFILER_END_LOCAL_SCOPE(tag) | Starts and ends a local profiler scope. The scope is tagged with the naked word tag (it gets stringified by the macro). Use a local profiler scope if you need to profile parts of a function. |

Example:

void my_function(/*some arguments*/){
   TM_PROFILER_BEGIN_FUNC_SCOPE();
   // .. some code
   TM_PROFILER_END_FUNC_SCOPE();
}
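
If you only need to profile part of a function, a local scope might look like the following sketch; the tag update_physics is just an illustrative name:

void my_system_update(/*some arguments*/){
   // ... code we do not want to measure

   TM_PROFILER_BEGIN_LOCAL_SCOPE(update_physics);
   // ... the part of the function we want to measure
   TM_PROFILER_END_LOCAL_SCOPE(update_physics);
}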

Memory Usage Tab

The Machinery has built-in leak detection when you use the provided allocators. In addition, all of them log their used memory in the Memory Usage tab!

The memory tab will display all memory consumed via any allocator. Temporary allocators will be listed as well. Besides the memory from the CPU allocators, you can also inspect device memory used and the memory consumed by your assets.

Logging

The Machinery comes with a built-in logging system. It lives in foundation/log.h and contains the tm_logger_api. This API provides a few convenience macros that we can use to log from anywhere in our code. It is also easy to create your own logger and add it to the logger API.

Logging cheat sheet

You can log custom types. This is enabled via the tm_sprintf_api. You can log all primitive types, like you are used to from C, as well as the engine's API types. Just keep in mind the following syntax: %p{<MY_TYPE>} and the fact that you need to provide a pointer to the correct type:

| Type | Call |
| --- | --- |
| bool | TM_LOG("%p{bool}", &my_value); |
| tm_vec2_t | TM_LOG("%p{tm_vec2_t}", &my_value); |
| tm_vec3_t | TM_LOG("%p{tm_vec3_t}", &my_value); |
| tm_vec4_t | TM_LOG("%p{tm_vec4_t}", &my_value); |
| tm_mat44_t | TM_LOG("%p{tm_mat44_t}", &my_value); |
| tm_transform_t | TM_LOG("%p{tm_transform_t}", &my_value); |
| tm_rect_t | TM_LOG("%p{tm_rect_t}", &my_value); |
| tm_str_t | TM_LOG("%p{tm_str_t}", &my_value); |
| tm_uuid_t | TM_LOG("%p{tm_uuid_t}", &my_value); |
| tm_color_srgb_t | TM_LOG("%p{tm_color_srgb_t}", &my_value); |
| tm_tt_type_t | TM_LOG("%p{tm_tt_type_t}", &my_value); |
| tm_tt_id_t | TM_LOG("%p{tm_tt_id_t}", &my_value); |
| tm_tt_undo_scope_t | TM_LOG("%p{tm_tt_undo_scope_t}", &my_value); |
| tm_strhash_t | TM_LOG("%p{tm_strhash_t}", &my_value); |
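
For example, logging a vector and a bool from your own code might look like this; the variable names are purely illustrative, and tm_vec3_t comes from the foundation API types:

#include <foundation/log.h>

// Note that %p{...} always takes a *pointer* to the value.
tm_vec3_t position = { 1.0f, 2.0f, 3.0f };
bool grounded = true;
TM_LOG("position: %p{tm_vec3_t}, grounded: %p{bool}", &position, &grounded);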

You can register support for your own custom types via tm_sprintf_api->add_printer().

Example:

First you define a function matching the tm_sprintf_printer signature: int tm_sprintf_printer(char *buf, int count, tm_str_t type, tm_str_t args, const void *data);

static int printer__custom_color_srgb_t(char *buf, int count, tm_str_t type, tm_str_t args, const void *data)
{
    const custom_color_srgb_t *v = data;
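    // `print` here is whatever formatted-print helper you use to write into `buf`
    // and return the number of characters written.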
    return print(buf, count, "{ .r = %d, .g = %d, .b = %d, .a = %d , .hash = %llu }", v->r, v->g, v->b, v->a, v->hash);
}

After that you register it via the tm_sprintf_api to the add_printer() function.

TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_sprintf_api = reg->get(TM_SPRINTF_API_NAME);
    if(tm_sprintf_api->add_printer){
        tm_sprintf_api->add_printer("custom_color_srgb_t", printer__custom_color_srgb_t);
    }
}
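
Once registered, you can log your custom type with the %p{} syntax from the cheat sheet above; my_color is just an illustrative variable of your own type:

custom_color_srgb_t my_color = { .r = 255, .g = 127, .b = 0, .a = 255, .hash = 0 };
TM_LOG("%p{custom_color_srgb_t}", &my_color);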

More tm_sprintf_api formatting cheats

| Fmt | Value | Result |
| --- | --- | --- |
| %I64d | (uint64_t)100 | 100 |
| %'d | 12345 | 12,345 |
| %$d | 12345 | 12.3 k |
| %$d | 1000 | 1.0 k |
| %$.2d | 2536000 | 2.53 M |
| %$$d | 2536000 | 2.42 Mi |
| %$$$d | 2536000 | 2.42 M |
| %_$d | 2536000 | 2.53M |
| %b | 36 | 100100 |
| %p{bool} | &(bool){true} | true |
| %p{tm_vec3_t} | &(tm_vec3_t){ 1, 2, 3 } | { 1, 2, 3 } |
| %p{tm_vec3_t} | 0 | (null) |
| %p{unknown_type} | &(tm_vec3_t){ 1, 2, 3 } | %p{unknown_type} |
| %p{unknown_type:args} | &(tm_vec3_t){ 1, 2, 3 } | %p{unknown_type:args} |
| %p{tm_vec3_t | &(tm_vec3_t){ 1, 2, 3 } | (error) |
| %p{tm_rect_t} | &(tm_rect_t){ 10, 20, 100, 200 } | { 10, 20, 100, 200 } |
| %p{tm_color_srgb_t} | &TM_RGB(0xff7f00) | { .r = 255, .g = 127, .b = 0, .a = 255 } |

Write a custom logger

If you desire to add your own logger sink to the ecosystem there are a few steps you need to take:

  1. You need to include the foundation/log.h header
  2. You need to define a tm_logger_i in your file
  3. You need to add a log function to this interface
    1. If you need some local data (such as an allocator) it might be good to define a .inst as well.
  4. After all of this you can call the tm_logger_api.add_logger() function to register your logger

Example:

#include <foundation/log.h>
// some more code

static void my_log_function(struct tm_logger_o *inst, enum tm_log_type log_type, const char *msg)
{
// do what you feel like doing!
}

tm_logger_i *logger = &(tm_logger_i){
    .log = my_log_function,
};
//.. more code
// This function gets called at some point, and this is where we register our logger.
static void my_custom_api_function(void){
    tm_logger_api->add_logger(logger);
}
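
If your logger needs local state (point 3.1 above), you can complete the opaque tm_logger_o type yourself and point the interface's inst field at it. A rough sketch, with purely illustrative fields (tm_allocator_i comes from foundation/allocator.h):

struct tm_logger_o {
    tm_allocator_i *allocator; // whatever local data your sink needs
};

static struct tm_logger_o my_logger_inst;

static tm_logger_i my_stateful_logger = {
    .inst = &my_logger_inst,
    .log = my_log_function,
};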

Note: This can be a use case for plugin callbacks. For more about this, see Write a plugin.

How to write unit tests

This walkthrough introduces you to unit-tests.exe and shows you how to use it with The Machinery. You will learn about:

  • How to run tests
  • How to constantly monitor your changes
  • How to write tests in your plugin

This walkthrough expects basic knowledge on how to create a plugin and how a plugin is structured. If you are missing this knowledge, then you can find out more here.

Note: At this point, the testing framework is in an early stage. We are extending its capabilities over time to meet the needs of modern game development QA pipelines.

About unit-tests

You can find the executable alongside tmbuild and The Machinery executable in the bin/ folder. If you want to make the unit-tests executable globally accessible, you need to add it to your PATH environment variable. To help ensure the quality of your build, tmbuild runs the unit tests of all plugins at the end of its build process. When you add a unit test to your plugin, it is therefore guaranteed to run every time you build. This is, of course, only guaranteed if the plugin system can find the plugin and its tests.

Note: unit-tests will assume that the plugins live in a folder relative to the executable in the standardized folder plugins. If you need to load a plugin that is not in this folder, you need to provide a valid path via -p/--plugin so that unit-tests can find and run your tests.

How to run tests

To run all unit tests, execute unit-tests; it will run all tests except the slow execution-path tests. To run all unit tests, including the "slow" ones, run unit-tests.exe -s/--slow-paths.

Note: You may have noticed that if you run tmbuild regularly, you win the "lottery" from time to time. This means tmbuild will run all unit tests, including the slow ones, via unit-tests.

How to constantly monitor your changes

Like the editor, unit-tests supports hot reloading. In a nutshell, whenever plugins are rebuilt, unit-tests can detect this and rerun the tests. To run in hot-reload mode, start unit-tests with the -r/--hot-reload argument.

When could this be useful? It can be helpful in a CI setup where the build and test servers are different machines. The build server's final build step uploads the generated DLLs to the test server if everything builds fine. The test server monitors the filesystem, and whenever the DLLs change, unit-tests reruns all tests, including the slow ones, to ensure that everything works. This saves time on the build server, so build times are faster and the developer knows sooner if the build fails. Also, the build server does not need a graphics card to run any graphics-pipeline-related tests; the test server, on the other hand, could run such tests.

How to write your tests

All that is needed to write tests is to register them via the tm_unit_test_i interface. You can find the interface in foundation/unit_test.h. Registration expects a pointer to a tm_unit_test_i, and the interface itself expects a name and a function pointer to the test entry function.

// Interface for running unit tests. To find all unit test, query the API registry for
// `TM_UNIT_TEST_INTERFACE_NAME` implementations.
typedef struct tm_unit_test_i
{
    // Name of this unit test.
    const char *name;

    // Runs unit tests, using the specified test runner. The supplied allocator can be used for
    // any allocations that the unit test needs to make.
    void (*test)(tm_unit_test_runner_i *tr, struct tm_allocator_i *a);
} tm_unit_test_i;

At this point, we have not tackled the following possible questions:

  • Where and how do we register the interface?
  • What could this interface look like?
  • What does the test itself look like?

Let us walk through those questions:

Where and how do we register the interface? We need to register our tests in the same function as everything else that needs to be executed when a plugin loads: in our tm_load_plugin. It may look like this:

#include <foundation/unit_test.h>
//...
// my amazing plugin
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_global_api_registry = reg;
    //...
    tm_add_or_remove_implementation(reg, load, tm_unit_test_i, my_unit_tests);
}

Here we register our tests as an implementation of the tm_unit_test_i interface.

What could this interface look like? After we have done this, all we need to do is declare our my_unit_tests. It is as easy as it gets:

#include <foundation/unit_test.h>
//...
static tm_unit_test_i *my_unit_tests = &(tm_unit_test_i){
    .name = "my_unit_tests",
    .test = test_function,
};

What does the test itself look like? All that's left is to write the test itself. At its core, all we need to do is write a function with the signature (tm_unit_test_runner_i *tr, struct tm_allocator_i *a). In its body, we can define our tests.

static void test_function(tm_unit_test_runner_i *test_runner, tm_allocator_i *allocator)
{
    //.. code
}

The test runner variable test_runner is needed to communicate back to the test suite about failures etc. The following macros will help you write tests. They are the heart of the actual tests.

| Macro | Arguments | Description |
| --- | --- | --- |
| TM_UNIT_TEST | test_runner, assertion | Unit test macro. Tests the assertion using the test runner test_runner. |
| TM_UNIT_TESTF | test_runner, assertion, format, ... | As TM_UNIT_TEST(), but records a formatted string in case of error. |
| TM_EXPECT_ERROR | test_runner, error | Expect the error message error. If the error message doesn't appear before the next call to record(), or if another error message appears before it, this will be considered a unit test failure. |

Note that for TM_EXPECT_ERROR to work properly, you must redirect error messages to go through the test runner, so that it can check that the error message matches what's expected.

It's time for some tests. Let us write some tests for carrays:

#include <foundation/unit_test.h>
#include <foundation/carray.inl>
//.. other code
static void test_function(tm_unit_test_runner_i *test_runner, tm_allocator_i *allocator)
{
    /*carray*/ int32_t *a = 0;
    TM_UNIT_TEST(test_runner, tm_carray_size(a) == 0);
    TM_UNIT_TEST(test_runner, tm_carray_capacity(a) == 0);
    TM_UNIT_TEST(test_runner, a == 0);
    tm_carray_push(a, 1, &allocator);

    TM_UNIT_TEST(test_runner, tm_carray_size(a) == 1);
    TM_UNIT_TEST(test_runner, tm_carray_capacity(a) == 16);
    TM_UNIT_TEST(test_runner, a);
    TM_UNIT_TEST(test_runner, a[0] == 1);

    tm_carray_header(a)->size--;

    TM_UNIT_TEST(test_runner, tm_carray_size(a) == 0);
    TM_UNIT_TEST(test_runner, tm_carray_capacity(a) == 16);

    tm_carray_grow(a, 20, &allocator);
    tm_carray_header(a)->size = 20;

    TM_UNIT_TEST(test_runner, tm_carray_size(a) == 20);
    TM_UNIT_TEST(test_runner, tm_carray_capacity(a) == 32);
}

All that's left is to build our plugin via tmbuild and watch the console output to see if any of our tests fail. This is how you integrate your tests into the whole build pipeline.

How to write integration tests

This walkthrough shows you how to write integration tests with our integration test framework. You will learn about:

  • How an integration test differs from a unit test.
  • Where to find the integration test framework and how to write a test
  • How to run an integration test.

About integration tests

Integration testing is the phase of software testing in which individual software modules are combined and tested as a group. It is conducted to evaluate whether the system as a whole complies with its specified functional requirements, and it is generally used after unit testing to ensure that the composition of the software works. Integration tests are a potent tool for reproducing bugs that only appear after the software has been used extensively, and for validating that a bug fix was successful.

By their very nature, integration tests are slower and more fragile than unit tests, but they can also find issues that are hard to detect with regular unit tests. Each integration test runs in a specific "context", identified by a string hash. The context specifies the "scaffolding" that is set up before the test runs.

How to write integration tests

Where to find the integration test framework?

The integration test framework can be found in integration_test.h and is part of the foundation library. We need to include this header file, and then we can start writing our tests.

#include <foundation/integration_test.h>

To write a test you need to register it via the tm_integration_test_i interface; registration expects a pointer to a tm_integration_test_i. This interface expects a name and a function pointer to the test function (tick). It also expects a context, which is a string hash, for example: TM_INTEGRATION_TEST_CONTEXT__THE_MACHINERY_EDITOR.

// Interface for integration tests.
typedef struct tm_integration_test_i
{
    // Name of the test.
    const char *name;
    // Context that this test will run in. Tests will only be run in contexts that match their
    // `context` setting.
    tm_strhash_t context;
    // Ticks the test. The `tick()` function will be called repeatedly until all its `wait()` calls
    // have completed.
    void (*tick)(tm_integration_test_runner_i *);
} tm_integration_test_i;

At this point, we have not tackled the following possible questions:

  • Where and how do we register the interface?
  • What could this interface look like?
  • What does the test itself look like?

Let us walk through those questions:

Where and how do we register the interface?

We need to register our tests in the same function as everything else that needs to be executed when a plugin loads: in our tm_load_plugin.

// my amazing plugin
TM_DLL_EXPORT void tm_load_plugin(struct tm_api_registry_api *reg, bool load)
{
    tm_global_api_registry = reg;
    //...
    tm_add_or_remove_implementation(reg, load, tm_integration_test_i, my_integration_tests);
}

Here we register our tests as an implementation of the tm_integration_test_i interface.

What could this interface look like?

After we have done this, we need to declare our my_integration_tests.

tm_integration_test_i my_integration_tests = {
    .name = "stress-test",
    //Context that specifies a running The Machinery editor application
    .context = TM_INTEGRATION_TEST_CONTEXT__THE_MACHINERY_EDITOR,
    .tick = my_test_tick,
};

The name field is important because, later on, we need to use this name when we want to run the test. The context makes sure that the test runs in (and boots up) the editor; TM_INTEGRATION_TEST_CONTEXT__THE_MACHINERY_EDITOR is defined in foundation/integration_test.h. The function my_test_tick gets called, and that is where the magic happens.

What does the test itself look like?

Let us write this test. We need to write a function of the signature: (tm_integration_test_runner_i *). In its body, we can define our tests.

static void my_test_tick(tm_integration_test_runner_i *test_runner)
{
  //.. code
}

The test runner variable test_runner is needed to communicate back to the test suite about failures etc. The following macros will help you write tests. They are the heart of the tests.

| Macro | Arguments | Description |
| --- | --- | --- |
| TM_WAIT | test_runner, seconds | Waits for the specified time inside an integration test. |
| TM_WAIT_LOOP | test_runner, seconds, i | Since TM_WAIT() uses the __LINE__ macro to uniquely identify wait points, it doesn't work when called in a loop. In this case you can use TM_WAIT_LOOP() instead. It takes an iteration parameter i that uniquely identifies this iteration of the loop (typically it would just be the iteration index). This together with __LINE__ gives a unique identifier for the wait point. |

TM_WAIT_LOOP WARNING

If you have multiple nested loops, be aware that using just the inner loop index j is not enough to uniquely identify the wait point since it is repeated for each outer loop iteration. Instead, you want to combine the outer and inner indexes.
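
A sketch of what that might look like, with arbitrary loop bounds:

static void my_nested_test_tick(tm_integration_test_runner_i *tr)
{
    const uint32_t num_outer = 4, num_inner = 8;
    for (uint32_t i = 0; i < num_outer; ++i) {
        for (uint32_t j = 0; j < num_inner; ++j) {
            // Combine the outer and inner indexes into a value that is unique per iteration.
            if (TM_WAIT_LOOP(tr, 0.5f, i * num_inner + j)) {
                // ... perform this iteration's test step
            }
        }
    }
}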

Let's write an example:

#include <foundation/integration_test.h>

// Note: open() and save_to_asset_database() are helper functions assumed to be
// defined elsewhere in the test code; they are not part of the framework itself.
static void my_test_tick(tm_integration_test_runner_i *tr)
{
    const float step_time = 0.5f;
    if (TM_WAIT(tr, step_time))
        open(tr, "C:\\work\\sample-projects\\modular-dungeon-kit\\project.the_machinery_dir");
    if (TM_WAIT(tr, step_time))
        save_to_asset_database(tr, "C:\\work\\sample-projects\\modular-dungeon-kit\\modular-dungeon-kit.the_machinery_db");
    // ...
}

How do we run an integration test?

To run your newly created integration test, build the project via tmbuild, then start The Machinery with the -t/--test [NAME] parameter to run the specified integration test.

You can use multiple --test arguments to run multiple tests. This will boot up the engine and run your integration tests.

Example:

./bin/the-machinery.exe --test stress-test

Helper Tools

The Machinery comes with a few tools to make your daily life easier. There are tools for:

  • Generating static hash values: hash
  • Generating graph nodes for you: generate-graph-nodes
  • Generating the solution files of the Engine or your plugin: tmbuild
  • Executing your unit tests: unit-tests
  • Generating your localization tables: localize
  • Freeing your plugins from unneeded includes: trim-includes

How to use tmbuild

We described tmbuild's core idea in our blog post One-button source code builds. tmbuild is our custom one-click "build system" and it is quite a powerful tool. It allows you to do the most important tasks when developing with The Machinery: building your plugin or the whole engine.

You can execute the tool from any terminal such as PowerShell or the VS Code internal Console window.

The key features are:

  • building
  • packaging
  • cleaning the solution/folder
  • downloading all the dependencies
  • running our unit tests

This walkthrough introduces you to tmbuild and shows you how to use it with The Machinery projects. You will learn about:

  • How to build with it
  • How to build a specific project with it
  • How to package your project

Also, you will learn some more advanced topics such as:

  • How to build/manipulate tmbuild


Installing tmbuild

When you download and unzip The Machinery, either via the website or via the download tab, you can find tmbuild in the bin folder in the root.

Alternatively, you can build it from source; it lives in code\utils. We will talk about this later in this walkthrough.

Before we use tmbuild, we need to ensure that we have installed either build-essentials under Linux, Xcode on Mac, or Visual Studio 2017 or 2019 on Windows (either the editor, such as the Community Edition, or the Build Tools).

Windows side notes:

On Windows, it is essential to install the C/C++ build tools. If you run into the issue that tmbuild cannot find Visual Studio 2019 on Windows, it could be because you installed it on a non-standard path. No problem: you can just set the environment variable TM_VS2017_DIR or TM_VS2019_DIR to the installation root, e.g. C:\Program Files (x86)\Microsoft Visual Studio\2019. The tool will find the right installed version automagically.

Set up our environment variables

Before we can build any project, we need to set up our environment. You need to set the following environment variable (if it has not been set, the tool will not be able to build):

  • TM_SDK_DIR - The path where the headers and lib folders can be found

If the following variable is not set, the tool will assume that you intend to use the current working directory:

  • TM_LIB_DIR - The folder that determines where all dependencies (besides the build environments) are downloaded and installed

How to add environment variables?

Windows

On Windows, all you need to do is add the folder where you installed The Machinery to your environment variables. You can do this as follows: Start > Edit the system environment variables > Environment Variables > System variables > click New... > add TM_SDK_DIR or TM_LIB_DIR as the variable name and the needed path as the variable value. Then close and restart the terminal or Visual Studio / Visual Studio Code. As an alternative, you can set an environment variable via PowerShell before you execute tmbuild; it will stay alive until the end of the session: $Env:TM_SDK_DIR="..PATH"

Debian/Ubuntu Linux

Open ~/.bashrc in the terminal or with your favorite text editor and add the following lines:

#...
export TM_SDK_DIR=path/to/themachinery/
export TM_LIB_DIR=path/to/themachinery/libs

(e.g. via nano: nano ~/.bashrc)

Let us build a plugin

All you need to do is navigate to the root folder of your plugin and run tmbuild.exe in PowerShell.

If you have not added tmbuild.exe to your global PATH, you need to use the correct relative path to where tmbuild is located, e.g.: /home/user/tm/plugins/my_plugin/> ./../../bin/tmbuild

This command does all the magic: tmbuild will automatically download all the needed dependencies for you (either in the location set in TM_LIB_DIR or in the current working directory).