Building the VTT in-house, not exporting to one
Most platforms hand off maps to someone else's VTT. This is about closing the loop and testing what you built without leaving the tool.
Until this stretch, the test loop for a D3Designs map was a multi-step apology.
You build a map in the platform. You export it. You import it into your VTT of choice. You set up the session there. You realize you got the wall placement slightly wrong. You go back to D3Designs. You fix the wall. You export again. You re-import. You set up the session again. You ask yourself how this was an improvement over just building the map in your VTT in the first place.
The export dance is a feature, not a bug, in the sense that interoperability with existing VTTs matters and will keep mattering. But it is also a friction point that scales linearly with how often you iterate on a map, which is to say, all the time. The platform needed a way to let users test their creations in-house, without leaving the tool they just made the creation in.
This stretch was the start of that work.
What the VTT actually has to be
A virtual tabletop, at the technical level, is a real-time 2D scene with a lot of moving parts. Tokens that the DM and players manipulate. Maps that have to render at full quality without dragging the framerate down. Visibility calculations that update as tokens move. Effects layered on top of the base map. Multiple users looking at the same scene from different perspectives.
The defining property of all of that is real time. The map is not a document that gets composed once and rendered. It is a live surface that updates many times per second, often with calculations that touch every pixel of the visible viewport. That is a different category of rendering work from the export canvas, which lays out a small number of elements once and re-renders only when something changes.
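To make the difference concrete, here is a minimal sketch of the two models in TypeScript. The function names and the "scene-changed" event are illustrative, not anything from the D3Designs codebase:

```ts
// Two rendering models, side by side. The draw callback stands in for
// whatever actually paints the canvas.

// Export-canvas model: compose once, re-render only when something changes.
function renderOnChange(draw: () => void): void {
  draw(); // initial render
  document.addEventListener("scene-changed", () => draw());
}

// VTT model: a frame loop that runs many times per second, because tokens,
// lighting, and fog can all change between any two frames.
function renderEveryFrame(draw: (elapsedMs: number) => void): void {
  let last = performance.now();
  const tick = (now: number) => {
    draw(now - last); // recompute visibility, effects, token positions
    last = now;
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```

The first model does almost no work at rest. The second is always running, which is why the cost of every single frame matters.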
This is the part where the runtime choice matters.
WebGL is the point
Post 12 made the case for Konva on the export canvas. The argument was that the export canvas does not need WebGL, and reaching for a GPU-backed library when the work is "lay out twenty rectangles on a page" is choosing the wrong tool.
The VTT is the inverse situation. WebGL is exactly what the VTT needs.
A real-time tabletop with dynamic lighting, fog of war, multiple tokens, and effects layered on top of the map is a scene with hundreds of things that can change in any given frame. Rendering that on the CPU through the 2D Canvas API would mean asking the browser to repaint a large portion of the viewport every time anything moved, and the framerate would suffer for it. Rendering it on the GPU through WebGL means the textures stay resident in GPU memory, the CPU sends only the changes each frame, and the GPU composites the scene at hardware speed.
PixiJS is the library that gets you there. It is a WebGL-backed 2D scene graph with a real renderer underneath, broad browser support, and an API that lets you treat the scene as objects with transforms instead of pixels with positions. Everything the VTT needs to feel real-time is downstream of having that renderer underneath it.
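For a sense of what "objects with transforms" looks like in practice, here is a minimal sketch assuming PixiJS v7. The asset paths are placeholders, and the token motion is a stand-in for real user input:

```ts
import { Application, Sprite } from "pixi.js";

// A minimal PixiJS v7 scene: a map sprite and a token sprite, where the
// token is an object with a transform, not a region of pixels to repaint.
const app = new Application({ width: 1280, height: 720, antialias: true });
document.body.appendChild(app.view as unknown as HTMLCanvasElement);

const map = Sprite.from("assets/map.png"); // placeholder asset path
app.stage.addChild(map);

const token = Sprite.from("assets/token.png"); // placeholder asset path
token.anchor.set(0.5);
token.position.set(200, 200);
app.stage.addChild(token);

// Textures stay on the GPU; each tick the CPU only updates transforms.
// In v7 the ticker delta is measured in frames against a 60fps target.
app.ticker.add((delta) => {
  token.x += 0.5 * delta; // placeholder drift; a real VTT applies user input
});
```

Moving the token is one property write. The renderer takes it from there.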
The platform now has both runtimes. The export canvas runs on Konva because the work is layout. The VTT runs on PixiJS because the work is real-time rendering. They live in different parts of the codebase, share none of the rendering code, and each gets the runtime that fits the shape of what it has to do.
The three threads landing in parallel
The work in this stretch was not one thing. It was three.
The VTT engine itself: the underlying rendering layer, the scene graph, the way maps and tokens and elements get represented as objects that the renderer can draw. This is the foundation, and most of it is not visible to a user. Users see a map. The engine is what makes the map work.
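As an illustration of what "represented as objects" means, here is one way a token could be modeled on top of PixiJS v7. This is a sketch of the pattern, not the platform's actual engine code:

```ts
import { Container, Sprite, Graphics } from "pixi.js";

// A token as a scene-graph object: a Container grouping the artwork with
// a selection ring, so the renderer moves and scales it as one unit.
class Token extends Container {
  private ring: Graphics;

  constructor(textureUrl: string, radius = 32) {
    super();
    const art = Sprite.from(textureUrl);
    art.anchor.set(0.5);
    this.addChild(art);

    // Selection ring, drawn once and toggled rather than repainted.
    this.ring = new Graphics()
      .lineStyle(3, 0x4da6ff)
      .drawCircle(0, 0, radius);
    this.ring.visible = false;
    this.addChild(this.ring);
  }

  setSelected(selected: boolean): void {
    this.ring.visible = selected;
  }
}
```

Because the artwork and the ring live under one Container, moving or scaling the token is a single transform on the parent, and nothing downstream has to know the token is composite.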
The editor surface: the DM-facing interface for building and managing maps. Toolbars, sidebars, the layers panel, asset libraries, the floating toolbar that switches between tools. This is where the DM spends time. It has to feel like a tool, not a debugger.
The performance layer: the work that makes the engine and the editor fast enough to use on real maps. Viewport culling so the renderer is only thinking about what is on screen. Bulk operations so the editor can update many elements at once without grinding. Selection that handles composite objects gracefully. None of this is glamorous. All of it is the difference between a tech demo and a tool.
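Viewport culling is the most self-contained of those to sketch. Assuming PixiJS v7, the core of it is an axis-aligned overlap test against the visible rectangle (PixiJS also ships a cullable flag on display objects that covers the common case):

```ts
import { Container, Rectangle } from "pixi.js";

// Viewport culling, by hand: hide everything whose bounds do not overlap
// the visible viewport, so the renderer skips it entirely.
function cullToViewport(layer: Container, viewport: Rectangle): void {
  for (const child of layer.children) {
    const b = child.getBounds();
    // Plain AABB overlap test; invisible children are not drawn.
    child.visible =
      b.x < viewport.x + viewport.width &&
      b.x + b.width > viewport.x &&
      b.y < viewport.y + viewport.height &&
      b.y + b.height > viewport.y;
  }
}
```

Calling getBounds() on every child every frame is itself a cost, so real implementations tend to cache bounds or lean on a spatial index. The sketch only shows the shape of the idea.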
The three threads landed in parallel because they had to. The engine without the editor is invisible. The editor without performance is unusable. The performance work without the editor is premature optimization. Building all three at once is more work than building one at a time, but the alternative is shipping pieces that do not work on their own.
What is actually here
The honest framing is that this stretch built the core VTT features, not a finished VTT. The engine renders. The editor works. The performance is acceptable on the maps it was tested with. The platform now has a place to test what users build in it, without leaving the platform to do it.
There is more to build. The list of features that a complete VTT needs is long, and what landed in this stretch is the foundation for that list, not the list itself. The next stretch and the one after that and probably the one after that will keep extending the engine, the editor, and the performance work in parallel.
What changed in this stretch is that the test loop closed. A user can build a map in D3Designs and use it in D3Designs, and the export dance is no longer the only option for trying out what they just made.
That is what this stretch was for.

Written by Jean P.
Solo builder.
Discuss this post
Join the D3 Designs Discord to share thoughts and follow along.
