GABRIEL O'FLAHERTY-CHAN
Toronto-based iOS developer at Shopify, with roots in product design and UX. Currently making a procedurally generated universe sandbox and posting semi-frequent gif updates here
"CHAOTIC ERA", an RTS game set inside a procedurally generated universe sandbox. One year in.
Exactly a year ago, I chose to once again try my hand at making a game, and over the course of the year I've tweeted biweekly progress updates in gif format. Here are some of the highlights.
Im making a game pic.twitter.com/u4GWBviSke
— GABRIEL OFLAHERTY CHAN (@_GABRIELOC) October 18, 2017
On-Demand Everything
From the start, I knew I had to think optimization-first. The universe is kind of big, so I kicked things off by investigating how to show only small parts of it at a time. The first step was implementing an octree, which was great for very quick access to only the visible parts of the universe, enabling selective instantiation of objects in the scene graph. I don't think Unity would've been too happy about instantiating trillions of game objects.
selective subdivision in the octree, will be necessary for varying levels of detail (lots of space in space) #gamedev pic.twitter.com/x1Lhr689B3
— GABRIEL OFLAHERTY CHAN (@_GABRIELOC) October 23, 2017
This ended up lending itself really well to how the layer backing the scene graph was modelled, too. For example, if only one galaxy is going to be visible at a given moment, we shouldn't also need to create the trillions of other galaxies that could potentially exist, or the millions of stars in each of those galaxies.
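As a rough sketch of the idea (the names here are illustrative, not the game's actual code), an octree node can subdivide lazily and return only the bodies inside a query volume, so whole invisible subtrees are skipped:

```csharp
// Illustrative octree visibility query. `Body` is a hypothetical model type;
// `Bounds` is Unity's axis-aligned bounding box.
class OctreeNode
{
    public Bounds bounds;                        // cube this node covers
    public List<Body> bodies = new List<Body>(); // bodies stored at this node
    public OctreeNode[] children;                // null until subdivided

    // Collect bodies intersecting the visible region, pruning whole
    // subtrees whose bounds don't intersect it.
    public void Query(Bounds visible, List<Body> results)
    {
        if (!bounds.Intersects(visible))
            return;
        results.AddRange(bodies);
        if (children == null)
            return;
        foreach (var child in children)
            child.Query(visible, results);
    }
}
```

The pruning in the first branch is what makes this cheap: a query near one galaxy never even visits the nodes containing the rest of the universe.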
out of necessity, I'm back on optimization problems in an effort to put stars in the sky.
— GABRIEL OFLAHERTY CHAN (@_GABRIELOC) July 6, 2018
this means on demand loading of super thin slivers of the universe graph now
hopefully non-debug visuals soon #gamedev pic.twitter.com/xlLisrNx1s
Where this fell apart was the assumption that everything was physically static. Because the universe is constantly expanding and its bodies are in perpetual motion, assuming a static box could contain each body was not going to be an option.
some more debug visuals: planet with orbiting moons -> neighbouring planets -> nearby stars #gamedev pic.twitter.com/mLr9NLkFIO
— GABRIEL OFLAHERTY CHAN (@_GABRIELOC) August 28, 2018
The next iteration removed binary partitioning altogether, in favour of the gravitational relationships between parent and child bodies. For example, just as Earth is a parent of its one moon, it's also a child of the sun, which is (ok, was) a child of a larger cluster of stars. Each body defines a "reach", which encapsulates the farthest orbit of all its children. Given a "visibility distance" (i.e. the max distance at which objects can be viewed from the game's camera), visible bodies are determined by the intersection of two spheres: one located at the body's center with a radius of its reach, and one at the camera's center with a radius of the visibility distance.
This design enabled parent and child bodies and their siblings to be created at any point, and released from memory when no longer visible.
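The sphere-intersection test itself is cheap: two spheres intersect when the distance between their centers is at most the sum of their radii. A sketch, assuming hypothetical reach and center fields on the body:

```csharp
// A body is visible if the sphere (body.center, body.reach) intersects
// the sphere (cameraPosition, visibilityDistance).
bool IsVisible(Body body, Vector3 cameraPosition, float visibilityDistance)
{
    float radiusSum = body.reach + visibilityDistance;
    // Compare squared distances to avoid a square root per body.
    return (body.center - cameraPosition).sqrMagnitude <= radiusSum * radiusSum;
}
```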
7 Digits of Precision Will Only Go so Far
While exploring the on-demand loading architecture, I quickly ran into an issue: objects in my scene would jump around or just stop showing up altogether. When I opened up the attributes inspector panel in the editor, I started seeing this everywhere:
Because Vector3 uses the float type (which is only precise to about 7 digits), any object in the scene with a Vector3 position containing a dimension approaching 1e+6 started to behave erratically as it lost precision.
Visualizing floating point precision loss, or "why #gamedev stuff behaves badly far from the origin"
— Douglas (@D_M_Gregory) September 23, 2018
The blue dot is the end of the desired vector. The black dot snaps to the closest representable point (in a simplified floating point grid with only 4 mantissa bits) pic.twitter.com/o1HKYByIbS
Since I was using Vector3 both for data modelling and for positioning objects in the scene (no way around that), my kneejerk reaction was to define my own double-backed vector type, which would at least double the precision I could use for modelling. This got me a bit further, but still didn't address how those values would be represented in the scene. At this point I was rendering the entire universe at an absolute scale, where 1 Unity unit equalled a certain number of kilometers, resulting in values way beyond 16 digits of precision. Fiddling with the kilometer ratio wasn't going to solve this problem, as objects were always either way too small or way too far away.
// In model land...
body.absolutePosition = body.parent.absolutePosition + body.relativePosition;
// In scene land...
bodyObject.transform.position = body.absolutePosition * kmRatio;
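The double-backed vector type could be as minimal as the sketch below (the real type presumably mirrors more of Vector3's API; the name Vector3d is an assumption):

```csharp
// Minimal double-precision vector for modelling positions,
// with a lossy conversion for handing small values to the scene.
struct Vector3d
{
    public double x, y, z;

    public Vector3d(double x, double y, double z)
    {
        this.x = x; this.y = y; this.z = z;
    }

    public static Vector3d operator +(Vector3d a, Vector3d b)
        => new Vector3d(a.x + b.x, a.y + b.y, a.z + b.z);

    public static Vector3d operator -(Vector3d a, Vector3d b)
        => new Vector3d(a.x - b.x, a.y - b.y, a.z - b.z);

    // Only safe once the value is small (i.e. contextual), since
    // float keeps roughly 7 significant digits.
    public Vector3 ToVector3()
        => new Vector3((float)x, (float)y, (float)z);
}
```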
One solution was to instead use a "contextual" coordinate system. Because the gameplay oriented around one body at a time, I could simply derive all other body positions relative to that contextual body. In other terms, the contextual body would always sit at (0,0,0), and all other visible bodies would be relatively nearby. And because the camera would always be located near the contextual body (focusing on one body at a time), as long as my visibility distance stayed well within the 7 digit limit of float, I could safely convert all double vectors into Vector3s, or even scrap the use of double in this context entirely.
var ctxPos = contextualBody.absolutePosition;
var bodPos = body.absolutePosition;
bodyObject.transform.position = bodPos - ctxPos;
// Huge loss of precision
This ended up working until it didn't, which was very soon. Here's a fun little phenomenon:
(1e+16 + 2) == (1e+16 + 3)
// false
(1e+17 + 2) == (1e+17 + 3)
// true
Despite using this "contextual position" for scene objects, the actual values being calculated were still being truncated pretty badly even before being turned into Vector3s, whenever the contextual position was large enough (i.e. containing a dimension approaching or exceeding 1e+17). My kneejerk was to once again supersize my custom vectors and turn all those doubles into decimals for even more precision (~29 digits). This felt extremely inelegant and lazy.
With the goal of making all position values as small as possible, I decided to scrap the universe-space absolutePosition design altogether, in favour of something a little more clever and cost-effective.
One way to avoid absolutePosition in the contextual position calculation was to rely instead on relational information, such as the position of a body relative to its parent. Since absolutePosition could be derived by crawling up the relation graph and summing relative positions, the same value could instead be calculated by finding the lowest common ancestor of both the contextual body and the given body, and computing their positions relative to it, effectively shortening the resulting values significantly.
var lca = lowestCommonAncestor(contextualBody, body);
var ctxPos = contextualBody.relativePosition(relativeTo: lca);
var bodPos = body.relativePosition(relativeTo: lca);
// if body == contextualBody, this is (0,0,0)
bodyObject.transform.position = bodPos - ctxPos;
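The two helpers used above could be sketched as follows, assuming each body exposes hypothetical parent, depth, and relativePosition members (Vector3d is the double-backed vector type from earlier):

```csharp
// Find the nearest body that is an ancestor of both a and b.
Body LowestCommonAncestor(Body a, Body b)
{
    // Climb the deeper body until both sit at the same depth...
    while (a.depth > b.depth) a = a.parent;
    while (b.depth > a.depth) b = b.parent;

    // ...then climb both in lockstep until they meet.
    while (a != b)
    {
        a = a.parent;
        b = b.parent;
    }
    return a;
}

// Position of `body` relative to `ancestor`, summing the small
// parent-relative offsets instead of huge absolute positions.
Vector3d RelativePosition(Body body, Body ancestor)
{
    var sum = new Vector3d(0, 0, 0);
    for (var current = body; current != ancestor; current = current.parent)
        sum += current.relativePosition;
    return sum;
}
```

Because the walk stops at the lowest common ancestor, the summed offsets never include the galaxy-scale distances above it, which is exactly what keeps the values small.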
The result? Values well below that 7 digit ceiling!
Visuals and Interaction
One of the biggest challenges of this project has been visuals: specifically, choosing the right kind of UI element for interacting with unconventional types of information, such as small points in 3D space. Over time, this project has seen many iterations on how best to solve these problems:
a system for interacting with orbiting bodies using callouts #gamedev pic.twitter.com/uwUcph2z0B
— GABRIEL OFLAHERTY CHAN (@_GABRIELOC) September 21, 2018
looping back and forth between 3 levels. equally tedious and rewarding getting to this point #gamedev #fui pic.twitter.com/hDEMnsw6mx
— GABRIEL OFLAHERTY CHAN (@_GABRIELOC) February 10, 2018
Another challenge has been figuring out how to interact with the surface of a planet. Although this gameplay mechanic will likely be cut in favour of a coarser-grained level of interaction, a big part of it was figuring out how to evenly distribute points on a sphere, and whether that should be done in 3D or in 2D:
this took an embarrassingly long time to figure out, but it actually works now: an infinitely subdivisable icosahedron-backed dual polyhedron made possible using a DCEL
— GABRIEL OFLAHERTY CHAN (@_GABRIELOC) January 10, 2018
now my life sim will have a place to play #gamedev #madewithunity pic.twitter.com/YkaMVtRYcg
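The subdivision step at the heart of this is conceptually simple: split each triangle into four by its edge midpoints, then push the new vertices back out onto the sphere. The sketch below shows that step for a plain icosphere; the actual implementation uses a DCEL and then takes the dual polyhedron, which this simplified version omits:

```csharp
// One level of icosphere subdivision: each triangle becomes four,
// with midpoint vertices projected back onto the unit sphere.
void Subdivide(List<Vector3> vertices, List<int> triangles)
{
    var newTriangles = new List<int>();
    var midpointCache = new Dictionary<(int, int), int>();

    // Create (or reuse) the vertex halfway between vertices i and j.
    int Midpoint(int i, int j)
    {
        var key = i < j ? (i, j) : (j, i);
        if (midpointCache.TryGetValue(key, out int index))
            return index;
        // Normalizing puts the midpoint on the sphere's surface.
        vertices.Add(((vertices[i] + vertices[j]) * 0.5f).normalized);
        index = vertices.Count - 1;
        midpointCache[key] = index;
        return index;
    }

    for (int t = 0; t < triangles.Count; t += 3)
    {
        int a = triangles[t], b = triangles[t + 1], c = triangles[t + 2];
        int ab = Midpoint(a, b), bc = Midpoint(b, c), ca = Midpoint(c, a);
        newTriangles.AddRange(new[] { a, ab, ca, ab, b, bc, ca, bc, c, ab, bc, ca });
    }
    triangles.Clear();
    triangles.AddRange(newTriangles);
}
```

Starting from an icosahedron (rather than, say, a UV sphere) is what keeps the resulting cells close to uniform in size, and the midpoint cache ensures shared edges produce a single shared vertex.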
improved accuracy of frustum culling by taking into account surface normals #gamedev pic.twitter.com/sBpRL7NRj4
— GABRIEL OFLAHERTY CHAN (@_GABRIELOC) March 10, 2018
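For context, the normal-aware part of that culling check is just a dot product: a point on a planet's surface can only be visible if its outward normal points toward the camera. A sketch, not the game's actual code:

```csharp
// A surface point is a candidate for drawing only if it passes the
// frustum test AND its outward normal faces the camera.
bool FacesCamera(Vector3 surfacePoint, Vector3 surfaceNormal, Vector3 cameraPosition)
{
    var toCamera = cameraPosition - surfacePoint;
    return Vector3.Dot(surfaceNormal, toCamera) > 0f;
}
```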
small UI update: 3d navigation visualized with a 2d hex grid #gamedev #FUI pic.twitter.com/NmtdN9Im3M
— GABRIEL OFLAHERTY CHAN (@_GABRIELOC) February 18, 2018
Navigating between a contextual body and its parent or children is an ongoing challenge, as it's a constant battle between utility and simplicity, and needless complexity creeps in all the time. For example, the first iteration of navigation had a neat yet unnecessary mosaic of boxes along the top of the screen representing individual bodies that could be navigated to. I decided to dramatically simplify it for a few reasons:
- The gameplay doesn't necessitate travelling between multiple layers at a time (e.g. moon -> galaxy)
- The hierarchical structure wasn't being communicated very well with boxes stacking horizontally and vertically
- Individual boxes didn't clearly describe the body they represented, and icons weren't going to be enough
visuals are a little rough but functionality is all there. here's navigation between different levels of detail based off new architecture.
— GABRIEL OFLAHERTY CHAN (@_GABRIELOC) May 15, 2018
tldr; old graph created top-down, that was too slow, now everything's calculated bidirectionally and on-demand #gamedev pic.twitter.com/Fn2Srd1lJt
The navigation now shows one body at a time, and visible bodies can be interacted with more contextually, via 2D UI elements that follow objects in 3D space.
simplified navigation make transitions clearer. should be able to go up and down layers soon #gamedev pic.twitter.com/hwXp14uFov
— GABRIEL OFLAHERTY CHAN (@_GABRIELOC) October 2, 2018
What's next?
With the universe sandbox in a stable place, I'm hoping to shift all my energy towards gameplay and game mechanics. Because I'm hoping to implement a real-time strategy element, I need to think about how resources are mined, how territory is captured, how interaction with other players and AI works, and a lot more.
If you're interested in learning more about this project, email me at hi@gabrieloc.com or contact me on Twitter at @_gabrieloc.