This week was another one for admin, project management and the like! I did manage to squeeze in a bit of AI work, but I’ll have more to say on that next week. See you then!
This week I’ve been working on the procedural heads; both the mesh and the texture are now generated procedurally, and they look nice doing it. They aren’t all super-models, but they generally don’t end up looking broken, which is a bigger feat than you might think.
Note: Still missing both eyebrows and eyelashes. Oh! And eyeballs!
We have a self-imposed internal milestone in a couple of weeks to get the base mission behavior into the game. I’ve been working towards that with my nose buried in spreadsheets. Last week I went through all the procedural plotpoint types that are specifically initiated by trait interaction with the environment.
The immediate design challenge is that the following needs to be true:
1) If a character has a trait, say “Rough Around the Edges” (trait #208), his actions in certain situations will differ from those of someone with a different trait such as “Social Butterfly” (trait #229), and the player says, “Yep, that guy is really Rough Around the Edges!”
2) The same character isn’t always doing the same thing. We don’t want the player to say, “Oh, he’s in the procedural room called ‘Rugged Crater’. I know the ‘Rough Around the Edges’ guy does X here, because he always does the same thing on rugged terrain.”
3) The system design can’t rely on too much hand-written content to make each scenario unique, because that blows up too quickly (which is why we are a system-heavy rather than content-heavy game).
So this week I came up with a design solution, which gives me the weekend to enter enough data (or at the very least, the two necessary matrices) to see it in action.
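One way to satisfy all three constraints at once is to treat the two matrices as weight tables and pick actions by weighted random choice: the trait biases the outcome (constraint 1) without fixing it (constraint 2), and everything lives in data rather than hand-written scenarios (constraint 3). The sketch below is a guess at the shape of such a system; the trait, room, and action names, and the multiplicative combination, are all illustrative assumptions, not the game’s actual data or code.

```python
import random

# Hypothetical matrix 1: how strongly a trait pulls toward each action category.
# Trait and action names are made up for illustration.
TRAIT_ACTION_WEIGHTS = {
    "rough_around_the_edges": {"pick_fight": 3.0, "explore": 2.0, "socialize": 0.2},
    "social_butterfly":       {"pick_fight": 0.1, "explore": 1.0, "socialize": 4.0},
}

# Hypothetical matrix 2: how plausible each action is in a given room type.
ROOM_ACTION_WEIGHTS = {
    "rugged_crater": {"pick_fight": 1.5, "explore": 2.5, "socialize": 0.5},
}

def pick_action(trait, room, rng=random):
    """Weighted random pick: the trait flavors the outcome without fixing it."""
    trait_w = TRAIT_ACTION_WEIGHTS[trait]
    room_w = ROOM_ACTION_WEIGHTS[room]
    actions = list(trait_w)
    weights = [trait_w[a] * room_w.get(a, 0.0) for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]
```

With numbers like these, the “Rough Around the Edges” character picks fights in the Rugged Crater far more often than the “Social Butterfly” does, but neither character is locked into a single behavior there.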
This week I’ve mainly been working on the faces, specifically their procedural textures. Additionally, I’ve had a first stab at our LODs. Level Of Detail meshes automatically switch a mesh for a lower-resolution version of it when the camera is far away. Sadly, the computer can’t generate them and set everything up right all on its own, so a human has to do a lot of file management and setup to ensure things turn out as they should. The human ending up doing this in our case is, of course, me.
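For readers unfamiliar with LODs, the core mechanism is simple: the engine compares camera distance against a set of thresholds and swaps in the matching mesh. A minimal sketch of that selection step, with entirely made-up distances and mesh names (the real setup lives in the engine, not in code like this):

```python
# Hypothetical LOD table: (max camera distance, mesh to use at that range).
LOD_LEVELS = [
    (10.0, "head_LOD0"),           # full-resolution mesh up close
    (25.0, "head_LOD1"),           # reduced mesh at medium range
    (float("inf"), "head_LOD2"),   # lowest-resolution mesh beyond that
]

def select_lod(camera_distance):
    """Return the mesh name for the first LOD band the distance falls into."""
    for max_dist, mesh in LOD_LEVELS:
        if camera_distance <= max_dist:
            return mesh
    return LOD_LEVELS[-1][1]
```

The “human file management” part of the job is producing each of those lower-resolution meshes and wiring them into a table like this for every asset.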
This week I worked on organizing procedural plotpoint data. There are a lot of flags/tags/criteria, etc. in the data that the system relies on to know what a thing is. These need to be designed up front, before system implementation. So basically I’m designing some of the plotpoint procgen and then formalizing in the design what data tracking/connections I need to make it work (aka imagining the db structure).
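To make the flags/tags/criteria idea concrete, here is one plausible shape for such a record: a plotpoint type carries tag sets describing what it is, where it can fire, and who can trigger it. All field names here are assumptions for illustration, not the actual schema being designed.

```python
from dataclasses import dataclass, field

@dataclass
class PlotpointType:
    """Illustrative record for one procedural plotpoint type."""
    name: str
    tags: set = field(default_factory=set)                # what kind of thing this is
    required_room_tags: set = field(default_factory=set)  # where it can take place
    required_traits: set = field(default_factory=set)     # who can initiate it

    def matches(self, room_tags, character_traits):
        """Can this plotpoint fire in this room, for this character?"""
        return (self.required_room_tags <= set(room_tags)
                and self.required_traits <= set(character_traits))
```

Designing these criteria up front matters because every downstream query (“which plotpoints can fire here, now, for this crew member?”) depends on the tags existing and being consistent.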
Hey there space cadets!
This week was more AI goodness as I moved forward with the behavior tree implementation. The core code is mostly done and I’ve been spending some time putting together a simple visual editor that will allow us to easily hook up AI logic.
I’m thinking long-term here, because this is the kind of tool I’d use again and again for AI in future titles. See you next week!
So, this week I’ve actually reached a point in the making of the procedural faces that makes sense to show. They aren’t textured yet, so here they are just flat gray. Blendshapes are pretty nice performance-wise; in fact, I just let it run for a second (recording at 60 fps) and slowed it down to one face per second in the gif. The result is a minute of random faces.
This week I worked on design of the procedural plot-point generation in the narrative system. We’re getting close to having this system finished!
Hey there fellow Spacers!
This past week, I changed gears and took another look at crewmember AI. I’m writing what’s known as a Behaviour Tree implementation to allow us a little more flexibility in how we define AI decision making.
Things like syncing up dinner schedules with other crew members and core meltdowns created problems that couldn’t be elegantly handled by the current system. Besides, the beauty of Behaviour Trees is that they don’t lock you into a particular way of doing things. Rather, they provide a control mechanism for pretty much any approach to decision making you’d want to use.
This is extremely useful when trying to write something flexible for this game and reusable for the future! I’ll make a more elaborate post on this system once I’m done with it and have it in-game.
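Until that more elaborate post arrives, here is a bare-bones sketch of the core Behaviour Tree idea: composite nodes (selectors and sequences) control the flow, while leaf conditions and actions do the actual work. This is a generic textbook illustration in Python, not the game’s implementation; the dinner-time example nodes are invented.

```python
SUCCESS, FAILURE = "success", "failure"

class Selector:
    """Tries children in order; succeeds as soon as one child succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    """Runs children in order; fails as soon as one child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == FAILURE:
                return FAILURE
        return SUCCESS

class Condition:
    """Leaf: succeeds if a blackboard flag is truthy."""
    def __init__(self, key):
        self.key = key
    def tick(self, blackboard):
        return SUCCESS if blackboard.get(self.key) else FAILURE

class Action:
    """Leaf: records that the action ran, then succeeds."""
    def __init__(self, name):
        self.name = name
    def tick(self, blackboard):
        blackboard.setdefault("log", []).append(self.name)
        return SUCCESS

# Hypothetical crew behavior: eat dinner when it's dinner time, otherwise idle.
tree = Selector(
    Sequence(Condition("dinner_time"), Action("go_to_mess_hall")),
    Action("idle"),
)
```

The flexibility mentioned above comes from the fact that the tree only dictates control flow; the leaves can wrap any decision-making approach you like.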
See you next week!
This week I’ve been sculpting blendshapes for the heads. I ended up going with a setup that could require over 300 blendshapes to do everything it needs to. That’s a lot of work. As you might imagine, I’ve been very busy getting this done in a timely fashion.
This week I designed the sub-location suffixes, mapped them to the locations, and then worked out how they relate to the traits.
I finally managed to get connectors added and done. I also started looking at AI again to get us sorted for Early Access. While the first iteration did a valiant job with most situations, some special cases (like reserving seats in social/dining areas for later) could not be handled elegantly. Once I’m done extending the system, I’ll probably post a special update explaining the process.
Once I’m done with that, I’ll be focusing on character conversations on-station. Dave completed a system a while ago to generate text for these conversations, as well as changes in their relationship. So, I’m looking forward to showing that one off in-game too.
See you next week!
This week I’ve mostly been working on the heads. Still working on the behind-the-scenes stuff, so there’s not much to show you guys right now that would make any sense without a rather lengthy explanation.
Last week I reorganized the planet location types in preparation for creating the narrative parameters which are referenced during procedural story generation. This upcoming week I will be mapping out the procgen sub-locations, which I’m calling “rooms,” as they represent an indivisible space in which every character present is engaged in whatever activity or story is taking place there. An abstraction at some level is needed to make implementation programmatically possible (and to have a game rather than a simulation), and this is what I landed on.
After that, I will build the procedural generation of what are called “plotpoints” in the narrative system, which take place inside the “rooms,” using the aforementioned data hooks, so I’d better make sure I design the right parameters! Then it’s on to bookkeeping, wrapper code, activities, AI, and a bunch of misc items.
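The “room” abstraction described above can be sketched in a few lines: a room is an indivisible space, so when a plotpoint starts there, every occupant is part of it by definition. This is purely illustrative; the structure and names are guesses, not the actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    """Illustrative 'room': an indivisible space sharing one activity/story."""
    name: str
    tags: set
    occupants: list = field(default_factory=list)
    active_plotpoint: str = None

def start_plotpoint(room, plotpoint):
    """Everyone in the room joins the plotpoint taking place there."""
    room.active_plotpoint = plotpoint
    return {character: plotpoint for character in room.occupants}
```

The payoff of the abstraction is exactly this simplicity: the system never has to reason about partial participation within a space.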
This is a big system where many things are tied together, and in my experience the order in which tasks need to be done is subject to frequent adjustment. I’ll let you know next week how things are progressing!
Greetings Star Rangers!
After returning from a glorious away mission where I visited strange new worlds, conquered vast new swathes of territory to fuel the fires of our Astrobase’s forges, and…
What’s that? Oh, sorry, I’m getting carried away.
I’ve spent the last week working on the emote icons that will help players understand what is going on in their crewmembers’ heads right before they get shoved out of the nearest airlock.
Whereas I designed very simple black-and-white icons for the datapad interface, I want to highlight the characters’ vibrance by using more complex and colourful icons for the emotes.