A Plague Tale: Requiem Critical Analysis

I came into A Plague Tale: Innocence with absolutely zero expectations. I’d never heard of Asobo until that point, and I didn’t exactly associate Focus Home with quality. I was very impressed and enjoyed it a lot, and unlike the years that followed, 2019 had plenty of competition. For Requiem, unlike its predecessor, I had high hopes, and interestingly it delivered something I wasn’t expecting, something that made me think about both the themes of the game and its design.

Next Generation

Two years into this console generation, I think it’s reasonable to say we hadn’t seen a true next generation game. First parties straddled generations because they struggled to build enough consoles: people couldn’t get hold of them, and demand was suppressed by the lack of first party incentive. On top of that, third party engines like Unreal and Unity have been stuck in a particularly deep last gen rut that Epic is only now starting to claw itself out of. This left an open goal for the few studios that still build their own tech. Asobo’s release cycle lined up perfectly for them to deliver the first true next gen game, and it is incredible to look at. It’s not doing anything particularly advanced; it just does what it did last generation with the hardware limitations removed, and it shows a taste of what can and will increasingly be achieved on this hardware.

What kind of game is this?

Innocence already established the series as a strange puzzle/stealth game with a strong emphasis on linear storytelling. I haven’t really played many Naughty Dog games, but I get the impression that Asobo borrow a lot from the acclaimed studio. It’s very bold to go toe to toe with any first party studio, whose budgets are usually at least an order of magnitude larger than any game that needs to break even, but especially so with Naughty Dog, a studio that prides itself on meticulous attention to detail.

It’s clear Asobo worked hard to marry the themes of the gameplay and story together here, and I’m divided on whether or not they succeeded. They built a much deeper resource economy for this game and showed impressive restraint in growing the player’s pool of resources. At the heart of this game, even more than its predecessor, is a shared, constrained resource pool that’s used for both puzzles and stealth/combat. The combat needs a finite pool of resources to discourage aggressive play that would be narratively dissonant. The puzzling needs an abundance of those same resources to encourage experimentation.

Puzzling Combat

Ultimately they did fail in squaring this circle, and puzzling had to lose out in service of the narrative, but that isn’t as much of a problem as it sounds. The puzzling is primarily used to control the pacing of the story, providing much needed downtime from what would otherwise be a relentless slaughter. The puzzles didn’t need to provide anything more than busywork and exposition, and that’s what they do; I think there were only one or two sections where I didn’t immediately know exactly what was expected of me.

The stealth/combat, on the other hand, needs to start slow and build in order to deliver a sense of progression, and Asobo did this by building in a compelling, believable story about someone struggling and being overcome with trauma and hatred. There are points, particularly early on, where I didn’t know if the mechanical game wanted me to be on a murderous rampage or to completely stealth entire encounters. At first I saw this as a failing of the game, but when the story made it clear that Amicia, the main character, was going through the exact same experience, I felt closer to the story and more invested. I’m not sure if this is a triumph of design, it’s probably not, but I am sure that I really liked it and it made me think.

Empty World

The games industry zeitgeist on environment design has until recently been dominated by technical considerations: developers didn’t really choose the environments their games were set in, they were prescribed by technological or production constraints. The beginning of the tedious shift to ‘Open World’ was an inevitable consequence of the loosening of these limitations, and developers immediately pushed the boundaries as far as they would go, rendering worlds vast and empty. Developers who pursued this avenue of design can’t be blamed; who wouldn’t want to explore this new range of possibilities? We all needed to get that out of our system, but as a consequence we’re only now emerging from that era into a new one, a time where the size of the world is starting to be a reasoned, considered element of design. All this is to say that some developers are starting to build worlds that are deliberately smaller than the worlds they could build.

Through the Open World era there still existed games that lived in the older, extremely linear, story driven tradition that pre-dated it. Innocence was one of those games: it wasn’t interested in worlds, it wanted to tell a story. Requiem for the most part follows that same formula, but there is a peculiar section towards the end where they experiment with something intriguing. Later in the game you are introduced to a relatively vast space, but it’s not an open world game and it doesn’t shift into one. Most of the space is empty, and a normal playthrough wouldn’t take you to most of this map; it exists solely to give the player a sense of being in the open. It trusts them not to waste their own experience by going off piste. I think this is to be applauded in such a linear game; most other games of its ilk would force the player via obvious path blocking or some other condescending trick.

At the heart of Requiem, however, is an extremely prescribed design: you approach each encounter with exactly the resources needed to best it in the way the designers intended. But there’s a deep, compelling, freeform gameplay loop waiting to break out, and in this later open section I can see a version of A Plague Tale that lets the player off the leash. I hope to see it emerge in the next game from Asobo; I know they are capable of great things and I’ll be buying their next game regardless of what shape it takes.

Thematic Cohesion

I’ve been thinking a lot about thematic cohesion in game development lately. I don’t think it’s an overstatement to say that achieving it is one of the greatest and most important challenges in the art form. I’ve had a few exchanges on Twitter lately with Cara Ellison and Thomas Grip that both helped to focus my thinking on the subject, to the point where I feel I could write something authoritative about it, so here’s my attempt.

What is it?

What exactly I mean by thematic cohesion is that all the various constituent parts of the game are communicating the same message to the player, or allowing the players to communicate it to themselves or each other. A textbook example of its failure would be a narrative game with a strong anti war message whose primary gameplay verb is firing a gun.

Unifying these disparate thematic contexts is, in my opinion, what elevates the medium, but it is incredibly difficult, especially when working in a large team. The above example is the most commonly discussed form of thematic incoherence, what’s been labelled ludo-narrative dissonance, but when writing about it or building a game it’s important to take a more holistic view. Everything in the game communicates, or facilitates communication of, a message, not just the story and gameplay.

Other Media

We talk a lot, maybe too much, about what the movie industry has to offer the games industry. I would definitely argue that we’ve learned a lot of the wrong things from filmmaking, largely as a result of the scourge of failed filmmakers finding opportunities in our field that were beyond their abilities in the more competitive film industry.

I think, though, that the games industry has transformed the film industry much more than the other way around. The technology that powers that beast would not exist without ours. I’m no expert, but I would also argue that we’ve harmed rather than helped movies: the technological progress largely unlocked by the games industry has facilitated de-emphasising character study and development in favour of plotting and cheap visual effects.

Having said all that, there is one thing the absolute best of the movie industry has become adept at that even the best of us in games still struggle to achieve, and that is preserving thematic consistency in large teams. Other than movies, there is no other art form that demands so much harmony between the thinking of such large groups of people. I know very little about how movies are made, but I believe there are two core components to how they make it work. Firstly, emphasising the core themes early and continuously throughout development to everyone involved in the production. Secondly, the post production editorial process.

Kojima Productions

Kojima Productions is a studio that is particularly successful at marrying themes, or achieving ludo-narrative resonance. Again, I don’t know how they go about doing this, but I believe they do it primarily through the former approach described above: Kojima successfully communicates his thematic vision for the project to the entire team and keeps those themes at the forefront of every conversation throughout development. I think this is done primarily by leaning heavily on the kind of writing that ends up in the final game, sledgehammer subtle, naming characters after their archetypes or writing a four paragraph spiel about the similarities between language and biological replication. This kind of writing makes it much easier to spot ludo-narrative dissonance.

The question is, can we achieve this level of cohesion with more sophisticated writing? What percentage of people watch a film and bother to fully grasp the themes? I know I won’t unless I truly love the film. Is it possible for such a large team’s thoughts to resonate deeply enough? I speculate that it is: by spending time during development bringing the entire team into the same space, and by having the team understand that their contributions do have an effect and should be consistent.


I often say the games industry has an editing problem. We don’t do post production; the game gets built right up until the ship date and beyond. There’s no time to take stock of what you’ve made, whether it reflects your intent, and whether there’s anything you’d like to remove, primarily because, unlike in the film industry, subtractive and transmutative changes are even more expensive than additive ones.

The reason for that is thermodynamics. In games we build systems, lots and lots of state for entropy to accumulate in. Films have a lot of entropy in production too, but the end result is static, a single piece of state. You don’t have to unstir the paint to change it. Because of that, we very probably won’t solve the editing problem.

Instead we have learned to be diligent in editing on the fly. We try to be aware of how the game is shaping up all the way through development. This is virtually impossible for a 100 hour game: no one plays it end to end before it ships and checks it for pacing or thematic consistency. Definitely not the director or producers, who are absolutely swamped in work as the game is getting ready to ship.

One thing we could potentially learn from the film industry, though, is allowing for time between completion and release. This is difficult because timescales are very hard to predict in software development, which is why engineers are reluctant to use time as a measure of anything. A game sitting on the shelf for months is also almost unthinkable, because the technology and the field as a whole are constantly improving and there’s a fear of being left behind. We have enough cautionary tales about perpetual development to rightly fear obsolescence.

It’s possible we’re approaching a period of slowing innovation that could allow us to take the time needed to make relatively cheap subtractive or compositional changes in a form of post production. This could potentially be facilitated by building games in a way that is more amenable to being transmuted, but for that we would need hardware to outstrip technological ambition, which I don’t really believe will ever happen.

A 21st Century Democracy


Why am I writing this article? Simply because I have a lot of thoughts, and sometimes I’m compelled to write them down. Most of the time I throw them away, but sometimes writing them down isn’t enough and I still feel the need to push them out into the black hole of the internet, simply to help me codify my own thinking and stop going over the same ground in my head. I’m under no delusions about my own insignificance or the insignificance of this article, but sometimes you just need to write something for yourself.

History of Democracy

I am not a historian and can’t talk authoritatively about the evolution of democracy, nor would it be inside the scope of this article if I could. That said, to my mind it’s important to understand three critical points along the evolution of democracy as a basis for what is to follow, because in going back to those original intents we can gain insight into the ideal instead of the reality, and perhaps create a new reality closer to the ideal by revisiting the assumptions of the time.

Greek Democracy

Something interesting I learned from Yanis Varoufakis about the origins of democracy is that the original democrats preferred sortition. It was the aristocrats who wanted elections. It says a lot about history, that my spell checker flags the word sortition as an error. It would arguably be more accurate to refer to a democracy, as we understand it, as an aristocracy.

Declaration of Arbroath & Magna Carta

This section is primarily relevant to constitutional monarchies, and it’s debatable how important it is, since in most places this form of government has been replaced by the current cutting edge of political technology, the 350-year-old trias politica. I happen to live in the UK, so it’s important to me, and so I say it’s important.

You will often hear constitutional monarchies referred to as democratic, and it’s important to debunk that claim because it’s the loose thread that has allowed our most powerful republics to unravel. By asserting that the UK is a democracy, and that it is therefore not incumbent on the American and French Republics to work towards liberating its people, the door was opened to assert that more and more authoritarian regimes were legitimate, until we come to the current preposterous claim that the Russian Federation is a republic.

The UK is asserted to be a democracy for two reasons: the first is that it has a sovereign parliament and the second is that it has a constitution. On the face of it these seem like reasonable answers, but they collapse under scrutiny. Ask an American and they can recite their constitutional rights verbatim, because those rights are written down and because they are taught them. Ask someone in the UK about their constitutional rights and they will either tell you they are unwritten or that they are in the Magna Carta or the Declaration of Arbroath. The truth is there is no constitution. The Magna Carta describes the rights of lords, not citizens, and the Declaration of Arbroath simply acknowledges that the king answers to the people; this is far from a constitution. When parliament needs to resolve a constitutional matter, it refers to a class of lawyers who gaze into history, try to find patterns of behaviour not codified by law, and declare those constitutional. This process is understandably and deliberately partisan and inscrutable.

Trias Politica

In answer to this obviously corrupt process, Montesquieu came up with a new system of government, one recognisable as a modern republic: the trias politica. It has proven a very powerful tool in resisting oppression and stands as the state of the art in how to administer democratic government. I won’t go into detail here, you can read about it elsewhere, but it can loosely be described as follows. Law making is the preserve of two houses of elected officials: one larger group elected to represent the will of the people, and another smaller group elected to represent the needs of the people. The former should be representative of the population and the latter should be governmental experts. Once passed, these laws are interpreted and implemented by the independent judiciary, ostensibly legal experts. The third branch of government is the executive; this branch must conform to the laws passed by the other two and to the constitution. It is responsible for taking direct action, and the leader of this group is the elected head of state.

Why Democracy?

To convince people that a new form of democracy is needed, you first need to establish the basis for democracy itself. There are very strong reasons why democracy exists; it might be surprising to learn that there is a mathematical, scientific basis for it. This is something that should be known to every person on earth, because education is the foundation of a strong democracy, especially education about democracy.

The idea of compound intelligence is very old, but the scientific basis for it comes from the jelly bean experiment, which has been replicated and verified in many other scenarios. The jelly bean experiment is simple: people are asked to estimate how many jelly beans there are in a container, and the compound estimate turns out to be more accurate than even the best guesses of individuals in the group. This idea of compound intelligence is the basis for democracy, but it’s important to understand how it works in order to use it; it turns out to be very easy to destroy or subvert the effect if you don’t. The process works primarily because the nature of the question allows for unbiased self elimination, i.e. fringe elements unwilling to participate or intent on subversion will self eliminate from the result, because they’re just as likely to aim low as to aim high and cancel themselves out against other poor guesses. The signal is able to rise from the noise.
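The averaging effect is easy to demonstrate in a quick simulation. This is a sketch, not the original experiment: the jar size, crowd size and noise model below are all assumptions of mine, chosen only to illustrate how unbiased individual errors cancel in aggregate.

```python
import random
import statistics

random.seed(42)

TRUE_COUNT = 1000   # actual number of jelly beans in the jar (assumed)
N_GUESSERS = 500    # size of the crowd (assumed)

# Each guesser is individually noisy but unbiased: their error is just
# as likely to be high as low, so errors tend to cancel in aggregate.
guesses = [random.gauss(TRUE_COUNT, 250) for _ in range(N_GUESSERS)]

crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_COUNT)
individual_errors = [abs(g - TRUE_COUNT) for g in guesses]

print(f"crowd error:             {crowd_error:.1f}")
print(f"median individual error: {statistics.median(individual_errors):.1f}")
# The crowd's error is typically a small fraction of a typical
# individual's error -- the signal rises from the noise.
```

The key property is the unbiased noise: if the guessers coordinated, or all erred in the same direction, the averaging would stop working, which is exactly the fragility discussed above.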

A New Democracy


In understanding the wisdom of the crowd and how easy it is to subvert, you begin to see that modern republics have been designed, deliberately, to dampen the signal as much as possible through quantisation. In compound intelligence, quantisation is never to be trusted; ideally only analogue signals are to be trusted, as close to the undoctored individual intent as possible. A binary or multiple choice is quantisation; gerrymandering people into discrete regions is quantisation. It follows that in order to preserve the compound intelligence, when you go to the electorate you must be very careful in how you solicit their vote: at the very least it must be an analogue decision, and it must not be regionalised and composed. In fact, any kind of composition other than analogue constructive interference will diminish the signal.
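A toy example makes the quantisation argument concrete. The votes and district boundaries below are invented purely for illustration, but they show how composing quantised regional results can report the opposite of the raw aggregate:

```python
# Nine voters cast a binary (already quantised) vote: 0 or 1.
votes = [1, 1, 0, 1, 1, 0, 0, 0, 0]   # popular vote: option 0 wins 5-4

# The same voters, gerrymandered into three districts of three.
districts = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]

popular_winner = 1 if sum(votes) > len(votes) / 2 else 0

# Regional composition: each district reports only its majority winner,
# then the district winners are themselves aggregated.
district_wins = [1 if sum(d) > len(d) / 2 else 0 for d in districts]
regional_winner = 1 if sum(district_wins) > len(district_wins) / 2 else 0

print(popular_winner)    # 0 -- the raw aggregate
print(regional_winner)   # 1 -- the quantised, regionalised aggregate
```

Each round of quantisation throws away information about the margin, which is what allows the composed result to diverge from the underlying intent.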


The scientific term for diversity is entropy, and one of its most defining attributes is stability. This stability is why a rewilded earth is preferable to one completely controlled by humans, and why a company with diverse leadership will statistically outperform other companies; it’s the basis for evolution and part of how the pseudoscience of eugenics was debunked. So injecting diversity into your government is desirable because it induces stability. In the UK, first past the post (i.e. vote quantisation) has been heralded as stability inducing, but this is provably false: it induces a lack of diversity, which seems more stable until it collapses, as we’re currently witnessing.

True stability is borne of diversity, so how do we achieve it? My preference is to go back to the insight of the original democrats: sortition. We could just toss a coin, read tea leaves, or make decisions based on the background radiation of the universe, but that seems a little too chaotic to me. Representative politics is desirable because it brings our acts as a group closer to our will; it keeps us from finding alternative, destructive modes of expression. What better way to get an unbiased representative than through sortition? It’s apparent to anyone that democracy is suffering from an increasingly “experienced” lower house problem. Political power is loath to give up its control of this body, and so stubbornly refuses to yield control of this so called representative group. Sortition, to me, injects the diversity needed to keep a congress stable AND representative; the current polarisation is obviously not stable and will inevitably collapse.


One of the primary weaknesses of the trias politica system is the judiciary: its independence and its concentration of power. It’s the first place an authoritarian regime will attack, because once captured, an organisation can control how and which laws are enforced or ignored. It is also, by design, extremely difficult for other organisations to wrest control from bad actors. It’s clear to me that a modern attempt at improving democratic robustness should involve an attempt to prevent judicial subversion. Under a jury system of law, sortition is used to select random people (“peers”) responsible for finding facts; it’s the jury’s job to establish what actually happened during the events under scrutiny. Once the jury has established the facts of a matter, it then becomes the responsibility of a judge to make findings of law. A judge is ostensibly an expert interpreter of laws set by parliament, but it goes without saying that in practice we’ve found their findings to be highly subjective and ritualistic (i.e. each will consistently emphasise laws they prefer and de-emphasise laws they dislike). There is also an inherent and important dialogue between the judiciary and a parliament that must take place to ensure laws don’t undermine the fabric of the democratic system: a functioning judiciary will send back poorly written laws that conflict with higher laws (like a constitution).

Automated Findings of Law

The human expert requirement of a judiciary predates computers; I don’t believe anyone designing these systems today would design one free of computation. The scale of knowledge required to interpret these laws has ballooned massively, and the parliaments of today are in session for much longer and are much more reactive than originally conceived. This creates fertile soil for selective interpretation. I assert that a modern republic should have a mechanism for automated finding of law. This would remove the need for experts in the process and let democracy into the traditionally oligarchic judicial branch. It would also allow a parliament to check its laws in real time for legal contradictions before they’re made into law, removing the time consuming and wasteful process of the judiciary waiting for laws to be tested in a court before sending them back to parliament. The difficulty of a system like this would be in ensuring its security.
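As a very loose sketch of what automated finding of law might look like, here is a toy consistency checker. The propositions and “laws” are entirely hypothetical and real statutes are nowhere near this tidy, but it shows the core idea: encode laws as machine-checkable rules and test a proposed law for contradiction before it is enacted.

```python
from itertools import product

# Hypothetical laws over three propositions, each written as a predicate
# on an assignment dict; an assignment is lawful if every rule holds.
laws = [
    lambda a: (not a["detained"]) or a["charged"],  # detention requires a charge
    lambda a: (not a["charged"]) or a["trial"],     # a charge requires a trial
]

# A proposed new law that (deliberately) contradicts the above.
proposed = lambda a: a["detained"] and not a["trial"]

def consistent(rules):
    """Brute-force satisfiability: is there any state of the world in
    which every rule holds simultaneously?"""
    names = ["detained", "charged", "trial"]
    for values in product([False, True], repeat=len(names)):
        assignment = dict(zip(names, values))
        if all(rule(assignment) for rule in rules):
            return True
    return False

print(consistent(laws))               # True  -- existing laws are coherent
print(consistent(laws + [proposed]))  # False -- new law contradicts them
```

A real system would need a far richer logic and an agreed formal encoding of statute, which is precisely where the security and legitimacy problems discussed below come in.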


Threat Modelling

There are very many ways a democracy can be subverted, and it’s very difficult to know in advance exactly what an ingenious bad actor will discover about a system, but we do know the playbook for the current system well: capture parliament, use it to create popular dissent, use the dissent to corrupt the judiciary, remove education and public polling, and create artificial gender and race based stratification to divide the people and ensure they don’t organise together.

Some of these would still be relevant and would need to be guarded against in a permanent constitution, outside the reach of any organisation, political or otherwise. The two I believe can be prevented today with computer security are protecting any kind of polling and protecting judicial finding of law; I believe this is possible with cryptographic blockchains.

Cryptographic Blockchains

Every computer scientist knows that there are valid uses of blockchain technology and invalid uses. Unfortunately, blockchains have a poor reputation because of their popular use in financial pyramid schemes like cryptocurrency. It’s less well known that the underlying structure, an append-only cryptographic hash chain, is also used in plenty of legitimate systems, such as the Git version control system and certificate transparency logs. It is my belief that we could establish a system of law making that uses distributed blockchains to encode laws, allowing us to automate the process of finding law. There are also many experiments in progress exploring secure blockchain based voting. Until these systems have been robustly verified, I believe there is good reason to maintain a parallel blockchain and traditional voting system. As sortition plays an important role in a future system, we will also need a secure source of entropy, easily verifiable by anyone, anywhere in the world, something like cosmic radiation, except that it would need to give the same data anywhere in the world. As far as I know this is an open question, but an important one. This field is under active development, so the technologies employed would probably have to change as our understanding grows and would therefore need to be laid out in laws. However, the broad strokes, like public access to the blockchain, would need to be encoded in an immutable constitution.
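The legitimate core of the idea, stripped of currency, is just an append-only hash chain. Here is a minimal sketch; the block layout and helper names are my own invention, not any real system’s:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over the block's canonical JSON encoding."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_law(chain, text):
    """Append a new 'law' block, linked to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev": prev, "text": text})
    return chain

def verify(chain):
    """Anyone holding a copy can confirm no historical entry was altered."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_law(chain, "Law 1: detention requires a charge")
append_law(chain, "Law 2: a charge requires a trial")
print(verify(chain))        # True

chain[0]["text"] = "Law 1 (quietly amended)"
print(verify(chain))        # False -- tampering is detectable
```

Because each block commits to the hash of its predecessor, rewriting any historical law invalidates every later link, which is the property that makes a public, replicated record of statute auditable by anyone.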


In this article I’ve tried to establish the idea that the trias politica is the cutting edge system of democratic government, and also that it is old and outdated: its failings are very apparent in the modern era, and the process of improving democracy is and should be an ongoing one. I also laid out where I see the most prominent weaknesses of the system, and my own ideas on how to fix them by wielding my area of expertise and the 21st century’s technology, computer science.

Cloud Gaming Will (probably) Never Work

There have been many, many failed cloud gaming initiatives over the years, and the discourse around them never seems to grasp the interplay between technology and market economics that dooms every effort before it begins.


The first and most obvious question around cloud gaming is: why? There are many legitimate consumer focused answers to this question, but streaming media has never been driven by consumer benefits; its primary motivation is control and the concept of rent seeking. This is why Spotify, Netflix and YouTube don’t (by default) cache more than a few seconds at a time, when they could cache whole movies locally and even speculatively cache media, making their own services both better and cheaper to run.

In today’s world, individuals are in control of immense amounts of computing power, and the need for it is almost entirely driven by the games industry. Corporations would like control of those devices; they would like the consumer to depend on them for the right to access that power, and cloud computing generally, and cloud gaming specifically, is how they would like to achieve it.

The concept of rent seeking is old; it essentially means finding a way to turn a product you sell into a service you rent. The End User License Agreement has allowed corporations to transition almost any product, especially one that uses software, into a service instead. Throughout history rent seeking has been aggressively legislated against, because the sale of goods is the cornerstone of a capitalist market. When you sell a product, you are legally required to transfer all rights over that product to the purchaser; with a service model you don’t have to transfer any rights, you can lay out exactly how the end user is allowed to use their purchase, and you can collect money from them on any schedule the service market will support. It has the potential to kill market economics and capitalism by transferring all consumer rights to corporations.


The popular criticism of cloud gaming revolves around latency: there are physical limits to how fast data can move from the server to the client, and this does hold true, meaning a certain category of game is likely never going to be competitive with current gen consoles and PC games on latency. There have been many excellent technical solutions to work around the problem, but those have mostly also been adopted by consoles, and were never a problem on PC to begin with. So the streaming services got their end to end latency to match the ~100ms of last gen consoles just as this generation improved its technology to match the PC’s sub 50ms latency, and at this point the streaming services have nowhere left to go, because the only latency left is network latency that is physically impossible to remove. This criticism does ignore the fact, however, that most games have been running at these latencies for years and no one complained.
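To see why the network leg is the dead end, it helps to lay the latency budget out explicitly. The figures below are rough assumptions for the sake of the comparison, not measurements of any real service:

```python
# Illustrative end-to-end latency budgets in milliseconds (assumed values).
local_pc = {"input": 2, "game + render": 33, "display": 10}

# The cloud pipeline keeps every local stage and adds three more; the
# network round trip is the one stage physics won't let you remove.
cloud = {"input": 2, "network round trip": 30, "encode": 8,
         "game + render": 33, "decode": 5, "display": 10}

for name, budget in (("local PC", local_pc), ("cloud", cloud)):
    print(f"{name:>8}: {sum(budget.values())} ms total")
```

Even with a generous network assumption, the cloud path carries every cost the local path does plus encode, decode and transit, which is why it can chase but never catch local latency.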


The thing that really dooms cloud gaming, however, is thermodynamics. The cutting edge of datacentre research is all about how to keep the servers storing the data cool; cooling is a huge part of the cost of running a datacentre. And the thing about storage is that it is comparatively cold when compared to compute: stick a gaming class CPU and GPU in there and you need a lot more cooling. Compare that with a games console in someone’s home, where in most cases it helps to heat the home; rather than being a problem that needs to be dealt with, it’s a tiny benefit. Even in countries that use air conditioning, the console doesn’t contribute significantly to the individual’s cooling bill. This adds up to the fact that running cloud gaming infrastructure is, on balance, more expensive. The only way to provide it competitively is for the corporation to subsidise it, in the understanding that it will confer benefits elsewhere, primarily that they will get control of consumer compute and get to dictate what runs on it. So far no company, including Google, has been willing to eat the costs in exchange for the power. The companies still in the market (Microsoft and Amazon) have, however, historically proven capable of making these kinds of long term power plays.

Another possibility for rendering it sustainable is a hybrid model, where you can start a game via streaming while a download runs in the background and takes over when finished. The cost of the datacentres could then potentially be offset by the impulse purchases this facilitates, while actual usage of the service is reduced to where it is genuinely beneficial. This wouldn’t support a subscription model, however, only a traditional digital distribution model.


The heat problem should be enough to render the idea dead on arrival for all but the most committed corporation, but on top of that there is the reliability issue to contend with. With streaming video or music, it’s infuriating when the connection degrades and triggers stalls, but compared to gaming they are infinitely more tolerant. Again this depends on the game, but in some games it only takes a couple of dropped packets to render hours of work undone. Compound on top of this the encoding artefacts degrading the visuals, internet outages, and the multitude of other reliability issues that come with making the global internet an integral part of your gaming experience, and it’s a very difficult problem to solve. The touted solution is moving the compute closer to the player, sometimes even as close as their local exchange. This is obviously not a solution, mainly because if the service provider has any hope of offsetting the significant costs of cooling the hardware, they need to share the same hardware between multiple people, and the closer the machine is to the end user, the fewer end users can timeshare it. It’d be cheaper and less complex to rent them a console in their own home, an approach Microsoft has already rolled out on account of its comparative simplicity.

Compound on top of all the issues above the fact that the bandwidth requirements have always been extremely optimistic. When Stadia touted its recommended internet connection numbers, it assumed that one person playing games was the only thing happening on the line. Any home with multiple people playing games, plus everything else going on, would need to multiply those requirements. For some people with gigabit-plus connections, things are starting to move to a place where bandwidth won't be a limiting factor, but it's worth bearing in mind that those links currently cost 50-100% extra, and those costs need to be accounted for in the value proposition to the end user.


All these things combine: some are solvable with great investment in infrastructure and technology, some will need to be addressed via market positioning, and all of them make the net cost of the technology higher than the comparatively simple, better alternatives already available. It makes for an incredibly steep hill to climb for any would-be cloud gaming provider, which goes some way to explaining why GeForce Now (probably the most palatable cloud gaming service available) has recently had to increase its prices, and why cloud gaming will ultimately, most likely, fail.

Amnesia Rebirth critical analysis

Frictional Games are one of my favourite developers and it's been 5 years since Soma, so it goes without saying I was eager to play their latest game, Amnesia Rebirth. Despite not being particularly invested in the Amnesia franchise, I did think the new setting was an interesting and potentially exciting way to mix up the schlocky original.

Somatic Response

Soma is, by far, Frictional's best game to date. Leaning away from overt horror and into more philosophical sci-fi with light horror elements was a winning formula. It's clear a lot of the lessons from Soma have carried over to Amnesia Rebirth; they don't spend too much time leaning on the stealth evasion horror mechanics they helped define.

However, where the plot of Soma was exceptionally well delivered, Rebirth's is sparse and difficult to understand, largely because your character spends so much time in the dark, both figuratively and literally. The darkness mechanic means it rarely feels like you are free to explore the world and the plot, and the sickness the character is experiencing often creates a sense of urgency that precludes exploration. This differs greatly from Soma, wherein there are clear parts of the game where you are not in peril and are free to explore and absorb the story.

The other area it falls short narratively is in character interactions. Most of the plot is delivered retroactively, and most of the game takes place during a journey long after all the main players are gone, so you very rarely talk to other people. This contrasts with Soma, where the main character, Simon, can talk about his predicament with Catherine, which gives you more reason to care about what is happening to him. In Rebirth there aren't really any character arcs; the exposition is sparse and mechanical. It fills in the gaps, but it doesn't really give you a reason to care about what has happened and what will, at least not as much.

The Alien in the room

When Soma came out in 2015, Alien Isolation had been out for less than a year, but since Soma barely used stealth evasion horror mechanics at all, Frictional didn't need to withstand the comparison. With Rebirth, however, they have failed completely.

Their first failing is in environment design. The environments are frequently dark, difficult to read and difficult to mentally map. There's often no clear goal, and the evasion horror sequence is often the first time you've entered the space, so mapping it takes priority over evading. On top of this there's almost always nowhere to hide, so once an enemy is upon you, there are no options available to you.

Secondly, the enemies are incredibly difficult to predict, primarily because you are punished for looking at them. I understand Frictional consider this a feature, but it makes for frustrating gameplay. Two key mechanics of Alien Isolation are that you can observe the Alien, both on the radar and within a restricted field of view, from areas of relative safety, and that the Alien makes a show of backing off, refusing to engage in tropey patrols and, importantly, allowing the player to make progress in apparent, relative safety. The knowledge in Alien that there is only ever one antagonist is key to maintaining a reasonable mental load; tracking multiple antagonists was simply too much for me, and knowing there was no real penalty for failure sometimes led me to give up and run for the door. I usually failed, but it didn't matter.

Finally, the darkness mechanic is probably the game's biggest failure. Wanting to be in a well-lit space is antithetical to hiding, and trying to balance being lit with not being seen was infuriating. It compounds with the issues listed above to make the evasion gameplay unsatisfying; most encounters ended with me being caught and "skipped" past the encounter. I don't think this was the intention.

Ageing Technology

It’s clear that frictional’s engine is showing it’s age their ambitions are clearly being frustrated by it, it’s often but not always ugly and this detracts from so much of what they are trying to achieve, it feels like many of the game mechanics are designed to work around flaws in the engine rather than satisfy the desired player experience.

I’m a big advocate of rolling your own tech and it would pain me to see frictional switch to an off the shelve engine but game engines have come a long way since 2015 and this engine was starting to show it’s age then. There’s been no meaningful improvements in that time and If you are going to roll your own tech, you need to invest time in it to keep it up to date. I’d much rather see frictional realise their vision in a more modern engine than see them compromise it so deeply in their own tech.

Outer Wilds critical analysis

I quite often talk about ludodiegesis in this article. It's a model of relating to game mechanics that I take from this excellent blog post by Robert Yang about a paper by Dan Pinchbeck – https://www.blog.radiator.debacle.us/2011/04/ludodiegesis-or-pinchbecks-unified.html

Initial impression

The first hour of Outer Wilds is a bafflingly ordinary and unsatisfying game. I understand why it's there: people are scared of being dropped into a world, and we feel like players need to be eased in, but frankly the first hour is quite conventional, boring and difficult. I'm not sure how, or even if, it's possible to introduce a game like this better, and I certainly can't think of better examples. Pathologic, for example, just drops you into its (admittedly hostile) world, and I'm quite confident the vast majority of people who play that game bounce straight off.

If you played only the opening of Outer Wilds, you'd be forgiven for believing it was going to be an under-polished, linear, narrative game with a quirky mechanic, but that couldn't be further from the truth. Even once I got to explore the world, I didn't fully take to it until I discovered the narrative web in the computer on my spaceship.

Core loops and Metagames

Once you’ve played Outer Wilds for a few iterations of it’s time loop, the narrative web becomes the central component of the game. Most, if not all games are built this way, they have a core loop which occupies most of your time and strives to be enjoyable in some way and on top of that they build a metagame which usually delivers some form of satisfying progress, typically by feeding back into the core loop.

Subversive metagame


Even games like those of 11 bit studios, such as Frostpunk and This War of Mine, try (and in both those cases succeed, exceptionally) to cleverly reconcile the need for an enjoyable core loop with a difficult message. That message is delivered through a brave and unique metagame which deliberately isn't satisfying, and in those games that approach works brilliantly.

Subversive core loop


Pathologic and other games by Ice-Pick Lodge often go the other way: their core loop is a slog, difficult and clumsy and painful, but they still manage to grab the players who stick around by establishing a rich metagame founded in traditional, satisfying progress. This model usually doesn't work, at least not for 90% of players, who quickly discard the broken core loop before getting to any of the satisfying progress.

Earned core loop


There’s a third school of ‘subversive’ game development that masquerades as a game with a punishing core loop but is actually holding it back until you’ve earned it through progress. From Software are the obvious masters of this technique but the Resident Evil series also stands out as a good example. Initially the controls feel broken, the game feels overly taxing and then through progress a satisfying core loop emerges. Outer Wilds belongs to this last group of games, except the thing that makes it special is that the game doesn’t actually award you any new verbs or improve your existing arsenal. The only game I can think of that shares this trait is the 2018 insta-classic Return of the Obra Dinn.

Player Progress vs Avatar Progress

Instead of awarding you new powers, the world of Outer Wilds has been crafted in such a way as to allow knowledge to become your upgrades. It achieves that rare feat of making the player feel it's them who is improving, not their avatar.

*This paragraph has a possible gameplay spoiler*

As an anecdotal example: on the planet of Brittle Hollow there's a settlement called the Hanging City. Initially it's only accessible by getting there as fast as you can before the route is destroyed by the planet's volcanic moon. Once you explore the Hanging City you'll discover a path to the surface which will enable you to get back there much more easily, alleviating the time pressure that otherwise exists. You now have a new ability – you can explore the Hanging City whenever you like – but it was delivered purely by learning about the world. This is a rarer and arguably more satisfying metagame than the traditional verb unlock/power-up.

Meaningfully Open World Design


A lot of games manifest their metagame through a ludodiegetic interface, and honestly Outer Wilds is no different. These interfaces often portray the true framework of the metagame: if it's a quest log, the metagame becomes a list-checking exercise; if it's a map of icons, the metagame becomes an icon-janitor game. Outer Wilds' metagame manifests via a narrative web. Its design allows Outer Wilds to become a meaningfully open world, where the things you do matter in relation to the other things you do. The normal open world metagame (icon janitor) doesn't achieve this, and so the order you clean up the icons in stops having any in-world meaning.

Meaningful Decisions


Another way Outer Wilds is similar to Dark Souls is in taking something players have been trained by other games to avoid (dying) and removing the sting from it, but unlike Super Meat Boy (which sands down all the edges to make dying as cheap as possible) it tries to preserve the Pavlovian avoidance rather than retrain players to become comfortable with it. There's a lot of bluster and performance in the act of dying in both Dark Souls and Outer Wilds.

Both these games do this because they're not really interested in actually punishing the player; they want to imbue otherwise meaningless actions with meaning, like pushing too far into the unknown in Dark Souls and risking your souls (which are both fairly easily retrieved and not all that valuable), or forgetting to put your space suit on before leaving the spaceship in Outer Wilds. Outer Wilds' time loop requires the player to perform a lot of actions repeatedly and could easily have streamlined things right down to the bare essentials, starting you in the ship with your suit on, for example. Mobius Digital very deliberately and cleverly didn't do that, because all these ludically meaningless decisions (it's not a traditionally meaningful decision to leave the ship without a suit) are imbued with a grander immersive meaning. It allows you greater immersion into the world than the obvious, economical approach would.

Outer Wilds is a great game which exhibits an excellent grasp of how to build a compelling and truly open world for players to explore. I'd love to see some of its ideas used to explore more characterful interactions and worlds without the time loop framing; I think it could really be a game changer for narrative games, closer to the narrative lego idea in Ken Levine's 2014 GDC talk.

If you have any questions or comments feel free to @ me on twitter.

Unreal Game Sync – Migrating to internal UE4 Builds

I decided to write up my recent experience migrating from a launcher installed version of Unreal Engine 4 to an internal build because the documentation is extremely sparse and it took me a lot of time to put together the information needed to perform what turned out to be an extremely simple task.

What is Unreal Game Sync and why should I use it?

Unreal Game Sync is a tool built by Epic that interacts with Perforce to aid in developing Unreal Engine 4 projects. It has quite a lot of features on top of P4V, but the primary use, to my mind, is distributing internal engine and project builds.

If you started developing in a launcher build of UE4 and want to migrate to a build you compiled yourself, you'll need to tackle the issue of distributing those builds without requiring everyone who works on the project to build the engine themselves. The simple way is to dump the entire built engine into your version control system, and that works well if code changes are rare and broken builds are a minor inconvenience, but if you can't risk breaking the editor with your changes you will need to use something like UGS.

Unreal Game Sync solves this by letting you build the engine once, zip it up and store the zipped binaries in Perforce. When UGS pulls a new changelist it will also fetch the corresponding engine (and game) binaries and automatically extract them into place, (in theory) ensuring the binaries are always in sync with the content.

Migrating to an internal build of UE4

UGS expects your project to be set up in a very specific way, and Epic (or anyone else, as far as I can tell) haven't documented what that is. If you started developing with a launcher version of the engine, your project will be in a folder of its own with a .uproject file and various folders for binaries and content. The .uproject file is a text file containing various settings; one of those is EngineAssociation, which is used to determine which copy of the engine to load the project with. If it's a launcher build it will read something like '4.23', and if it's a locally built engine it will be a generated GUID.

The problem with this is that every machine will have a different GUID, so the best solution for internal builds is to remove the EngineAssociation field entirely. When there is no EngineAssociation, opening the .uproject file will first search up its directory tree to find an engine, and if one isn't found it will prompt the user with an engine selection dialogue.
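As an illustration, a minimal launcher-created .uproject looks something like this (the module name here is hypothetical); for an internal build, the fix is simply deleting the EngineAssociation line:

```json
{
	"FileVersion": 3,
	"EngineAssociation": "4.23",
	"Category": "",
	"Description": "",
	"Modules": [
		{
			"Name": "MyGame",
			"Type": "Runtime",
			"LoadingPhase": "Default"
		}
	]
}
```

With that line removed, the directory-tree search described above takes over and finds the engine checked in above the project.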

So we want to remove the EngineAssociation and place the engine files downloaded from GitHub above the project folder in the Perforce repo, i.e. the engine will be placed at –


and the game project will be located at –


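As an illustration of the intent (the depot and project names here are entirely hypothetical), the resulting layout puts the engine folders one level above the game project:

```
//Depot/Main/Engine/...               (engine source from GitHub)
//Depot/Main/MyGame/MyGame.uproject
//Depot/Main/MyGame/Source/...
//Depot/Main/MyGame/Content/...
```

The key property is just that the .uproject's upward directory search can find the Engine folder.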
On top of this I had to mark the following file as writable in perforce, since the build touches it –


Once we have our Perforce repo in this form, we can run the Setup.bat file to download the engine's binary dependencies. I opted not to store the dependencies in Perforce and instead require Setup.bat to be run once on each machine. I'm not certain this was the best approach, since it doesn't scale effectively to large teams; pushing and maintaining the dependencies in Perforce may be better.

After that we need to run GenerateProjectFiles.bat and start up Visual Studio. If it's working as intended, UE4.sln will contain the game code as well as the engine code, and the old game solution file won't be required any longer. This approach allows you to view, step into and debug engine code while debugging the game. From here you can build the engine.
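The steps so far can be summarised as a command sequence, run from the root of the engine checkout. This is a sketch: the solution and configuration names are the standard UE4 ones, and your build configuration may differ.

```
Setup.bat
GenerateProjectFiles.bat
rem Open UE4.sln in Visual Studio and build the "Development Editor"
rem configuration for Win64; this builds both the engine and the game modules.
```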

Once the engine and game are built, you should be able to start the locally built editor by double clicking the .uproject file. Before we go on to set up UGS, we should build the zip archive and push it to Perforce, and ideally set up a continuous integration system to automatically build and push new binaries. I used Jenkins to do this, but there's plenty of documentation on CI solutions and it's outside the scope of this article. The command for building and pushing the archive can be found on the UGS reference page here –

UGS Reference

Before running this script on a machine, you will need to make sure the Windows SDK has the Windows Debugging Tools component installed, as the build process uses pdbcopy. You can install the Debugging Tools by modifying the Windows SDK install via the Settings -> Apps -> Apps and features panel in Windows, locating the Windows SDK and selecting Modify.

  Engine\Build\BatchFiles\RunUAT.bat BuildGraph ^
    -Script=Engine/Build/Graph/Examples/BuildEditorAndTools.xml ^
    -Target="Submit To Perforce for UGS" ^
    -set:EditorTarget=MyGameEditor ^
    -set:ArchiveStream=//Depot/Dev-Binaries ^
    -p4 -submit

(MyGameEditor and //Depot/Dev-Binaries are placeholders for your own editor target and binaries stream.)

The script and target options define the process for building, zipping and uploading builds when there have been code changes. EditorTarget is the name of the editor .target.cs file in the Visual Studio game project; there are more options too, including a GameTarget option for building the standalone game.

The ArchiveStream is the location where you want to place the zipped binaries. It wasn't immediately clear to me how Epic wanted this Perforce stream structured, so I'll go into a bit more detail about how I believe they intend it to be laid out. Epic require the zipped binaries to be placed in a stream depot in Perforce, and I believe they want the binaries in their own mainline stream (in this case named Dev-Binaries). I'm not certain why they'd require this, since it doesn't seem to be the kind of thing you'd branch, but who knows ¯\_(ツ)_/¯.

Setting up UGS

UGS is a tool which Epic distribute as source code along with the engine; the Visual Studio solution can be found in Engine/Source/Programs/UnrealGameSync.

You need an Epic Games account linked to GitHub to see it there.

Building and running UGS is fairly well documented in the UGS Reference.

UGS uses an odd update mechanism: there is an installer that installs a launcher, and the launcher fetches the tool from a hardcoded Perforce path. So before building the code, you need to install WiX 3.8 and change the DefaultDepotPath string in DeploymentSettings.cs to point to a Perforce location where the UGS launcher can find updated binaries for UGS. Once that's done, build the installer and UGS, commit UGS to Perforce and then install the launcher. If it's working correctly, you'll be able to start up UGS, point it at your .uproject file and it will show you your changelists.
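For reference, the change in DeploymentSettings.cs amounts to pointing one string at your own depot. The path below is a hypothetical example, and the exact declaration and file location can differ between engine versions, so treat this as a sketch:

```csharp
// In Engine/Source/Programs/UnrealGameSync (DeploymentSettings.cs).
// Point the launcher at the Perforce folder where new UGS builds are submitted.
public static readonly string DefaultDepotPath = "//Depot/Tools/UnrealGameSync/Release";
```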

From here you should be able to turn on the 'Sync Precompiled Binaries' option, and any changelist which has a corresponding binary zip in Perforce will no longer be greyed out.

Before rolling the tool out, everyone who doesn't have Visual Studio will still need to run Setup.bat and install the latest vcredist, because the one installed by Setup.bat will probably not be the one you built the engine against and will be out of date.

If you’re trying to minimise distruption the last step will be pushing the modified .uproject file to source control along with any updated plugins, allowing for a seamless transition from the installed build to the internal build.


Hopefully this information can help shorten the UGS setup time for others; the whole process turned out to be fairly simple, but I had to work out quite a lot of it myself due to the absence of documentation. If anyone has better ideas for how to integrate or streamline all this, please let me know in the comments or on Twitter.

Tacoma critical analysis


Alaska and Tacoma


It’s been a while since I posted anything on my blog, mainly because I haven’t had any time to do anything for most of 2017. That includes working on Alaska, which is still nearly finished but I don’t plan to restart active development on it until 2018, when I’m hoping to have more free time.

SPOILER WARNING: There are potential plot spoilers for Tacoma in this post!

This post, however, is about Tacoma. Gone Home is the game that inspired me to try to make Alaska, and I find it interesting that there are so many parallels between where I wanted to take the genre next and where Fullbright did. Both of us decided adding humans into the mix was the next step; both expanded the space a little and chose a space station as the logical setting for that. When I started working on Alaska, I didn't anticipate I'd be playing Fullbright's next game before it was finished. I've always been over-ambitious about things that don't seem to matter to other people, and the two prime areas of Alaska I worked on most were the lighting system (because I'm a programmer) and the dynamic characters.

Meaningful Decisions


The dynamic characters were an attempt to inject meaning into the first person adventure genre. A more living world would lend your actions greater consequences – destroying a bridge only matters if someone wants to cross it. Brilliantly, Fullbright opted for another mechanism, and it's genius: a boring alternative route to success. Fullbright have twice now resisted giving the player any way to impact the world, and they've gotten away with it because they've done it expertly. Since the inception of branching narratives the entire industry has been obsessed with the idea that they exist to engender replayability, as if you're trying to squeeze multiple games into one and success is when the player plays them all. This thinking can be seen most prominently in the critical acclaim for Nier: Automata (a game I loved) and its complete disrespect for the player's time in requiring you to play through the same game three times before delivering any narrative conclusion. I've always asserted that branching narratives derive most of their value from communicating to the player that their decision mattered, because a different choice would have had different outcomes. I feel it's fine to explore the potential outcomes, to explore causality, but it's also important to recognise that the desire to fully explore and fully realise the system is a futile one. It doesn't map well onto real world systems; it's a power fantasy. Fullbright have implemented the core value of a branching narrative, without incurring the massive development overhead, by offering the player exactly one other branch. One that is so intensely uninteresting that no player would ever opt to take it except as a feat of extra-ludic endurance. They even deliberately refused to add an achievement for it because they wanted it to be clear: there is no game here. This option exists solely to make the decision to explore the station yours.
It doesn't matter how dire the consequences of not doing something are; it always matters to us that we had the choice. It's why we live in permissive, punishing societies as opposed to preventative ones – you won't be dragged kicking and screaming to your work, but you will lose your house if you don't go.

Dr Manhattan, not Shodan


The plot of Tacoma reveals, in my opinion, an exceptional grasp of the themes of corporate culture, the information revolution and artificial intelligence's role in both. Most discussions of artificial intelligence frame the threat as a separate, sentient, animalistic predator that opts, in one way or another, to operate without humans – the idea that it won't want us around. The real threat of artificial intelligence is actually in how we use it and how it uses us, and Tacoma tries to get to the heart of that idea. We already live in a world controlled by artificial intelligence, and have for quite some time. It doesn't direct us or take direction from us by taking on a human ego and having a verbal dialogue. It does it by selling stock in sock manufacturers and investing in cotton buds. It controls us by giving us the best congestion-free route to Disneyland. The AI in Tacoma is a projected version of this idea: it gives CEOs a choice between doing something immoral and suffering certain negative consequences. Disobeying Google Maps would be working against our own interests, just as the CEO dismissing the AI's projections would be. It's my impression that the player's own non-choice of sitting and waiting for hours for the transfers to complete is a deliberate mirror of this idea, gameplay reinforcing the core theme. The threat of AI in Tacoma, just as in real life, is that we will self-select ourselves out of existence, because it makes sense to have AIs do more and us do less. We need fewer of us, so WE choose to reduce the population, primarily through natural processes but also through atrocities like the one in Tacoma.

An idea central to the game is that the AIs continue to serve, but by serving us they annihilate us. Natural selection, the passage of time and the march of entropy are fundamentally linked; they are three facets of causality. Just as atoms formed molecules, proteins started replicating, replicating proteins formed more complex cells, cells formed organisms and organisms developed sentience, so sentient organisms will come together to form a more complex thing, and we are no more capable of discerning it or its motivations than a cell is of understanding us. Tacoma takes these grand projections – ideas about the nature of humanity, where we are going, transhumanism – and brings them right down to earth (although not literally). It focuses in on the details of the lives, the importance of the things that happen in the lives of a small group of diverse people.

Diversity is always better


Diversity is another of the key themes of Tacoma. I've heard many people say that diverse representations make for more interesting experiences; I've always thought that would be the case, and it's something I am trying to do in Alaska too. I feel the lack of diversity in modern Hollywood is a key reason why it is so utterly stagnant. One of the most defining films of my childhood was Alien, and I don't think it's an overstatement to say that Sigourney Weaver's portrayal of Ellen Ripley was critical in developing my understanding of women. Films of that era had multiple black characters, and there were more than 3 black actors playing those black characters. I feel Hollywood has started to make progress on these issues again, and it's refreshing.

Computer games haven’t even begun to scratch the surface of the potential for diverse characters. One reason for this has been a technical limitation. Skeletal animation systems like similar shaped skins. If the skin is very different, an animator needs to be careful not to animate in a way that the bigger ones clip, or the thinner ones seem like their limbs are floating, it’s hard and it’s only recently been possible to have a feasible system to counter this problem, either by dynamically modifying the skeleton and animations to fit the skin better or hand craft a skeleton & it’s animations for a wide enough body of representation. Recently a few games have went to great effort to make sure the people in the world are named, individuals with their own personalities and faces, I think it’s an exciting evolution in games and expectations that will hopefully do away with the stormtrooper syndrome that infests most games, every character with a convenient, identical face covering.

Fullbright again played to their strengths when diversifying their cast. They didn't have the resources to create fully realised, unique-looking characters; instead they wove AR into the fiction of the game, building it from the ground up so that AR simulacra could plausibly represent the characters within the budget of a small studio. They could then focus on the areas of diversity they wanted to emphasise: body shape and personality. The characters in Tacoma are different from each other, and the different body shapes make it easier to distinguish them, even without faces (although the colour coding admittedly does most of the work). The different personality types let you get to know them.

When watching dialogue I often try to predict what characters are going to say, and when I guess correctly I find it telling to analyse whether it was because the character is an archetype I have prior understanding of, or because I have come to understand the individual. In my estimation none of the characters are archetypes. They don't create conflict for each other, except in plausible, small ways; generally they are all working together towards a common end, and deviations in their idioms create interesting, plausible dynamics. In general the game shows that lots of very different people can get along swimmingly. I felt there were a few moments when the game was signalling some grand conflict, like when Andrew was hesitant to go into cryo, or when Sareh didn't tell Natali about her medical condition, or when it was clear something was going on between ODIN and Natali. The game deliberately made nothing of these issues, however; they were blips on the road. It had something more important to say. Although it did have a satisfying conclusion, it didn't feel to me that that was the point of the game; it was the journey. Just like your so-called mission to collect ODIN by watching a progress bar wasn't really the point.

G2A is right and you are wrong


I’ve seen and had a lot of back and forths over the last few days about G2A and key reselling and almost everyone I’ve seen talk about it seem to have some serious misconceptions, so I thought I’d have a go at enlightening people. Before I go any further though, I want to say If you have any insights on the subject matter which AREN’T addressed here, feel free to tweet me or email me or comment etc. however if the sum of you’re input is “but people can steal keys” or “but people can scam keys” then I’d appreciate it if you think about it and maybe consider that someone (almost everyone) has already made that point and not bother.

(Also, I know it’s been a long time since I’ve posted about Alaska, it is still in development, I’ve had a lot going on but it’s still currently scheduled for release sometime in 2017)

OK. So what exactly am I talking about? Put simply, G2A are correct. The right to resell is a fundamental right: if I buy something from someone, I am protected by law in selling it on. This right, in the EU at least, has been upheld with regard to digital goods; there is no legal reason why G2A or any other reseller can't do what they are doing. Game developers who contest this are at the very least deeply misguided, and at worst deliberately trying to mislead customers into foregoing their rights. If that were the long and the short of it, however, no one would be running around cutting heads off on social media. So what are the counter-arguments I've received?

But people can steal keys!

People can steal keys, this is true. People can also steal every other thing on earth, and it doesn't affect your right to resell. Not one bit. The possibility of stolen goods is an argument that has been made against physical resellers in the past by illicit corporations attempting to secure a monopoly on the sale of their products. The truth is it's less of an inconvenience for digital goods than for physical ones. Let's look at the typical case people usually refer to: credit card fraud. With physical goods, when credit card fraud is used to steal, the credit card is charged back, the victim gets their money back, the retailer loses the money and the thief keeps the goods. With digital goods, the card is charged back, the retailer loses the money and revokes the key, and whoever ended up with the stolen key loses access. Those people are legally protected; they can go to their retailer, say G2A, and legally get a full refund. In other words, there is no problem here. If you use Amazon to buy physical goods, you are more complicit in credit card fraud than G2A's customers are; it functions exactly the same way, except the original retailer doesn't get their products back.

But people can scam keys!

This conversation is a lot simpler. If someone asks you for a key and you give it to them, it is theirs, end of story. You may not like what they do with it afterwards, but the fact is it is no longer yours. If someone has committed fraud against you, then feel free to litigate. I have to ask, however: who is giving out enough keys that this type of fraud meaningfully impacts their sales? If you are giving large volumes of your game away, then you only have yourself to blame: don’t do that! If you expect something in return for the keys you give away, get it in writing, get a contract, and then pursue the terms of that contract in court! If that idea doesn’t sit right with you, it should tell you that you aren’t entirely comfortable with what you expect in return for giving those keys away.

The Game Dev gets nothing from these sales.

This is a very familiar argument. Physical retailers were also attacked by game publishers with it, and it is as illegitimate now as it ever was. The game dev made their money on the initial sale; if that was zero, then that’s because their asking price was zero. A resold key is not an extra copy, it’s a transfer of ownership of the copy the seller received from the developer.

It circumvents Regional Pricing.

This argument, in my opinion, is the only one that holds any water. Selling internationally is a hard problem to solve, and while many people celebrate the removal of region locking, it’s clear they don’t really understand the full nuance of what region locking and regional pricing are designed to do. Regional pricing is a critical component of homogenising international economies; it’s a mechanism for transferring wealth from affluent economies to poorer ones. That said, it is already a solved problem: beyond correct labelling it isn’t a problem for the reseller (and again, if a key is mislabelled the customer is legally entitled to a refund or replacement). In short, if you enact regional pricing, you have to use region locking or accept that people will purchase internationally.


I have yet to see any argument against key reselling, or against G2A specifically, that amounts to anything. To my eye it is a trial by social media. The thing that upsets me about the situation is seeing prominent talking heads making assertions and accusations, implying and insinuating. It seems to me they never once emphasise that they support the general practice of key reselling, or that it is a perfectly legal, moral and fundamental right. I do not think they should be making these kinds of accusations in public; they should make them in court, where our legal checks and balances can preserve our rights (both developers’ and consumers’). With regards to G2A specifically, they seem crass, they seem abrasive, but they also seem to be in the right. If they are committing fraud or copyright infringement or other illegal acts, I will be the first to call them out, but I’m not going to do that until it’s been proven in court.

Development Update


I know it’s been a long, long time since my last development update, almost a year! The reason is very simply that the things I’ve been doing have been tediously boring, at least to my mind. Here’s a brief summary of what I’ve been doing on Alaska for the last 10 months. I expect this will be my last development update before I start my Greenlight campaign; I currently estimate I’ll be ready to go on Greenlight in July or August.


Last month was the five year development anniversary of Alaska. My original estimate was a three year project, but I’ve changed jobs twice, got married and had two amazing kids in that time, and those things all take priority, as they should. This is what it looked like five years ago:


It’s come a long way since then. I’m pretty proud of what I’ve achieved, personally.

So what have I been doing over the last 10 months?

Moving Repo, Moving IDE, C++11 (Aug-Oct)

So the first thing I did after my last blog post was a lot of long overdue clean-up: moving to Visual Studio 2015, updating build flags and dependencies, and generally tidying up the build. As part of that I also moved to Git, which has taken a load off, as Subversion is a pain to work with. I backed up all my source assets too. With the move to Visual Studio 2015 I can now support C++11 properly, which meant I could clean up some crusty code.

Moving from Windows 7 and Visual Studio 2012 to Windows 10 and Visual Studio 2015 forced me to migrate from the old DirectX SDK to the DirectX included in the Windows SDK. I’m still using D3D10 and the effects framework (I’m not switching mid-development), and this meant reworking some things to work better with the new SDK and IDE. It’s better in the long run, but PIX still doesn’t work properly with old effects, and that’s a nuisance.

This all took a few months, but it was important because it made a lot of things that had become a slog less so, and it was time away from the mammoth job I’d unwittingly undertaken in my last blog post: the dreaded character import pipeline rework.

Bug Fixes (Nov & February)

Mainly as a consequence of changing lots of build-related things, there were a few weird bugs that needed to be addressed. I did some of these in November and some in February.

  • BSP loading was very slow (one small part accounted for 75% of the load time)
  • luabind threw exceptions in its destructor, and VS2015 rightly didn’t like that
  • A couple of materials and some reused render buffers were causing visual issues
  • The game entities used their address in memory as their GUID! They need real GUIDs.
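That last bug is worth dwelling on: a pointer only identifies an entity for the lifetime of one process, so it breaks the moment entities are saved and reloaded. A minimal sketch of the idea behind the fix (in Python for brevity; the engine itself is C++, and `make_entity_guid` is my own hypothetical name, not Alaska’s code):

```python
import uuid

# A memory address only identifies an entity until the process exits or
# the allocator reuses it; a random 128-bit UUID is stable across runs
# and meaningful in save files and logs.
def make_entity_guid():
    return uuid.uuid4()

a = make_entity_guid()
b = make_entity_guid()
# a and b are distinct, serialisable identifiers, safe to persist,
# unlike id(entity).
```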

Fixing all these issues meant I had to do the job I’d been avoiding:

The Dreaded Character Import Pipeline Rework (Nov-Mar)


My character import pipeline is a disgrace. It is laughably fragile and I’ve paid dearly for it. If this were my full-time job I would have allotted much more time to making it stable, but I figured I would only have to run it once or twice, so I could just suffer.

My import pipeline currently consists of:

  • Build and Rig the characters in Mixamo Fuse
  • Load the model in Blender and cut away all the hidden verts (teeth, joins)
  • Reduce the poly count as much as possible with decimate
  • Import the Rig with all the animations, repose & bind it
  • Export the rig and all animations, one by one, to Collada
  • Make DDS versions of all the textures

Alaska is quite rare for a small indie game in that it has 13 unique characters, and this process is very manual, laborious and error-prone. It took me a long time doing it by hand before I decided to try to automate it.

By March I had generated all the final models and used a couple in the test map to prove them out. At this point I realised I could use Python in Blender to automate the export, and it made one of the most laborious parts trivial. It really is easy to write scripts for Blender and I thoroughly recommend it. This is the script I used to export all the animations:

import bpy
import os

root = os.path.dirname(bpy.data.filepath)

for action in bpy.data.actions:
    outname = None
    for obj in bpy.data.objects:
        # assign the current action to every animated object
        if obj.animation_data is not None:
            obj.animation_data.action = action
        # use the mesh (non-armature) name as the file name prefix
        if obj.name != "Armature":
            outname = obj.name.lower()

    # normalise a couple of awkward action names
    actionname = action.name
    if actionname.endswith("StrafeWalking"):
        actionname = actionname[:-7]  # "StrafeWalking" -> "Strafe"
    if actionname == "Walking":
        actionname = "Walk"

    if action.name == "Idle":
        # the idle action doubles as the base model, so no suffix
        file = os.path.join(root, outname + ".dae")
    else:
        file = os.path.join(root, outname + actionname + ".dae")

    # export the currently posed scene to Collada
    bpy.ops.wm.collada_export(filepath=file)
A few loops around this script and the final models were in the game but something was wrong!

Human Readable Formats (March-April)

I always knew I was going to need to implement binary formats, and now was the time: the final characters were massive Collada files with a lot of redundant garbage in them. This caused loading times to explode to about two minutes, if I remember correctly, so I was forced to address the issue. I ended up making unified binary formats for the following files:

  • skeletons
  • objects
  • atlases
  • fonts
  • bsps
  • shaders

This reduced the load times to ~10 seconds and was pretty satisfying. There were quite a lot of minor bug fixes as a result of this and it took me into April.
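The formats themselves aren’t described in detail here, but the general idea is straightforward: replace verbose text with a fixed header plus tightly packed records. A hypothetical sketch of a skeleton format (the field layout, `SKEL` magic and file name are my own illustration, not Alaska’s actual format):

```python
import os
import struct
import tempfile

MAGIC = b"SKEL"
VERSION = 1

def write_skeleton(path, bones):
    """bones: list of (name, parent_index) pairs, parent -1 for the root."""
    with open(path, "wb") as f:
        # fixed 8-byte header: magic, version, bone count
        f.write(struct.pack("<4sHH", MAGIC, VERSION, len(bones)))
        for name, parent in bones:
            raw = name.encode("utf-8")
            # per-bone record: name length, parent index, then the name bytes
            f.write(struct.pack("<Hh", len(raw), parent))
            f.write(raw)

def read_skeleton(path):
    with open(path, "rb") as f:
        magic, version, count = struct.unpack("<4sHH", f.read(8))
        assert magic == MAGIC and version == VERSION
        bones = []
        for _ in range(count):
            namelen, parent = struct.unpack("<Hh", f.read(4))
            bones.append((f.read(namelen).decode("utf-8"), parent))
        return bones

path = os.path.join(tempfile.gettempdir(), "demo.skel")
bones = [("root", -1), ("spine", 0), ("head", 1)]
write_skeleton(path, bones)
roundtrip = read_skeleton(path)
```

A loader like this reads in a single pass with no text parsing, which is where most of the time loading a big Collada file goes.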

Kinematic Character Controllers vs Nav Mesh Based Character Controllers (April – May)

I had been using a kinematic character controller for the full length of development, and while I was fairly happy with it, there were a couple of issues. Primarily, I couldn’t achieve the kind of control over it I wanted: getting pushed out of the way, sliding down surfaces, climbing vertical faces and tunnelling were all issues I’d been fighting all through development.

Kinematic character controllers are good because they give you a strong connection to the physical geometry (walking into a door knocks it open, for example). Nav mesh based character controllers are good because they give you a strong connection to AI reasoning (if an AI decides it wants to go somewhere, it’s not going to get stuck on the way).

Weighing these two options up, I decided to take the plunge and replace the kinematic controller I wasn’t really happy with with a nav mesh based one. It turned out to be a lot simpler than I anticipated, and now I feel like I have a good character controller that I can tweak much more easily, without having to fire off rays and apply forces.

Animatic (May~ongoing)



I had been looking for someone to do an animatic for me, to help with promotion and to cement some of the game’s themes more firmly at the start so the demo pops better. By April I had decided, as usual, to stop waiting for someone to come to me and to try to throw something together with my extremely limited artistic ability. It’s ongoing and may not pan out, but if I can achieve what I’m after, it’ll really add something I feel the game needs.

What Next?

The Greenlight submission really is coming soon (although I won’t submit it until I am completely happy with it). To get there I need to block off the areas of the map I don’t want the player going to, finish the animatic, make an updated in-game trailer and then build my Greenlight page. Following me on Twitter is the best way to keep up with ongoing development; fingers crossed, my next post will be about the Greenlight submission!