Posted by Ryan Gauvreau

This post originally appeared at The Oak Wheel on May 22, 2014.

“Companions, the creator seeks, not corpses, not herds of believers. Fellow creators, the creator seeks, those who inscribe new values on new tablets. Fellow creators, the creator seeks, and fellow harvesters; for everything about him is ripe for the harvest.” Thus Spake Zarathustra, by Friedrich Nietzsche.

Blue and orange morality, says TV Tropes, is what you have when “characters have a moral framework that is so utterly alien and foreign to human experience that we can’t peg them as good or evil… There might be a logic behind their actions, it’s just that they operate with entirely different sets of values and premises with which to draw their conclusions.” 

We’ll get into it more as we go along, but to be succinct you could say that they’re outside our moral and ethical context.

Such characters, and those who approach this situation, are among my favorites. They are aliens— or they are the insane— or they are all too sane, and all too human, but nevertheless alien to your understanding of the world.

You can create such characters by making veritable schizophrenics. A delusional paranoiac is certainly not interpreting the world as you do. That’s hardly the only way to do it, either, but my favorite method is to take the value system of a character or group and tweak it just so. Realism gives me a minimum amount to change when I am working with nonhuman psychologies, but no matter what the circumstances, I take immense satisfaction in making the greatest possible changes with the lightest possible touch.

When I can get away with it, I like to change just. One. Thing. And let the whole world shift because of it.

This is what Tailsteak says as he peels back the curtains for Max Hellenberger and Lily Hammerschmidt, two of the characters in his philosophical slice-of-life webcomic Leftover Soup.

Both characters [Max Hellenberger and Lily Hammerschmidt] have unique, unusual philosophies; philosophies that are so intense, so fanatical, so behaviour-defining, and so far off from societal norms that Max and Lily can only exist because their friends bend over backwards to accommodate them – and both of these philosophies stem from the same fundamental a priori principle: that, intelligence being equal, there is (or there should be) no ethical distinction between human beings and animals.

Both characters exemplify the inherent danger in establishing a reasonable-sounding set of first principles and then deriving a philosophical system from them without regard for practicality or the pre-existing culture. Their mindsets, like the systems of many of the more unique philosophers before them, are like two different flavours of non-Euclidean geometry – internally consistent, but, at times, surreal.

You don’t have to be a member of some other species to have blue and orange morality.

A paradigm is based on a foundation of axioms, things that we consider to be self-evident truths. You can’t really argue someone into accepting an axiom, because axioms are the things that you argue from. One of the best examples of an axiom might be the idea that you should update your beliefs according to new evidence. If you don’t believe that, then no amount of evidence in support of it is going to convince you to update your beliefs. Similarly, the Founding Fathers claimed certain “truths to be self-evident.” There was no point in arguing about them; they just were, like the light of the sun at noon-day.

My favorite way of making an “insane” character, then, is to change one of their axioms and construct their worldview from there. You can’t argue with them, and they can’t argue with you. They are reasonable. They are logical. They just… see things differently, and they think that you’re as insane as you think they are.

The simplest way of defining a blue and orange moral system is “any moral system that you can’t actually debate with because you can’t agree on common principles from which to begin.” For example, it is notoriously difficult to find common ground with somebody who believes in Divine Command theory, the idea that something is good or bad purely because a divine being has said so, unless your own system is derived from the commands of the same divinity.

As Tailsteak says elsewhere in his commentary,

“Most people (fortunately) guide their lives with a combination of instinct, social pressure, and several vaguely defined competing philosophies. I think that regardless of how reasonable a single manifesto may be, following it religiously will, necessarily, skew your mindset and behaviours farther and farther from normal.”

When you recognize the contradictions and competing philosophies and ad hoc justifications for what they are, you can cut through the corpse’s bones and dredge its soul up.

You can also develop what is at least initially a system of blue and orange morality by making one party aware of facts that the other is blind to. These facts almost invariably lead to a different value system, yes, but the differences are reconciled once all the information comes to light. Once the Formics in Ender’s Game find out that every human has a distinct personality and it’s as if they were killing a queen every time that they killed a human (when previously they thought that they were killing remote-controlled bodies), they change how they value individual human lives.


In his book Xenology, Robert Freitas describes an experiment by Dr. Bernard Aaronson on the connection between psychology and our awareness and understanding of time. While under some circumstances the effect is more pronounced than under others, and not all mindsets would really qualify as Blue and Orange, it is still worth reading both the account given in Xenology and an article that Freitas wrote for Omni, which is shorter but contains a couple of details that the other account lacks.

For a little bit more on the topic, check out my old TV Tropes article So You Want To: Design an Alien Mind. Once you’re done with that, hunker down and read Freitas’ Xenology. It’s completely free, and besides what I’ve already discussed it goes into things like “genetic sentience,” universal emotions (and how unlikely they may be), and the possibilities of alien logic. More than xenopsychology, though, it tackles the whole package and goes from planetary evolution all the way to first contact and everything in-between.

Best of all, he has better credentials than just a good imagination: Robert Freitas has degrees in physics and psychology, and the book shows it. There are over four thousand footnotes, and that’s just counting the ones that refer to Freitas’ reference material.

If you want to play hard (and crazy random) mode, take twenty different actions, flip a coin for each to decide whether it is moral or immoral, and then figure out an internally consistent system that would account for the results.

I have no idea what it will produce, but at least the process itself should be fun.
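If you’d rather let a machine do the coin-flipping, the exercise is easy to sketch in a few lines of Python. The list of actions below is purely illustrative (pick your own twenty), and the seed parameter is just there so you can reproduce a particular “moral code” once you’ve found one worth building a system around:

```python
import random

# Twenty hypothetical actions; swap in whatever fits your setting.
ACTIONS = [
    "lying", "gift-giving", "eating meat", "singing in public", "hoarding",
    "keeping secrets", "gambling", "naming a child", "burning trash",
    "borrowing tools", "whistling at night", "writing fiction", "haggling",
    "sleeping past noon", "planting trees", "breaking a promise",
    "eating alone", "dueling", "owning pets", "counting money",
]

def random_moral_code(actions, seed=None):
    """Flip a coin for each action: 'moral' or 'immoral'."""
    rng = random.Random(seed)  # seeded so a result can be reproduced
    return {action: rng.choice(["moral", "immoral"]) for action in actions}

if __name__ == "__main__":
    code = random_moral_code(ACTIONS, seed=42)
    for action, verdict in code.items():
        print(f"{action}: {verdict}")
```

From there, the hard (and fun) part is still on you: stare at the verdicts until you can invent a single principle, or a small set of axioms, from which all twenty follow.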

Your turn: What do you think would be a good basis for developing a blue and orange system of morality?

R. Donald James Gauvreau works an assortment of odd jobs, most involving batteries. He has recently finished a guide to comparative mythology for worldbuilders, available here for free. He also maintains a blog at White Marble Block, where he regularly posts story ideas and free fiction, and writes The Culture Column, a column with cultures ready for you to drop into your setting.

Seventh Sanctum™, the page of random generators.


Seventh Sanctum(tm) and its contents are copyright (c) 2013 by Steven Savage except where otherwise noted. No infringement or claim on any copyrighted material is intended. Code provided in these pages is free for all to use as long as the author and this website are credited. No guarantees whatsoever are made regarding these generators or their contents.

