Philosophical Foundations

Introduction

A philosophy can be thought of as a guide or framework for the rational thought of an individual. It seeks to answer questions such as “How should we act?” and “How should we think?”.

This framework is intended both as a possible foundation on which to build a well-reasoned personal philosophy, and as a lens for viewing and understanding other people’s ways of reasoning. At the end, I present my own Philosophy expressed in this framework. Seeing how well it ends up correlating with my intuitions and values will be the subject of a future post.

Some motivations and further explorations for a few of the ideas in this post, as well as much more, can be found here.

A Quick Preliminary

Slightly extending Daniel Kahneman’s “Thinking, Fast and Slow”, we can sort the reasons for individual thoughts and actions into 3 categories.

  • System 0 - Reflexive actions handled by the nervous system outside the brain (e.g. spinal reflexes)
  • System 1 - Thought that is unconscious, e.g. intuitions or emotions
  • System 2 - Conscious, deliberated thought

The rest of the framework concerns itself only with how System 2 should operate, but it’s worth making note of the others, since they can inform System 2 and be influenced/trained by it.

General Framework of a Philosophy

Beliefs

We can roughly divide our beliefs into 3 main types that allow us to reason well about where these beliefs should come from and what they’re for.

  • Epistemic - Beliefs about external features of reality
  • Moral - Beliefs about how we should act in the world
  • Semantic - (Meta-)beliefs about how we should label things

Working with Beliefs

Ideally our Philosophy should encode how to act and think about everything, at least on some level, even if that ends up being “I’m not sure”. Thus, in order to “complete” our set of beliefs, we need a system of reasoning to derive new beliefs from the current ones.

This is fairly unconstrained: it could be quite informal, or it could be something very mathematical and specific, such as predicate logic. Multiple systems of reasoning can also be folded into a single one to allow for flexibility.

Seeds of Belief

Our Philosophy also needs a way of “initialising” our beliefs. Epistemic beliefs require the incorporation of sensory information, done via the system of reasoning. Moral beliefs are seeded by moral axioms. Semantic beliefs are seeded by language, although this is fairly straightforward and more of a technicality.

Evaluating a Philosophy

Sometimes we find ourselves preferring one Philosophy to another, so a Philosophy cannot be the sole foundation of our cognition. Preference axioms can be used to evaluate a Philosophy against alternatives.

Summary

Thus, a Philosophy is some set of statements that cover the following:

  1. Initial language
  2. Moral axioms
  3. A system of reasoning over beliefs and sensory information

And we evaluate how well it’s doing according to…

  1. Preference axioms

My Personal Philosophy

Outline

Preference Axioms:

  • Logical Consistency
  • Weak Intuition Compatibility
  • Winning

Initial Language:

  • Inherit from local culture

Moral Axioms:

  • Physicalism
  • Pragmatism
  • Consequentialism
  • Utility Maximisation with Goal Uncertainty
  • Logical Decision Theory

System of Reasoning:

  • Bayesian Probability -> Second Order Logic -> Informal Deduction
  • Principles
  • (Physicalism)
  • (Pragmatism)

Preference Axioms

Logical Consistency. If our Philosophy does not give rise to consistent outcomes, then there’s scope for decisions to be influenced by the path taken to reach them, rather than by the intrinsic properties of the problem. We should not come into unresolvable conflict and should instead produce coherent outcomes.

Weak Intuition Compatibility. The human subconscious accounts for a large part of our neural processing power (System 1). Our intuition comes from this, and it has been tuned over millions of years of evolution (including before we were human). Modern society, however, is terribly out of distribution with respect to the natural world. Thus we value our intuition, and the outcomes of our Philosophy should be somewhat compatible with it, but we do not mind if there are well-justified conflicts.

Winning. Rationality can be seen as systematised winning. Our Philosophy should drive us to do better, not worse, in what we aim to achieve.

Moral Axioms

Physicalism. There’s no more to the universe than what can be observed, either directly or via its influence. As such there is no soul in the dualist sense, though perhaps there is some physical brain structure or pattern that is compatible with what people reason the soul to be responsible for. As such, epistemic beliefs are solely beliefs about physical reality and morality makes no appeal to a higher power.

Consequentialism. Moral value is ultimately derived from consequence, though this can be done in abstract and generalised ways: for example, consequence across all possible worlds, rather than only consequence observed or predicted.

Pragmatism. Usefulness is the basis of reason. Beliefs pay rent. Epistemic beliefs are useful for modelling how our actions will interact with the world to achieve our goals. They pay rent in anticipated experiences. Moral beliefs are useful for encoding our desires and motivations, and form the basis of co-ordination with others. They pay rent in predicted actions. Semantic beliefs are useful for communicating ideas to others, and word-meanings provide lenses to focus thinking. They pay rent in lucid explanations.

Utility Maximisation with Goal Uncertainty. Every agent has some internal utility function they are trying to maximise, though the agent itself may be uncertain about what that function is. For example, we humans may want to maximise happiness, or perhaps life satisfaction. Thus we should try to maximise our utility in expectation over our internal goal. Assuming the goal to be a single well-defined thing leads to actions that conflict with those of an agent who does not make this assumption (mathematical exploration of this coming soon^TM).
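As a minimal sketch of the idea (the goals, utilities, and credences below are purely illustrative assumptions, not anything derived in this post), maximising utility in expectation over goal uncertainty might look like:

```python
# Two hypothetical candidate goals, each assigning a utility to each action.
candidate_goals = {
    "happiness":         {"work_less": 5.0, "work_more": 4.0},
    "life_satisfaction": {"work_less": 1.0, "work_more": 9.0},
}

# Credence that each candidate is our "true" internal goal.
credence = {"happiness": 0.6, "life_satisfaction": 0.4}

def expected_utility(action):
    """Utility of an action, averaged over our uncertainty about the goal."""
    return sum(credence[g] * candidate_goals[g][action] for g in credence)

actions = ["work_less", "work_more"]
best = max(actions, key=expected_utility)
```

Note the divergence this toy example exhibits: an agent that simply assumed its most probable goal ("happiness") would pick "work_less" (5.0 > 4.0), whereas maximising in expectation over both candidates picks "work_more" (6.0 > 3.4).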

Logical Decision Theory. Since there are other agents in this world, modelling us, things get a little more complicated than “causally maximise utility”. How we act affects how other agents can co-operate with or exploit us, and taking this into consideration changes how we should act. A blog post exploring some tractable consequences of this is coming soon^TM.

System of Reasoning

Bayesian Probability. The foundation of lawful reasoning, an extension of logic.
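For a concrete (if toy) illustration of this foundation, a single Bayesian update via Bayes’ rule, with made-up numbers, looks like:

```python
# One application of Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
# All probabilities here are illustrative, not drawn from the post.

prior = 0.01           # P(H): prior credence in hypothesis H
likelihood = 0.9       # P(E | H): probability of the evidence if H is true
false_positive = 0.05  # P(E | not H): probability of the evidence if H is false

# Total probability of observing the evidence, P(E).
evidence = likelihood * prior + false_positive * (1 - prior)

# Posterior credence after observing the evidence.
posterior = likelihood * prior / evidence
```

Observing evidence that is more likely under H than under not-H raises our credence in H, but from a low prior the posterior can still be modest (here roughly 0.15), which is exactly the kind of correction informal reasoning tends to miss.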

Second Order Logic. When propositions have extreme probabilities, or we are reasoning about mathematical objects, Second Order Logic is a very useful system. Its suitability versus other logics is debated, but in my opinion it lends itself to cleaner and clearer reasoning than the alternatives.

Informal Deduction. Of course, not every decision needs to be made with rigour and strict reasoning. For more trivial things, or for a first approximation of what’s best, informal deduction serves as a computationally efficient mode of thinking for handling the everyday.

Principles. When deriving moral beliefs, principles can be useful because they allow us to correct systematic errors in human reasoning. This works well when something often appears locally good but typically has long-term or second-order negative effects (or the converse). Principles are not absolutes and should be overridden if sufficient evidence to act against them accumulates. They should also carry non-zero weight: there should exist some decisions that would be made differently if the principle were not held. This is a sort of soft rule consequentialism.
