Metamorality
[Contains discussions of death and suicide.]
(This is a work in progress, and far from the best version of this post that I am capable of writing. I expect to reorganize it, and maybe change parts of it, in the near future. Though the overall point will remain the same.)
Morality is a weird thing. All people have some kind of morality – they believe some things are good or bad.
(Believing in some morality (for a certain meaning of the word “morality”) is a universal trait for humans, and possibly for all sapient creatures. If you do something, it’s because you consider that thing somehow good or preferable, unless you have serious mental/neurological problems with self-control. Some people say things like “I know the thing I am doing is bad, but I am still doing it”. But that just means either they think it’s bad, but better than the alternatives, or they recognize the thing is bad according to society’s standards, but still good according to their standards, and they just use different terms for that [1]. Or they perceive the thing as bad for some reasons but good for others, and their internal values are in conflict. But the principle is the same.)
But different people have different moralities, and often consider their moralities objective, and others to just be wrong about the factual question of what morality is like. Which sounds reasonable, as there are factual questions people disagree about.
But the problem is that morality can’t be proven or justified. When you ask a person “ok, X is good, but why is X good?”, the only real answers you can get are either “X is good because it leads to Y”, with Y being a thing you can recursively ask the same question about, or “it just is, it’s a good thing, it’s obviously good, that’s an inherent quality”. Or “because I want it/feel it is good”, which is ultimately the same answer, just a more self-aware version of it.
A person has many dogmas about morality. Basic opinions that can’t really be proven or disproven. If X leads to Y, you can try to convince people, based on factual evidence, that X doesn’t lead to Y, but you can’t prove to them that Y itself is different. You can prove to people who are afraid of something that can kill them that they shouldn’t be afraid, because the thing actually can’t kill them. But you can’t convince them that death itself is not a problem (unless the fear of death itself relies on something else, like being afraid of death because death is the end of life, and convincing people that it actually isn’t the end). There are many ways to convince people to change their dogmas, and people do change them fairly often, but none of those ways are logical or based on proof. They are mostly emotional associations, or simply realizing that you don’t like a thing you liked previously, or vice versa. Moral dogmas are mostly arbitrary, and when they change, the change itself, and its result, are often pretty arbitrary too, not “more right” than before.
Because morality is subjective, and can’t be proven or disproven. That is Hume’s law, which I consider an essential part of both logic and morality [2].
And morality is also different for different people. There is the Typical Mind Fallacy, the tendency to assume all people think like you, which is especially strong when combined with thinking morality is objective. But no, different people consider different things good. My favorite example, the one that got me thinking about this whole topic in the first place, is that I have never in my life felt that death is something bad. And I thought that preventing a person from killing themselves when they want to is one of the worst crimes possible (I don’t think that anymore. But I still feel it).
This is a point obvious for some people, but nearly incomprehensible for others: morality isn’t and can’t be objective. Many people have already explained it many times. There’s the thought experiment asking “if you met God, and God told you that murder is good, would you believe that it is good?”. There’s the fictional story about a cult(?) that believes some things are objectively good and divinely commanded, but those things have nothing at all to do with humanity and are related to “the way matter is organized” (i.e. “Mundum”).
(There can be factual questions where people disagree, and there is no way to make an observation that will empirically prove who is right. But in subjective cases like morality, it is impossible to even imagine an empirical observation or argument that will make people change their fundamental (“terminal”) goals and dogmas.)
Some would say “ok, external cosmic morality may not exist, but morality objectively exists for humans”. There is the Moral Foundations Theory, which some people (though the actual site dedicated to the theory disagrees) interpret to mean “it is hard, as all science is hard, but by studying people, and cases where people have similar moral positions, you can come to realize that some things really are objectively good”. ("Moral convergence" might or might not be the same thing. There are a lot of terms and definitions.)
Which is...clearly wrong. Morality is not something that exists by itself. You can study the pattern of moral positions people tend to have, but you will only find the pattern of positions people happen to have, not an objectively existing morality, because morality isn’t objective.
I really wanted to be a good person, from a very young age. The best I can be.
(I am talking about my experience here because I don’t want to be wrong about anything, and I know my experience well, unlike that of anyone else.)
And I knew that just wanting and believing isn’t enough. A lot of people in history followed ideologies they thought were good, and ended up doing horrible things. So I couldn’t just follow the accepted standards of goodness. I needed to figure out for myself, and confirm, what things are good, and why.
I realized that morality is subjective. That I think death is fine but others don’t. That others think being insulted is terrible, and I just don’t get it. That I think learning people’s body language is just as unethical as reading thoughts, because you are trying to discover information about a person that they didn’t intentionally share with you (again, I don’t believe that anymore, but I still feel it).
And if there is no proof or justifications for what is good…what should I do? Should I not steal things from people? It’s bad, but what do I care if it is bad? Should I listen to music? I enjoy listening to music, yes, but does enjoying it mean I should do that?
I noticed dogmas I had about what is good, concluded they are arbitrary, and let go of them. Or convinced myself out of them, depending on the perspective.
And did it again and again. Because I had to believe what is right and true (which is also itself a moral dogma, but I didn’t reach that deep at the time).
In retrospect, I visualize it as a building with many floors. I looked at a floor, realized it was flawed, and destroyed it, falling to the next floor.
Until I was at the final, lowest floor. Let’s say it was metaphorically transparent. Under it there was an abyss. If I destroyed it I would not have anything to believe or prefer over anything else, falling into the abyss of philosophical nihilism, or Logical Void, or whatever it should be called.
Not because I wanted to fall, but because if that floor/dogma wasn’t true, then it already didn’t exist, and I had already been living a lie all this time; I would just need to realize it.
And the final floor passed the test. I didn’t destroy it. It made sense, unlike everything above it.
From that moment I built my worldview on this very basic remaining structure, specifically, two axioms. Those are still just assumptions, they are not objective, but they are the least subjective possible assumptions, the least subjective basis for morality that can exist.
Axiom 1 – there exist some situations/events/world-states that are better/preferable to others, and actions should be taken to cause them to happen instead of less preferable things.
This is still subjective. It can’t be logically proven. Someone can always object “no, actually I don’t think any state of the world is preferable to any other, or any actions should be taken”. They would not be, technically, wrong.
But I don’t think it is possible for any person to actually live that way, even if they abstractly believe it. I don’t think any creature capable of even basic thought, like a spider, can believe this. If you are doing actions, it’s because you think they are worth doing. If you are eating or moving, it’s because you want to eat or move, because it feels better than not doing it. If you are thinking, it’s because you have a reason to, and prefer thinking to not doing it. If you are a creature capable of thought who manages to not think, it’s because you believe that not thinking is preferable.
You still prefer some things. You have a value (the word “value” being interpreted very broadly here, meaning “prefer some things to others”. “Preference” doesn’t sound as meaningful, though it’s still accurate).
Everyone has values. And maybe they are wrong and stupid to have values. But if you are able to contemplate whether you are stupid to have values, this already means you are the kind of creature who also, inevitably, has some values; the only question is “which ones”.
Something is good, ok, but what is good?
Everyone has some system of morality that determines what they consider good or bad (“preferable or dispreferable”). Things are not objectively bad or good, they are bad or good from the perspective of some system of morality. “X is good” is subjective, but “X is good from the perspective of Y” is objective (and is the same as saying “Y thinks X is good” or “Y has the value of X”).
There is a thing that happens with your values. When you think something is good, you can get it, or not get it. Your value can be “fulfilled” or “negated” (words I chose; more fitting ones might exist). It’s not necessarily an experience you feel (if you have a value of “knowing the truth”, and someone lies to you, then even if you don’t know you were lied to, your value of knowing the truth is still negated), but it’s still a thing that happens to you.
Of course, there is no objective reason to care about values being fulfilled and negated.
But I looked at the objective fact “everyone has values, which can be fulfilled or negated”, and the assumption “well, there exist things that are good and preferable” (i.e. “you should have a value”), and derived:
Axiom 2 – if the values of others can be fulfilled or negated, then it is good for the values of others to be fulfilled, and bad for the values of others to be negated. Or in other words, “my value is the meta-value of trying to fulfill all other values”.
And the system of fulfilling this meta-value is meta-morality.
Things are meta-good when they fulfill the values of others, and meta-bad when they negate the values of others. And, as said before, whether some event or world-state is meta-good or meta-bad is an objective fact. Even if it’s an objective fact you wouldn’t care about if you are not yourself someone whose morality is metamorality. (This is different from Moral Convergence, because people converging on a concept doesn’t make it objectively good, only objectively meta-good.)
And that is the choice I made. It’s still an arbitrary choice. But it is the least arbitrary of all possible choices.
Metamorality isn’t a fully new thing. The idea that happiness is good and suffering is bad, so we should aim to increase happiness and reduce suffering, has already existed for a long time and, I would assume, most people are aware of it.
Metamorality just defines what happiness and suffering mean differently. Formalizes it. In a way that is more complicated, maybe less intuitive, but I think ultimately much clearer. It solves some weird edge cases and thought experiments like, say, “is it bad for all people on Earth to die at the same time, too fast for them to notice, such that no one experiences suffering?”. The answer to which is “they don’t feel pain, but if they wanted to continue living, then their value of living was negated, which is meta-bad, and should be avoided”.
“Should people be forced to take drugs that make them happier?” Not if they don’t want to, because the simple experience of happiness as an emotion, or physical pleasure, is not the main goal.
“What if someone experiences pleasure that makes them unhealthy and leads to their death very soon?” If they are fine with it, it’s good for them to do that. If they both want the drug experience and don’t want to die, this is just an old case of wanting several conflicting things at once. Which metamorality doesn’t solve, but it removes a lot of confusion around it, and tradeoffs are an old, familiar phenomenon people know how to deal with. It’s not fundamentally different from something like “I want to move to a different country for a very important reason, but also want to keep living near my friends and family”, or something similar.
(For some time I used the Buddhist term “Dukkha”, because it means “suffering born from dissatisfaction with reality”, which fits perfectly for suffering that comes from having a value reality doesn’t fulfill. But then I read some more about how people describe Dukkha, and it seems to be mostly something spiritual about the nature of reality that I don’t exactly understand, so maybe it is not the best term.)
There is also the “Platinum Rule”. Instead of the old Golden Rule, “treat others like you want them to treat you”, it’s the much more thoughtful “treat others like they want you to treat them” (because otherwise the differences between people’s values and experiences mean a lot of harm is done constantly). And it’s the same thing as metamorality. Except instead of just being a guideline for how to be good, it’s literally a definition of what “good” is.
(“Isn’t that the same thing as Coherent Extrapolated Volition?” people might ask. Not exactly. I don’t know much about CEV, but the first ever writing on it starts with “this is hypothetical, and humans shouldn’t try to extrapolate volitions, they are not smart enough”. While metamorality is intended as a principle that is actually applied. Being perfectly metamoral is not possible; being significantly metamoral is not just possible but essential. And you shouldn’t extrapolate anything, just take people’s stated values. And believe them when they say they changed their mind.)
That is the core of metamorality.
I think that the metamoral approach fully solves the question of what makes good things good, and how that is judged (a field of philosophy called “metaethics”; I didn’t know about it when I came up with the term “metamorality” to describe my worldview).
It tells you on which basis to judge whether a result is good or bad.
It is universal; it applies equally no matter who you are and what you want. It might hypothetically apply to non-human sapient races, if they exist, even if they think in a way that is entirely dissimilar from humans.
(It is also a very simple answer to the weird question that some weird people like me nonetheless need to ask: “maybe life itself is bad and it’s better that everyone dies, to prevent further suffering?”. To which the very simple answer is “if suffering is judged metamorally, then the death of people who don’t want to die is itself suffering, so nothing is really prevented”.)
But that core isn’t everything. How to apply it is still a question with no clear answers. I have some intuitions that seem to me like natural conclusions from the basis of metamorality, but they are more subjective, and I might be biased about them.
Metamoral goals don’t help that much in figuring out which means are best to achieve them. You still have the old selection of consequentialism, deontology, and virtue ethics, just with metamoral principles behind them: “you should take the actions that will ultimately end up in the most metamoral results” vs “you should follow metamoral rules” vs “you should have metamoral intentions and habits” (which are not incompatible positions! All three elements are important; the question is which of them you prioritize).
There are all the questions of normal morality. Like, as mentioned before, what if the same person has several different conflicting values?
Just like with normal morality, saying something is “meta-good” can mean either “it would be better if people did it” or “people have the expectation and responsibility of behaving that way”, which are different things we should always distinguish.
There is the standard problem of people who want to create the maximal amount of happiness, which is “how the hell do you really measure happiness?” (which is not easier with metamorality, but I don’t think it is harder either).
And there is the old question of morality which becomes even harder with metamorality – what do you do with conflicts between people?
Metamorality, by definition, doesn’t discriminate or differentiate. If you have a value, you are happy when it is fulfilled and suffer when it is negated. Metamorality can’t say “no, this is a bad value, it’s not good for it to be fulfilled”. If a system says this, then it is just some system of morality that seeks to make some people happy but not others, rather than a system that seeks to encompass all other systems.
But some values can’t be fulfilled.
If your values are anti-metamoral (i.e. the thing you want the most is for people to not get what they want)…well, if it applies to you too, it’s inherently paradoxical: the thing you want to get is to not get what you want. But if it only applies to others, then you have a perfectly coherent value. And from the perspective of metamorality, it would be better if you could fulfill it (it would cause you happiness). But doing so will cause suffering to others. When choosing whose value to fulfill, the metamoral approach would never choose you, because it is impossible to make you happy while also making others happy. By definition.
There are some values/desires that are less hard to fulfill. If you want to, say, kill someone, it is possible to arrange for you to kill someone who wants to die (or someone who is already dying anyway, and doesn’t care). It is not contradictory, from a metamoral perspective, to try to help you fulfill that value. But it is very hard, and will not work most of the time. It’s bad, from the perspective of metamorality, that you will not get your value fulfilled, but it’s just not something you can get. Maybe in a slightly different world it would work, but not in this one.
And that applies to almost all values that concern other people. The values metamorality will focus on fulfilling are the things happening in your own life. If you have a value of other people watching the same movies as you, being part of the same religion as you, and wearing orange shirts, it’s not something you can force on them. That would negate their values. Maybe your value will count if they don’t care. If someone is truly indifferent to the color of their shirt, and you really want it to be orange, maybe you would get to make it orange. It increases your happiness without reducing theirs. If they prefer, even slightly, for it to be green? Your value is irrelevant, even if you care about it much more than they do. You will experience suffering, and that is bad, but a society where people get what they want out of life is not possible if anyone is, even to the smallest extent, forced to do what others want them to get out of life.
If the orange-shirt person is your friend, you can still say “oh, you’re my friend, if you really care I will wear an orange shirt instead of a green one, even if I prefer green”. It’s your choice. If you have the value “I do what my friends want”, you can fulfill it. The point is that you can’t be forced to do it. Society can’t dictate to you how to behave, and you can’t dictate to society how it behaves.
I read some post on the internet talking about aliens, and about how if we meet aliens, we should take our values and the values of the aliens, fuse them, and follow the exact average of all values. And they call it “the moral law”.
No. Having to follow arbitrary rules and principles in our life, just because aliens follow them in their lives, is not moral law, it’s bullshit! And I think the aliens will be equally angry about it. The point is that everyone gets to follow their own morality, not that everyone has to constantly follow the morality of everyone else (unless, of course, they choose to).
What counts as “your life” vs “the lives of others” is a very hard line to draw.
“Your freedom ends where my freedom begins”, but where does my freedom begin?
For example, if I am wearing ugly clothes, the consensus (of normal morality) is that it’s my right to do so, even if other people don’t like it (i.e. it negates their values). But if I go around naked, I am in the wrong, and in most places it is not just considered immoral, but is literally illegal. Even though the fundamental situation in both cases is “I chose to look in a way other people don’t like”.
I personally do not have the desire to walk around naked, but many people do. And I always thought that there is an obvious solution. Not one we can currently implement, but with technological progress and augmented reality, there could be a situation where someone walks around naked, and if I don’t want to see that, I press a single button for a filter and see them clothed. So they aren’t forced to change their lives, and I am not harmed by seeing something I don’t like; it doesn’t interfere with my life (I may still hate them doing it, but under metamorality I have no right to affect them).
Is that right? Let’s take another example.
When my neighbors blast music at 1 AM, and I am trying to sleep (which is a real thing that has happened repeatedly over the last several years), they are clearly in the wrong. They affect my life in a way I don’t want. If they had perfect soundproofing in their house, it would be fine for them to have loud music there, because it doesn’t come outside to bother me.
But what if my house is so soundproofed that I don’t hear them? Do they get to do it? Maybe. But then I have to waste money on my house to avoid things they aren’t supposed to do in the first place.
What if there were perfect earplugs that would protect me from their music? They would be much cheaper than soundproofing my house. But they would still cost money. What if they were free? Using them would still be a cost on me. I don’t like wearing earplugs at all, and certainly can’t sleep in them. But what if they were super-earplugs that protected me from sound and didn’t feel like anything, so I was fine with wearing them? Well, they might block other sounds I do want to hear, like my alarm clock. But what if they only blocked the music from my neighbors specifically, and nothing else? Then we are literally back to my previous example about augmented reality filters.
So apparently I do think there are some costs people have a right to impose on me so that I can avoid the consequences of their freedom. One button press is an acceptable cost. But some others aren’t. How do we decide? Do the costs acceptable to me vary depending on what the actions are? Ugly clothes vs nudity, very loud music vs only slightly loud music? Metamorality should be objective, and there is no objective way to judge the severity of people’s actions and the extent to which they affect my life and not theirs.
(There could potentially be a way to objectively judge the strength of my values, and how much they are being negated, how much suffering I experience as a result of others’ actions. But that is not what I currently focus on. I currently focus on the deontological structure of a metamoral society, and which rights people should have to ensure it remains metamoral. I don’t measure how much total happiness and suffering is created, and judge whether an action is good based on that. That would open the way to the concept of the “utility monster” – someone who benefits so much more strongly from having things than I do that they “deserve” to have everything I have, and my feelings and values are meaningless compared to theirs. And that is not the goal. Metamorality is about everyone being free to live their own life and fulfill their own values while being only minimally harmed or limited by society. Not about a chosen few getting everything because they feel more strongly or enjoy things more. We have enough of those worldviews already.)
So, what is my answer to the question of how much effort counts as affecting my life vs not? I don’t have an answer.
Are people right to hate things even if those things are metamoral?
I had a long argument with my father about his opinion on drug use (or other forms of “wireheading”) and on movements that advocate having fewer children. He emphasized a lot that he doesn’t think those things should be illegal, because he doesn’t have a right to force anyone to do anything, and it’s bad when laws forbid too many things; they have a right to do this, but it’s still bad (with the implication, which he didn’t state directly, that if they were better people, they wouldn’t do it despite having the right).
And I am pretty sure our actual disagreement was purely instrumental, and if we had enough information and enough time and enough mental clarity, we could find a practical answer that is either “oh yeah, this specific case actually has a meta-bad part that actually violates the values of others” or “no, there isn’t a problem” (though that isn’t actually possible, and would require, among other things, accurate information on the different possible futures depending on how many children are born). And in practice, even if he doesn’t intentionally act on those beliefs, what things he perceives as bad determines, for example, which parties he votes for, and what actions they will take.
But in principle, is it right? Could you be convinced that a thing is bad even if it doesn’t harm anybody?
Well, you can. You totally could have a value system that says something innocuous is bad. Metamorality doesn’t mean forcing people not to have their values, only that if your values are meta-bad, they won’t be fulfilled.
Now, intuitively, it feels correct to me that you can just feel that something is bad without it actually being a part of your values. There are a lot of things I hate when other people do them, even though they don’t affect me, and I of course know that they are not actually wrong – not only do those people have the right to do them, it is good they are doing them, as it fulfills their values. Examining my feelings to sort out what I hate because it’s actually bad (which is a lot of things, the world is terrible, remember) from what I just hate personally and shouldn’t endorse takes a lot of my attention, but it is something I feel is an essential part of being a moral person.
It is possible I am able to do it at all only because I use metamorality as a replacement for my own morality, and this is just a form of invalidating my own beliefs and emotions. That is true in a sense, but my personal morality before I thought of metamorality (which remained as unendorsed emotions) wasn’t at all coherent, and was pretty bad by conventional standards. For example, I really hated “low-cultured people” who talked about things I considered unimportant, listened to music I don’t like, and masturbated, and I thought all this made them inherently bad people. Which is bullshit. They can also be actually bad people, doing meta-bad actions, but that is a separate question, which should be judged only on things related to morality, not on things like music that have nothing to do with morality.
But there is a correct insight (by unitofcaring, read her blog, she is the best person in the world and very metamoral), that people often say and believe “your need is not valid and shouldn’t be met”, when they could say instead “your need is valid and important but I am not going to meet it”.
And that is true. Suffering from values being violated is always real, regardless of how good the values are, or how much they fit into general metamorality. The grief of a family over the suicide of a family member is real suffering, and it is bad, and in a perfect world it would be avoided, even if the suicide was completely voluntary, based on complete information, and the person wouldn’t have regretted the attempt if they had survived, so that the act is meta-good.
I believe people have a basic human right to that kind of suicide, and it is an extension of my belief in metamorality, because the person’s values are what make it right.
And yet the suffering is real, and it’s bad even if the situation causing it is perfectly metamoral. And the situation is bad and should be avoided despite being metamoral (and despite the fact that without the full moral right to that suicide, the situation would be even worse). But also, on an objective level, “my brother willfully and meta-goodly committed suicide” isn’t at all different from “my brother willfully and meta-goodly became a part of a wrong religion” (where “wrong religion” includes atheism), or “my brother appreciates the wrong kind of art and not the right one”. And it isn’t very different from the horror aliens will feel when they first meet humanity, and discover humanity sorts pebbles into unimaginably wrong heaps.
All of those are equally suffering, equally true and valid suffering, and suffering equally arbitrary in its source. Which is exactly the reason the truly right course of action is to allow each person to fully determine the course of their own life, and not the lives of others, even if it will still cause suffering (for example, it’s bad if nobody loves you, and it causes you suffering, but that obviously doesn’t mean people have to love you, that’s not how it works), and it’s still better for that suffering not to happen in the first place. But a perfect world without suffering isn’t possible at all, in my opinion, while a metamoral world with some, but still far less, suffering is possible, if not obviously achievable judging by our present situation.
You don’t have to act metamorally or believe in metamorality, as long as your actions are not actively anti-metamoral. You can follow your normal specific morality. And even be nice and altruistic.
But if you want to really be good, if you want to do things that make the world better, you have to be metamoral. Otherwise you will fail. Maybe not fully fail. Maybe you can make 90% of what you do really good. But the last 10% of your actions will still be bad. And they won’t become good, won’t become help, unless you realize that the people you are helping define what is good for them, and therefore what counts as actually helping them – not you, or cultural norms, or any other external authority.
There is the question of how much we should consider the values of those who aren’t alive.
Metamorality, as codified by me, applies not just to a person’s experience as related to their values, but to the values themselves. Which is why it is bad if someone who wants to live dies, regardless of whether they notice they are dying before they stop existing.
If values can exist even when the person is dead, should they also apply to people who are not born yet? Does that mean metamorality inherently requires that we lead to the birth of as many people as possible, because they would want to live if they existed?
I think not. If people do not exist yet, they don’t have values yet, so those values can’t be fulfilled or negated. It is bad to kill someone who wants to live; it is not bad to prevent someone from being born, because they don’t yet have values, and also we don’t know yet whether they would want to exist or not.
Preventing existence is also not good; it is neutral. Whether to have children is, I think, something that depends purely on the values of people who are currently alive. Though I think it is important not to do things that will make the lives of your descendants worse, regardless of the specific values they might have. Like leaving Earth a radioactive wasteland. I don’t think there is a non-negligible chance of people existing who would prefer to be born in a radioactive wasteland.
(So in terms of population ethics, my metamorality-based approach is closest to Person-Affecting Views. But it may still be very different.)
But does that mean the values of people who are long dead are important? People who did exist, and did have values, but lived 4000 years ago?
Hard to say. Their values are, in any case, irrelevant when it comes to society or global things, because under metamorality you have no say over the lives of others.
But does that make grave robbing meta-bad, as harming the personal belongings of people who didn’t want them harmed? I don’t know. Maybe? Maybe grave robbing is meta-bad (in addition to being culturally bad, in that currently living people don’t want graves robbed. Though they also mostly have no say).
There is the old question of “what is more important to achieve – more happiness, or less suffering?”, which I don’t know how to answer in metamoral terms, because I don’t see a clear way to distinguish “not getting something you want” from “getting something you don’t want” in terms that are objective and universal. Ultimately they are both just “world-states that are more or less preferable”.
I don’t have answers to all those questions. But no one really does.
But I think it is possible to find those answers. And it is easier to find them using metamorality as a basis. To be perfectly honest, I think they can be found only using metamorality as a basis, and not otherwise.
I hope people understand what I am trying to say here. And agree with me, or disagree and then write me, so we can both come closer to the truth through discussion.
And I hope people will make the world better because of reading this post. Or just make the world better regardless, my involvement doesn’t actually matter.
Clarification 15.12.24:
My description of metamorality is intended to do two things at once, which might be the reason for some of the confusion it holds. The first is to try to convince people who are already altruistic, and already want to be good, of what good really is, and that’s why it needs all those logical justifications from first principles. The second is that even for people who aren’t altruistic and don’t want to be, a society with a metamoral approach to morality will usually be better to live in (unless they actively need the values of others to not be fulfilled), regardless of how objective the justification for the foundation of the system is.
[1] I know a guy who once said something along the lines of “there are all those people who say they do good, but actually do horrific things. Therefore, good is fake and bad, and I hate goodness, I should be evil instead”. The guy obviously still considered some things good and some bad (and was angry at how bad the pseudo-good things were), he just insistently used a different terminology (and was possibly very self-deluded).
[2] Though many people misunderstand the law, and misuse it. For example, people present “it is natural to eat meat” as a violation of Hume’s law, but usually the actual argument goes “it is natural to eat meat, humans are adapted to eating meat, not eating meat might create problems for humans”. And then the considerations are based on the created problems, and on other factors. “It is natural, so it is good to do it” is of course a very obvious logical fallacy.
Written by RationalMoron/lord buss
Comments to this page are encouraged.
Other published Google drive pages (with long and short essays on various weird topics): https://drive.google.com/drive/folders/1Wnk9CP7qrFzV0nzNbKapc7b4u_QJeqZN (hopefully I will post them to a blog at some point)
Contact me at:
lordbuss@gmail.com
lordbuss_MageOfRage on discord