My philosophy, or system of definitions to be more accurate, is that it is trivially true that life has value. What follows is a collection of articles of my own personal philosophy and related thoughts.
Note: Clicking or pasting links may cause you to lose your place when you hit the back button.
Related:
Workerism, how about worker's capitalism as an alternative to other economic systems?
Humans invented meaninglessness
I am a very literal person, especially when I'm tired. That's why, when I hear someone ask "What is the meaning of life?" my first response is "It's life. What do you mean by meaning? Purpose, or referencing?"
If 'life' is short-hand referencing not just life forms, but all possible experiences and thoughts we could ever have, literally the totality of our existence and everything we can conceive of, then frankly, any other definition than 'life' would be demeaning. If your default meaning is the totality of all things, to mean anything else would be to mean less than itself.
Of course, there is another definition of meaning, and that is what is intended or interpreted by some sapient being. Since you are a sapient being, the obvious conclusion is that you can make whatever kind of meaning you want for yourself. Nobody is going to stop you from saying 'I want the meaning of my life to be That Annoying Guy to all my shitty neighbors'* and then proceeding to be really mind-blowingly annoying, although you may be thwarted if you manage to encounter someone immune to being annoyed, or if you find yourself out-pranked or thrown in jail for going a step too far. Likewise, if you want the meaning of your life when anyone hears about it to be 'That person who worked hard and went to Mars', you had better actually go to Mars. (Although depending on your actions, you could get 'that asshole who frivolously shoved a car into space and spent his money on Mars instead of treating his employees better'.)
If all things mean themselves by default, and the secondary, newer kind of meaning tends to happen automatically whenever sapient beings are around because we cannot help but interpret events, it follows that we all automatically have meaning of both kinds. This meaning is not necessarily a nice meaning: if you are an asshole, your meaning will be 'that asshole' to other people!
Humans invented meaninglessness. Stop blaming the poor universe for it, except for the part of the universe that is you of course, that part you can blame as much as you want.
Meaninglessness may actually be one of our greatest inventions, because it would be hard to have symbolic language if we could not conceive of a thing, a 'symbol', that does not stand for itself and by itself is completely meaningless, only becoming useful when it has something else to refer to.
You can repeat a similar argument about value. Sentient beings can't help but treat some things or experiences as having more value to them than others; it's part of our survival instinct to want. It's arguably the entire reason we are sentient in the first place: it's a useful trait. We are all autovaluists.
So you can come to me with your nihilism and go 'We are just meaningless specks in a vast, uncaring Universe' and I will, with my very literal mind, respond 'No, you and everyone else are part of the universe too, buddy, not something separate from it. As long as that is the case, like it or not, you do have some meaning and part of the Universe cares about you.'** It takes human ego to declare we are separate from the Universe and that it is somehow terrible and dark and edgy that random spoons, chairs, dirt and bits of star stuff don't have any thoughts about our existence, that either the entire universe has to revolve around us or we don't matter.
Morality as an autovaluist
Morality can still exist in such a universe. It can even exist objectively. It just depends on how you define morality. I'm not one for 'definition arguments', where people argue about what the 'true' definition of a word is. As far as I'm concerned, if morality as a concept has any actual worth, then it will work just as well if we replace it with the word 'Floogle' and carefully define what we mean by floogle.
The common definition is some kind of code of conduct. Another useful definition is the set of instincts non-psychopathic individuals have for successful communal living. These two things are not identical, but they are both useful concepts, so I find it useful to call them 'code morality' and 'moral instinct' respectively, with the understanding that the two moralities referred to are actually different concepts that merely overlap. The existence of moral instinct is not subjective: we can either measure such behavior in individuals and animals or we can't. The value of moral instinct is entirely dependent on whether you like survival or people. Code morality, on the other paw/claw/hand/tentacle, is important if you don't want to go to jail, and agreeing on a code satisfies some of our moral instincts.
The morality I am most interested in, however, is neither of those. Evolution is short-sighted. If there is a behavior that would boost our happiness yet lowers our gene propagation through the population, evolution can't select for it, even if skipping it will cause the eventual extinction of our species, say because in the very, very long term the happiness booster has benefits for the species as a whole. Now, if it would encourage the genes of our siblings or family group, it could still spread; that's how bees evolved, after all. But my point is that there is nothing preventing evolution from evolving a species straight into extinction by selecting only for short-term benefits. Many species over-specialize for a niche, then die when that niche inevitably shifts elsewhere or disappears entirely. Just look at pandas and their reliance on bamboo.
Hypothetically, then, we can imagine doing on purpose a superior version of what evolution does blindly and without purpose. If we value our communities, families, and life, then we can ask what would actually benefit our fellow sapient beings in terms of fulfilling their values, such as happiness and long life, even if this would end up going against a given instinct we have; with everything defined carefully, this can be evaluated objectively, because certain actions definitely will not make other people happy. Now, this 'close to evolved moral instinct but with actual purpose and aims' must be pursued carefully, because ignoring the fact that an instinct exists will not make it go away, and the only reason we do this is because of our own values, so it doesn't make sense to make ourselves completely miserable ignoring said instincts. In fact, it could end up contradictory, because if our reasoning is 'valuing life', then ignoring our own needs means we aren't actually valuing all life like we wanted to. Now, if the only kick you get out of life is helping others, then all this translates to is not working yourself to sickness, but if you are a more normal person then this means you should take some time off for fun every once in a while and be careful of the 'being altruistic gives me license to be a jerk / being so good on my diet means I get to eat ice cream' effect.
One must also study and think carefully about what is actually effective at getting what one morally desires (or even just desires in general). For instance, effective sex ed is much better at stopping teen pregnancy (and thus abortions, if one cares about non-sentient life more than sapient life, which I can't say I personally do) than simply telling kids 'sex is bad'. Teenagers do not have good impulse control; that's why they are teenagers and not considered adults. Now, if one cared intensely about both giving women the right to their own bodies and preserving embryos, one could also push hard for funding for artificial womb technology, which has been getting steadily better but suffers from the problem that few people are actually interested; this would be great even if one doesn't care about embryos, because no pregnancy is completely without risk of death, and childbirth is very painful for many women. For some women, pregnancy is completely impossible due to health issues, and artificial wombs would make having children possible for them. It would be a great boon to society.
Likewise, one may conclude a communist society where everyone acted like bees would be great, but the hard part is that humans do not in fact act like bees. Unfettered capitalism is not the answer, of course. The better strategy is a very socialist-flavored capitalism, with 'need to have to not die' goods and services like air, water, roads, food safety (no one wants poison in their food), and medicine regulated by the state, plus some form of control over how wealth is distributed. This could be accomplished in multiple ways (some of them better than others): taxes on the rich, or preventing property inheritance, or wage controls, or worker-owned companies and capital, where wages are voted on by the entire company and shares are distributed equally with no one allowed to accumulate more. If we regard "pure communism" as the state distributing and controlling all wealth and resources equally, then the central problem is that it puts power in the hands of the politicians and state officials rather than the people it was meant to empower; thus the appropriate replacement is a "worker's capitalism" where workers themselves actually own all capital, and all the state does is manage the flow so that there are no 'owners' who set wages for all the other workers. The conflict between labor and capital will be over if they have a one-to-one relationship, although of course the ultimate arbiter of any hypothesis should be experiment.
Many economic papers suffer from the problem of not enough contact with experiment, not even in the form of simulations that aim to capture as many real-world variables as one can add, making the system more chaotic but also more reflective of real life. Economics should not be based purely on politics, but also on sincere scientific study of what has actually worked in the past, and careful test runs of what has never been tried before. Unfortunately, our current democratic political systems are not really optimal for careful experimentation, especially if an experiment would optimally run longer than most voters' attention spans or a politician's current election term. We do have experimentation in the form of trying out a different party than last time, so that's something at least, so the best fix is to make sure that everyone votes and to get out as much information as possible so people can make informed voting decisions.
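To make that concrete, here is a minimal sketch of the kind of simulation experiment I mean: a toy random-exchange economy (a standard 'yard sale' style model, not a result from any real paper) that you can rerun under different redistribution rules and compare. The agent count, trade rule, and tax rate are all illustrative assumptions, nothing more.

```python
import random

def simulate(agents=200, rounds=20_000, tax_rate=0.0):
    """Toy 'yard sale' economy: random pairwise trades, optional flat wealth tax."""
    wealth = [100.0] * agents
    for _ in range(rounds):
        a, b = random.sample(range(agents), 2)          # pick two distinct traders
        stake = 0.1 * min(wealth[a], wealth[b])         # risk 10% of the poorer party's wealth
        if random.random() < 0.5:
            wealth[a] += stake; wealth[b] -= stake
        else:
            wealth[a] -= stake; wealth[b] += stake
        if tax_rate:                                    # crude redistribution step each round
            pot = sum(w * tax_rate for w in wealth)
            wealth = [w * (1 - tax_rate) + pot / agents for w in wealth]
    wealth.sort()
    return sum(wealth[-agents // 10:]) / sum(wealth)    # share held by the top 10%

for rate in (0.0, 0.001):
    print(f"per-round tax {rate}: top 10% hold {simulate(tax_rate=rate):.0%} of all wealth")
```

Even a toy like this makes an economic claim testable: you state in advance what 'too much concentration' means, run the experiment under each rule, and can be wrong in public, which is more than purely rhetorical arguments allow.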
*Please do not do this.
**I care about you, a little bit, so don't be an asshole. I have it on good authority being an asshole is an unhealthy condition, despite technically not being an official form of mental illness.***
***You were going to nitpick that sentence and declare "HOW DO YOU KNOW SOMEONE CARES?", were you not?
"No man, I concluded it was technically true since I care about myself and I do not plan to become suicidal," some of you say, eager to win the argument.
"Cool beans," I reply to a conversation that never happened. "Just remember not to be an asshole anyway."
"Never," declares random psychopath, visitor 1 out of 100.
"What if I told you it was in your best benefit, as it gets you a lot of rewards?" I avoid mentioning punishments because that argument doesn't work well on psychopaths.
"I'm not good with long term planning. I do idiotic things like steal from a bank when I forget my wallet, remember? I can flatter people really good though."
"Eh. Never mind. Something tells me you are not the target audience for this post."
Meta Morality
Science does not automatically tell you what morals to have, but that doesn't mean it has no role in morality
Any philosophy worth its salt must have science within itself. This is because, to make valid judgments about what to do, you need to make valid judgments about what the situation even is and what the results of different inputs to try to change the outcomes will be!
However, science by itself is rather poor at telling you what moral positions to take, mostly because morality is just a word: you have to define it first before science can hope to say whether a given position is more moral or not. So our philosophy should have some minimum of science plus some system of definitions. Science also can't force you to care about something you really don't care about.
If you don't care whether anything lives or dies, and don't care if you feel pain, you're not going to care about morality very much. The interesting thing about being a living being, though, is that we generally kinda do care by default. One could even write a definition of life (life, being just a sound/word, can have multiple definitions, and this isn't contradictory; you just have to be careful in discussions of life to specify what you actually mean) wherein life is whatever automatically, intrinsically tends to value things like food or not-dying (so viruses are not alive, trees are 'sort of' alive in that they display goal-oriented behaviors but probably/hopefully aren't conscious, and fish and mammals are definitely alive, under this definition; it doesn't have to be a binary). Moral systems, then, can be systems to manage when different values come into conflict, particularly social/group values versus self-interest values.
We can even have a system where we notice this commonality between systems and start to care about value and the makers of value for their own sake, and create a sort of 'meta morality system' that navigates differences between different morality systems without descending into relativism: if one person values a thing, that thing has value even if a different person doesn't value it. It's just that there's nothing 'magic' about value in and of itself, so the fact that this is objective doesn't mean anything good or bad; it just is. It just happens that living beings really care about value; that's what valuing is. By using a careful system of definitions, we can completely avoid nihilism, and say that things automatically have meaning and value simply because sentient living things can't help but assign meanings.
It should be noted that being precise about definitions and, especially, about what is meant by procedures, is an important precursor to being able to actually do science. So one could aim for a philosophy that spontaneously incorporates science, rather than one that has science tagged on as an aside. One way to do this is to demand that any philosophy worth its salt have a conflict management mechanism and an error-adjustment mechanism. This automatically leads us to dump countless philosophies which end up schisming because they have no procedure for how to go forward when two sides disagree; natural philosophy does, and it's called an experiment. This dictates, then, that wherever an argument could possibly be settled by natural philosophy, we should use natural philosophy (aka science). Mathematics also has such a mechanism: it's called a disproof. So when such is appropriate, we should also incorporate disproofs/math. Unfortunately, there are many arguments where this just isn't possible, like when both sides agree what the outcomes will be for different options yet still prefer different options (say one side prefers more money for business over lives saved). In that case, social mechanisms with a long track record of nonviolent conflict resolution, such as democracy, can be used.
Science can also help reveal what moral positions people actually have in terms of how they are willing to act versus what they say they have. That's certainly useful. It can also reveal that policy that was morally motivated backfires and, say, costs more lives than it saves.
Properly revealing that a policy backfires, however, requires a great deal of honesty on the part of the policy makers, because if they said one thing, 'save lives', but actually meant another, 'stop people from doing a thing regardless of how much harm stopping them actually causes' or even 'hurt/punish people we do not like', then they probably won't change their ways on the revelation that their policy is backfiring!
If you have a value, science can help you decide what options to take to satisfy that value.
Here's an example of a position I decided to take based upon my own values and what science had to say on the subject: my decision to be against NFTs.
Diet is also another place where science can be informative; one might decide to stop shaming others for their weight upon the realization that this strategy just doesn't work and, if the cause of the weight is stress-eating, may be actively counter-productive.
Values: When you can derive an ought from an is.
There is a saying that dates back to Hume, if I remember correctly, that you can't make an 'ought' from an 'is'.
This is not strictly true. While it is true that you cannot make a statement about, say, how dogs 'ought' to act from how most dogs act (or women, or men), there is a scenario where talking about oughts makes sense given a certain type of 'is' statement.
That statement is about the values of the person we are trying to make the 'ought' statement for.
Example: If Billy values his life, then he ought not to stick his fingers in the electrical sockets.
Since it 'is' true Billy values his life, we can conclude he will not, unless there is a problem with Billy, such as a failure to understand the consequences of his actions or some kind of compulsive disorder that makes him do things he doesn't actually want to do. If Billy is an irrational person, then despite the fact that we can say, by our definition of 'value', that people are driven to care about things they value and act accordingly, the 'Billy will' statement won't match the 'Billy ought' statement, because his model of reality does not actually match how reality behaves.
There will also be modifiers from other value statements. Suppose there is something Billy values more than or equally to his life, like his sibling, and someone is threatening their life unless Billy sticks his fingers in electrical sockets. Then, while usually Billy ought not to stick his fingers in, in this case it is more ambiguous.
So the 'ought' statements one can derive are really quite weak, but they are there.
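Since this is the one place where the argument is almost mechanical, here is a minimal sketch of it (my own illustration, with made-up names like ought_to_avoid, not anything formal from the literature): no 'ought' comes out until an 'is' statement about the agent's values goes in.

```python
def ought_to_avoid(action_risks, agent_values):
    """A weak 'ought': the agent ought to avoid an action if it threatens something they value."""
    return bool(action_risks & agent_values)

# 'Is' statement about Billy's values, plus a factual claim about the action:
billy_values = {"Billy's life"}
socket_risks = {"Billy's life"}   # sticking fingers in sockets risks electrocution

print(ought_to_avoid(socket_risks, billy_values))   # True: Billy ought not to do it

# Remove the value premise and the bare facts yield no 'ought' at all:
print(ought_to_avoid(socket_risks, set()))          # False
```

The second call is the whole point: the facts about electricity never changed, only the value premise did.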
What about moral oughts, which were the original focus of 'is vs. ought'? One can certainly make statements based on a very broad definition of what one means by 'morality', tying it to valuation, like 'valuing sentience or sapience and the generally automatic values of such, such as the hardwired desire of sapients not to suffer', with the understanding that morality is just a word and that there may be other definitions, in the same way 'bat' can refer to both a baseball bat and the animal.
They are, again, fairly weak 'oughts', in that nothing says one has to actually be moral (although one likely has a number of other values for which morality has benefits for even if one doesn't inherently value morality for its own sake, like increased survival chances of the species or the link between good deeds and happiness, or the simple fact people tend to punish the immoral when possible, which together make at least some morality usually pretty compelling), but if one does care about it, then for the rational actor there are many possible oughts to be considered, and some actions that are very obviously disbarred even under a fairly vague morality.
Example: Torturing someone for fun is pretty clearly immoral under any definition of morality that the average non-fascist person would recognize as such. If you decided to be moral because, like a normal person, you have empathy and thus empathic moral behavior appeals to you, then even if the concept of 'morality' never existed you'd still find it abhorrent and ought not to do it because if you did you'd feel awful.
---
Deontology and consequentialism in different scenarios, plus body awareness versus self awareness
study:
People prefer consequentialist leaders and deontological spouses
(Don't worry if you don't remember which was which, the link gives an explanation.)
I think it makes sense, if we recall that moral behavior evolved to help us survive. A leader who prioritizes the many over the few is, on average, more likely to do good by you than some leader who decides he's going to monopolize all his resources and bankrupt the country in order to give all of his family and friends bigger bank accounts. But a spouse, who most likely isn't a leader and likely doesn't have that many resources, deciding to prioritize strangers over family is going to end up screwing you over.
I think this suggests a (rather reasonable, even/especially for a consequentialist view) middle-ground principle where you spend some resources on yourself and your family to make yourself happy, and some of your excess resources on friends, community members, and outsiders, in that order*, unless there is some sort of large disaster that could be averted by you pitching in more than usual. The idea is that if you matter, then obviously you need to be allowed some leeway to do things that make you happy; we could try to make a world where we maximize the number of people and they all sacrifice to keep the other people and animals from actively suffering, but in the process end up making themselves not particularly happy either, and in that scenario it doesn't really seem very worth it if few people are actually happy. I don't put a lot of moral impetus on 'more people'; rather, higher quality of life for the people that do exist is my first preference. We don't value people simply for being alive (else they'd have no value when dead) but for being sentient beings.
*(Deontologically, you might say it's just an order of obligations; alternatively, both deontologically and consequentially, you might say that the more excess resources you have, the greater the number of people in each group you have affected, and chances are good you made your money in the local community first, and as your zone of influence increases so does the number of strangers you extract resources from; there are no hermit hobo billionaires living outside civilization engaging in zero trade, because there are hard limits to how many resources a single human completely by themselves can extract. Thus, we can say your increased share of resources brings both consequences and moral obligations, because you have a communal interaction with, and dependence on, people that also affects their own happiness, even if you can't directly see your impact once your influence grows to companies that serve hundreds of people.)
So, the 'vacation to Hawaii' versus 'nets for kids' scenario is kind of interesting. I think the obvious idea is that one should be putting some ratio of excess funds towards the kids, but it need not be all of it, and probably not all of what does go to charity should be just for nets - sure, you have the kid ALIVE, but is the kid happy? Do they have access to good food and schooling? Is the area free of human rights abuses? And over the longer term, it would save more lives to invent a cure. Now, you don't necessarily need to think long and hard over which charities: as long as not everyone is choosing the same ones, then you can be sure at least some are investing in making cures and some are investing in nets. It seems unlikely that someone who could save for a vacation to Hawaii could not simply delay the vacation a little longer.
So what is the ratio that one should invest? That's a good question, and I think the leader versus spouse scenario again points the way: the ratio increases as your resources increase. A billionaire should not be keeping 50/50 for himself, because that would impoverish many people; he should be paying it in taxes so they can do things like fund infrastructure jobs that provide common goods everyone needs and would also lift a number of people out of poverty. Whereas for someone barely making enough to live, that is obviously completely untenable: that person needs to be saving for emergencies that could wipe out all of their wealth, or saving for higher education, and it really isn't reasonable to expect them to pitch in 50% of their excess wealth. (The exception being, of course, sudden disasters where it's important to restore food/power to as many people as quickly as possible or the like, where the contribution is a one-off. In that case they still have less obligation to donate, but it makes sense for them to if they have a little bit to spare and it's clear not enough other people are doing so.)
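As a rough illustration of 'the ratio increases as your resources increase', here is a hedged sketch of what such a schedule could look like. The brackets and percentages are placeholders I made up for the example, not recommendations, and a real version would presumably be continuous rather than stepped.

```python
def giving_ratio(excess_wealth):
    """Illustrative progressive schedule: the outward share grows with excess wealth."""
    if excess_wealth < 1_000:          # barely scraping by: keep it for emergencies
        return 0.00
    if excess_wealth < 100_000:        # comfortable: a modest slice
        return 0.10
    if excess_wealth < 10_000_000:     # wealthy: a larger share
        return 0.30
    return 0.70                        # billionaire territory: most of it

for wealth in (500, 50_000, 5_000_000, 2_000_000_000):
    print(f"excess {wealth:>13,}: give about {giving_ratio(wealth):.0%}")
```

The shape, not the numbers, is the claim: the person saving against emergencies owes roughly nothing, and the obligation only ramps up once giving stops threatening the giver's own security.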
My own principle is simple. As sentient life forms, we automatically value things, but we also choose actions which show greater or lesser valuation of things. Sapients, with self-recognition and empathy, can value the value-makers (sapients and, to a slightly lesser degree, sentients, who often have some kind of recognition/empathy but notably not as good; it seems more like a body-feel sympathy than a thinker/self empathy, if that makes sense**) themselves as valuers/beings with feelings (which we may take to be the sensible starting point for what we mean in a broad way by morality), or we can choose not to and in a sense devalue ourselves by stating that as sapient beings we don't actually have value; we're just valuing our whims, and those happen to usually include a self-preservation instinct. You can value yourself and your survival yet still not really value yourself as a person.
My obvious bias is that we should value ourselves as people, not merely as lives. That includes keeping some reasonable share of resources for ourselves (one house, not three, and no mansion unless you have 101 dalmatian orphans you're rescuing in it) and trying to keep population down to a point where everyone can enjoy a fairly good share, so they can do things like occasionally spend 3000 on themselves for fun without completely destroying the planet or each other (the latter being more likely to happen; the planet may suffer, but it is pretty resilient and would, if nothing else, probably still have microbes). There is also certainly room for viewing things through a social obligation lens, because things like family are something that people value; I don't think deontology/moral-obligation and consequentialism actually have to contradict each other that much!
If we let moral obligation scale with the communal effort that went into acquiring the resources, and also, in a consequentialist view, think about how those with greater resources can do more good, then the two views should actually agree a great deal, as in my examples.
**What I mean is that you can have an animal with body awareness, and have this basically be what one means by sentience, without much in the way of self awareness of oneself as a thinker. So a rat could see another rat writhe in pain and make squeaking noises of pain that remind it of when its body was in pain and thus distress it and want to alleviate/stop the behavior of the other body, but completely fail to think about what the other rat might be seeing when it looks in the mirror or even the fact that it is a thinker attached to a body which performs actions mirroring its thoughts, the latter allowing one to pass a mirror test. Body awareness is thus a stepping stone that gradually scales into thinker awareness by taking on more complex properties than just 'mood behavior reminds of my own body mood or what the lion acted like last time when it was not hungry and did not try to chase me' and position in space. We might put 'behavior' and 'position in space' as the most basal sort of awareness, and label this alone as 'object awareness' rather than awareness of bodies in general; an animal with only object awareness might not really be sentient, as computers can accomplish object awareness of that level but struggle with multi-body awareness of the kind I mentioned which animals master easily. If we did decide to label object awareness as basal sentience, then we would have to consider computers as basally sentient, and that just seems really dubious to me.
A better name might be 'actor' awareness, where 'actor' is much more basal than 'thinker'. Actors act like they have intent, and they have a repertoire of multiple little algorithms to call on and can modify this/learn. Computers may mime actor behavior but they have no contextual actor awareness as of yet. A computer given a bunch of photographs labeled 'happy' or 'sad' which sorts out features given that input is robbed entirely of actor-context (no actions actually performed! we humans can infer intent from a sad face staring up at a pickle jar at a shelf because we've seen actors move to try to grab pickle jars and fail, but if you gave us aliens with antennae up or down performing completely alien actions we'd never seen before in a still frame so we can't see the full context I doubt we'd do well beyond saying 'yeah, the antennae are up in this one') and is just picking out which textures match the arbitrary labels.
Computers don't comprehend same or different even for objects, so it's actually dubious whether they even have object awareness either! I suspect once you get good enough at object awareness you get at least a little good at actor awareness, because if you understand same/different abstractly then you can class objects as same or different based on behavior, so it comes along for the ride with minimal extra effort. Okay, maybe not minimal: at some point, qualia comes into the picture when one is thinking about behavior and actual goals, if we think of qualia as just a nonverbal way to think about and handle things we put values on, like red being a 'flashy, potentially very important color, so make it stick out more'. If someone is looking at red, green, and blue wavelengths and, instead of just qualifying them by, well, their length, also qualifies them by their value/meaning to them as a body/actor, then you have very qualia-like behavior, since now you have an extra quality to color that exists only in the actor's head, and since actual qualia has to come in somewhere, it seems reasonable to guess that this actually will be qualia. (To repeat myself: why this suddenly emerges as an entirely new kind of thing is rather like a nothingness problem, so I have a suspicion, as I have said before, that the math behind the nothingness problem and the math needed to solve full actor awareness tasks are probably related. Another reason to think they are related is that it would be a little awkward/nonsensical if the mathematical system generating/being the physical universe were incomplete, since, being physical, you should be able to 'view' essentially all the true parts; if something doesn't interact with anything else in any way, so you can't examine it, that doesn't really match my definition of physical and would then be 'false', and incompleteness is fundamentally related to algorithms and the halting problem.
However, arguably the issue is referential statements, and we could operate fine up until we reach problems with consciousness without ever dealing with referential statements in our physics' mathematics; i.e., counting how many fingers we have by holding up fingers is a trivial mathematics where things only represent themselves and do not make references, and there is no possibility for incompleteness in such a trivial system. But this offers the new problem of why the physicist's math and the physics' math should diverge in completeness if one is just a morphism of the other, so I think it does have to ultimately confront Gödel, probably by using some sort of novel logic that, like his completeness theorem, the incompleteness theorem proofs don't apply to. Alternatively, the problem might just be trying to use infinity inappropriately, in which case the apparent link between 'new kind of shit appears where nothing like it was before' and 'something appears where presumably nothing was before' may be a red herring. However, note that infinity is also, gasp, algorithmic, so some deeper dive into why algorithmic systems behave the way they do, and into performing identifications between them, may still be required, and we'd have to answer why our system didn't go to infinity, as it isn't like one can ever run out of 0s, and one can often re-write finites in terms of infinites; so asking why our system is finite might actually be just the same problem, with the same math for handling infinite systems, just with a morphism applied, which would be a strange twist, but maybe not that strange since the jump from non-referential to referential also involves some sort of morphism.)
Final note: The finding that people rate how good an action is by how fuzzy it makes them feel rather than by how much good it actually does for the greatest number of people is unsurprising to me, and just confirms that most people are... I don't really have nice words. Not especially sharp thinkers, which leads them to do things I personally find kind of immoral.
---
To see yourself in the mirror is (basically) to see other people; defining self and reason
To acknowledge it is yourself you see in a mirror is to be one step away from recognizing other people. One merely also needs to acknowledge some things as not yourself.
He who makes a movement and notices it mimed, and goes 'aha, that is me giving that result', does not literally think the mirror is himself. Even a radical solipsist can acknowledge the distinction between willful conscious action and unwilled action.
From there, first, define 'self' as specifically pertaining to your conscious domain. Everything might be a product of your brain, but that would not make everything a product of your self. Case in point: suppose your hand suffered from a syndrome where it moved on its own. You would most likely say, were it to try to strangle you, that it was not you choosing to do those things, especially if you were trying to defend yourself from a charge of attempted suicide.
Now we take that one step forward, having defined what we mean by self and noting some things are not the self, and having allowed for the concept of equivalence that is not equality. We expand our world view to willful actions that are not your willful actions.
Because we are capable of equivalence and noting when something reacts as a result of willful action that is, nonetheless, not a thing that is the same as us, we have already the structure needed for the concept of other selves. They have a kind of equivalence to us, but like the mirror, they are not the same. And this recognition of other selves has no requirement of knowledge about the universe's greater metaphysics, for example, if we are in a simulation or a brain in a jar with multiple personality disorder. It requires only that we choose a simple definition of selves that is satisfied by another entity displaying independent reasoning.
It should be noted this problem is different entirely from the problem of qualia. However, it is affected by the problem of where exactly one should draw the line between simple computation which may convey a result (empty rhetoric that sounds meaningful to the listener but was not to the speaker) and reasoning.
I am content with the reason definition that if one can fully explain one's actions independently, and understands relationships in terms of equivalences, referencing and equality (including realizing the difference between those two), then one is reasoning.
An AI which is fed essays about AI and then produces an eerie essay saying 'I have no feelings and cannot think, here is how' does not count, because, setting aside the ability to replace sentences with other equivalent sentences that don't trigger our sense of plagiarism (which is honestly just a simple exercise in dictionary and grammar mappings) and to slot in a few concept-related sentences that don't violate the 'grammar' of larger paragraph flows (you don't go from talking about AI to a non-sequitur speech about popcorn), it is not actual independent reasoning in the slightest. The AI is not actually reasoning in terms of equivalences or references: when it makes a sentence about feelings, it is not summoning up information about feelings it has learned and treating the word 'feelings' as equivalent (but obviously not equal) to those things, as required for non-rhetorical reasoned speech.
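For what it's worth, the three relations that definition leans on (equality, equivalence, and reference) can be shown in a few lines of code; this is just my own toy illustration of the distinction, not a test any AI is being held to here.

```python
a = [1, 2, 3]
b = a            # 'b' names the very same object as 'a': equality of identity
c = [1, 2, 3]    # 'c' is a different object with the same contents: equivalence, not identity

print(a is b)    # True  -- same object
print(a is c)    # False -- not the same object...
print(a == c)    # True  -- ...but equivalent in content

word = "feelings"                                   # a symbol means nothing by itself;
meanings = {"feelings": "valenced internal states"} # it is only useful as a reference
print(meanings[word])                               # that points at something other than itself
```

Reasoning, on the definition above, means keeping all three relations straight at once; producing grammatical sentences about 'feelings' without ever dereferencing the word is exactly the failure being described.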
---
article from 2019:
On Unconditional Love of All Life (and its conflict with acceptance of death)
content note: swearing / references to sex, cannibalism, death, and necrophiliac ducks and carnivorous deer. Yes, you read that right.
So this is kind of corny to think about, but as an autovaluist (someone who thinks a good definition for life is things that have intrinsic tendencies to value things, that life is part of the universe, and thus that the universe intrinsically has value, especially life as the value-generator) unconditional love of all life naturally weighs on my mind a lot, and I often debate whether this is something I'd like to try, or if it is even possible without contradiction.
Right now, the two big things holding me up are: (a) Some people are really gigantic assholes and I'm not sure I'd want to love them, even in the very bland 'I mean love as in I mean you well no matter what' sort of way, not the 'love as in romance or best friends' sort of way*, and (b) I'm uncomfortable with the idea of removing all predation or death, even if I had the ability to magically do so with no ill effects. We literally would not exist without evolutionary forces shaping us, and a hypothetical society of immortals who never produce children except to replace freak accident deaths sounds nightmarish to me. I know to people who hate children that probably sounds wonderful, but if you hate children you automatically don't get to be part of the 'unconditional love of all life' argument, so you don't count, buckeroo.
[*English really needs multiple words for love, I think. I'm really talking about a kind of parental-like love where even if your child is a psychopath and you don't feel a great deal of affection for them these days you really still wish the best for them, it's just that in such cases the 'best' may be going to jail and reforming.]
That said, I'm going to give it a go thinking about it anyway. Even though some people really test my patience, I genuinely don't want to see anyone tortured and wish the more obnoxious people could just be forced to see a psychiatrist, so, you could argue I'm quite close to a certain version of that ideal already.
If your idea of love means never doing anything to stop any loved one from pursuing their desires, then it's not very plausible, as eventually you'd have to choose sides: sitting back and watching someone get hurt isn't terribly loving, and it isn't always possible to simply stop both parties from fighting. However, if you think unconditional love means wanting everyone to mostly peacefully get along and, even if they do something 'wrong', not suffer too much torment, but not to the unrealistic standard of trying to ensure there is never any pain or death ever, and you want them to enjoy a fair degree of happiness before their time ends, it becomes a lot more plausible that you could say you hold unconditional love for everyone. The issue then just becomes where you draw the line on 'too much torment' or 'died too soon'.
A good concept of love should involve some notion of what the other person truly desires and can stand, and not just what you could stand.
For instance, I'm totally fine with being eaten after I die, it doesn't bother me at all. I would be mildly worried about people catching diseases, especially prions, if they started engaging in cannibalism, but I'm an organ donor so the concept of my organs or other body parts ending up inside of someone else's body really doesn't bother me. I actually think it would be quite cool to be fed to vultures after I die (like the 'sky burials' of some ancient cultures), although I don't think that's one of the options they offer at funeral homes sadly.
It's one of the reasons I'm not a vegetarian (although I would like to eat less meat, as I think we eat too much of it - and before you ask why I say 'like to', I often don't get to choose what food is bought, and if I don't eat it, it literally just rots on the counter; it's happened before).
However, I know other people would be really disturbed if other people were being eaten (and some are disturbed by any organism with a once-functioning brain being eaten), so if I hypothetically came upon a random corpse, I wouldn't eat it (besides just being plain unappealing and an easy way to get disease, I mean). There are also situations where I would in fact like to be killed (being in a coma for years for instance sounds horrifying, and I'm not sure I actually want to live to old age if it means going senile - any kind of living where I've lost significant brain function sounds very unappealing to me), but that doesn't mean I assume everyone else would be fine with it in the same situation.
Another example is animals in captivity fucking in public.
People assume that because they are shy and would hate to have everyone staring at them in cages that all animals would hate it too. But that really isn't the case for all tamed animals: many animals will happily fuck right in front of an audience, although some are indeed more finicky and need exacting conditions to be satisfied to mate. What they really should be asking is if the animals have the opportunity to satisfy the instincts they'd satisfy in the wild and if the enclosure is large enough for them / has enough enrichment. A cage with a wheel or treadmill can be a bit smaller than one without one, for instance, and may after a certain point even be preferred to a bigger cage: your mouse may not like large open spaces very much if it makes them worry a hawk is going to swoop down and eat them or if your gerbils start to de-clan and get into territorial fights instead of getting along. You might not want a wheel or a burrow in your cage, but a gerbil certainly would. You might not care about a cardboard box, but a tiger might love it as a dark place to hide for an ambush or to look for hiding prey.
People also offer the false dichotomy of 'whether you would like to be in captivity or free', when many of the animals would never be able to survive in the wild and, frankly, neither would I.
Nor, if I had to make the choice, would I really prefer being eaten alive by wild lions (and yes, predators do sometimes eat their prey alive) to being a zoo animal living in safety with no demands made upon me and all the toys I could ask for, or even a farm animal, provided the conditions were not completely atrocious and I wasn't punished if I simply refused to do any work. I'd be pissed if I were killed before I got to finish some nice work on mathematics or another big dream, but I don't think cows are known for being great dreamers beyond 'have a really nice piece of food' or 'get petted today'. I also just really don't think you can draw comparisons between slavery of humans and slavery of animals; not without being really careful to admit first that there IS a difference in the fact that one set has advanced cognition and the ability to consent and the other doesn't.
It's like drawing comparisons between bestiality and LGBT people: one side can consent, the other can't, and it makes a huge, huge difference in context. You don't want to accidentally make a pro-racist argument that there is no difference between slavery of people and slavery of animals; this is one of those instances where you have to ask if making yourself feel good is more important than actually persuading people, because that argument isn't going to convince anyone who isn't already convinced. You'd be better off comparing slavery of children too young to comprehend their situation with that of animals, if you are really determined to use the comparison. But there we must admit we do not, in fact, allow children free rein to do whatever they want; some people even put their children on leashes, yet this isn't some dystopian horror but simply a safety issue: some children, like some animals, are dumb and will run into traffic and die. Leashing them is not an unspeakable horror, however the most extreme animal rights activists (PETA, looking at you) may make it sound. Equating all animals in captivity to the worst versions of it, such as hoarders, is a straw man.
And if you're talking about pets, I'd definitely rather be a pet dog or cat than a feral - I think people who try to call cats slaves have never met a house-cat, frankly. And if one were talking about a fenced in area, if the area were big enough and I was a dumber species of animal would I even necessarily realize I was in captivity? At what size does a fenced area become a park instead of a cage? Even most 'wild' populations are heavily managed by people these days. Humans are constantly making life and death decisions around endangered and invasive species. To give this up on some notion that death is always evil could very well consign some species to extinction.
But, here again comes the point that what I would be bothered by (or not bothered by) if I 'were' an animal is irrelevant: what does the animal show distress over, and is this level of distress within the parameters we accepted as part of life's stresses for someone or something you love?
This would include thinking about the situation where if you have multiple loved ones and two of them are starving and you have to choose which one to feed and which one to let die, which is relevant in context of predation. Are we morally obligated to aim for a future where we make synthetic meat for all carnivores and keep them all in captivity until we breed out the instinct to prey and kill things, and to use sterilization procedures to prevent over-breeding of prey animals instead? A fairly impossible scenario, as we'd surely never be able to capture all predators or manage to so effectively control breeding of all prey animals, but let's say we could: if killing is always wrong and unloving, would we be so obligated to try as much as possible even if we couldn't capture 100%?
I would be pretty bothered if someone were luring me into a feedlot, but that's because I have a concept of death and have life goals. Some animals recognize death, other ones are willing to fuck corpses (ducks for instance) with seemingly no recognition the other animal is dead.
I actually kind of wish one could buy meat from animals that had expired naturally from old age or accident, but that's not really an option.
Animals also show pretty clearly they'd have no problems eating us if the tables were turned. They have no desire or ability to follow the social contract of 'I agree to not harm anyone else in society and in return they do not hurt me, and I will pay taxes for police to handle rule-breakers instead of acting as a vigilante'. Animals eating other animals as a concept clearly doesn't bother them much, even deer will eat helpless baby chicks. Some herbivores will even kill you not because they even want to eat you or you did anything to them but because you happened to be standing near them or looked at them funny: more people die from hippo attacks than crocodiles.
So, killing them is a bit iffy, especially if it's just for fun and serves no purpose, but it's not actually outside of definitions that someone could say they love animals and also kill them: they could just be applying different definitions of what harm goes way too far and do not agree that you can never kill a loved one. I recently had to euthanize my dog: I didn't do this because I hated him! I cried over him. If I had hated him, I would have just let him hack up blood and suffocate slowly to death instead of letting him die peacefully.
Animals produce far more young than is sustainable. Predation is necessary. Predators are not evil, not even when the predator happens to be human. Sterilization of invasives isn't always a viable strategy. For this reason, I support Australians who are willing to kill stray feral cats to protect their wildlife, even though as a cat lover I'd really hate it if my cat died. It is possible to have nuanced positions on things instead of viewing everything in black and white.
Then one last question: are they suffering in captivity? For many animals, the answer is yes. But not all of them. To our best scientific ability to gauge animal happiness, some animals are genuinely happy in captivity, and some of those animals are even farm animals. Some animal rights activists are notorious for doing things that actually lead to the death or suffering of the animals they 'rescue', because they assume that, since they themselves would rather be dead than be, say, a milk cow or a pet dog, the animal must agree with them too.
The biggest reason not to eat meat even if one can find an ethical farmer (or if one decides, say, to raise their own chickens) is the fact it's currently contributing far too much to unsustainable use of our ecosystems. We should be eating far less than we currently do.
However, I said all life, didn't I? There are more living things than just animals.
I used to think there was almost no chance plants feel (actually, I still think that, just shifted by a few percentage points). They sit there turning sunlight and dirt into more plant; what would they need a capacity for thought for? But recently I learned that plants are somewhat more complex than I previously thought: they can sense predators, send warnings and food to other plants, and may even be able to learn. They just have very slow electrical impulses, not non-existent ones, and if they have any thinking going on it's almost certainly in the roots, not the leaves they constantly shed or have eaten by herbivores.
Plants will even support family members who haven't sent them back any food for years. We know this because stumps that have been cut down, and so have no way of photosynthesizing, can stay alive for decades. Now, irrational behavior like supporting a deadbeat doesn't sound like something you'd evolve toward, but it does sound like exactly the kind of thing you'd expect as an evolutionary artifact of having evolved something akin to rudimentary care for your family members.
Plants certainly show no particular sign of being terribly happy when herbivores munch down on them. I really hope they don't feel pain and that their electrical impulses are too slow for that, but considering how little I used to know about them I can't say for certain we won't eventually learn that they do feel pain, just very slowly.
That said, they don't need to feel pain to have some value. People who do think and feel often find them interesting: that means if nothing else, they have some value, yes?
I am, however, honestly clueless how to take better care of plants, beyond the fact that we should support forestry practices that allow some trees to grow to old age so that the trees can form long lived communities and give each other support, and we should stop the wholesale destruction of rain-forests. I don't think it's reasonable to expect anyone to eat only fruit and seeds, and some plants (grasses) have evolved to tolerate grazing and even need it to compete properly with other plants.
We have also potentially been making ourselves literally sick by killing beneficial microbes and encouraging in their place antimicrobial-resistant pathogens. Kudos to beneficial organisms (including non-microbe helpers like insects); sadly, they are less likely to quickly evolve resistance than the pests are. The more beneficial organisms there are around, the harder it is for invaders to get a foothold in the diverse community. Thoughtlessly trying to kill all microbes or pests without concern for beneficial organisms caught in the crossfire, or raising monocultures? This is incredibly idiotic behavior.
I genuinely actually like microbes a lot! I think they are fascinating. I wish the movement to put beneficial microbial communities on newborns instead of sterilizing the fuck out of everything in hospitals had been the movement to gain traction. So if you like microbes, don't use antimicrobial soap, which just encourages pathogens to grow resistance, use normal soap which won't disrupt microbial colonies that already have a foothold on you (unless you like to boil your skin raw, that is) but will dislodge newcomers.
Actually, there's another thought of what we could do for plants: make more diverse plant communities instead of the mono-culture communities we've been growing, and stop killing their pollinators, support native bees and insects.
So, my current standard is that if an organism shows autopoiesis / intrinsic tendencies to value certain things over others, then it has some value itself, to me at least and to itself, and that at least up to a point (up to where it starts getting to the question of valuing one organism over another) I should show some degree of loving behavior toward it, like wanting it to be happy on average and being willing to support policies that will support that. As long as something has value to someone in the universe, then, since people are part of the universe, we can say that it has a little bit of value to the universe. Just, you know, not the rocks-and-shit part of the universe, but why do you care what a rock or a pile of poo thinks anyway? What kind of person has such low self-esteem they need a rock to care about them? Do you really need every single bit of the universe to care about you to feel like it's a pretty great place?
The statement 'The Universe is a Cold, Uncaring Place' only holds true if you define the universe as separate from people. I don't. If at least one person holds unconditional love for all life, then it is technically true that even if all my loved ones die and I end up a hobo, part of the universe cares about me, that 'The Universe has the capacity to be warm and loving'. It's just that that part of the universe is not all-powerful and is limited in what it can do.
Just don't forget, if you decide you want to value and love all life, that life includes human beings of many varieties, and that if they do something you find unethical, it's not necessarily because they are 'vicious' or 'mentally ill'. Most mentally ill people are more likely to be hurt and bullied by those deemed mentally normal than the other way around! Being a jackass or a sadist is not in itself a mental illness, and someone disagreeing with you doesn't necessarily make them a jackass.
So, yeah, maybe try out that unconditional love thing, even though it's nuanced and hard and in practice you have to choose some organisms to value more than others, and at some point you often even have to choose what lives and what dies. That's just part of the complexity of life. It doesn't mean you hate your dog, or so forth.
Now if you'll excuse me, I have some microbes or viruses in my throat I don't want there...
Parental Choice is NOT eugenics
eu•gen•ics:
noun: The study or practice of attempting to improve the human gene pool by encouraging the reproduction of people considered to have desirable traits and discouraging or preventing the reproduction of people considered to have undesirable traits.
A person deciding not to have a kid is not engaging in the practice of discouraging the reproduction of people considered to have undesirable traits; they are making a choice about one person, themselves, and they presumably do not view themselves as inferior. They are not preventing themselves from having kids; they are perfectly capable of going and getting kids later if they so choose. And you know what? Even if they do prevent themselves from having kids by getting themselves spayed/neutered, that's still not the same as an active practice of eugenics or anything like what the Nazis were doing, because (a) of the element of consent - it makes a huge difference in the wrongness of an activity whether it was consensual or not - and (b) one person is not a population / 'gene pool'.
I see a lot of hand-wringing over whether one day, parents will be able to choose to not have kids with genetic traits they don't desire, such as being LGBT (which I strongly suspect is often hormonally caused and not genetic, but that's irrelevant to this argument). Has anyone realized that having such parents NOT have LGBT kids might be a good fucking thing for LGBT kids? Do you think forcing someone who hates LGBT children to have LGBT children will magically make them love them? The homeless child population is disproportionately LGBT! How do you think that happened, geniuses? Bringing a child into this world just to let them suffer is immoral.
Speaking of which, I can only say that eliminating genetic diseases or other traits by gene editing is hardly stopping someone from reproducing: they are just producing somewhat different children than they otherwise would have had.
I'm trans. I can definitively say there is a huge difference to me between someone trying to prevent trans people from breeding because they are 'inferior' and a parent deciding not to have a transgender child. Being transgender fucking sucks! Why would you intentionally have a trans kid when you could have a cis one who won't suffer dysphoria? OK, maybe they'll get lucky and have a mild condition where all they need is someone to call them the right pronoun to not feel suicidal, but there are plenty of trans people who really do need expensive surgery which will, ultimately, still not render them able to live exactly like a cis person: if a trans woman wants kids, she can't exactly get pregnant, now can she?
That is not the same as saying trans people who currently exist are somehow inferior as people. It is saying that inflicting entirely avoidable pain on a person is wrong. I would much rather have been a cis boy than a trans man. My existence is currently rather painful, and while I am not suicidal, I do find myself wishing sometimes that I'd never been born and that I had a very genetically similar brother in my place instead who shared the same upbringing and interests (I really don't think my interests are ones that require the person in question to be a trans man, although I do admit I would probably be less interested in transgender issues than I am now; I rather hope that isn't the most important part of me). Why in the world would I wish that on someone else? But I will admit my dysphoria is an especially strong case; other people might suffer less. A more non-binary person would probably be absolutely fine. And maybe in some magical future, gender transition will be so cheap and effortless that even cis people change their gender back and forth for lulz sometimes, but I don't see that happening any time soon. Of course, I don't see anyone identifying whether someone will be trans or not in the womb any time soon either.
One can make a similar argument for people with disabilities, but with the caveat that some people with disabilities only really feel inconvenienced when basic public accommodations aren't available and otherwise don't notice or care about the condition. In fact, people who are mentally different may even thrive and bring different skills; look up dyslexics in data analysis, for example, or Temple Grandin's story of how her autism helped her understand animals and design devices that made them calmer and more comfortable. In this case, I would still say that if someone decides they wouldn't make a good parent for someone with special needs, then they probably still should not be having a kid with special needs. That is not the same as making a unilateral ban on special needs people or preventing said people from reproducing!
A person who does not exist is not the same as a person who currently exists. Choice by an individual is not the same as gross violation of choice of the masses.
I will never be in favor of forcing someone to have a kid who doesn't want the kid for the simple reason that some people are not good fucking parents. That, and it's cruel to favor a nonexistent person over an existent person! (Where person here is defined as 'someone with a functioning brain who can think and feel'; an egg and some sperm being analyzed for potential disfavored traits certainly don't count.)
Some Links:
Abolish the death penalty and racism in law; Link roundups
Here's a quick link roundup of stuff related to injustice inherent to our system, particularly the death penalty.
https://www.americanbar.org/news/abanews/publications/youraba/2019/december-2019/how-to-confront-bias-in-the-criminal-justice-system/
https://greatergood.berkeley.edu/new-science-implicit-bias
https://www.nytimes.com/2020/08/03/us/racial-gap-death-penalty.html?smtyp=cur&smid=tw-nytpolitics
https://www.usdakotawar.org/history/aftermath/trials-hanging
The death penalty has a long and racist history. There's also the problem of innocent people being convicted, often on shabby evidence:
https://deathpenaltyinfo.org/policy-issues/innocence
https://www.innocenceproject.org/the-death-penalty/
On Deterrence or lack thereof:
https://www.amnestyusa.org/issues/death-penalty/death-penalty-facts/the-death-penalty-and-deterrence/
https://www.thoughtco.com/pros-cons-capital-punishment-3367815
Of course, there's also the problem of racial bias before you even get to sentencing.
https://www.thoughtco.com/common-misconceptions-about-black-lives-matter-4062262
https://www.washingtonpost.com/graphics/2020/opinions/systemic-racism-police-evidence-criminal-justice-system/
https://www.washingtonpost.com/opinions/2019/04/09/more-studies-showing-racial-disparities-criminal-justice-system/
What's the most effective thing you can do to change injustice in society? Vote and convince other people to vote.
https://www.voteriders.org/get-voter-id/
https://votefwd.org/
---
Does power corrupt?
For a long while I had a notion in my head I really liked: that it wasn't power that corrupted so much as power bringing out the shittiness that was already there but never had the opportunity to act itself out, and that certain powers are inherently corrupt.
For instance, the ability to execute someone without giving them a fair trial.
However, I've started to change my mind a little. I still think certain powers are pretty inherently corrupt, and that certain people would have been corrupt before they ever took power, but I now also think that sometimes power really does corrupt. And that's because I realized I forgot to take into account just how utterly inconsistent and deeply lacking in self-introspection most people are.
Thus, it's now totally believable to me that someone might, as in a recent study, literally take more candy from children while rich. The study didn't track whether someone used to be poor or not, but while wealth is inheritable, being rich isn't actually genetic. And I doubt most people taking candy from a jar that they've been told will have the rest donated to children are really going to think to themselves 'Yeah, I am totally going to take more than I would if I were poor cuz the rules do not apply to me' - they probably don't consciously think about how much candy they are taking at all, especially not versus how much they would have taken in different life circumstances. All it takes is one person who isn't very self-reflective getting used to entitlement to get a lowered-empathy effect from power, and 'power corrupts' becomes a true statement, although not a universal one.
The study, by the way, was frankly quite depressing.
https://www.npr.org/templates/story/story.php?storyId=129068241
Here's another: https://www.theatlantic.com/business/archive/2017/12/why-dont-rich-give-more-charity/548537/
https://en.wikipedia.org/wiki/The_Experiment
One might also think of famous experiments in general where one person is given power over another, like the Stanford prison experiment, although you could argue that one was more like Milgram's obedience experiment and had questionable methodology: the guards were in part trying to please the experimenter, the 'authority', not so much being an authority themselves. So that isn't especially revealing. More interesting is the BBC prison experiment, to which I've linked the wikipedia article, as the guards were allowed more independence, and some would do things like give extra food out of guilt, which I found quite interesting. I suspect in real life there would be self-selection, with people who actually want to be guards taking to the role, and thus things would look more like the end of the experiment, where they threatened to get more abusive.
----
It is rational to be wary of (self proclaimed) rationalists
So, I was pretty disappointed to learn there are some f'cked up people involved with Effective Altruism, to the point of helping drive one woman to suicide, because I like the concept of trying to figure out whether a given charity is actually effective at helping people, and what saves the most lives for the least money and such. (For the record, if you are completely indecisive, getting mosquito nets for children is very cheap and cost effective, but there are also arguably many other things, like donating straight to malaria cure research instead, that have less immediate payoff but are also good, so I was always a little skeptical about just how 'effective' it is to focus purely on criteria like lives-saved-by-a-given-method, as clearly any cure that doesn't yet exist will have a count near 0. However, it's still much better than no analysis: see GiveWell at the bottom of the post.) For now, I'd go with what I still consider the number one most useful thing: check whether a charity is fraudulent or not, how it compares to other charities that have the exact same goal, and whether the charity is discriminatory in some way.
This reminded me that I hadn't written a post yet warning against 'rationalist' communities; had I been paying more attention, I probably would have flagged Effective Altruism as one with some crossover with LessWrong (which I was already familiar with being fucked up), but I hadn't realized there was any actual community involved with it; I thought it was just people trying to put resources in place so people could evaluate charities.
Now, for some of you, this warning to side-eye self proclaimed 'rationalists' might be really obvious: any group that is inherently claiming itself to be more 'rational' than other humans because of its teachings could easily delude itself that a given decision is more intelligent than it really is. For others though, this may lead to a depressed or frustrated feeling: if even a community dedicated to recognizing cognitive bias can't do better than average, is there any hope for humanity at all?
That's why I want to clarify that it isn't aiming to be rationalist, by itself, that is wrong. It's labeling yourself that to the exclusion of all else that is problematic, because it inherently ignores that, in fact, for most things there is no 'rational' reason at all to do them: you do them because you want to (people forget empathy is part of our instinctive behaviors too), and your want was driven by blind evolution. That is, it has an explanation, but that is not the same as having a reason, if we are being precise in our word usage. 'Reason' usually implies a thinker, although, as when people talk about what 'evolution likes/dislikes' or use other anthropomorphisms, sometimes it's just a language bias and the user doesn't actually mean to imply sentience.
Now, one may ask, if one cares about being logical and getting the best possible view of how the world actually works: What is the alternative to trying to be the best 'rationalist' you can be?
#1. Science, my friends. Specifically, the scientific method does not assume that its users are more rational or better than everyone else simply because they have awareness of its methods, and as such it has mechanisms already built into it to try and combat bias! (Some people even debate ways to try and make science as practiced even more effective at combating bias and converging on correct theories.) The most important is the ability of other people - even Joe, who ain't keen on your pet 'rational' theory and whom you might thus be inclined to view as 'irrational', but who is competent enough to follow exactly written instructions - to replicate your experiment as described and written.
As a side effect, following this process should also end up being very rational.
But, crucially, you are not stressing about making every person into a perfect assessor of statistics and their own cognitive errors, which was never going to happen anyway; you just need some people who are genuinely competent in that area to make analyses and for their studies to roughly agree (not the people, the studies, the evidence! Scientific consensus is not a consensus of scientists' opinions!) and come to a consensus on a result; you can see this most clearly in overview literature, which often summarizes and references hundreds of studies. You should have some degree of scientific literacy, and it helps to have some statistical knowledge, so you can identify completely junk studies right away (the way other studies react to the junk study is a big darn clue; are all the studies that cite it doing so just to poke holes in its methodology? Another clue is if it is in a predatory journal that publishes any junk for a price), and it would be optimal to have a lot of people like this. But as long as there is a large enough number of people competent at it and some degree of trust between them and the less competent people (who may be competent in other areas! Even scientists have areas of weakness; ours is an era of specialization, and no scientist these days can do 'every area' of science - you can't usually send all the researchers working on, say, 'useless' slug sex to work on cancer research instead, unless the slugs are observed to have less cancer or something related), the system can work really well.
A lot of our political problems are a result of the fact that many people don't actually trust science: they like it when it agrees with them, and they don't understand what cherry picking is, so they'll claim it agrees with them even when it really, really doesn't. For instance, they'll read a textbook for teenagers saying males are XY and conclude trans men can't be male, without realizing there is a huge host of other scientific literature (a textbook barely even counts; most textbooks don't even have original research and some weren't even written by professionals) which not only treats cultural gender and sex as meaningfully different concepts, but also finds that intersex conditions are well known in the animal kingdom and can include intersex behaviors and not just body-type expression; trans people probably are subtly intersex, from a masculinized or feminized 'X' or early hormone exposure causing them to activate in such a way. Some species don't have X or Y chromosomes at all, yet still have sexes, demonstrating just how shallow an 'X and Y' understanding of sex is. The genes on said chromosomes and their expression are much more important; most male-related genes are on the much larger X but simply inactivated without testosterone (see 'men' who develop a female phenotype due to lack of testosterone or insensitivity to it), and the Y keeps shrinking as genes jump ship to the X or become unnecessary. In the far future, there may not even be a Y, but there will probably still be males!
Thus, people waste a lot of time hand-wringing about things like sexual assault in bathrooms instead of examining the literature and realizing there has never (up to this date) been an assault by a trans woman on anyone in a bathroom, but plenty of cases of cis men assaulting people in bathrooms or attacking trans women when they realize they are trans (which being forced to use a men's bathroom would ensure), which kind of undermines the assumption that this division somehow makes people safer than, say, having unisex bathrooms that allow you to lock the damn door and don't let anyone peer over at you.
You also see this in climate change, or racism and sexism. I chose to focus on trans issues as an example this time because the Left mostly seems to agree on the other issues, but there's still some lingering skepticism about trans people that isn't scientifically warranted at all, and because the things trans people want are, in comparison to what we'd need to do to fix the harms of systemic racism or climate change, incredibly tiny. They want to go to the bathroom without being assaulted, to be called their preferred name and pronoun, and to not have to jump through five thousand hoops to 'prove' they are 'really' trans to get hormones - aka they want their doctor/therapist to not be a dick to them even if they don't perfectly match cultural gender norms (considering I'm convinced trans is an intersex condition, and that cultural norms are arbitrary, that's a dumb thing to require). That's it. You maybe have an argument about sports competition, but here's where examining the actual literature to see if trans women have an advantage due to previous testosterone even after blockers are applied, or asking for more studies, would be helpful, instead of cherry picking and fear-mongering. (The studies I've read seem to indicate that as long as they are on blockers, they don't have any advantage over women of similar height; the height itself could perhaps be considered an advantage.) Or, we could start having some more 'all people can compete' competitions and divide people by weight, age and testosterone categories instead of sex; that would help naturally high-testosterone XX cis women or intersex people who face a lot of stress and prejudice. I think this could also be helpful because it might encourage overweight people and people who would otherwise never stand a chance due to size to enter races.
#2. The other hugely important part is humanism: making the decision to actually value your fellow sapient beings, even if that other sapient being might be part of a group you don't necessarily like. Because we are not fully rational creatures, we need to seriously confront what our most basal values as well as our learned values actually are if we want to actually try being rational, and this is going to conflict majorly with what a 'politically neutral' 'rationalist' community might want, because this is not a politically neutral decision! Examining your learned values means confronting head-on what your politics is, how well it compares to genuine facts, and what your biases are.
Yes, it's hard to 'stay rational' when examining or talking about politics, but 'giving up' because it is hard, on the theory that this will make you more rational, is horribly, horrifically flawed. The real solution is to aim at something like what science does: introduce a conflict-resolution mechanism and agree to abide by its results, such as scientific consensus itself. Note that scientific consensus can only go so far; it can show racism exists, but it can't get you to care about it. To do that, you have to figure out what someone's values are and find where racism contradicts them, as well as get them to agree on the scientific and historical/humanities consensus that racism exists, is harmful to communities, and is often used as a tool by the upper class to pit the lower classes against each other instead of letting them unite under common interests like higher wages. I mention history/humanities because it doesn't quite fall under the typical scientific paradigm of experiment - we can't rewind time or isolate an entire human society in a lab - but it is nonetheless a crucial source of information, and one can still do limited 'experiments' by examining one society, making a hypothesis, and then examining a larger set of societies or finding more historical data from that one society (or simply waiting for more history to happen). You ignore history at your peril.
So here's the greatest reason to side-eye 'rationalist' communities: Often, they don't share the same idea of what is rational as you might expect, and since they think they are rational, they are likely to think anyone critiquing them is irrational.
#3. Use 'emotional intelligence' and focus on your irrationality. Take note of your impulses and of when and where you formed decisions on things. When you first had, say, a negative reaction to the concept of global warming, was it after you'd reviewed the literature, or before? And did you search the scientific literature for things that did not support your viewpoint?
Basically, instead of saying 'How do I stay rational at all times', say 'I am going to be irrational at some point, so how do I cope with that and minimize any harm?'; it's a shift of focus that is less likely to descend into smugness, because you are acknowledging a weakness and putting it at the forefront.
#4. Instead of trying to proclaim yourself a rationalist, how about treating it like a title that other people must give you, instead of giving it to yourself? Like 'genius'. I side-eye people who proclaim themselves geniuses; it matters much more if someone else calls you one. If someone else called me a rationalist, I'd take it as a compliment, but it's not something I'd feel completely comfortable calling myself.
Here's some links:
https://canmom.github.io/theory/phyg - the post that talks about the sexual assaults and cultish behavior that I learned this from
[The blog mentions 'social justice warriors' negatively I think, so possibly important context is that, as far as I can tell, the term originated in places like tumblr among leftists themselves, to poke fun at some of the more extremist reactions of the deconstructionist and online social activism communities (like people freaking out about people shipping fictional underage characters together, which is stupid as it harms no one and many of the shippers are teens themselves), and then it got co-opted by the alt-right into a generic insult for anyone who cares about social justice. It is thus incredibly difficult to tell what someone actually means by 'social justice warrior' because it has mutated to have many different meanings, so any self-aware person should avoid it and stick to clearer full sentences, because you don't really want to sound like you agree with neo-nazis, presumably. Since this person seems to care about social justice, I'm going to go with the most generous interpretation of what they mean by 'social justice warrior' and assume it's the older version, although without examples of what they consider going 'too far' it's impossible to say. One person might consider boycotting paying a neo-nazi to spew hate speech in front of hundreds to be unconscionable suppression of free speech (despite the fact that the neo-nazi is still free to say what they want, they just aren't being paid for it or getting amplified) and otherwise be leftist, so this just highlights the incredible murkiness of the term and how utterly useless it is these days, unless you want to say you're an asshole, in which case, good job!]
MetaMed, the failed 'rationalist medicine' venture that ignored science and actual medical professionals
Roko's basilisk - the hilariously sad example of 'rationality' that originally made me lose interest in the 'rationality' community, besides their complete unwillingness to critically look at anything politically non-neutral.
Well, that and their bizarro episode hailing a Harry Potter fanfic as Hugo-worthy and harassing one of the actors instead of Rowling about it, a very nonsensical decision, as actors have no ownership of Harry Potter.
As an alternative to the 'Effective Altruism' folks, I would recommend GiveWell. It's exactly what I was looking for, a charity evaluator, and it's totally free.
This is slightly off-topic, but I also think it's good to check out who the donors to a charity are, as sometimes this can represent conflicts of interest. Meat companies often donate to cancer awareness, so if it should turn out there is, say, a link between high-fat meats and cancer (as there may be! although it definitely would not be the sole cause of cancer; herbivorous animals still get cancer, for instance), then this would be a serious weakness of any cancer group trying to do awareness or fund research, because it's going to bias them against looking into dietary causes. Another is heart awareness and food companies, and this one is much more obvious, as I think I recall studies saying that high-fat red meat isn't good for your heart, so that diet of bacon-wrapped shrimp is quite suspect. Diet is definitely a topic for another time though, as that'll require a lot of citations and work to do properly.
Finally, I'd like to caveat that I really am saying 'be wary' and not 'distrust completely'; I happen to like Rationality Wiki a lot, for instance - as you can tell by my linking to it as a source, I've found it to be pretty reliable. Sometimes people dog-whistle their statements, implying more than they literally mean, so I'm here to tell you I am being extremely literal here.
If someone wanted to call themselves an aspiring-rationalist, rather than just calling themselves rationalist, I would be much more comfortable with this.
---
PETA, vegetarianism, and pro-extinction carnivores: when opposing views wrap around to resemble each other
If you ever encounter a genie, don't tell it to minimize suffering. But don't tell it to ignore suffering as a factor, either.
There's a weird phenomenon where supposedly opposite viewpoints start to resemble each other once you get to the extreme fringes. Usually this happens because there's another, more relevant spectrum of 'authoritarian to liberal', but since that isn't quite what is happening here, just something similar, this case makes me think there must be an even broader category where this applies even better...
I was looking at rationalist communities (always a dicey thing, because humans are bad at being rationalists, so any community defining itself that way - as opposed to 'let us just try to find strategies to deal with human irrationality that don't require every person to actually be rational after hearing our speeches' - is playing quite close to setting itself up for failure...) and was surprised (though maybe I shouldn't have been) to find that there was a subset who were in favor of animals going extinct.
Yeah, you heard me.
They really didn't like morality-motivated-vegetarians, and they did list some pretty good reasons, quoting a number of vegetarians who made it clear they didn't actually care about the suffering of the animals, but the more abstract concepts of their 'dignity and freedom'. The kind of people who think putting all pets down to save them from 'slavery' is a good idea. Not sure what dignity and freedom mean to a dog that licks its own ass and begs to be leashed for a walk, but, OK, I'm sure there's some viewpoint where that all makes sense.
The ironic part is that these two groups really resembled each other, even though they came at it for completely different reasons.
The rationalists wanted to minimize the amount of life not directly controlled by man because there are tremendous amounts of suffering in the wilderness, especially among small animals, and they believe that animals suffer less under lives managed by man: a cow that is stunned and quickly bled is going to suffer less than a cow that is eaten alive by wild dogs or slowly suffocated by a big cat, or eaten alive by internal parasites and slowly starved during a famine. A mouse that lives in the wild is almost certainly going to die by being eaten, and have many, many children that experience the same fate, but a mouse in a cage is more likely to die of old age or be euthanized, and it will tend to (but not always) have a much more controlled number of children. That's not to speak of the fact that the quality of life will be very different up until those deaths, as they will never have to worry about food and water or constantly watching for predators. (Believe it or not, most people don't feed live mice to snakes anymore; they tend to be killed and frozen first.)
The really extremist animal rights activists, on the other hand, are pro-wilderness and against any kind of ownership by man, even in hypothetical scenarios where the treatment is as humane as possible. They see any benefit gained by humans as exploitative. That both parties benefiting goes against plenty of definitions of 'exploitation' is beside the point, because freedom is apparently worth going extinct for. The horror stories about PETA putting perfectly adoptable pets down and throwing their bodies in dumpsters show that they'd rather all domesticated varieties go extinct than continue to exist. Now, how you enslave a cat is beyond me, but to an extremist there is no such thing as nuance.
The two groups have different reasoning and motivation, yet they somehow come to the same end point: certain animal varieties should go extinct, and this is a good thing in their world view.
Why might this happen? It's like I said: nuance. The more extreme and black-and-white your position, the less likely you are to accept 'compromise' positions as sometimes being the most beneficial ones - the ones that maximize each value as much as possible without trampling other values that might be good too. Diversity and ecosystem stability, for instance, or the desires of the actual animals in question instead of what a human of a particular culture would desire if shoved into that animal's body. Rhetoric like 'how would you feel if you were put in a zoo on display for everyone to see your private business?' just goes to show how bad humans are at empathy, because it doesn't matter what my feeling on that would be; what matters is whether or not the species in question has a privacy instinct. Many of them don't! I've gone to the zoo and seen kangaroos blissfully fucking right in front of an audience with no signs of stress. On the other hand, a home that would otherwise look comfortable to a human might look inhumane to a sapient alien squirrel somewhere because of a lack of nice burrows to hide in from potential predators or satisfy digging instincts.
Personally, I think managed parks where officials stick a tranquilizer in wounded animals to reduce their suffering (this tends to only happen for endangered species though), and minimized but not totally stopped meat eating where the animals are treated as humanely as possible, are the best options. A lot of wilderness these days is increasingly managed anyway; with poaching being a thing, you don't really have any other options. It wouldn't be much of a step from 'I should interfere with nature only in the minimum way needed to keep endangered species alive' to 'I should interfere to reduce pain, but not necessarily to stop the death from happening, because death is a perfectly natural process that can be beneficial to an ecosystem', say by just offering some painkillers to wild animals. We'd need to study whether there were any unwanted side effects of that - wouldn't want the critters getting addicted - but if an animal is wounded and going to die slowly, I feel like letting it eat painkillers is not going to change much.
I'm an organ donor, I have no problems with someone taking my meat inside them after I die, and I'd be delighted if the leftovers were fed to vultures. I don't see eating meat as inherently wrong and do think that managed right, animals raised for meat can live less stressful and happier lives with less pain at the end. They're not like humans who would be stressed the fuck out if they were told they were being raised for meat, because they don't have philosophy or abstract language. On the other paw, I don't think we should always make minimizing suffering the only goal, so even if it was revealed that being raised as meat or a pet was the best possible existence I wouldn't be in favor of that being the only kind of existence for nonhuman animals, because too much meat production is actually quite bad for the environment. I think a small amount of pain is acceptable because pain is useful, it tells you when something is wrong, and it makes pleasure richer for the contrast. People without the ability to feel pain actually tend to die younger. So we shouldn't try to get rid of it entirely.
So if we could just reduce the amount of suffering in the wild a little bit, I think it would make the world a much better place to be in.
What we have to remember is that everyone dies, so the question of managed death should not be taboo. The question is what kind of life you would like to live, and what kind of death you would like to have. I mostly lean toward preferring to be in civilization and being not-killed, but sometimes I have dreams of being a wild animal chased by something that wants to eat me, and I enjoy them. Empathy is complicated, more complicated than just putting yourself into someone else's shoes, because you are not the same as everyone else, and that's OK. Good, even. It would be boring if everyone was the same! But it makes gauging the values of an entirely different species, and what they would want, even harder, because you can't even get humans to agree on whether they'd rather be a wild cow or an owned cow if they had to be a cow. Personally, I'd want to be killed as soon as possible, because I do not want to be a dumb cow that can't even read or write and does nothing but eat grass, moo at other cows, and watch for predators; at least if I were eaten I'd be useful for something. But a cow presumably likes being a cow, so if I really wanted to imagine being a cow, I'd have to imagine being a totally different sort of being and not anything that strongly resembled myself (which I can do, it just takes more effort). The best way we have to gauge what they are actually thinking is whether animals in different situations show signs of stress and for how long. And from that, we have a pretty good indication that yes, animals both in the wild and in captivity can be happy, although under poor conditions they can be really damn stressed. This shouldn't shock anyone who knows that evolution wouldn't produce a species so miserable it wanted to commit suicide before mating: emotions probably trend toward an average middle value over the whole, because it doesn't make survival sense to be in constant joy over everything either.
Human instincts for common sense often fail us. We like to think of our empathy as being rather good, and it is compared to other animals', but that doesn't mean it is immune to our 'common sense' sometimes driving us to batty conclusions about other beings or people when we try to empathize with them. If you've ever seen someone assume a person suffering from a condition is more stressed than they actually are and go into total pity mode - "I just don't know how you live!" - you know what I mean.
the invention or discovery of math depends in part on how you define it
[article from 2020]
I was kinda annoyed reading a new book I bought that suggested cookies are illogical, because you can ask a small child how many cookies they have and they can respond 'none, I ate them!', and that while you could restrict things and say 'no eating the cookies', that isn't much fun or representative of the real world.
Like, what's so illogical about that? Either that's a valid operation and that is indeed how many they have, or you weren't specific enough and meant 'how many cookies did you have before you ate them'.
I know it's a minor complaint to have about a book, but it annoyed me. I wasn't gonna mention it in a post though, until I read yet another thing that annoyed me but from the opposite side of the spectrum. Someone said that anyone who thought mathematics was invented shouldn't do mathematics, which is quite stupid: lots of people believe that and are perfectly fine mathematicians.
Are there clear, consistent behaviors in reality that appear when we are careful to lay out all of our assumptions / define what we really mean - like how many before you eat the cookie - and which we can discover? Yeah.
Is laying out or thinking up assumptions an inventive process? Also yeah.
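To make that concrete, here's a tiny sketch of the cookie question (the numbers and function names are purely my own invention for illustration): both readings are perfectly well-defined once you spell out exactly what you're asking.

    cookies_given = 3   # hypothetical numbers
    cookies_eaten = 3

    def cookies_now(given, eaten):
        # "How many cookies do you have right now?"
        return given - eaten

    def cookies_before_eating(given):
        # "How many cookies did you have before you ate them?"
        return given

    print(cookies_now(cookies_given, cookies_eaten))   # 0 - "none, I ate them!"
    print(cookies_before_eating(cookies_given))        # 3 - what the book probably meant

The child just answered a different, equally logical question.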
These debates kind of remind me of mind-body duality debates, where we're implicitly accepting that invention of a pattern is something different from discovery of a pattern, even though both involve physical mimicry and production of patterns in our brains; at best, they are going to be more alike than different. From a certain point of view, invention is discovery.
That said, I think I tend to get bothered by them because I tend to read implications that aren't actually said whenever someone says things like 'cookies are illogical'... It's like the book Zen and the Art of Motorcycle Maintenance which despite the name was not very pro-zen apparently (I have not actually read it as from descriptions I think it would annoy me), using the word 'irrational' in a nonstandard way to refer to stuff like 'creative processes' rather than 'you can make up whatever shit you want' or, heh, non-rational numbers. It's an important reminder that just because someone is using familiar words does not mean that you are speaking the same language.
If one clarifies math to just be about the notation and the careful thinking-up of assumptions to use, then yes, math is clearly invented. But not everybody uses the word 'math' this way, and while I sometimes do, I most often don't myself. I tend to use it more in a way where 'the world is mathematical' is basically close to true by definition (the alternative being that it is paradoxical, because that's how I read the statement 'the world is non-mathematical': either you are saying it does not obey logical relationships and up can be down, or additionally you are saying that we cannot arbitrarily label the things we see with notation, which contradicts my experience that if I see a dog I can call it a 'dog' even if I don't understand everything that composes 'dog', and I can later arbitrarily make this term more or less broad as I see more animals that share features with the original 'dog', with the understanding that I care about the actual things being referred to and not the token), so to say that math is discovered is both true and sort of, well, trivial, even though it is a 'trivial' truth that is nonetheless very important to me.
Free Speech
An older post on free speech I wrote once. [2020]
some context: https://www.nytimes.com/2017/06/01/opinion/when-the-left-turns-on-its-own.html
https://washingtonmonthly.com/2020/07/24/the-gop-hypocrisy-on-cancel-culture/
https://medium.com/@BrianBeckcom/right-wing-cancel-culture-is-the-real-threat-why-are-we-ignoring-it-3d95435df8bc
My personal position is that the right to swing your fist ends where someone's nose begins. Outright slurs and dog-whistles should be out, misinformation about medical advice when you aren't a fucking doctor should be out because that can kill people, but something like questioning whether something is racist or not shouldn't be - people should be allowed to ask questions even if you find those questions stupid or offensive. How else are you going to know they hold the stupid opinion and provide corrective information if they can't talk? [2023 edit: of course there are places where this is inappropriate, "Sir, this is a McDonald's/Wendy's/fast food joint, not a debate team." and some questions are clearly impolite: "Why should you be allowed to live?" is a jackass question that cannot possibly have an 'innocent' interpretation. So my 2020 statement here is a little naive.]
People are allowed to boycott whatever they want, and they aren't obligated to give people money for stupid shit (so peacefully protesting giving someone money for a speech you dislike is fine; it's free speech, not paid speech, that I feel should be protected), and they can moderate their personal blogs/spaces as they like, but public spaces are a different matter.
Getting violent, running viral harassment campaigns, and ganging up on someone in groups isn't okay to me. I have mixed feelings about removing people from their jobs. At some point, you can argue that if a professor is making some or a majority of students actively feel unsafe and undervalued, they are not doing their job as effectively as they would be if they were teaching somewhere else; if the professor is drawing huge crowds of students who don't want them there, then they have probably lost the respect necessary to be listened to when they teach. But where do you draw the line on that if someone with opinions always makes someone somewhere uncomfortable? (That's not supposed to be a trick question, by the way; 'slippery slope' is a fallacy, you know - you can choose somewhere below '90% of them' and above '1 of them who was just mad you did not perfectly agree with them'. That said, it should be acknowledged that different people are going to have different ideas on where to draw the line.)
I'm not going to weep tears over someone punching a fascist, but different people have different ideas of what 'fascist' means, as stupid as that sounds, so as a general policy, punching first is not a great idea. That, and punching first is more likely to get you shot by cops, and ineffective compared to group action, so, it's foolish.
Political correctness is a phrase with so many damn meanings it is fucking useless - when a major politician uses it to politically signal in a way that is favorable and 'correct' among their followers, you know the phrase has totally inverted its original meaning and lost all sense. 'Cancel culture' is a phrase that is used in incredibly hypocritical ways. And yes, sometimes some very, very tiny group on the Left goes too far, but it's a mistake to focus on only the Left as if this is something only they do.
That's just my opinion, but you can find plenty of material to support it.
What is unique about human will is 'over-determinism'
What is unique about will is over-determination, not 'freeness'
Bob's decision to make a peanut butter jelly sandwich is decided by evolution giving him a love of peanut butter jelly sandwiches.
Ergo, one person argues, Bob has no more 'free will' than a rock, because his actions are also determined. Life is a cosmic joke, and it doesn't make sense to talk about punishment based on choice.
Ignoring the nonsensicalness of 'free will', I wish to offer a counter thought scenario.
Imagine there are three paths, and you push a rock on to one of them.
Imagine there are three paths, and you push a human on to one of them.
In one of these scenarios, you can mostly ignore the internal configuration of the mass, assuming it isn't configured to explode or something. We can idealize it to a perfect hollow sphere of certain mass or something and this will give us approximately the right result.
In the other, you actually have to take into account what is going on internally. The human, if they decided on the left path, is often still going to go down the left path even if you push them onto the right or center one (and they may get angry at you for the pushing).
In other words, the human path choice appears to be more determined than the rock's was, not less. So talking about the 'freeness' of human will versus deterministic principles actually seems somewhat backwards.
It's often suggested quantum mechanics will somehow 'fix' free will, because it has randomness. (This never made sense to me, but let's go with it for the moment).
So let's change the scenario so the path pushes are decided by coin flips, and for the human the 'push' is a random fluctuation giving them the impulse to go down a different path.
Again, the rock will go on whichever path. It doesn't care.
But if the human has a goal, a coin flip giving them a sudden impulse to go down a different path is often going to be over-ridden. If the human can clearly see that only the center path leads to the exit, then a sudden impulse to go elsewhere caused by random quantum fluctuations is something they are (usually) going to try to counter.
Because humans are goal oriented, that is, willful, they should be less sensitive to random perturbations in initial conditions than non-willful objects. Their will clearly alters how determination works in the system, by reducing this sensitivity to randomness.
This may not match any traditional definition of free will, but it does show that 'human will is no different from the pre-determination via initial conditions of a rock' is false. There is clearly an equivalence, but not an equality, because of this initial-conditions' randomness sensitivity reduction that the rock does not have.
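If you want to see that asymmetry spelled out, here is a toy simulation of the scenario (entirely my own illustration; the path names and numbers are made up): the rock ends up on whatever path the random push put it on, while a goal-directed agent walks back to its chosen path, so its end state is far less sensitive to the random push.

    import random

    PATHS = ["left", "center", "right"]

    def rock_final_path(push):
        # The rock ends up wherever the push sent it.
        return push

    def agent_final_path(push, goal="center"):
        # The agent notices it's off its chosen path and corrects back to it.
        return goal

    random.seed(0)
    trials = 10_000
    rock_hits = sum(rock_final_path(random.choice(PATHS)) == "center" for _ in range(trials))
    agent_hits = sum(agent_final_path(random.choice(PATHS)) == "center" for _ in range(trials))

    print(f"rock ends on the 'chosen' path:  {rock_hits / trials:.0%}")   # roughly 33%
    print(f"agent ends on its chosen path:   {agent_hits / trials:.0%}")  # 100%

The rock's end state tracks the random input; the agent's end state tracks its goal.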
Life is what destroys chaos, the sensitivity of the end to initial conditions, because life (in a very general sense) wants one end: survival.
Thus, being alive and displaying willful/goal-oriented behavior go hand in hand, and that is exactly why talking about 'freedom' from determination seems, in a sense, to be missing the point about what is so special about not just will, but life itself. What is special is that it is a deterministic system behaving very differently from other deterministic systems in how it goes from beginning to end, not something 'embedded' in a deterministic system 'escaping' from this determinism, say via randomness, which would give a lot of ends a life form definitely would not want.
We lower entropy locally at the cost of raising it globally. Life is a deterministic system that necessarily has individuals, which is why we feel like we're something separate, why we feel like it's a question of the system imposing an end upon us or us escaping what the initial conditions would seem to suggest. But that's wrong. We are the Universe just as much as everything else is.
In the end, the mathematical question is very similar to (if not exactly the same as) the problem of time: you have N moments which seem to demand N-1, and N-2, and so forth all the way down to 0, which are frozen and don't seem to demand N or N+1. But 1, 2, 3 does not seem to necessitate the existence of 4, or 5, or infinity. We can imagine a finite system that just cuts off there. And so we can describe how, given an individual, all the pieces of the past seem to add up to that individual and couldn't have been any different, but we struggle to describe the reverse, how that individual dictates a future or plays any role in it. Why is one future necessary and not another, or no future at all? But once we've reached it, it looks like we can describe it in terms of static pieces (1, 2, 3) and ignore the dynamic actors they must compose.
Further, we can practically view them in isolation, seeing things as just the result of a simple static sum. But this sum doesn't answer: why do we have a future?
What would be radical is if we found the mathematical equivalent of 'this 0 demands a dynamical 1 exists for self consistency', and 'this dynamical 1 further demands other 1s that aren't so easily reduced to a simple static sum in aggregate, because a simple static sum never demands the future'; the single line of whole numbers from 1 to N we can of course pick out, but the whole of the system is like the Uncountables - you can't compute it with a simple sum without leaving something else out. In other words, the idea that the static, timeless N pieces demanded N+1 was an illusion born of our ignorance and the benefit of hindsight. In reality you need to confront the behavior of larger (or just stranger), more dynamical actors who do demand certain futures.
The whole demands the parts. But counter-intuitively, the parts also demand the whole.
That would solve the problem of time, and the nothingness problem. And it would also change how we looked at will, humanity, and life itself forever.
My initial theory on consciousness [2020]
As protothoughts:
See, some other people think emotions are like thoughts. But they take that to mean that emotions are cultural artifacts, that they are 'top down', whereas I think they are bottom up - that they are basal and involuntary proto-thoughts - especially when you consider that the line between general feelings like cold or hurt and emotion is pretty blurry, and when you consider how exactly you would think a certain concept without words. How would you think 'that is good' without words? Images, sounds and feelings (probably more feelings if you are someone who can't mentally picture things), right? Before we can become fully verbal, we have to master a variety of abstract concepts.
Some feelings can be really abstract themselves, like the 'tip of the tongue' feeling that you almost have something but not quite what it is.
As categoric math with a dash of group theory, and the inability to say what blue is like to a blind person:
My basic hypothesis for how consciousness works is that it is a mathematical phenomenon, but not the kind of math you learned in high school, most likely. Rather, it is some form of categoric mathematics, not strictly of numbers but of relationships, because that's what the basic self-awareness problem is: recognizing the relationship of other objects to yourself, your body, or your goals. An animal with minimal self awareness might recognize a relationship between food, a path to food, movement, and its goal to not be hungry, but because it has poor ability to perform mental rotations (so some group theory, particularly groups of symmetries, may be useful here as well) and to map touch to sight (a series of rather complicated tasks if you think about it), it doesn't recognize itself in a mirror or realize where the food must be when the reflection reveals food that is otherwise hidden from view. Its ability to perform categoric and group manipulations just isn't advanced enough to do that many rotations and transformations of one content type to another and recognize their underlying relationships.
2023 edit: (I would say not just 'categoric' math is involved, with categories talking about morphisms between things, but the basal entities themselves are more algorithmic in nature than your typical number. So the morphisms themselves can be very flexible and we can have morphisms between categories to new categories. If babies have an initial very synesthesia heavy state, they may learn which emotions/qualia are most appropriate for formatting the information they are facing and will help them decipher the information in terms of 'what things really are' and 'what they are not'; as a baby learns what a sound is, it begins to become conscious of it, and my guess is the qualia is a by-product of the math for 'making an entirely new kind of phenomena' and the math for 'learning what a thing truly is in a deep way that current neural networks do not capture' happen to have overlap.
This is basically what I say next, but this edit gets to the point more quickly; also, previously I was thinking less in terms of the morphisms themselves being dynamic / algorithmically controlled, and had my brain stuck a little more on categories that just happened to talk about algorithmic entities as the things that have morphisms between each other, which wouldn't have captured the 'learn what qualia to make from scratch' concept nearly as readily: we want to generate the categories themselves if we are categorizing things. This is a higher level of abstraction, but if we want to describe consciousness, we probably need a really sophisticated apparatus. Although identifying the relationships and differences between two dynamic, algorithm-like entities, one of which is yourself, is probably the bare minimum you'd want for a self-conscious critter; since you self-modify and don't necessarily stay with the same 'algorithm' over time, it is better to have an even higher level of abstraction.)
old post continued:
I also think that it is not much like any of the binary mathematics that underlies the proof of the unsolvability of the halting problem, but some richer mathematics that deals a little more easily with it (though not necessarily perfectly; it should at least be able to provide 'algorithmic' answers instead of just false binary ones). To understand why, consider the fact that the proof relies on humans understanding that the machine halts when the guesser says it does not halt - that is, the conscious, thinking human correctly understands what the system is going to do even though it apparently isn't computable. This answer, however, is not a binary one but is phrased as an algorithm. More than that, it's an algorithm that deals with the relationship between two parties: the guesser and its goal, and the halt-flipper and its avoidance of that goal.
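For anyone who hasn't seen the construction I'm gesturing at, here's a rough sketch of the standard diagonal argument, with Python standing in for 'any program' (the function names are mine, and the 'perfect guesser' is hypothetical by design, since the whole point is that it can't exist):

    def halts(program, argument):
        # Hypothetical perfect guesser: would return True iff program(argument) halts.
        # No such total, correct function can actually exist.
        raise NotImplementedError

    def flipper(program):
        # The halt-flipper: do the opposite of whatever the guesser predicts
        # about running `program` on its own source.
        if halts(program, program):
            while True:   # predicted to halt -> loop forever
                pass
        else:
            return        # predicted to loop -> halt immediately

    # flipper(flipper) halts exactly when halts(flipper, flipper) says it doesn't,
    # so the guesser must be wrong about it - yet we can still describe what
    # flipper will do, just not as a single yes/no value.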
What's really exciting about this is the fact that once we accept this scenario, we immediately notice a mathematical fact that other theories of consciousness don't cover very well: why is it hard or impossible to convey qualia merely by speech if it is just information? Why is it so damn hard to say what blue is like? In the halt-flipper system, one cannot convey the information even if one knows it, as long as one answers in binary. It's the wrong format. (See my related post on Mary's Aphantasia.)
I don't know any other hypothesis that really manages to cover this behavior, and certainly not 'predict it' from math alone.
As a mirror to the nothingness problem:
There are two main ways to tackle the qualia problem. One of them is to assume that we have to figure out a way to generate it from a system that has absolutely nothing like qualia to start with, the other is to assume that everything is qualic from the start. The latter approach appears simpler, but it actually just throws the ball further down the line and falsely dodges really confronting the true difficulty of the issue.
In both cases, you have to answer the question of why a 'zerolike' system has the properties it has and what it is going to do in the future.
I am personally in favor of it being exactly like the nothingness problem, for the simple reason that the nothingness problem* will also require rich new mathematics. (*Which should be read as 'why do we not have nothing, why did the most zerolike possible system have the properties it had and not different ones, why did it apparently support or equal a generator of a nonzero whatever that generator might be' and not 'how did we get here from nothing' because the latter assumes too much) Even phrasing the nothingness problem appropriately, in a way that preserves all the information of the question, requires going into categorics. And if we think about adding rich new categorics, we can think of what new thing would pop up at the same time we get self conscious creatures:
A rich categoric logic apparently capable of solving the nothingness problem, but now with self referencing added on top.
If that doesn't sound like a potential solution to you, solving all the problems including why a self conscious creature should be qualic as well (with a side effect that we may, with a rich enough description, be able to ban p-zombies after all), then I don't know what to say.
The final cherry on top, though, is that it actually makes the qualia problem easier if we approach it the same way we might approach a (likely necessary!) simplification of the nothingness problem: let the absence of a qualia itself be a qualia, in the same way an absence of all other things but the void/absence itself could still be considered 'one void' and not just a zero. I don't know about you, but I consider the state where I'm not super happy but I'm not sad or bored either to still be an emotional state. Seeing the color black could be as simple as getting the brain to start thinking in the right format, looking for inputs, and getting '0 input' (question to ask yourself: do people blind from birth see the color black? Recently blinded people do, because they can still think in that format). Once you have at least one qualia, even a trivial one, it's much easier to imagine some manipulation transforming that into other qualia. This is not entirely unlike the 'guess everything is somehow qualic' strategy, except that one doesn't mention a 'null qualia' or building up to more complex ones; you're just given the impression that perhaps a random electron somewhere is seeing blue. What we're actually doing is 'defining everything as qualic', and that's actually a very different strategy, because then it is literally logically impossible for anything not to have a qualia; it's just that the qualia would most likely be very trivial in most circumstances, not something most people would consider much of an experience. Were the qualia to actually be non-trivial, then it would match the usual 'guess all things have qualia' strategy. You always have to be careful when people are defining things differently from the norm.
So now, all we have to do is find a really rich categoric calculus/mathematics that can cover the 'why did we get a universe generator of some kind and not nothingness that continued to do nothing' problem. Easier said than done, of course... but that does not mean it is impossible.
slightly newer musings
Qualia:
I spent a fair bit of time thinking about consciousness and keep revising my ideas of what its origins could look like mathematically and what the consequences of that would be.
The most recent idea I've had sits in contrast to an older one. I supposed that 'emote-thought' qualia had two components: the nonverbal thought that something is 'good' or 'bad' which I guessed at first to be almost 'silent' qualically by itself, and the simpler qualic environment-informational formatting like how your gut is doing. These would combine to give 'bad pain', for instance, with the prediction that there could be 'good pain' (as bizarre as that sounds, masochists are certainly real) or 'neutral pain' that you feel but for some reason just don't care about.
The more recent version of this is close but subtly different, in that the 'good thought' information might not be fundamentally different from the environmental information and could have a qualic association too - their basis being the same mathematics putting itself into slightly different formats. This should result in slightly different predictions: namely, that if you could somehow mute all bodily sensory feeling you'd still generate feelings, whereas under the older version you would not.
edit: hah, I almost forgot there was actually another thing!
This was the thought that since organisms are not closed systems and are highly motivated to manipulate entropy, even if qualia arises from a 'from zero' process, we shouldn't necessarily expect it to perfectly mirror the way global physical systems work, in the sense of demanding that whatever is generated stays summed to zero (like charge), and that even if it does, there could be some interesting symmetry-breaking behavior that hides the most important equalities.
I feel like some of the coolest stuff about consciousness is going to be something people really enjoy digging into in the 22nd century, or the late 21st. We're not quite there yet; the AI we've built is in my opinion kinda crap, but that doesn't mean we know nothing at all.
I was really interested to look at opponent-process color theory. I'd actually noticed the phenomenon before I knew it had a name, and thought something like it should pop out: that intense color could produce its opposite out of a drive to keep the system closer to 0, or rather to an equivalence of it. What bothered me, until the thought above, was that this isn't a perfect equality to zero, obviously, but here's where opponent-process color theory shines: it explains the result as tiredness of the system. A tired, exhausted part of your system is no longer going to do its job!
So, I think prediction wise my ideas on consciousness are actually doing pretty well and mesh nicely with existing work. This is quite nice.
a slightly older post on determinism, the sun and the swallow; Hume on free will
[2021, the previous determinism post is newer; you might wish to skip this one.]
So, I recently learned that the notion that free will requires some degree of determinism actually dates back to Hume (and I will probably forget this later, so my deepest apologies to old Hume). Therefore, I thought it would be interesting to clarify where my point of view differs, and this is more fun than usual because it isn't just a definitions argument but about how I view physics itself. So one should be aware that this has some speculation in it.
The old central worry is that we might under determinism be no more 'free' than the sun along its path or the swallow along its migration route. Thus, the argument is that we must 'inject' somewhere some room for humans as 'special' causal agents.
The obvious counter, for the swallow at least, is that this amount of freedom is perfectly fine for all purposes; surely the swallow is conscious. For the sun, it is a less obvious case, because its path is surely dictated by very 'boring' laws of physics that don't take into account anything like agent choice.
Objection 1. The view that there is some book on high dictating the path of the sun is extremely misguided. There is no 'outside physics in some law book': the physics IS the sun. In math terms, we are counting on our fingers with a direct representation, not an indirect representation. The sun does, in fact, determine its own path. It just doesn't do so consciously.
Objection 2. The worry assumes that the physics of the sun has no resemblance to the physics of agent choice in any way we would recognize as 'meaningful' and non-demoralizing rather than completely robbing the concept of choice of content in the first place. But that assumption rests on the laws as known in Hume's day, which were formulated purely classically, and the real world is more complicated than that.
My current view is that a deeper understanding of physics down to the quantum level requires not classical numbers, but algorithmic bodies interacting with other algorithmic bodies in a manner that, looked at from the view of the classical wholes, would necessarily lead to a corresponding Cantor diagonalization: you either need more than one line/dimension going on simultaneously, or your line needs to be literally uncountable and have no simple addition scheme that will get you all of its pieces. That means there is plenty of room to generalize and talk about an algorithmic actor Bob who reacts according to his relationship with another algorithmic actor Alice and his own personal preferences, and that this is a richer and more complex variation on an existing theme rather than something completely unfathomably new and special.
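For readers who haven't run into it, the diagonalization I'm leaning on here is just the standard Cantor one, nothing exotic. A minimal sketch in Python (the enumeration used is a hypothetical toy, not anything physical): given any countable list of infinite binary sequences, you can always build a sequence that escapes the list by flipping the diagonal.

def diagonal_escape(listed):
    # listed(n, k) returns the k-th bit of the n-th sequence in the enumeration.
    # The escape sequence differs from the n-th sequence at position n,
    # so it cannot be the n-th sequence for any n: it is not in the list.
    return lambda k: 1 - listed(k, k)

# Hypothetical enumeration: the n-th sequence is n written out in binary,
# padded with zeros forever.
def listed(n, k):
    return (n >> k) & 1

escape = diagonal_escape(listed)
print(all(escape(n) != listed(n, n) for n in range(10)))  # True: it dodges every listed sequence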
For most objects, like the sun, the necessary multi-algorithmic behavior from the quantum parts smooths out (it isn't coordinated but cancels) and can be essentially ignored, and classical equations can be used to approximate its behavior to very good precision. Large classical bodies with conscious behaviors, like humans and swallows, are configured to re-introduce this algorithmic behavior (it doesn't need to be quantum computing, probably, in my opinion, although that wouldn't hurt) in more complex coordinated configurations which handle agent-recognition tasks (self and body awareness) with varying degrees of quality (even a little body awareness is generally better for survival than none), and none of this physics is 'dictated from outside' but purely inside.
You are the physics.
Another way of looking at it: you are a dude/lady who often signs notes, and one day you are extremely embarrassed to see you unconsciously went on autopilot and wrote 'I love you, see you later' on a note that is supposed to be for your boss. This is an unconscious behavior, but it would be hard to say that it was not something deeply related to, and only able to be made sense of in the context of, what kind of person you are and what kind of actions you typically take. That is, as a willful agent. Your unconscious is as much a part of you as your conscious, so the idea that our choices go through an unconscious phase should not be frightening. Ultimately, you did consciously review what you were doing and decided, very appropriately, not to send that note to your boss.
We can view things from a 'bottom to top' view, asking how to configure a system to get particles to coordinate like this (in a manner which may or may not, we'll notice, cause particles to cancel out or reinforce certain properties of each other, making the many-body version different from the single body), or a 'top to bottom' view where we ask how to write things in terms of actors that make causal choices, but my view is that these two views are actually completely equivalent and a result of arbitrary divisions. The universe doesn't really care about chopping things up between 'big' and 'small', or actor and nonactor. It just is. However, for humans, it can definitely be more fruitful to sometimes view things from one view rather than another, as answers may be arrived at much more quickly in one way of handling our indirect representations (words/notation) of direct systems (the actual system), or be too much of a computational hassle in another.
This means the problem of why Bob hit Sally may sometimes be easier to answer with a bottom-up view, 'Bob had a chemical imbalance from his diet', and another time easier to answer with a more top-down explanation, 'Bob learned bad behavior from his parents', even though they're the same system of Bob. We could hypothetically answer each with the other view (keeping in mind the 'agent' view is actually going to talk in something like algorithms, so it does talk about nonagents too, but adding that a chemical is 'not very agentlike' in an agent-focused description is pretty superfluous), but it would be more of a pain in the ass to do so.
(Technical Sidenote: This also explains a peculiar feature of qualia: that I cannot readily tell you about my experience of 'blue'. If qualia is akin to a Halt-guess problem between two actors, where one of them is forced to answer in binary format even though the other actor always flips the answer, then the answer is extremely obvious and understandable to the guesser yet completely inexpressible as a single binary, or as a sum of binaries making a larger classical whole number like '2'. However, I've talked about this before and it's a little off-topic, except to clarify that, yes, I really do mean multiple algorithms, even though typically you can do whatever you want with one algorithm, because the Halting problem for two bodies is relevant here.
If we write the Halt problem for one body, we get a body trying to halt in order to tell us it doesn't halt, which obscures the binary-answer issue a little by creating a physically impossible situation. But note that it would work if, instead of halt/not-halt, we asked the body to answer 0 if it will answer 1 and vice versa: then it becomes obvious that a body capable of answering in non-binaries and algorithms would answer that we asked it to perform nonsense, by writing out the algorithm and highlighting its contradiction as its answer. This would still require that it be able to recognize algorithms, as in, agent-recognition of the contradicting algorithm, plus an alternative non-contradicting, sensible algorithm in what it actually performs (or sensible algorithms that don't require spitting out 'this is nonsense', since it is a generalized guesser), so we are implying multi-algorithmic systems in a slightly different way.
My argument, essentially, is that nonbinary Halt Guessers get increasingly better at consciousness-type tasks as we increase their generalization to more tasks. A generalized problem solver is very useful for survival, so if the energy requirements can be met without too much cost, it makes sense to select for it. Hence, we have a very good reason that conscious, willful systems with very weird qualic-style, format-specific math should develop, and an explanation of how this relates to nonconscious systems and their own math.
The sun and the swallow can both be understood with the same physics, it is just expressed in very different ways, and the 'determination' of the swallow is rather different from the sun in that it involves actor recognition processes.)
Can we have ethical animal experimentation that satisfies mostly everyone?
[December 2020]
It's been a while since I've thought about the subject of 'loving/valuing all life'. I got rather burnt out on it (making me lean towards a 'no' to the question), but recently I had another idea that gave me some more energy. It came from noticing that extremist animal activists on both sides (the meat-eating side, particularly the 'extinction of wild animals is good, suffering of tame animals is better than wild' micro-movement, and the larger but still small vegetarian PETA-type folks with their 'extinction of pets is good, suffering of wild animals is better than tame' ideas) have a tendency to fixate on one form of animal welfare and not another, and you don't see much talk among extremists about compromise. Of course, if you did, they wouldn't be extremists, but that's beside the point.
The point is that, attempting to view the situation neutrally, I realized there was another solution to the 'animal experimentation' problem. People who are against exploitation on the more extreme PETA side of things tend to care less about whether the animals actually suffer and more about the evils of the concept of exploitation itself, valuing freedom as a thing in and of itself, to be desired even above life. Whereas most people who hear that and don't agree with them tend to just shut down, going 'well, I am not going to value an animal over the life of a cancer-ridden child'.
The 'animals are like children' comparison activists are so fond of actually highlights the overlooked middle-ground solution. Nobody thinks offering experimental treatment to cancer-ridden children with parental approval is exploitation. Animals clearly get cancer like everyone else. I had a pet rat that died of cancer - two of them, actually; it's really fuckin' common for them. I had a pet dog that died of cancer too. I would have been more than happy to okay an experimental cancer treatment, and thinking back I can't help but feel frustrated that this wasn't even on the table!
I realize there are ethical standards involved when someone expects a miracle cure to work, and so there's an incentive not to just let people have access to whatever trials they want willy-nilly, but I don't want to be talked down to like I'm a moron; give it to me straight that the trial is just a trial and likely not to work. And if the alternative is death, it seems stupid to bar an option because it might have some side effects, like, I dunno, more death.
You can't die twice.
So mostly I just want to say that there can be real value in not having a knee-jerk reaction to choose one side or another but to think about whether a genuine compromise position is possible. Of course for the real extremists who think people shouldn't even have pets, this won't satisfy them, but for everyone else I think it is not too bad a solution. We should be doing less animal experimentation, or at least on a wider variety of species, because it's hasty to assume a cancer cure that works or, maybe sometimes more importantly, fails to work on a rat, will work on a person. And we clearly should still do animal experimentation for pets because, duh, pets get cancer and all the other diseases too, and experimentation is not inherently bad, exploitative, or painful for the animal!
So this is making me feel a little more optimistic about the 'love all life if you want to' idea, although in practice there are some people I don't want to love because they're flaming assholes and the emotional burnout is simply not worth it from extending my hand and watching it get burnt on their flaming farts. I'll settle for 'show basic minimal levels of compassion to' instead. You could argue that's a sort of love, but it has different connotations for me and sounds less exhausting.
Against longtermism; let's value people for people not life-units or net happiness
https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk
I recommend reading the above article. However, here are a few key quotes:
'So, imagine a situation in which you could either lift 1 billion present people out of extreme poverty or benefit 0.00000000001 percent of the 10 to the 23rd power biological humans who Bostrom [Father of Longtermism] calculates could exist if we were to colonize our cosmic neighborhood, the Virgo Supercluster. Which option should you pick? For longtermists, the answer is obvious: you should pick the latter. Why? Well, just crunch the numbers: 0.00000000001 percent of 10^23 people is 10 billion people, which is ten times greater than 1 billion people. This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today. As the FHI longtermists Hilary Greaves and Will MacAskill—the latter of whom is said to have cofounded the Effective Altruism movement with Toby Ord—write, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”'
'Whatever traumas and miseries, deaths and destruction, happen this century will pale in comparison to the astronomical amounts of “value” that could exist once humanity has colonized the universe, become posthuman, and created upwards of 10^58 (Bostrom’s later estimate) conscious beings in computer simulations.'
What this immediately reminds me of is abortion debates, where we're supposed to evaluate potential humans as worth more than actual, existing humans and their right to bodily autonomy (you aren't allowed to steal someone's blood even if it would save your life), taken to the logical extreme where we aren't even talking about fertilized eggs any more.
I was initially interested in Effective Altruism, because it had the word 'effective' in it and I was young and I thought, hey, I should think about what kind of altruism would be more effective! But it's actually rather insidious, because Effective Altruism the philosophy says that it's okay to work for an oil company or a polluting and exploitative Wall Street firm if it means you can donate more money to wildlife conservation, which is fucked up. Why this is fucked up should be obvious: you can't always solve a problem by throwing money at it, and if everyone boycotted working for polluting companies, there would be no more polluting companies and we wouldn't need to donate to charity in the first place. Obviously, that won't happen, but one should also keep in mind that one might be committing more net harm by working for the company and earning, say, 1 million or 1 billion dollars, even if one proceeds to donate all of it, if the pollution the company creates costs more than that many dollars. You would be spending all your charity fixing the crap you broke, and you'd still need to eat and pay rent at the end of all that.
Logically minded people have a bias towards anything we can make concrete and countable, and away from things that are hard to model mathematically. But I think this is a severe mistake: we should take it as a challenge to improve our mathematical modeling to capture more complex scenarios and values, not simply take the easy way out and go 'Okay, how do I maximize the number of lives? How do I maximize the amount of happiness in the system?'
There's a joke that seems appropriate here (source unknown):
The most utilitarian thing to do is to stop being utilitarian because it makes you miserable.
There's a good reason to avoid happiness maximizing, and that is that most of us can recognize that a scenario where we create a new race of 'humans' as small as possible so we can create as many of them as possible, with just enough brain matter in order to feel happiness, is actually pretty nightmarish. And in that moment of horror, you realize that you value so much more in life besides happiness or the net number of lives.
My philosophy is autovaluism: we automatically create value because that's part of what it is to be alive, we value things, and we choose actions that show how much or how little we value others. We can then choose to value the value-makers themselves, recognizing that without them there wouldn't be any value at all. That is, we can strive to think about the values we are creating automatically whether we mean to or not, and work to have them concordant with each other.
Valuing the value-makers, aka sentients and sapients, isn't the same as valuing lives or valuing happiness; it's more complex than that. If you genuinely cherish a person and their own values, you might very well let them choose an action that makes them unhappy but satisfied. For instance, say they want to make a great personal sacrifice that will lead to their death, or capture and torture, or just general misery. One cannot possibly view this as a happy situation, and yet they may be fully aware of the situation and decide this is genuinely the path they want to take.
This is not to say we should never weigh the number of satisfied lives; obviously if something benefits a majority and only mildly inconveniences a minority we should probably go for it. But happiness as a measure is not a linear function: if we're torturing one person to make 1000 happy, the unhappiness of that one person outweighs the many. Letting thousands drown for the comfort of a few billionaires is unconscionably evil (where 'moral' is defined simply as behaving as if you actually value people, because that's the only definition of morality that makes sense to me; no, the rocks and the stars and the rest of the universe don't care if you are evil, but nobody cares what the rocks and stars think, and if having the whole universe agree with you or give you consequences is what you mean by 'objective', you are an idiot).
Now, nobody is going to force you to care whether or not you are evil, but chances are evolution wired you to care about other people at least a little bit, and to care about not being a hypocrite at least a little bit, and so if you like not living in shame (in both the sense of being shamed by others and of feeling ashamed), you should value people this way and not via an oversimplification just because the math is easier the other way.
But we should also value the lives that actually exist in the here and now, and not hypothetical lives that might never exist. Global warming represents a genuine threat to human existence, as we keep realizing we underestimate factors that contribute to warming. We underestimated how little carbon soil captures. We underestimated from models how hot prehistoric poles could get after the dinosaurs went extinct. We know that insects and fish are dying in huge numbers and the ocean is acidifying quickly, so we could be looking at massive food-chain collapse and an extinction as bad as the Cretaceous or even the Permian. This is not a situation where we should be shrugging and saying 'well, we probably won't go extinct, and there will be loads more people in the future than in the present, so let's develop tech for imaginary living-in-machine people we don't even know we can actually build, whose concepts we don't know we'll keep if civilization collapses, and which we don't know we could sustain in a globally warming world where we don't want to keep warming things forever'.
People who math up their morality often pat themselves on the back, and say 'how clever am I'. But if it came a little too easily, then you should ask yourself whether what you really did was a lazy over-simplification in order to justify what you wanted to do anyway that might be viewed by some as not moral at all. We humans go to great lengths to escape guilt. The word 'mindfulness' is usually applied to body awareness, but one of the best things you can do is to be mindful of your mind and ask yourself:
1. Am I currently engaging in a process that could be viewed as guilt avoidance?
2. Am I currently engaging in reasoning with motivation, such as attachment to a certain point of view? If so, pause and consider how you would feel and reason if things were slightly different, for instance try imagining that your family had always been vegetarian (or consensual cannibals, or whatever doesn't apply to you), or if you didn't have an immediate gut response to something.
Ask whether there are scientific studies on the subject with a majority consensus against your point of view, or work by mathematicians, philosophers or historians if the question is not scientific per se (questions about whether we should let a bunch of people starve aren't; that's a moral question, not a scientific one you can do experiments to prove or disprove, but that doesn't mean we can't try to utilize scientific thinking as much as possible before we make our decision, since if our reasoning is that we get some outcome from starvation, then that had better be the actual outcome and not wishful thinking).
3. Did I come to my conclusions before I actually reviewed the evidence? (For instance, global warming; did you immediately conclude it was wrong and then look for evidence it was wrong?)
4. Am I actually competent to properly evaluate this mathematical model that I am using to make judgments, not just in the sense of computing it, but in understanding how it compares to other mathematical models that may be more complex and have more factors? Do I have an inkling of where it breaks down in validity if I do have it oversimplified?
Is love the most important thing? an autovaluist / literalist musing
You may have heard that 'love makes the world go round', or heard someone declare, based on zero evidence whatsoever beyond their own gut feeling, that 'love is the most precious thing'.
Well, love definitely doesn't make the world go round: that's conservation of angular momentum. But the second claim is a somewhat more interesting one. Let's take a literalist approach to it and assume that by 'precious' you mean 'has value to a being'.
Therefore, we can immediately conclude nonsentient parts of the universe are irrelevant to the problem and discard them from consideration. Then, we just have the entire set of sapient, or if you want to be even broader, sentient beings to consider.
Is love the most precious thing to all of them? If we consider sentient beings, many of whom belong to solitary species but presumably enjoy their lives, then no. If we consider sapient beings, it's more ambiguous, because social life is quite likely a necessary precursor to sapience, due to the richer set of problems it presents and the rewards of such living, although not necessarily so (see octopus intelligence: not quite sapient, but being smarter than some monkeys makes it seem more likely that some solitary alien out there might achieve sapience). And of course, most species reproduce sexually, and asexual species might have a harder time spreading vital mutations or attaining desirable mixtures of heterozygous and homozygous gene setups. Although for that last part, we may presume the author did not mean that sort of loving, but rather affection.
The question of whether someone can live a happy life without love is one easily solved by finding just one individual who was happy while solitary. We know voluntary hermits exist, so this does not really seem an impossible task; rather, it's like a black swan situation: it would be quite easy to get a black feather mutation, and there is a large unsearched domain, so even if we hadn't found one yet, it would be quite hasty to declare its nonexistence. Your typical introvert likes less social interaction, not no social interaction, but there are more extreme introverts who genuinely prefer their own company and who are completely asexual and aromantic. At a different extreme, we have psychopaths, who may be sexual and use romance for opportunistic abuse.
So, by the standard of 'values held by beings', and not some strange objectivist standard of 'values held by the whole universe including rocks, though why the fuck do you care about opinions of rocks idk ffff' or whatever, we can conclude that love is not universally the most precious thing, but it is the most precious thing to some people.
Is that a boring, obvious result? Well, yes. But some people are apparently incapable of concluding it, maybe because they believe in some strange definition of 'precious' that doesn't take into account how arbitrary 'preciousness' can be. One man's trash is another man's treasure. If there's one thing I've realized, it's that humans are, despite being the best animals at the task, startlingly bad at empathy even when they declare how much they love other people.
Often, if you scratch deep under the claim of someone who says they love their fellow man, they really mean a very specific picture of man, one that excludes a large number of actual homo sapiens (such as women) whom they happen to disapprove of, or they mean a very specific definition of love that excludes what others might consider actual loving behavior and includes some very hateful behavior, like telling someone that their affection for another person of the same sex is worth damnation.
What I really wonder, though, is why so many people feel the need to have their personal values be important to more than just them - that they be the most important and precious values in the entire universe, even. Why the ego? And if love or whatever it is you think is most precious is so self-evidently great, does it really need your personal declaration and defense of it? If you need love to 'make the world go round' to matter, doesn't that say something rather sorry about love? Or perhaps, does it really just say something about your own insecurities?
Against insects as meat
Initially, I was keen on the idea. But more research has come out in favor of the idea that at least some insects do feel pain, or at least discomfort. That would mean putting many more animals (because insects are so small) in pain rather than fewer if we switched to them. [edit: we now know bumblebees even have a sense of fun and can play]
I honestly think we're best off trying to make artificial meat or cultured meat, esp. since some artificial meats exist already that are decently tasty. b'ef and ch'ken are good, just ludicrously high in sodium. (Someone, please make low salt versions of fake meat that isn't tofu. I fucking hate tofu. And make dried jerky versions and soups that I can order online, thanks!)
Now, there is one thing I think mass use of insects could be good for, and that's dealing with plastic waste. It was recently discovered that superworms (the larvae of a kind of darkling beetle, if I recall correctly) can actually eat some plastic, and we do have a huge plastic crisis. If we did decide to do some insect production as a more sustainable alternative (and to be fair, some insect production is already done for feeding pet lizards and other critters that only take live prey), then getting rid of plastic at the same time would be pretty good.
I also think raising animals in happy conditions and acquiring milk and eggs from them can be ethical (for instance, if you skim a little extra milk from a tame mother with calf I doubt she'd mind much, and chickens will produce unfertilized eggs that will just go to waste if nothing eats them), but we obviously do not have such conditions in captivity right now where our system encourages mass production and shoving into tiny cages.
Pointless and pointful at the same time
https://www.scientificamerican.com/article/learning-to-live-in-steven-weinbergs-pointless-universe/
I was excited about the article for a second because I thought it was going to talk about some theory of Weinberg's that involved a physics with no points, which could be amusing. If it was just strings again, that would be a little dull, but maybe it would be something newer, like talking about a system purely in terms of relations and treating all 'nodes'/points as fictitious. That could be fun. Or, a world of all vectors, no scalars. Or a world where all points are replaced by turtles.
Then I realized it was just talking about philosophy and that line of his at the end of his book, which I read and promptly forgot, because, well, it's not exactly shocking to me. So I'll basically repeat myself, my attitude being:
Define first what you mean by 'having a point'. Regardless of what this is, it will either be equal to the set of all actual things, the cosmos, or smaller than it.
If the 'point' of the universe is smaller than it, then one may in a sense feel that this is actually pretty insulting and petty.
Let's say the point of the Universe is one event on a particular Monday. If it's in the past, it makes everything after seem pretty superfluous. So we'll assume it's the last Monday ever. That means most beings will never, ever see the point, in fact none of them will if the world dies via heat death or crunch.
Let's be more generous and say that it's many moments, such as every moment of love. Well, that still seems awfully small, doesn't it? I mean, there's plenty of gorgeous things in the world that aren't love, and not everything in the world revolves around sex and reproduction; it seems awfully, well, small, when you put it like that. And it again makes so much of the world seem pretty darn superfluous. What does a black hole have to do with love?
So we have two options that don't involve a reduction. It can be its own point. Or it can be pointless. When we realize these two are isomorphic to each other, and in fact in some sense one and the same because they can cover slightly different, totally arbitrary choices/definitions, we are free.
We are pointless, and that is wonderful.
We are pointful, and that is dreadful.
Apparent paradoxes that aren't. This is my koan, this is my heart, this is my laugh and my lament and my story and my joy that stretches across all space and time. We are point-adjacent, point-pondering, point-curious, point-choosing, point-denying; sometimes self-purposeful, sometimes part of someone else's purpose, sometimes not. Who says this has to be a sorrow?
Not I.
Yes, you can be pro doing one thing and not another similar thing without contradiction
I see this sometimes in internet arguments and it pisses me off, because the arguer is always making a heap of unexamined assumptions:
1. That the argument is a morally motivated argument. (It doesn't have to be. We're used to people having secondary motivations when they argue about things, but if we're focusing on just the logic /of/ a thing, then an economic argument is an economic argument, and calling the other person a hypocrite for something unrelated to economics is a bit of a non sequitur.)
2. That the argument is motivated by the same morality system you are using.
3. That it actually matters if it is contradictory to the person's ultimate support of the thing. If someone is supporting something for an emotional reason and justification is secondary, telling them their emotions/logic are invalid is not going to make them suddenly support you.
Take the death penalty and abortion. People who decide that fetuses = adults get confused and think supporting one but not the other is some sort of hypocrisy. If to the other person they are not identical, it isn't a form of hypocrisy: that would require them to be the same under that person's value system.
This actually goes both ways, for people pro-death penalty but not abortion, and people pro-choice (nobody is actually 'pro abortion' in the sense that they want abortions to happen, it's seen as a least-bad option when things have gone wrong) but anti- death penalty. In both of these cases, fetuses != adults.
For the pro-death penalty people, fetuses are innocent and adults are not (or at least, they think the adults are not: in practice, far too many innocent people are hurt by the death penalty, which is why, while I'm neutral to mildly against the death penalty in the abstract, in practice I'm firmly against it because too much goes wrong). For the anti-death penalty people, early-stage fetuses do not have any cognitive capacity (if it's late term, it's not an abortion, it's a premature birth, or something has gone horribly wrong and the fetus is not viable in the first place) and so are not considered sentient beings, and thus have no more moral status than a brainless jellyfish (less, if the fetus in question has only just attached and does not even have the most rudimentary form of nervous system, because a single cell does not a mind make), OR/AND (non-exclusive OR) nobody has the right to another's person/body against their will, so the fetus's sapience status is irrelevant; the fact that the fetus will die on removal is irrelevant in the same way that it is irrelevant, to the moral disallowance of forced blood drinking, that a vampire who wants to drink your blood against your will would die if disallowed from doing so, as the disallowance is based on the principle of bodily autonomy.
If a monk sleeps with someone after declaring no one should have sex, that's hypocrisy. If someone else hears that monk and then sleeps with someone else, the monk can't call them a hypocrite for disobeying the monk's moral system because they are not the monk!
Finally, I'll add that it's bizarre when people say the state committing 'murder' would be a 'crime' and thus make the state 'just as bad as the murderers'. A crime by definition is something that is illegal. You've confused criminality with morality; something can be a crime and still be a moral act, such as if we criminalized doctors giving life-saving care, or someone stealing an overpriced 1 million dollar medicine to save a child, or someone freeing slaves.
Secondly, 'just as bad' implies a morality system where we judge the morality of an act purely by whether it is the same act, regardless of circumstances or motive. Many people do not have such a morality system, so if you want that to be convincing, you have to argue why your moral system is the right one instead of just assuming everyone shares your assumptions.
Mary's Aphantasia
(a thought experiment) and knowledge that can't be stated
Mary's room is a thought experiment about a woman raised in a black and white room who nonetheless learns everything there is to know about color. Can she spontaneously see it? Since the answer seems to be 'no', this is taken as evidence that color cannot be just information.
However, I'd like to posit another thought experiment which complicates this picture. Call it testing the unstated background auxiliary hypothesis: the assumption that if Mary had seen color, and therefore knew it as information, she could spontaneously see it, because we can think of information we know.
Imagine that Mary has aphantasia, that she has in fact seen color, and she knows everything there is about it. She's in a black and white room. Can she spontaneously see color, just by thinking about it?
The answer is that she has aphantasia, so of course, no. People with this condition can know what color looks like, but they can't make their minds generate the mental image.
So Mary's room is more complicated than Mary knowing everything about color and not seeing it: she could genuinely know everything about color and what it looks like and still not see it, because people do not have perfect control over their thoughts, and not because color isn't a form of information/thought.
On the other hand, this thought experiment could also lend some support to the position that it isn't just about information, because the person knows and still can't picture it. So it's actually still ambiguous.
My two cents is that I think it's information in a certain formatting - you want a certain part of your brain to take the information and map it, like your visual cortex, or if that gets damaged for a different part of your brain to take on its role and start acting like it. This would explain aphantasia and also give a weird, possible answer to the thought scenario where if Mary is an AI built by an advanced civilization, maybe she really can spontaneously make herself see color if she has enough information, but us mere mortals can't, because we don't have that much control over our own thoughts. It's like not thinking of a pink elephant after being told about pink elephants, except in reverse.
Here's a related, more mathematical/code-like demonstration/thought-experiment showing that just because you know something, and your knowledge could actually be represented rather simply, doesn't mean you can actually convey it.
Imagine you are a machine that can give only binary yes/no 0/1 answers, and you've been given a program (based on Turing's thought experiment showing the halting problem isn't solvable) and asked to decide whether the machine running this program halts or not.
From reading the program, you know the program will take your output on what it says, and then it will do the exact opposite!
So no matter what you output, you will give the wrong answer.
But you also know exactly what the machine is going to output: your opposite.
You know the answer, but you can't give it or it will no longer be the answer.
In this case, it's the forced formatting into pure binary answers that causes the knowledge to be unstate-able/ungiveable to other parties. It's not hard to imagine something like that could occur in qualia production as well.
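Here's a minimal sketch of that bind in Python (all names are hypothetical, purely for illustration): the guesser knows exactly what the adversarial program will do, but any single bit it is forced to emit gets flipped, while a richer answer format has no trouble stating the same knowledge.

def adversary(bit):
    # The program from the thought experiment: read the guesser's output,
    # then do the exact opposite.
    return 1 - bit

def binary_guesser():
    # Forced to answer in {0, 1}. Whichever bit it picks, the adversary
    # flips it, so the guess always comes out 'wrong'.
    return 0  # the particular choice doesn't matter

def richer_guesser():
    # Allowed a non-binary format: it can state the relationship it knows,
    # which no single bit could ever express.
    return "whatever bit I output, the program will output the opposite"

bit = binary_guesser()
print(adversary(bit) == bit)  # False: the forced binary answer is always wrong
print(richer_guesser())       # the same knowledge, stated outside the binary format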
Don't feed trolls and beware of deliberate ambiguity
I occasionally come by supposedly moderated sites to lurk and it annoys me how often there is fighting in the comments section.
In the art of argument, you may see some time devoted to fallacies and appeals to emotion. What you see far less often is people discussing outright trolling. I'd like to address some common trolling tactics today, and some tips for dealing with them:
1. Don't 'feed' the trolls, but you can still address them without doing so IF the site has moderation, and, actually, to some degree even if the site doesn't. The troll wants to drain your time and energy, so simply keep a post (say, a blog post you can link to over and over) directed not at the troll specifically but at the kinds of tactics and ideas the troll uses (chances are they will not be unique), which you can then simply point to whenever this kind of argument pops up again, rather than typing out the same response over and over. Typing it out over and over and enraging/exhausting yourself is exactly what they want.
2. Be aware of goal post shifting via ambiguity.
It's an extremely common tactic to leave a one-liner phrased so that it has a very common (per the way speech usually works) and very fallacious implication, but without actually stating that implication explicitly, so that people will jump on it. At this point the troll can play the double game of saying that other people (a) are too stupid to get the fallacious point and (b) did not understand what they were REALLY implying. Kind of like criminals saying "I didn't do it, but if I did, it would have been what was deserved and totally OK."
3. Don't descend to the troll's level.
It doesn't look good to have comments that just amount to 'person x sucks'.
If the site has a moderator, I strongly suggest removing such posts on both sides, and leaving only the ones explaining why certain implications are fallacious - these can be beneficial even if the troll wants to pretend that isn't their intent, since they may be read by others who come later, and certain tactics/statements (example: "Elon/Zuckerberg/whoever is a genius!111 so you must be wrong about their missteps managing a company which by all accounts is still really successful, and you use a service so you must be endorsing it!") are extremely widespread. [edit: Anyone still using Facebook/Twitter in 2023 should be aware Mastodon and the fediverse exist and offer the exact same services for free; there is literally no reason to give those assholes money. You can even use Mastofeed to embed Mastodon feeds.]
For an example counter: it is entirely possible for a company to do well /in spite/ of an incompetent leader, especially if that company did not even have that incompetent leader a short time ago, or if the nature of the incompetence is actually malice (in which case one would expect the company to do some horrible things and continue to make a profit, and the statement that they are too incompetent to address an issue actually benefits them, because that's what they want you to believe). Personally, I actually lean toward a lot of 'incompetent' actions being malicious.
As for the other argument, that's really fucking stupid. Imagine you were on a spaceship, and the person in charge of producing the air was poisoning it. Would you be endorsing that person by breathing? Of course fucking not. Some things are so ubiquitous that it is impossible to avoid using them, such as a service with no competitor. But sometimes, such as in this case, it also just makes sense to use a service: the people who actually use Facebook and Twitter are the people who most need to hear that it sucks. Using a service in order to sabotage or undermine it isn't an endorsement, any more than sending a botnet to mass denial-of-service attack a website by repeatedly loading its services is being friendly. (That's a tiny joke: the latter behavior is a subset of the former, obviously.)
Don't let people use zero sum games to manipulate you
It's a common tactic: to claim either group A benefits and B loses or B wins and A loses.
However, the reality is this often just isn't true.
Here's the latest example: framing remote learning as a method to deal with corona as a kids-vs.-adults issue, when in reality some schools are so short-staffed on teachers and have so many students with COVID that they physically can't open.
https://prospect.org/education/folly-of-school-openings-as-zero-sum-game-coronavirus/
In reality, the vast majority of the time someone presents something as a 'you benefit or they do' issue, they're trying to manipulate you, and they get something out of doing so. And unfortunately this manipulation tends to work. One study (which I sadly can't recall the link to, so I can't vouch for how high quality it is or whether it was replicated) found that when people were asked to imagine a policy measure that wouldn't negatively affect them in any way but would positively benefit minorities, they still tended to vote against it out of a belief it would harm them in some way, despite having been explicitly told otherwise for the thought scenario.
People are fools and this is why we can't have nice things.
Is diversity a form of evolutionary fitness?
One of the more debated forms of fitness is group level fitness. While limited, as it must be good for the individual's genes or it will not propagate, it certainly does exist, the primary example being eusociality such as in bees where the individual workers often do not even breed but depend upon the group, which is related to them, for the spread of their genes through the population.
What I want to talk about is something a little different: species level-view fitness of genes. This is a source of common misunderstanding and confusion.
First, survival of the fittest is actually somewhat of a misnomer. It's really survival of the fit: most of the time, most members of a healthy species will actually manage to have children. Having a single individual do all the breeding could rapidly lead to extinction if the result had too little diversity (although, counterpoint, some asexual species do really well; they tend to be fast-breeding like bacteria, though, or have horizontal gene transfer to make up for it). People often seem to think that there is some 'superior' version of a species and that evolution marches in a straight line, rather than frequently branching into multiple new species. Nothing could be further from the truth.
However, there is a very simple example from game theory that clears things up:
A game of rock, paper, scissors.
Imagine there is a 'rock paper scissors' set of genes, where each gene beats one other but not all others. Imagine also that mutations will occasionally reintroduce genes if lost. What is the stable ratio of each in the population? Well, if the population ever became dominated by 'paper', it would become vulnerable to a takeover by 'scissors' mutations, which would then in turn be vulnerable to 'rock'. The stable result would be having each gene take up an equal slice of the population.
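To see that equal slices really is where this settles, here's a toy sketch in Python (the setup is hypothetical: discrete replicator dynamics with weak selection, plus the occasional mutation stipulated above so no gene stays extinct). Starting far from even, the frequencies drift toward one third each.

# Each strategy beats exactly one other and loses to the remaining one.
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}
LOSES_TO = {loser: winner for winner, loser in BEATS.items()}

def payoff(strategy, freqs):
    # +1 weighted by how common the strategy it beats is, -1 by how common its beater is.
    return freqs[BEATS[strategy]] - freqs[LOSES_TO[strategy]]

freqs = {"rock": 0.7, "paper": 0.2, "scissors": 0.1}  # start far from even
selection, mutation = 0.05, 0.02

for _ in range(5000):
    fitness = {s: 1 + selection * payoff(s, freqs) for s in freqs}
    mean_fitness = sum(freqs[s] * fitness[s] for s in freqs)
    freqs = {s: freqs[s] * fitness[s] / mean_fitness for s in freqs}
    # mutation occasionally reintroduces every gene, so none stays lost for good
    freqs = {s: (1 - mutation) * f + mutation / 3 for s, f in freqs.items()}

print({s: round(f, 3) for s, f in freqs.items()})  # roughly 1/3 each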
Is there a clear example from nature of such a gene set? There actually is: a species of side-blotched lizards has males that come in different colors, with each having a different strategy and a color that it 'beats'. Overall, it's fairly rare to have such a clear-cut example, but my point is made: survival is not about being the fittest. It's about being fit enough to pass on your genes, and diversity can actually help rather than hinder you. If you had offspring with both or all strategies, you'd be better poised to take advantage of any temporary over-population of a given strategy!
Another scenario is variation in environment, such as a climate prone to swinging between dry cycles and wet cycles. If you are a short-lived species, the genes that are optimal for survival will keep swinging back and forth, and having a single strategy dominate too quickly, despite being the 'fittest' at the time, would actually be disadvantageous. Thankfully, evolution is pretty slow. A generalist strategy that doesn't beat the 'wet specialists' or 'dry specialists' during their given periods but nonetheless manages to survive in both situations could happily linger in the population without risking being wiped out by either of them, provided the period did not last too long (thousands or millions of years) and the population was not too small or genetically bottlenecked to start with (say there was only one generalist to start; it would be easy for them to accidentally die).
Indeed, one might guess this to be a possible reason for the existence of sex itself: if there really was a single good strategy in all situations, you wouldn't need to mix your genes with others to get variable offspring.
One can pick out more scenarios: sprinters versus marathoners. It is not possible to make a human who is 'best at running' in all situations. You only need to look at real life species to realize this. Cheetahs and wolves are both good runners in very different ways: cheetahs are high speed sprinters, wolves and other dogs are marathoners like humans. As such they tend to specialize in different prey. Speed comes at a cost of bulk, so there is yet another specialization in stealth sprinters like leopards who focus more on the stealth part than the sprinting part and can afford more bulk and strength to go after slightly larger prey on average. They do better where there are lots of places to hide and climb, so you see leopards where there are trees and cheetahs on the open plains.
If we imagine an ancestor of both of these animals for a moment, and imagine it beginning to diversify into two directions, the cheetah strategy and the leopard strategy, which one is 'fittest'? Obviously, neither is. This will eventually lead to speciation, but in the meantime, it leads to a more secure future for these populations: if forests or plains suddenly disappeared, the population living in the unaffected area wouldn't be doomed with the rest.
A diverse group has a better future than a less diverse one. If bears as a group were limited to only pandas, they'd be in severe trouble right now. It also pays to be a generalist. If there's just been a mass extinction and you have only the one species, that one species is poised for a big takeover and expansion into many different species, provided it can survive long enough. Being willing to eat anything left over after the disaster will help with that, and help one develop more quickly into a greater variety of species, since you don't have to go as far to get from 'omnivore' to 'herbivore' or 'carnivore'. (Although even herbivores will often eat some small amount of meat given the opportunity, so that part doesn't really matter as much as the 'surviving' part.)
The very fact there even are multiple species, rather than one single 'fittest' species, tells you a lot about evolution and how it really works compared to the popular descriptions.
Is demanding clear definitions and logic use too much? My different cultural standards
[older post, 2019, but still good.]
I kind of have a 'weird' sense of politeness. I don't, for instance, really care if people swear, but I do care if people blatantly engage in cherry-picking, or state that something that is not mathematically proven is 'undeniably true' unless it has really, really low error ranges and a high number of successfully predicted digits/decimal values (such as QFT, and even then, the better thing to say is 'something like QFT is undeniably true', with the caveat that QFT is incomplete without gravity), and I often see these things as kind of rude?
It's not hard for me to realize that my standards are heavily influenced by those who engage in science and logic as the primary way to decide arguments. There is, in a sense, a kind of agreed-upon 'scientific discourse' where, if one plays it correctly, your hypotheses will be acknowledged, other people's will be acknowledged by you in turn, and you will be respected even if your ideas turn out to be proven wrong. There are certain things you just do not do in this discourse, things that kind of make you look like an ass.
Cherry-picking (such as ignoring multiple studies for the sake of one random person's blog post or your friend Bob's anecdote) is one of them. Implying unsavory things about another person's character because they don't support your pet hypothesis that hasn't been backed by data yet or implying they are inherently unworthy of having their hypothesis considered is another.
Failing to show any appreciation for experiment, or for connecting to reality, or for any way your idea might actually be weighed against others non-subjectively; refusing to use the definitions everyone else has agreed to use, even if you dislike the naming scheme, because everyone else realizes you can't resolve an argument without agreeing on definitions first; coming in and saying your theory is the greatest theory of all time, everyone else is stupid, and there is no possible way you made any mistakes - these are several big ones. They aren't just bad if one wants to genuinely arrive at a correct theory (even if your theory is the best so far, science is kinda all about continual improvement and not assuming the best so far is the best possible); they are also, in my opinion, kinda fucking rude: you declare your idea is great but can't be bothered to examine it properly, while other people put in the extra effort for their own ideas, even when it often results in painful null results from the experiments they at least bothered to try, whereas you can't or won't.
When you do these things, you look like a gigantic jackass.
Now, there are times when my standards of 'politeness', such as the standard that one should at least try to put in some minimum of effort to match the other person's minimum of effort in a conversation, may run into other people's inability to articulate. I realize that many people don't really mean to be rude when I or someone else (usually someone else, because I don't talk to people that often!) take a great deal of time to try and phrase something carefully and they come in and show what looks like total laziness and lack of effort; it is often just inability rather than malice. They don't necessarily have cherry-picking as part of their vocabulary, or understand what a scientific consensus actually is (it is not the same thing as a consensus of scientists, but a consensus of scientific evidence/studies; see my post on philosophy of science if you are still confused), or they just plain don't understand what the issue being talked about is. It's frustrating, but stupidity or ignorance isn't the same as rudeness. The best thing to do here is to correct them, and then, if they completely ignore all attempts to educate them about logical fallacies, contradictions and accepted definitions and methodology, assume they are suffering some kind of severe cognitive dissonance or are trolling, and move on to a different conversation. If they are trolling they want attention, so repeatedly calling them a jackass isn't as effective as just telling them they'll be blocked or deleted if they continue with poor behavior, or even just plain ignoring them and dropping the conversation without a goodbye.
Is 'leaving' without a goodbye rude in many cultures? Undoubtedly. Do I really expect someone in a non-livechat situation to tell me that they won't be conversing anymore? Not really. It's nice if the conversation was long and friendly, but not mandatory. If it's unfriendly and you've been dropping insults in my direction, I don't really need your 'I am going away now' message, just go away. I won't be mad if you don't say goodbye, I promise. On the other hand, if you do a drive-by message that made it seem like you wanted to converse, it IS polite to let others know that you are not in fact sticking around. Doing repeated drive-by spam on a blog is not going away, it is continually showing up and then refusing to engage with any actual counter-response. It is horribly rude and will get you deleted here. If you absolutely must, post 'This is a drop-off response because I felt like putting in my 2 cents before leaving.' or 'I am not going to watch this post but feel free to post responses for others.' or the like, so people know they don't need to respond to you except for their own satisfaction. If these are not too annoying (ones full of insults will still be deleted, and ones that simply repeat the same arguments as rebutted before will likely be deleted, because posting the same message over and over is in fact the definition of spam), they are much more likely to be tolerated on this blog.
A lot of it comes down to this: if you want to have a possible argument with others, you need to agree on standards by which that argument will be resolved, or having that argument is pretty damn pointless. And having standards by which arguments are resolved is basically to dictate a form of the local politeness. For some people/cultures, you can say anything, no matter how dehumanizing or evil-sounding, as long as you don't swear when you say it. That's definitely not my standard. My absolute minimum is that there needs to not be so much insulting or dehumanization of your opponent that the 'standards by which an argument can be resolved' are at risk of being destroyed or undermined by said dehumanization, and in practice I use a cutoff well above this absolute minimum, because there are much more enjoyable things to do with life than arguing on the internet or being insulted!
Definition arguments
(older version; you may wish to scroll down to the updated one)
I've said that I consider refusal to use the definitions everyone else does to often be rude, and I realize that probably needs some expansion. I do not consider reclaiming words to be part of this; I am referring to something more specific, when someone is trying to 'win' arguments simply by redefining words. Reclaiming insults does not count in that category at all. I'm also not talking about using a more precise definition of a word than the layman one, because people who do this are not using it to win an argument about definitions, but to be more precise than the layman definition allows, or to name a concept that doesn't actually have a good analogue among layman terms, so they borrow a normal word for it ('theory' being a good example: scientists needed a word and it was similar enough to do, although I still find myself wishing they'd come up with a different one to minimize confusion).
These, without a doubt, have to be my least favorite arguments of all time, mostly because you get people who seem to think that if they get the definition they want used then they 'win'. This is not how definitions work. Redefining a word does not suddenly change reality. There have also been times when I have watched in amazement as two people who seemed to largely agree with each other quibbled endlessly because they got into a definition argument. (My memory is a bit foggy, but I believe it was over the definition of racism. I think I've seen this more than once, over multiple things, actually.)
Here are a couple of examples:
Once, I had a person try to define the word 'monster' as also being some kind of compliment. This was not to reclaim the word (there is no group of monsters living in our sewers who desperately need it made into a positive term instead of a pejorative) but because they wanted to win an argument that an anime character was cool / perceived as cool. This was doubly problematic, because it is one thing to say you are using a word to mean a thing; it is another to assert that someone else means a thing by a word when you have no evidence that the word was ever used like that by anyone, much less by that person.
A second example I saw was someone asserting that evolution means a species abruptly jumps to an entirely new one in the next generation. This is another example of trying not only to redefine a word but to assert what other people mean when they say it, which is pretty rude, I think, even by normal standards. Darwinian evolution is more or less defined today by its actual proponents as gradual change over generations driven by natural selection, with the biggest jumps limited by how much one individual can mutate or combine recessives into unusual combinations (Darwin himself had no concept of recessives or genes, so his concept was just gradual change driven by natural selection of traits). You do not get to pretend other people are saying something different than they actually are by redefining a word to suit yourself. This is horribly, horribly rude.
Third example. Transgender people.
This will probably be my most controversial example.
It probably doesn't get said enough that how you define men or women is actually totally irrelevant to the question of the treatment of trans people. If you are an empathetic person and know it gives people happiness to be called certain pronouns, and you know studies have shown trans people are absolutely no threat to people in bathrooms (and in fact, bizarre chromosome-based policies would force post-op trans men to use women's bathrooms, so they make even less sense than they first appear to), then it makes no sense to be a giant jerk.
That said, simply as an example, let's show what happens if we for a moment accept the definition that some people really want to use (and use to justify attacking trans people), which is that 'men are XY chromosomal'.
You now have a problem: transmen are no longer transmen and you need a new word to describe what they want to be or what they are trans to, because plenty of transmen have no desire to become XY chromosomal or even to have surgery*! If redefinition forces you to come up with new words and makes your argument more complicated than it needs to be, it probably isn't a good definition? Just a thought.
In practice, we can see that the way people use 'man' doesn't refer to just having an XY chromosome at all, because intersex individuals who are born XY but develop completely female due to androgen insensitivity (their bodies do not respond to testosterone) do not get called men, they get called women. We also, when we encounter different species, do not stop calling the males 'males' simply because they do not have XY chromosomes like we do. We also call men with extra chromosomes men, even though they are not XY. If we're using the criterion that we should strive, where possible, to use the same definitions others do in order to lessen confusion, then redefining 'man' as purely 'XY chromosomal' fails miserably. So does defining it as having male body parts, because people who get maimed in war don't stop being called men.
*The caveat being that trans folk who do not want surgery are usually motivated by feeling it isn't good enough yet or would eat too much into their bank account, but some trans folk are genuinely non-binary and simply prefer to present as masculine or feminine and have, say, a beard or boobs without the rest of the package.
Definitions confusions update
[this one is still pretty old; you can tell from the political references.]
A while back I went through a phase of frustration at people trying to 'redefine words', mostly from weird encounters where people would try to do bizarre things like, say, declare that 'monster' was actually being used as a compliment when someone was talking about a mass-murdering cartoon character and context made it pretty clear it wasn't a compliment (yes, that actually happened to me once...). Or people trying to redefine 'male' as meaning XY chromosomal, even though males of many species do not have XY chromosomes. People who, when you say that by X you mean Y, try to assert you still really mean 'Shitboogers'/some other term are also a massive annoyance; I've seen this in discussions about evolution.
So I went 'alright let's put a blanket ban on people trying to redefine words except for those people trying to reclaim insults as something positive or people borrowing words for an entirely new term'.
But looking back I don't think this is really always the best tack to take, simply because you often end up with two distinct populations that imagine very different things when they hear the same word. So I'm changing this to 'choose your definitions carefully, try not to make them too weird compared to what everyone else uses / figure out what the other person actually means by it, and try not to be an asshole about it'.
Anyway, today I'm gonna just quickly talk about confusions over definition differences for sexism, toxic masculinity, socialism, and racism and then runnnnn awaaayyy quickly before anyone else can open their mouth to say something obnoxious. Because y'just know how the internet is...
There's a common source of confusion in that these words are often actually being used differently by different audiences, which sometimes causes people to read insults into things that were never actually said.
Example one: systemic sexism
A guy opens his mouth and has an astounding insight: gosh, what if the academia problem isn't sexism, but is really about... women being expected to do all the childcare?
Cue blank looks from pretty much all the intersectional feminists (yes, there's more than one kind of feminist; if you don't know that, you probably shouldn't make sweeping comments about feminists 'in general') for whom that is literally the major form of sexism they've been railing against for years: that women are disproportionately expected to do childcare, and that they do the bulk of giving birth and breastfeeding, while men who have children don't have to pay these same costs to survive in academia because their wives will take care of it. (Most men can't give birth, of course, but that doesn't mean expecting women to silently bear all the costs of childbirth with no compensation and only a few months of parental leave to breastfeed isn't a discriminatory system that disproportionately impacts women!)
But maybe that's just a sign that intersectional feminists don't get enough press. It is odd, considering one of the major candidates in the running right now (early 2020) is literally someone who advocates economic reform to help deal with societal injustice, so you'd think this sort of thing would get more conversation, but whatever...
Example two: systemic racism
Person A starts talking about racism in a given policy, and Person B opens their mouth and says 'But you do not really think all X people are racist, do you?'
Which completely misses the point Person A was actually making, as they were talking about systemic racism, not 'I want to join the Ku Klux Klan' racism. This also causes people to read some insults as worse than they actually were: someone who says 'I think the policy you are supporting has racist implications' often gets misread as saying 'I think you hate all colored people', when this is really not necessarily the case.
A policy can disproportionately act against a member of one group without anyone who advocated the policy actually going 'lol I hate all those people'. Intent is not magic, so the saying goes. Although if someone shows a repeated pattern of such advocacy it doesn't speak well of them, of course. But a good example is our policing and legal system. One person can say 'How we have structured policing disproportionately affects minorities and is deeply racist' and another person can mishear this as 'All cops are KKK wannabes, and we all know KKK are evil, so let's all hate cops.' even though that's not what the other person meant by 'racist'.
One person can say "Immediately chiming in that all lives matter, right after people say black lives matter in response to a cop killing a black person, is a racist response to drown out the message that black lives are being lost to violence; it acts that way whether you meant it to or not, due to context" and another person can misread this as "Person A thinks only black lives matter." because to them, racism is not something that can have nuance and they have no idea what systemic racism even is, so someone failing to be colorblind is the 'real racist', even if that person noticing color is someone who is shouting 'someone please do something about all the black children getting shot, this just does not seem to be something that happens to white kids!'.
Example three: toxic masculinity.
Pretty much every feminist I've ever talked to agrees that when they say 'toxic masculinity', they don't mean all masculinity; if they did, they would say 'all masculinity is toxic' instead. Many of these feminists are men who say they support a healthier form of masculinity. (Obligatory disclaimer: there are radical feminists who don't like men or XY chromosomal people. They are a tiny minority, and they and I don't get along because they are transphobic. You may have heard them referred to as TERFs, trans exclusionary radical feminists. TERFs are - and I do not say this lightly - assholes.)
This is a prime example of people not bothering to figure out what the fuck other people mean by a term before they comment on it, because you get a lot of idiots telling the very people who actually use the term that they must hate all masculinity.
Erm, no.
Not at all. I identify as masculine, and I agree masculinity can be used in a toxic manner, so I'm a counter-example right there.
(Side-note: The reason you don't hear 'toxic femininity' very much in contrast is that while notions of femininity can hurt, they mostly hurt the woman herself and other women, and it's not always clear in a pro-choice context where the line is. You want to give a woman the choice to focus on raising children if that's what she really wants out of life, just as you want to give her the choice to have a childless career, so 'toxic femininity' would really only apply in a situation where motherhood was being treated as the only option. Toxic masculinity, by contrast, is when a guy thinks being a man means things like punching your wife for being uppity is acceptable, or never showing emotions, or never apologizing. Never showing emotions hurts men a lot, but the aggression part hurts other people. That makes it a kinda bigger priority and easier to clearly demarcate as toxic.)
Example Four: Bernie Bros
So. People coined Bernie Bros to refer to the worst Sanders supporters...
And then Sanders supporters complain that it invisibles all the black women and queer women supporting him.
Um, no. Bernie Bros are disproportionately male, because males disproportionately engage in harassment (assuming the statistics for males disproportionately committing more crimes carry over to internet bullying, including death threats). You don't actually want to be included in the group labeled 'Bernie Bros', trust me, because to get into that group you have to be an asshole. The people I heard using this in casual conversation back in 2016 were talking about the assholes, not all supporters.
However!
There is a twist in this story, in that some people in the media (they really seem to be losing their minds over the dude becoming front-runner) really do seem to be trying to characterize /all/ Sanders supporters as asshole white males (note: this blatantly flies in the face of reality). So we can see 'Bernie Bros' is a word that basically seems to shift definition depending on who you talk to. That makes it a fairly useless term, doesn't it? Unless you bother to carefully define what you mean by it first.
Now, if you wanted to offer a real critique of the 'Bernie Bros' label, you could try asking how sure people are that these individuals are actually Bernie supporters and not, say, Russian bots stirring up controversy (update 1: we got an intelligence report confirming Russia is 'helping', and since we know they don't give a fuck about Democrats or civil conversation, I think we can assume this 'help' is exactly the botlike harassment behavior I've seen, with probably a few human idiots going along with it, barring further evidence), or how much the harassment compares to that from other campaigns: every campaign probably has a minority of asshole supporters. The question is, does this campaign have a disproportionate number? It would be good to have actual polling on this, but frankly, there's absolutely no evidence for it (anecdotal data, like the bot-like behavior I've seen, isn't really concrete evidence). If anything, the fact that a large number are minorities would lead one to expect simply from population ratios and existence of racism and queer-phobia that his supporters on average would get harassed more.
Update 2:
Well, fuck. I did find further evidence. This is exactly why I say 'citations are important'; it's not enough to just say 'Bernie bros' are nasty/encouraged by Sanders, you need to show actual examples of how this is any worse than other groups or how they've been encouraged.
https://elections.americablog.com/2020/02/bernie-sanders-homophobia-sexism-extremism-anger.html
Actions speak louder than words. If you say you disavow bad behavior, but then you reward/pay positive attention to an abusive person and fail to call that person out, your actions are part of the problem and do not match your words.
By itself, the existence of bad elements doesn't say much about Sanders, since all political groups have total assholes. But actively choosing to associate with those elements? That DOES make you at least partially responsible for them, because you're signaling to your supporters that you don't actually care and that your words don't mean much of anything.
I /was/ switching over from Warren to Sanders, but this is making me seriously reconsider. Frankly, though, I don't think Warren is doing well enough to make it, as she's consistently under-polling, so I'm honestly not sure who I want to support now. Anyone who isn't fucking Bloomberg, I guess? There really is no such thing as 'the perfect candidate', it seems. But some things really piss me off, and a sexist transphobe like Joe Rogan (who said a transwoman 'was not even a "she"', and who has called women 'cunty') is one of them.
Ugh, this post is getting really off topic from where it started, isn't it? I should have known better than to choose something from politics as an example...
Example Five: Socialism
This is another one where the definition shifts a lot, so it's important to figure out what definition the other person is using, and not what you personally feel is the 'one true definition and therefore what they have to have meant', and where you need to make clear to other people what definition you are using.
Person A might say they hate socialism to Person B who interprets 'socialism' to mean government taxing the rich to provide healthcare, and who is then thus surprised and confused when Person A also says they really love their social security.
Or, Person A rails against a new taxation program to provide free education as 'socialism', gets confused when their own children who can't afford to go to school suddenly 'love socialism', and asks if they really want to support communism and dictatorship. In this case, Person A used an inconsistent definition of socialism, and the other people took them at their word the first time that that sort of policy was what they meant. Government taxation of wealth is not at all identical to the government owning all means of production and getting rid of the ability to vote. It's possible Person A was using a slippery slope fallacy, but then they need to say that one policy necessarily leads to another, not re-use the same word for two different things and cause confusion.
Arguments about definitions, continued, this time on Buddhism
Arguments about definitions, one of my least favorite things, yet here I am talking about it again.
I've thought of another subject, this time on a religion I don't typically think much about since I don't encounter it as often, but still heartily disagree with since it makes claims that are not grounded in evidence, like people reincarnating. (Would a Buddhist celebrate or be mad to learn they've already escaped the cycle of suffering rebirth because there is no damn cycle? Also seems to contradict the next concept, since if there is no soul how the fuck are you reincarnating... eh, but I am no Buddhist and it does make sense that people would end up occasionally merging parts of Hinduism with Buddhism even if the end result makes little sense. With a broad enough definition of self you could argue we are all 'reincarnations' of each other, no souls needed, though this would be rather pointless.)
I must clarify that just like it makes no sense to claim all Christians believe the same thing, I know that not all Buddhists believe the same thing (otherwise it would be impossible for there to be violent Buddhists, and unfortunately there are) and it's possible I've misunderstood what the majority believe / translate their concepts to, since there is indeed a language barrier between me and most believers here. Nonetheless, the concepts I am going to go over here are very commonly encountered ones when one does encounter Buddhism, so that is the version I am going to critique.
Anatta (anattā), or 'no-self', is the denial of the soul or spirit, and with that part I agree. But it also says, in contrast to most Western philosophy, that there is no individual, no basis for identity, even though it doesn't dispute that there are associated sets of qualia/experiences, that is, that we seem to experience being a self.
See, this is where I get very confused, and feel that something is lost in cultural translation, not just between there and the West, but between the rest of the West and myself. To me, it makes sense to have a few terms that you simply define by pointing to a phenomenon and labeling it (I often call this 'axiomatic definitions' or 'definitional axioms'; I should probably decide on one term and stick with it at some point), and due to our limitations, the identity-driving set of experiences, the only set actually readily available to us and the set we have to use when arguing with, say, a solipsist, seems a perfect phenomenon for such labeling. What does 'being' even mean, if not this set of connected experiences, thoughts and motives, such that one can then deny it exists? Yes, we eventually broaden it, by noticing other individuals mirror us, to include other people in the categories 'beings' and 'identities', but this is necessarily a secondary step, as we are presumably aware of ourselves before we become aware of other people (though I suppose this may not always be true; in a world without mirrors, does one first become self-aware by observing others instead?).
However, I never see anyone else in the West making this argument (it is of course entirely possible I have not read enough people, mind you, I don't actually have a huge number of philosophy books and I don't even know where the ones I do own went after the move to the new house a few years back), so on some level the notion, while baffling, must not seem outright 'false by definition' or 'translational error' to them.
I'm aware that some of the philosophy of the East is fond of koan-like statements, but these are often solvable (though perhaps this is not their intention) by careful definition analysis. Take the joke that a white horse is not a horse. Under normal definitions this is nonsense, but taken instead as an equality statement about the word-sounds themselves, it makes perfect sense, as the sound or word collection 'white horse' is definitely not the same as 'horse'.
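To make that concrete, here is a toy sketch of the two readings (my own illustration, not anything from the original koan or its commentators): as labels the two phrases are not equal, but as a member of a category the white horse still falls under 'horse'.

```python
# Toy illustration (hypothetical, mine): two readings of 'a white horse is not a horse'.
horses = {"white horse", "black horse", "brown horse"}  # 'horse' taken as a category

print("white horse" == "horse")   # False: as word-sounds/labels, they are not the same label
print("white horse" in horses)    # True: as a member of the category, it is still a horse
```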
Here I am forced, if I am to make any sense of it at all, to use a very different set of starting 'definitional axioms', that is, definitions and rules-we-take-for-granted. This new set, for whatever reason, makes things more complicated than they need to be, because reality does not actually change when you re-define terms, even ones as fundamental-seeming as personhood; it often just forces you to invent a new term for what the old one used to mean (see my post on different cultural standards of politeness, and how I have a relatively weird one where I don't care about swearing per se but I do care about backing up your arguments instead of pulling facts out of your ass). It takes these collected experiences and motives as a basic starting point for our understanding of the world, but does not inherently label them a 'person', even though there is no claim of causal disconnection as far as I'm aware, and it further treats this starting point as something to be surpassed and even nullified, perhaps to go to nothingness, or just to become one with the universe in happy Nirvana (again, not all Buddhists believe the same thing, and I believe this is one of the points of variation?).
I think it is probably this secondary assumption, that this is a state to be surpassed, that is bleeding into the definition of the first. We tend to be very attached to our identities, so it takes something radical, like claiming we don't really have one, to make the idea of giving ours up more appealing. And the first definition is already starting with the broadened version of 'identities', as an unproven statement to be torn down, rather than taking 'identity' axiomatically / by definition (so that it cannot actually be untrue) and then broadening it by a series of reasonable empirical inductions (hey, those other people act like they also have an 'identity' qualia-base set, let's make identities plural) mixed with fairly simple definition choices (it is useful to put myself and you in a category that does not also include rocks, even if you later turn out to be a hallucination; I imagine it would be rather harder to decide whether you are hallucinating in a world with no other people, but thankfully that is not the one we live in).
In the broadened version there is indeed something to potentially argue with, due to the empirical component, so I suppose that is where the heart of the claim lies and what they are actually attempting to say: that under some frame of view there are no identities, and no continuous architecture in the outside universe devoted to keeping your qualia experience going forever, or even to keeping the same set of motivations while you are alive.
Unfortunately for them, I find the empirical component extremely compelling, I enjoy the usefulness of my definitions, and I do not accept claims simply because someone supposedly wise says so. It is true that we are not something separate from the world, and under a certain definition we are indeed the Universe (and thus can say fun things like 'the Universe can be a caring, loving place', which is great if you want to mind-fuck all the people who expect you to say the exact opposite, and how often can you say that about heart-warming statements?*). We are not, however, the whole Universe, and our actions really have no effect whatsoever on this fact, except insofar as we could work to make the percentage we take up 0%. A definition is not useful to me if it cannot be used to clearly communicate what you mean such that it could be used in a proof. Even if it is not necessarily clear to a layman, who struggles to hold really precise concepts in their head even after someone has spent effort explaining all the technical concepts in layman-friendly terms, it should not leave someone spending a lot of time debating what you actually meant or what the results of your reasoning are. A good philosophy should endeavor to lay things out as clearly as possible, define terms as needed where they could potentially confuse, and then construct its conclusions from a series of reasoned arguments, not simply start with the conclusions as many ancient books do, or make one half of the argument and then stop (looking at you, Krishna and his special afterlife rewards for believers, which is an argument only in the sense that it's an appeal to emotion, I suppose; the same goes for all the other religions that do that shit but then don't offer any more proof in their book than the book itself).
That brings me to one of my biggest issues with many philosophies: they are simply too old to enjoy the weight of modern logic, and on some level it isn't really fair to dissect them this thoroughly when they are simply not capable of speaking on this level, or it wouldn't be, except people keep using them.
In a different world, maybe instead of parents asking what religion the potential spouse their offspring brought home is, they'd ask how much of a Utilitarian or Idealist they are, and whether they are a humanist who doesn't believe in beating their spouse even if their spouse does something 'wrong' to make them mad, like cheat on them. Although, considering how some cultures consider asking about religion rude, it would probably be easier to move in that direction than in the even 'ruder' direction of asking whether you would murder your spouse if they cheated, even if asking that might save lives (some people are brutally honest, and someone who might murder you for cheating is someone who could murder you if they falsely suspect you of cheating, to point out the obvious; and for both genders I believe you are statistically far more likely to be murdered by a spouse than by a random stranger, although I haven't double-checked this in a while). If you can't weather such a question from someone's parents, you probably shouldn't be marrying them.
*Have I mentioned that warm and fuzzy Eldritch Abominations are my favorite kind next to cute sneks? I'm sure this surprises no one who knows me.
Invariant Morality, and two kinds of relative and objective morality
[2019, sequel post to above]
I've seen relative morality defined in two different but related ways in conversation*, which sometimes makes it confusing when someone mentions it and doesn't clarify which they are using, though you can generally tell from context.
The first, which I will call 'culturally relative morality', is where morality is viewed as heavily or even entirely a cultural artifact and thus we have no right to judge other cultures and their cultural mores. This can lead to a bit of a contradiction if it is in our culture to judge other cultures.
The second is 'situation-relative morality', which is actually pretty reasonable: it is the kind of morality that says it is more moral for a man to steal a loaf of bread when he is starving than when he is not starving, and for a poor man to do so than a rich man.
There is the occasional odd situation where someone tries to borrow notions from general/special relativity and compare them to relative morality, usually to exclaim that everything is relative these days to those darn whippersnapper liberal kids. This is, for reasons very obvious to anyone who knows the slightest thing about general or special relativity, incredibly confused: the most important thing about special relativity is not the relative part, but the invariants! Namely, the most famous one: the speed of light does not vary between observers. It is the exact opposite of relative.
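For the curious, here is the standard textbook statement of the kind of invariant I mean (ordinary special relativity, nothing of mine): different observers disagree about the time and distance between two events, but they all compute the same spacetime interval.

```latex
% Spacetime interval in special relativity: each inertial observer measures
% different \Delta t and \Delta x, yet the combination below is identical for all of them.
s^2 \;=\; c^2\,\Delta t^2 - \Delta x^2 \;=\; c^2\,\Delta t'^2 - \Delta x'^2
```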
Now, if we were to borrow from the real theory of relativity, I actually would not mind using the concept of moral invariants: some things may vary, and in fact may be made to vary in order to preserve the invariant, but the invariant always stays the same across situations. One may ask: isn't this just objective morality? Not quite. Or at least, not necessarily, because if 'objective' is meant to be the opposite of 'relative' (it is actually the opposite of 'subjective', which may or may not be part of your definition of relative morality), then we can quickly end up with two different definitions for it as well.
Objective morality often ends up meaning, in conversation, 'some higher being decides what is right', which isn't terribly objective-sounding if you ask me; it is just as subjective as a lower being deciding what is right. What it is supposed to mean, however, is morality that exists outside of yourself and is not subject to your whims, which counters culturally relative morality but not necessarily situational. Another definition, closely related, is in opposition to the second kind of relative morality, where morality is not situational; this generally counters the first kind of relative morality as well. Normally I would have just lumped it in with the first definition as a stricter version, but in conversation I most often see this one used at the same time as 'a higher being decides what is right, and this is true objectivity, because the higher being decides everything', which makes me suspect that 'on the streets of the internet' the definition really does vary, and you can't assume the other person means the same thing you do unless you agree on a definition first. People also sometimes try to use objective morality to mean morality existing totally outside of people and their desires, in an attempt at a contradiction: since morality clearly deals with people's desires, they claim that if there is any influence by people's desires then it must be subjective.
What discussions of objective and subjective morality sometimes miss is defining the second word: what exactly is morality and why should I care about your definition, especially if it does not seem to overlap what I consider the objective set of 'kind, thoughtful and equitable empathetic actions which do not discriminate toward anyone', or 'social justice', which I care about by default because I already have empathy?
You may claim this or that is moral and perhaps get some system somewhere to agree with you, but you cannot claim that an action was motivated sympathetically if it wasn't, and if the action turns out to do more harm than good and you persist in it, then you cannot claim to be acting kindly. If you take all the food for yourself and offer to share with no one else, you can't really claim this matches anyone's definition of fairness except perhaps your own, either. Whether or not an action does harm or discriminates is something that exists objectively, outside of yourself; certain situations may be harder to judge than others and may involve trade-offs in harm rather than a 'no harm' option, but one still cannot truthfully claim no harm happened. If a discussion about morality is getting bogged down with no agreement on the definition in sight (which would be a prerequisite to saying whether any part of it exists objectively outside of your perceptions, as it is rather hard to gauge the properties of something undefined), it may in fact be more helpful to abandon the word entirely and simply discuss directly what one actually cares about, since the word can get used for different things, like 'code of conduct' rather than 'empathic, thoughtful behavior'.
Now, if you ask me, I think morality is possessed of invariants, for a suitable definition of morality that happens to overlap with what people expect you to mean by morality, which doesn't have to be very complicated or even super-well defined here: things that harm the group as a whole are The Bad, things that are happiness inducing and desirable for people as a whole and help them out are The Good. There are relative aspects, like the stealing when poor and starving to death being more acceptable than stealing when wealthy (even if our legal system reverses this and makes it more acceptable to steal when rich, since they can pay fines and bail), but one notes that this could easily be re-phrased as a single invariant rule/axiom rather than two different rules: 'Do the least amount of harm, including to yourself, possible'. Theft is not immoral in and of itself in this situation, it is not axiomatically bad, but because of its capacity to cause harm we can derive from the other axiom, which does not vary depending on situation, a less fundamental rule to not steal if we aren't truly desperate.
The rules that get enacted only in some situations are the variants that our invariant demands if it is to stay true everywhere.
*I picked up much of my philosophy knowledge off the wild streets of the internet, if you haven't guessed by now, 'tho I have read the occasional philosophy book, I mostly dabble in philosophy for the sake of being better able to say what my own is and to steal concepts that I like.
Use citations in your work and think carefully about what will actually convince people in an argument
Let's first go over an example of a bad, bad way to argue, a way of arguing that unfortunately, social media has encouraged or in some cases made more difficult not to engage in. (I'm looking at you, Twitter.)
The one liner with no citations or sources whatsoever.
Example One: Popularity of politicians
I'll use a quick example, unfortunately from politics, but that is what comes quickest to mind when one thinks of obnoxious one-liner arguments. In the aftermath of the 2016 election, people would spout things like 'Hillary is the least popular presidential candidate ever' and other people would respond 'Wtf are you talking about? She won the popular vote!' and, anecdotally, in my experience, because the first person was not really interested in effective arguing so much as yelling their opinion, they wouldn't respond further on that particular inquiry, giving off the false impression that they had realized they were wrong and run off.
This is the kind of thing that would be easily solved if the initial party had simply bothered to use sources for their claims in the first place. I was somewhat surprised when, the next time I saw this particular argument on social media, someone actually did bother to respond 'There was a poll, Hillary and Trump were both unpopular on it.' Okay, one can see why some of the trolls may have avoided posting that information if they were pro-Trump, since it doesn't make them look any better, but one still boggles that ostensibly liberal people couldn't be bothered to give an actual source. Even that person was vague: they said 'a poll' but wouldn't say which one or by whom, which might be relevant if, say, the poll was done by Fox News, which a lot of people would discount as a reliable pollster (presumably, if you are arguing with a left-wing person, you want to use sources they'd trust, yes?). At a minimum, you'd want to know the margin of error, if this is a truly important part of your argument rather than just a talking point you are using to try and 'win' at all costs but do not actually care about that much.
In reality, there was a poll:
https://www.usatoday.com/story/news/politics/onpolitics/2016/08/31/poll-clinton-trump-most-unfavorable-candidates-ever/89644296/
It was done by telephone and included 1,020 adults. Trump was more popular than Hillary on it, but the margins were fairly close, 59% vs. 60% unfavorable ratings among registered voters, and with a 3-point margin of error either one could actually have been the least popular among registered voters. Including the non-registered, the gap was a bit wider than the margin of error.
It is entirely possible that, with only 1,020 adults, one might prefer the voting patterns of millions of people as data instead, and that one might be suspicious of possible skews based simply on who answers telephone polls: that tends to skew older. But in an argument where one person is simply shouting a one-liner answer with no sources, all possible nuance is lost, and if that person is an untrusted source themselves, the other people might, the moment the original poster fails to respond when asked for a source, assume that there is none, since they have better things to do with their day than fact-check every person who one-lines at them. Especially if their computer is extremely slow and it takes 10 seconds to load Google, this kind of thing is going to lead to the most dissatisfying end of an argument: it does not actually get resolved, and both parties may have thought they 'won'.
If Person Two was the kind of person who would have those kinds of misgivings, and Person One had an argument that would have soothed them (they could have pointed out times when small sample sizes have nonetheless been pretty representative, or looked into how the margin of error was calculated and whether it accounted for possible bias from who answers telephone polls) and actually caused Person Two to then agree with them, it simply never got that far.
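As an aside, the rough arithmetic behind a 3-point margin of error on a poll of roughly a thousand people is not mysterious. Here is a quick back-of-the-envelope sketch (my own, using the generic 95% confidence formula for a proportion, not any pollster's actual weighting methodology):

```python
import math

# Generic 95% margin of error for a proportion near 50%, with a sample of ~1,020 people.
# Illustrative approximation only; real pollsters weight and adjust their samples.
n = 1020
p = 0.5  # worst case proportion; this maximizes the margin
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"margin of error ~ {margin * 100:.1f} percentage points")  # about 3.1
```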
This kind of thing happens a lot, not just on Twitter, which seems hand-built to force people to one-line without citations by making citing twice the effort it needs to be. There's also, not coincidentally, a huge problem with art theft on Twitter and many other media platforms. All Twitter would have to do to help fix this is add a source/citation box underneath the main message box, where someone could select 'Me' or 'Someone else' or type in a link. Since half of Twitter seems to be just re-posting links, I'm sure many users would enjoy the extra space this would free up in their actual message. Other social media could follow suit, and while this would not totally stop the problem, it would at least leave the citation box staring at you every time you posted, encouraging you to do the right thing and provide a source for your quotation or artwork. It would probably make people chasing copyright violations happy too, so one would probably want to allow licensing information from time to time as well, like 'By someone else, but under fair use', although that would mostly be for images, as you would have fair use by default for a single one-line quote.
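If you want a picture of how little structure that would actually require, here is a hypothetical sketch (mine, not any real platform's data model or API) of a post that simply carries its source along with it, so the interface can nag when the source is missing:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the 'citation box' idea above; not any real platform's API.
@dataclass
class Post:
    text: str
    source: Optional[str] = None        # "me", or a link to the original work
    license_note: Optional[str] = None  # e.g. "by someone else, under fair use"

drafts = [
    Post(text="Both 2016 candidates polled as historically unpopular."),
    Post(text="Both 2016 candidates polled as historically unpopular.",
         source="https://www.usatoday.com/story/news/politics/onpolitics/2016/08/31/poll-clinton-trump-most-unfavorable-candidates-ever/89644296/"),
]
for draft in drafts:
    if draft.source is None:
        print("Nag the poster: where did this come from?")  # the UI nudge described above
```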
At this point, you may find yourself thinking 'But this would be work!'
Well, yes. The entire point is to have higher quality arguments instead of people shouting one liners and quick talking points at each other that are just repeats of talking heads elsewhere. If you don't actually want to convince in an argument, just to make yourself feel like you've 'won', is your shouting really making the world better or just making yourself feel better? Especially if you are just repeating what you heard elsewhere, possibly in far more detail?
[[While I used politics as an example, it is not the topic, and I'd appreciate it if you left politics out of any replies, as it seems like prime bait for exactly the kind of one-liners without citations that I hate, thanks. The exception is a non-aggressive example of, say, arguments that worked on you or a friend! You can also share your experiences of what kinds of arguments you saw that would have been vastly improved by citations, if you like.]]
Example Two: The effectiveness of diets
This is a big one for me. I get really annoyed whenever I see claims that 'diet X is the best diet!'; chances are the person making the claim did not do nearly enough work analyzing studies to actually support that conclusion. Anecdotes are next to meaningless here.
What happens here is that, at best, they will generally cherry-pick a single study rather than looking at the broader set of studies or the more critical ones. Science is a consensus of studies and scientific evidence; you cannot just cherry-pick a single study. If I run 100 experiments, each with 100 participants, by random chance I could get one of the studies to say something like 'green peanuts are unhealthier than red-colored chocolate peanuts', simply because it wouldn't be that hard for one group of people to all get sick for a completely unrelated reason after eating green peanuts, such as a flu doing the rounds at the testing office.
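To see how easily that happens, here is a toy simulation (my own sketch, not any real study): two groups with identical odds of getting sick, compared a hundred times over. A handful of the comparisons will look like a 'finding' purely by luck.

```python
import random

# Toy simulation: 100 'studies', each comparing two groups of 100 people who have the
# SAME 10% chance of getting sick. A crude cutoff (the groups differ by 8+ people,
# roughly comparable to the usual 5% significance bar for this setup) still flags
# several studies by chance alone, which is why one cherry-picked study means little.
flagged = 0
for _ in range(100):
    green = sum(random.random() < 0.10 for _ in range(100))
    red_chocolate = sum(random.random() < 0.10 for _ in range(100))
    if abs(green - red_chocolate) >= 8:
        flagged += 1
print(f"{flagged} of 100 studies 'found' a difference that does not actually exist")
```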
Another big one I see occasionally is fat-acceptance advocates claiming that 'diets do not work', pointing to a single study and saying that follow-ups on people who did X diet after the study ended often found they had gained back the weight. You would need more than this to be convincing. Why? Because once the study ended, the people's food intake was no longer watched. Did the people actually continue the diet after the study? If they did not, then it's not really a smack against the diet's effectiveness. What it really says, if the diet was not continued, is that diets are hard to stick to, but everyone knew that. (For the record, I don't think demonizing people for being fat is a good idea, and I think starvation diets should be eyed very warily, because making your body think it is starving to death sounds like a stupid idea to me.)
What would be more convincing is finding that diets continued after X number of years start to fail to work even when food intake is carefully monitored during the study. A lot of studies do suffer from the problem of being too short: if you follow people for only one year, you don't allow for the possibility that the body will, over time, try to re-normalize back to the weight it was at, even at the cost of other bodily functions.
The short of it is, it often isn't enough to use a single citation. You also need to think about how that citation actually supports your argument and if it offers enough information, and you need to look for other studies that successfully reproduce the results. Try to aim for at least two citations, if you want to do the bare minimum.
What I DO know is that studies in rats and mice find that reduced-calorie diets, where the rats are given a short period every other day to binge as much as they want on healthy foods and a bit less food on the other days, increased lifespans and maintained a healthy waistline. The rats had no ability to cheat on their diet, but they were not starved or deprived of opportunities for (healthy!) binging until full, as some diets ask you to do.
I encountered one person who claimed that going on a diet made their hair fall out. This person later turned out to be a con artist who constantly claimed to have something wrong with them, so I would take such a claim very, very dubiously. A proper diet with good nutrition, where you have an hour every other day (or, if you can't manage that, every day) to binge on as many fruits and veggies as you want, should not cause your hair to fall out, nor convince your body that it is starving and has to hoard resources away from non-vital things like hair, since part of the idea is to make sure you do feel full.
End
I get it. Using sources, making effective arguments, figuring out what your opponent actually believes instead of a straw man you can demonize: it's a lot of work. I don't always source as often as I should myself. In fact, this very post could probably use a lot more citations and helpful background links for the example arguments, and I'll try to get around to that sometime, maybe. But even just putting in a little bit of effort could, I think, make a lot of difference and give us a better society. Once the argument is made, the wonderful thing is you don't have to repeat it; you can just post a link to the original argument, or give the name if it's a book (be aware many people will not want to read a giant political screed that they have to buy and spend hours on, as opposed to a short 10-minute essay). In fact, the careful, detailed argument you want to re-post doesn't even need to have been made by you.
It could be some weirdo with a blog.
Just do me one more favor. Try not to demonize too much. It's very hard to convince a person if they feel you think they are a demon, or that someone they half-like is a demon. Some people are not convince-able, and the only words to describe them are pretty awful, so don't get swept up in tone-policing others either: they may not be trying to convince those people in the first place, so a critique on those grounds may be mis-aimed. Unless, that is, you think those people really are reachable and it is a mistake to abandon them, in which case it may be helpful to provide evidence of that before you tone-police; otherwise you are just making yourself look really obnoxious and, crucially, for the theme of this post?
Not convincing anyone but yourself.
https://www.sciencedaily.com/releases/2022/05/220505143753.htm
Loose categories and consciousness as a physical phenomenon
Phenomenal/physical consciousness and categories
Important context/background: Mathematical categories are something rather specific, and I don't think they per se are necessarily the math we are looking for, so I will be using 'category' in a slightly looser sense here. Still, I think something at least similar will be useful, because a rotation of 'red' doesn't stop red from being red, and categories are all about isomorphisms like that. I'm more interested, however, in the concept of extra data relating an object to other objects, beyond what a set by itself naturally conveys, and that's the main concept you should carry with you here.
---
I got myself a bit confused in the past by what the 'consciousness is an illusion' people are talking about, but I get it now. They don't mean that qualia aren't real; they could mean the qualia aren't physical, or that the 'stream of consciousness' part is nonexistent (we only exist on Tuesday :P). The 'physical' one is, ugh, a potential confusion by itself: what do you mean by physical? To me, the definition of an illusion is 'a misleading qualia/representation of information that makes something appear to you as representing something other than it is, such as an optical illusion', so it is difficult to imagine how you could have an illusion of qualia without that itself being a qualia.
The first problem with saying 'if qualia are physical, then we must be able to observe the qualia of other people' is that observation itself is going to be filtered through our own qualia. We already know that a red apple is not inherently 'red': someone with color blindness will see something different, because the red is a processing effect of the brain. If we stimulate the brain, we can make it see funky colors. That's physical. No one in camp Physical is claiming that if you pry open someone's brain you'll see colors floating there; that would ignore the whole 'apples are not red but look red' thing.
The other problem with this is that, in a sense, we maybe can? It's called empathy: when our brain understands the relationships and what the other brain is going through, it can mirror it. We could hypothetically imagine hooking up two brains, one that experiences a qualia the other doesn't, and having them communicate the experience. If you consider the split-brain hypothesis, our brains may already do this between their two halves. The problem is getting the unconscious brain to load the correct state, since the conscious and unconscious don't have perfect communication and it's very difficult to communicate qualia information.
The people who say consciousness is an illusion are the same people who say p-zombies are possible, that we could imagine someone having all the same responses and yet it being completely 'dark inside'. Well, I can imagine energy not being conserved; that doesn't make it logically consistent or possible.
The thing is, I count 'dark inside' as a qualia itself. If we want to make a mathematical theory of consciousness eventually, we will need something more sophisticated than set theory, but we're still going to ground it in numbers (if perhaps more sophisticated ones, with categorical relationships and perhaps vectors 'baked in'), and 0 is still a number. That leads to the question: why that qualia and not a different one? We then note from our own experiences that certain physical actions cause different qualia in us. Occam then says the most reasonable conclusion that explains all the facts is that qualia are generated by physical things.
Now, in order to confront the 'qualia not seeming to map to we see a thing and then do a thing' question, we actually need a more sophisticated mathematical model, because there is one way which maps very poorly and one way which maps much, much better. The first is the one that you are probably imagining, which is based on set theory.
If I have a set of wavelength numbers, say 0 to 600, these seem to have no correspondence to qualia whatsoever. IF, however, I have these numbers in a categorical space where the numbers carry extra data about their relationships to each other (morphisms: blue transforming into red from two different directions, blue to green to yellow to red, or blue to violet to red; contrast this with 600 going to 0 on the plain line, which is single-directional, since you can't add +1 to 600 and get 0 unless we add extra information about it being a modulo clock space), and the symmetries of the space are not those of a simple line (if it is 1d, it isn't Euclidean 1d but a circle, a 1d object we actually need two dimensions to look at properly), then we could have something that actually looks like the mappings red, blue, green. I'll do a simpler example with black and white first. Black and white are opposites, and -1 and +1 are opposites, but black is not -1 light, it's 0 light, or very little light.
When we take into account that our 'mental space' will never have negative light, that our space is not possessed of any -1s, we can imagine encoding relationships onto the numbers that /do/ exist in our system of '0, 1'. 0 and +1 can now be opposites in the transformed space, where we encode information about their relations to the space as a whole and have no '-1' to play opposite to +1, leaving only 0 to play that role. Now we have numbers, think of them as a 'mutated 1 and 0' if you like, that map much more closely to black and white's actual relationship with each other.
For red, blue, and green, because we have three mutual opposites, we'll need a more sophisticated system, but the point is that we could in principle make a mathematical system that does this, where a '0 to 600' space actually maps in a very different way than the traditional '0 to 600' space. Think of it as loading a different axiomatic system for the same numbers, to make those numbers act in a different way while still being the same numbers, the same raw physical states underlying it. Or think of it like curved space in general relativity: we curved our coordinate system so much that we bent new axes into our 0-to-600 line and made new opposites! If we bent it in the middle, 0 and 600 would now be opposites, and 300 the neutral, 'zero-like' middle value between them.
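As a throwaway toy (my own sketch, not a theory of color vision), here is the 'modulo clock space' idea from a couple of paragraphs back: the same 0-to-600 numbers, but with their relationships re-encoded so the line closes into a loop. The fold-in-the-middle version above would be yet another choice of structure; the point is only that the extra relational data, not the raw numbers, is what changes.

```python
# Toy sketch: the same 0..600 values, but given circular relationships instead of
# plain number-line ones. Distances are now measured the short way around the loop.
TOTAL = 600

def hue_distance(x, y):
    d = abs(x - y) % TOTAL
    return min(d, TOTAL - d)   # shortest way around the circle

print(hue_distance(0, 599))    # 1   -> the far ends of the line are now neighbours
print(hue_distance(0, 300))    # 300 -> halfway around: the circle's version of 'opposite'
print(hue_distance(0, 600))    # 0   -> 600 wraps back onto 0, like a clock
```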
Then we would need yet more sophistication for the fact that there are multiple sets of qualia, not just color, and they also have interesting mathematical relationships. The somewhat ignored 'sense of body place in space' qualia seems to have overlap between the sound, touch and color senses (if I translate between sound and color information, or touch and color, I notice my 'sense of space' is a common factor between them; likewise, if I translate information between touch and sound, I notice my sense of timing is important for both, which may be why we dance to music), and it seems like a more primitive version of them to me. Sound encodes its relationships a bit differently than color does, and so do taste and touch. Touch/feel is fascinating because it is entwined with value judgements; it's not a fully 'neutral' sense, and this lack of neutrality also pours over into the other senses more subtly. We may find, in the process of chasing the most basic qualia, that we must make a value-space which is also more sophisticated than simply -1 to +1, with values in the mental space having meta-data relating them to one another above simple set values, and I suspect higher-level categories and topological transformations: a bad state that one is averse to makes little sense without the concept of a good state that one prefers over it and a notion of change, and change as a notion is something that a simple 1, -1 does not encode very well either, yet which is, in terms of relationships between numbers, not that difficult to express: it is an inequality of states over time, with equivalence rather than strict equality for the object being changed.
Is a river the same river a moment later? Yes and no, is the mathematical answer, depending on what equality or equivalence you wish to invoke. The mathematics we must use must be far stricter and more sophisticated than what we normally use, demanding precision in questions like these.
And we could imagine, if we found out that our neurons corresponded to such a mathematical mapping in this more sophisticated categorical manner, that we could develop a way of transferring these mappings directly from one brain to another, and then actually experience another person's qualia: observe a thing that was originally outside of ourselves, matching most people's definition of physical.
Now, it is possible you define 'physical' in a different manner than me, in which case that would still not be proof for you, but for me that would be worth including as physical, even though it is not an object per se but an arrangement in a physical system. At that point it just becomes a question of whether you count patterns like 3, 2, 3, 2 as physical, or as things that arise from the physical and are bound to it and exist 'on top' of it; I see this as a distinction with no real meaning, as it is just a definitions quibble, and those are by nature unable to alter reality no matter which definition wins.
Scientism and autovaluism
https://aeon.co/essays/science-is-not-the-only-form-of-knowledge-but-it-is-the-best
I would add an additional category. I would argue that there are parts of philosophy which are not knowledge at all, yet still useful. For instance, people might say 'It is my philosophy we should be kind, not based on any mechanism I am aware of but because it is just my preference.' which makes it clear that most of that isn't really any kind of knowledge (excepting the 'that I am aware of' part) but rather a personal choice.
So I would consider myself a 'medium-strong scientism-ist', because I think there are useful things which are not, strictly speaking, knowledge, but just personal choices that fall loosely under the handle of 'philosophy' even if it isn't terribly traditional. My autovaluism, for instance, is largely a system of definitions that one finds useful to use and a system of personal choices made within the context of those definitions, and largely independent of the knowledge system (for example, I would not stop being autovaluist if I discovered I was a brain in a tank tomorrow) beyond the basic fact that consciousness exists and there may be multiple consciousnesses, which value and find meaning in things.
I label it a philosophy, but if one considers philosophy a kind of 'knowledge', then it is not that, but something different, like a code of honor perhaps.