Science 101
First, what is science? If you don't have a very clear idea of it, my added philosophy on the subject won't be of much use to you.
Science is not the same as math, although it often uses math to make its statements precise.
Science is, roughly, the attempt to study the world in a self-correcting manner that compensates for any individual scientist's (or experimental apparatus's) biases and errors. It does this by accumulating observations and asking what hypothesis best fits all observations so far, without contradiction or leaving some part of a phenomenon unexplained, and by offering predictions of future observations (this is where it diverges from pseudoscientific statements like 'A wizard makes everything happen', which has no predictive ability but can 'explain' everything after the fact).
Scientists may conduct experiments to increase their pool of observations and to try to force competing hypotheses into contradiction until there is only one survivor. However, science is not quite the same thing as conducting experiments!
Note I said future observations, not future phenomena. It is possible to make predictions about the past if the past is imperfectly known. And without a time machine, it is difficult to conduct experiments on the past. Instead, scientists who study the past (such as all cosmologists) usually try to get more data and refine their instruments to be more sensitive and accurate.
So! That part should be fairly uncontroversial. Onward to my philosophy and personal thoughts, which may be somewhat more so. For instance, Carl Sagan apparently once said that 'absence of evidence is not evidence of absence', which I disagree with slightly, although it's possible we are using the word 'evidence' differently.
My philosophy
I am somewhat sympathetic to Lakatos, personally, even if I'm prone to forgetting his name. The concept of the research program, which may become degenerate, rather than the single hypothesis with a single test or refutation, echoes a notion that has been bouncing around in my head: that when we report on science, we really need to put more emphasis on the synthesis and consensus of studies rather than on individual ones. A one-off experiment is not very meaningful if it was done poorly and cannot be replicated. We (and scientists themselves) also need to put more emphasis, when talking to laypeople, on the margin of error and what it means. It is more difficult to include wiggle room for 'unknown unknowns' as potential error, but being clear about one's assumptions, and about how much those assumptions can bend before the entire thing breaks, makes this hypothetically possible. You don't necessarily need to know what the error is to know that your measurements aren't fully consistent (and thus potentially not fully accurate) once you demand a certain level of precision and have ruled out common sources of error to about that level.
However, I still think falsification is an important ideal for a scientific theory to aim for; I just think that, generally, a theory must develop a fair bit before falsification is actually possible, and that in practice it is quite difficult. To explain why, I must go back to that old peeve of mine:
When is something evidence of absence?
If, as is often said, absence of evidence is not evidence of absence, and I go into an empty room and see an absence of evidence for my grandmother, and then cannot say that she is not in there despite that absence of evidence, have we not encountered an absurdity? There must come a point where, in fact, an absence of evidence is evidence of absence, and that is when an absence is observed across a fully or nearly fully searched domain. If the domain can be searched completely, then it becomes proof of absence (although good luck with that in practice, of course). Given certain parameters, like the knowledge that my grandmother cannot shrink or turn invisible, looking over a completely empty room should prove that she is not there. I have falsified that hypothesis.
In the context of evidence of absence requiring a domain search, we can quantify how close we are to it by the size of the domain, how much of it we have searched, and whether 'doubling back' is necessary, i.e. whether the part of the domain we have already searched can change and require searching again. In practice, in most circumstances we cannot perform a full domain search. However, we can search a damn good portion of it.
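That 'how close we are' intuition can be made concrete with a toy Bayesian sketch. Everything here is a made-up illustration, not a real search: the prior, the assumption that the target is equally likely to be anywhere in the domain, and the detection probability are all invented values.

```python
# Toy Bayesian model: how strongly does searching a fraction of a domain,
# and finding nothing, count as evidence of absence?
def posterior_present(prior, fraction_searched, detection_prob=1.0):
    """P(target present | we searched this fraction and found nothing).

    Assumes the target is equally likely to be anywhere in the domain,
    and that searching its actual location spots it with detection_prob.
    """
    # Chance of finding nothing even though the target is there:
    # it sits in the unsearched part, or we searched its spot but missed.
    p_miss_if_present = (1 - fraction_searched) + fraction_searched * (1 - detection_prob)
    p_miss_if_absent = 1.0  # an absent target is never found
    evidence = prior * p_miss_if_present + (1 - prior) * p_miss_if_absent
    return prior * p_miss_if_present / evidence

print(posterior_present(0.5, 0.0))  # no search yet: belief stays at 0.5
print(posterior_present(0.5, 0.9))  # 90% searched, nothing found: ~0.09
print(posterior_present(0.5, 1.0))  # full search, perfect eyes: 0.0
```

Each empty sweep pushes belief in presence down, and a truly complete search with perfect detection drives it to zero: proof of absence, exactly the grandmother-in-the-empty-room case.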
A man, before the discovery of Australia, declares there are no black swans. This is somewhat reasonable if he keeps the statement purely about the domain that has been searched: England. If he extends it to mean everywhere, including where he has not searched, then it has several continents' worth of potential error. One of them, Australia, turns out to actually have black swans, which is less surprising when you state 'several continents' worth of potential error' than the phrase 'black swan event' would usually like to evoke in common usage.
It is because of this context that I can say with reasonable confidence that there are no naturally polka-dotted purple swans. The domain of Earth today has been far more searched. I can't claim it with total confidence, of course, but a large population of birds with flashy coloration needs somewhere to live, and humans have been just about everywhere. Unless these swans are living in an unexplored cave system or on the deep sea floor, which is very unswan-like behavior, there really isn't a whole lot of room left for them that we haven't searched yet or aren't currently looking at. There's no deep physical principle preventing them, as there are purple birds and there are spotted birds, although it is less common to see the two in combination; but if there were a deep principle against it, then that would be another point in favor of their exclusion. (In practice, it is true that purple feathering is not as easy to evolve as black or white feathering, requiring far more mutations, so we should 'expect' black swans, even if we haven't seen them, far more than we should expect purple swans, because in most species only a single mutation is necessary to give black coloration.)
Lakatos would say that because we must test a theory and its auxiliary hypotheses at the same time, say Newtonian theory and the existence of Planet Vulcan, we can't really falsify the theory, only a combination of it and the auxiliary. I don't fully agree with this, although I mostly do (especially if you go to extremes and count anything outside of solipsism as an auxiliary hypothesis). For instance, I could propose a theory which demands that only certain sets of particles exist. If someone discovers a particle that is not in that set, my theory is then falsified, end of story (yes, the discovery of said particle assumes several auxiliary hypotheses, like the experimental apparatus being set up correctly, but let's assume there was significant replication, so that this auxiliary has been pretty well covered). Not all theories are amenable to amendment by tacking on extra pieces, like another planet or dark matter, and the degree to which a theory is amendable that way is a reflection of how early it is and how much of a domain it covers. For example, a slightly nutty one, in that we would never be able to fully calculate it: were a theory to cover the totality of the universe and predict, to exactness, every particle and every action that ever happened, any deviation larger than the margin of error on our instruments, no matter how small, would falsify it. Because it is a zero-parameter model, there is no wiggle room to fix it if things go awry. You could not insert an extra planet, because it specified exactly how many planets there should be already.
In practice, even if we had such a theory there would still be wiggle room and error, simply from our instruments, or from the fact that we would surely have to make approximations for over 13 billion years' worth of calculations. But the degree of error could be driven so low that, like the degree of error of my eyes (which sometimes blink and do not see microscopic objects) when I search an empty room, we could still reasonably declare the theory false or true (my blinking does not make it reasonable to conclude my grandmother was actually in the room). For instance, the discovery of a particle that the theory says cannot exist, which is something much easier to calculate than the position of every planet in the universe. One can go quite a bit away from this extreme of a perfect theory and still have falsification, but the science must be somewhat developed before this is really possible. For one, you need the ability to perform a search in the first place. For another, the theory needs an actual domain which will not suddenly shift on the whims of its believers; this is one of the things separating science from pseudoscience. If the domain is vague, as Newton's was since he never clarified where gravity comes from, then there is more room for the insertion of auxiliaries which may save it. However, since Newtonian theory pretty clearly demanded another planet, given the knowledge we have of the solar system, and the domain for that planet was searched pretty well, I would declare it in fact falsified and the existence of auxiliaries pretty irrelevant: we are not going to suddenly discover that the planet Mercury or the moon was fiction all along, or that scientists have been hiding that Venus has a secret twin.
Since it never clarified what gravitation truly is, only its base behaviors, there is still room for a modification of the theory, though not of the theory itself in the domain it actually worked in ('things appear to fall like so'); arguably, that is what GR is, as it approximates Newtonian behaviors in the domain where they did work well. (The 'absolute space' part of the theory was not something anyone really tried to falsify until GR came around, in part because of a lack of alternatives, and so should perhaps not be considered part of the 'bit of the theory that works well under testing'; when you throw that out, GR looks even more like a modification of Newton, if a very drastic-looking one.)
In fact, under the 'domain search' model of talking about theories, we might say Newton's laws are still true under the appropriate domain of low speeds and low accuracy, where relativistic effects (and quantum effects, if one wishes to include a quantum gravity) can be discounted. Just not the reasoning behind them, which assumed absolute space; not coincidentally, absolute space was not part of the domain being searched at the time the laws enjoyed their greatest success. People asked how they could test gravity, not absolute space. Part of the lesson here is that we should be wariest of the parts of a theory that have not actually been domain searched, only implied by the success of the theory. Applying this to GR, we see a story similar to Newton's. General Relativity enjoys great success over its searched domain, which is also the domain it speaks of. Where we have not searched and where it does not speak very well (and in fact breaks down or goes silent, like inside a black hole, or in the very first moments of time, or anywhere its interplay with quantum mechanics becomes important, which may include mundane macroscopic objects in superposition), we know it must be replaced. The difference is that it is much more difficult to search where it breaks down, because we can't exactly go to the center of a black hole or time travel to the big bang. General Relativity has a broader and more searched domain than Newton's, and is a more developed program. Nonetheless, it still has some wiggle room in dark matter and dark energy. Dark energy does potentially follow from its equations as a 'new' phenomenon, the cosmological constant, but dark matter is more like Planet Vulcan in that we have to assume we screwed up our domain search, and by this point we've searched pretty extensively. So if I were to bet, I would say dark energy is real and dark matter is not.
Well, unless you count quantized space-time that only interacts gravitationally and via 'dark energy' expansion as dark matter (though I think that would just be gravitons; I suppose there could be multiple kinds), a possible version of quantum gravity, in which case the same point remains: it sneaks us a hint that gravity must be modified.
Basically, I am quite Team Popper under appropriate conditions and with modification to give some consideration to auxiliaries (it's not just about falsification, and modification of a hypothesis is often more appropriate!), but I think Lakatos introduces some useful vocabulary, and that there is sometimes too much emphasis on single studies without considering the whole picture. This leads to laymen accidentally cherry-picking, because they don't have the context to know that the new study on the health benefits of milk they found is only one of many, or how to compare them; and I think papers themselves benefit from taking a whole-field view from time to time. When this is done too infrequently or not at all, we should demand it. My impression is that this is a bigger problem for the media coverage of science than for science per se, although there are some difficulties in getting attention/funding for experiment replication, which is 'not sexy' unless it claims to do something novel, and I do consider replication or 'failure to find interesting results' an important part of the totality of the research program. It's good to know what was tried before and didn't work, to generate novelty.
I do not think falsification should be the only criterion, especially when a science is still in its very early days and struggling to figure out how to search and what the domain is (like the field of the study of consciousness right now), but I don't think something should be considered properly scientific unless it aims, as a major goal, at a way of falsifying itself at least hypothetically. (Astrology, not to be confused with astronomy, may be falsifiable but pseudoscientific; the problem is that the practitioners don't take any falsification seriously and make excuses.) Falsification in that sense is a minimum. You also want confirmation, and novel confirmation at that, although again this runs into domain search: hypothetically, you could search a domain so completely that there is nothing novel left to uncover, although in practice this will never happen. In that instance, you would settle for the theory which explains exactly the totality of the domain and predicts nothing else, since any excess would be something that is not in the domain. Predicting things that don't exist is not a good look for a theory.
This is, in a sense, the difference between the 'this is reality, and it is limited in the physics it can have by mathematics and logic' hypothesis and the 'we are a brain in a vat or bedeviled by an evil demon' hypothesis: the latter predicts 'more possible universes' than the former; it would predict basically anything whatsoever, and so it 'over-predicts'.
In this sense, I would say that dark matter started off as a good scientific field, but as the domain continues to be searched and the program doesn't really generate any novel, correct predictions, it is threatening to become degenerate. Desirable particles with just the right properties for the WIMP miracle weren't found, for instance, and positing new particles that lack any justification except 'we need more mass' starts to look iffy after a while, as they keep not interacting. The unsearched domain moves to the ridiculously exotic and to the super-weakly interacting, tuned to be just out of previous experimental reach. At this point, I wouldn't consider a new form of dark matter to be very scientific unless it was posited for reasons other than inching just out of the last domain sweep; something like the axion, which serves an actual framework purpose, is appropriate. I don't think it is right, mind you, because the searches for it have not panned out either, but the original motivation for it was quite scientific in my book, and the movement both to broaden our domain (tying gravitational mysteries to a mystery of the strong force) and to put stricter standards on it is the correct movement for a scientific program.
When at all possible, one should aim for the decreasing of gaps and vagueness and for both broad and strict predictive power in a domain.
That is science.
It is also, interestingly, just a twitch away from mathematics, where movement toward a proof also often starts by proving some number of objects obey a certain principle before it is proved that all of them do or that one of them does not, and where a lot of improvement could be considered making statements less vague and yet also more powerfully generalized.
[ TLDR version of this section:
Do not cherry-pick. Falsification needs to be considered in the context of domain sweeping, and as subordinate to the even greater concept of strict a priori prediction ('strict' meaning you didn't also equally predict a number of non-satisfied futures).
Update: I wrote a much shorter post on falsification later about doughnut and coffee cup theories, although it doesn't cover talking about absence of evidence, it does make the 'the still true under a certain domain' argument a bit clearer by talking about equivalent theories that look very different at first glance.
Another update: Here are some of my latest thoughts on how falsification is actually a subset of caring about strict a priori prediction matching. The 'strict' part is important, and makes Occam's razor a subset behavior too. The a priori part may be a bit looser: if you generate a model from just one section of your data, and then can go over the rest of the data (which you did not generate the model from) and see if your model 'predicts' it, I'd be willing to count that, although it obviously is not as optimal as genuine a priori matching. This is basically one method of testing AI: if you trained it on pictures of cats and dogs and now want to test it out, you don't need to go and get new pictures of cats and dogs if you reserved some of your original pictures and didn't use them in the original training; they will be effectively new to the AI even though they aren't new to you.]
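That held-out-data idea can be sketched in a few lines. This is a toy with a fake dataset generated from an invented law (y = 3x plus noise), not any particular AI system:

```python
# Sketch of held-out validation: build a model from one slice of the data,
# then see whether it 'predicts' the slice it never saw.
import random

random.seed(0)
data = [(x, 3 * x + random.gauss(0, 0.5)) for x in range(100)]
random.shuffle(data)
train, held_out = data[:80], data[80:]  # reserve 20% as effectively-new data

# Fit a one-parameter model (a line through the origin) on the training slice.
slope = sum(x * y for x, y in train) / sum(x * x for x, y in train)

# Score it against observations the fit never touched.
mean_err = sum(abs(y - slope * x) for x, y in held_out) / len(held_out)
print(f"fitted slope ~ {slope:.2f}, held-out mean error ~ {mean_err:.2f}")
```

If the held-out error stays small, the model 'predicted' data it never saw; if it blows up, the model merely memorized its training slice.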
Other stuff:
I also mused a few times that a mathematics that could solve the nothingness problem would be essentially the same as one solving the problem of time: a math where '0', or the simplest possible state, implies/requires 1; a 'parts imply the whole', 'past implies the future' kind of math, dramatically opposed to the math of today, where a number like 3 requires 1 and 2 to exist for self-consistency: a 'the whole implies the parts' logic. This would solve the Last Tuesday problem, in which the universe could have simply started last Tuesday, since if you could show there was any kind of 'parts imply the whole' logic, then that scenario would be rendered inconsistent.
I think it might have been in the 'what is unique about humans is over determinism' post that I first mentioned this, although I've been thinking about it since about 2015. (Note: Even if you believe in God, you have to have logic answering why reducing to the simplest possible state and simply zeroing everything out produces God instead, which presumably you think is described by 1 and not by the 0 of nonexistence, right? Alternatively, you think God is the simplest possible state and there is no extra step after zeroing out, which is to say the most zero-like system is incredibly complex and, well, not really any different from the Last Tuesday hypothesis, since presumably you don't think God is less complicated than Last Tuesday; hopefully you can see why this does not actually solve the problem but merely pushes it back one step. And if you are claiming we can skip past generating 1, 2, 3, etc. (presumably your God could count to at least 1, right? And imagine the number 2?) from 0 to N, you are essentially also claiming a 'parts imply the whole' mathematics, since a whole package came at once and the simplest part cannot exist in isolation in your system. It is just that you are implying it over space instead of time, but those two things are not as different as they may first appear. In fact, space makes very little sense without events to measure distances.)
I think I have a notion of what one should do next, to find said math. It's wonderfully noncommutative. But I'm not planning on posting it here. I am struggling with low motivation but do want to eventually try to actually publish a proper paper, so in the meantime you will just have to suffer. :/
Moar posts ahead. First up, the sequel post to this one.
doughnut and coffee cup theories
Sequel post to the above. If that doesn't interest you, then just skim down until you reach 'proving a negative: the antiswan' for newer material.
Doughnut and coffee cup theories: Equivalence effects on falsification
I've nattered on about this before, but I felt like doing it again: rephrase it, use a different analogy, I guess.
Suppose you have two very different objects: doughnuts and coffee cups. You make some theories using them as building blocks, and then are surprised when you can swap the two. They are completely different things, and yet, somehow, your theories using them have some degree of equivalence, even if one turns out to be better than the other.
The reality is that it is not just the theories that have equivalence. The doughnut and the coffee cup both have a single hole. They are topologically equivalent.
So, when we consider whether a theory is falsified and find ourselves, say, weakening auxiliaries when something doesn't add up quite right (guessing that there is another planet when Newton goes awry), our first conclusion shouldn't be 'gee, falsification is not actually that great after all', but to wonder at the domain of equivalences to the theory, and at the allowable deformation given our degree of knowledge so far. Maybe there is another theory which lives in an area of accuracy we haven't searched well yet, which is a reasonable transformation of our current theory and, in a sense, 'topologically equivalent' in some manner.
If we allow, for instance, cutting new holes and pasting and gluing to fill up holes, then our doughnut is now not just capable of equivalence to the coffee cup but to an object of any number of holes. This is obviously undesirable, as we've lost a very interesting relationship and indeed any kind of meaningful relation at all.
In a theory, this would be like guessing there is another planet when we've searched very thoroughly for another planet in the given region and have a pretty good idea there is not an extra one, say, near the sun. We glued in material to fill a hole where there was not one, just to make sure all the pieces got used. But this is then no longer in the set of appropriate topological equivalences of our old theory, in spite of the fact that it is the same theory.
We effectively give up talking about falsification in black-and-white terms for theories that have offered correct predictions (theories that never offer any predictions without relying on every kind of transformation, no amount of 'appropriate' stretching sufficing, are still simply falsified), in favor of talking about falsification in domains instead. But we have not lost power in our framework; arguably we have gained it, as we can now see when a theory starts to turn into pseudoscience, and we have taken the mystery out of why two theories can be so much alike despite using completely different objects.
science is not about falsification per se, more a priori prediction matching
While falsification has some merit, and clearly one should aim for it, it has the 'problem' that it actually depends on the thing it is meant to replace: 'a reasonable degree of confirmation'. It is, therefore, something of a bonus, available when one already accepts a number of background postulates, like that one's experiment has such-and-such a margin of error, and that such-and-such a theory confirmed by a number of other experiments (like the theory of light having both wave and particle nature) is correct.
So what most clearly demarcates science from pseudoscience/woo?
One thing that really stands out is the sheer specificity of predictions.
In woo, you see things like 'Next week, something good will happen to you, although it might be balanced by something bad on another day', which are so vague they are almost guaranteed to come true. With science, you will see something more like 'One person among Bob's work group must be up for promotion considering company needs and the pattern of past promotions, and Bob is a white male in a company with a tendency to hire white males. Further, Bob is well liked and seen as confident, which is a common predictor of whether people will see Bob as competent regardless of whether he actually is. Finally, Bob is actually competent at his job, and has made noises about asking when his next raise is coming after all the overtime he's been putting in (something the others in his office have been too shy to do, seeing it as too impertinent), and his boss made a thoughtful noise instead of outright shushing him. After a comparison study of other men who got promotions, we can deduce there is a good probability Bob will be the one to get the promotion.' Or: 'The moon will be at exactly x,y,z position next month and appear full on day 7.'
Falsification is swept up as a subset of caring about a priori prediction matching.
When Susan predicts, based on her tarot cards, that her favorite pick will win the presidency, claims this is scientific because it is falsifiable, and then promptly makes excuses when it doesn't work; or when Mr. Preacher claims the world will end tomorrow, then claims he got a personal message from God delaying it to next year, and then proceeds to do the same again when Jesus yet again fails to show up (and then, 2000 years later, Mr. Preacher has followers who do the exact same thing and are also startled when Jesus doesn't show up), the fact that these claims were falsified stands out strongly against them; but at the base of it, their a priori prediction matching had a long run of being completely lousy.
So this allows us to draw the correct contrast between an experiment accidentally indicating that neutrinos move faster than light, with the experimenters then figuring out that this really was just a mistake in their apparatus, and some wooster who keeps claiming, over and over, that their failure to get the results they wanted was due to some excuse.
Science keeps a tally of what actually worked out or not over a long run, and while it might hold some postulates as fairly firm in one experiment (for sanity and practicality reasons, you can't test everything at once), it will test them in another. This is why it is even capable of making noises about some field of science having a replication problem: because it does try to self-correct. You will pretty much never, ever see preachers complain too loudly about their 'always true prayers' having a replication problem, unless those preachers are about to convert to atheism. At most, you'll see them blame themselves for not appealing to God well enough to stop a roof from being dropped on 27 of his worshipers during a tornado; they'll never, ever consider the notion that maybe their prayer isn't working consistently above random chance because God just plain doesn't exist. Or they'll claim that a one-in-a-million chance happening, you guessed it, once in a million times is a miracle and therefore God, even though that's an awfully ineffectual God if his batting average is so bad that he can be mistaken for a system with no God at all.
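That long-run tally can be kept with a proper scoring rule; the Brier score (the mean squared gap between stated probabilities and what actually happened) is a standard one. The forecasts and outcomes below are invented for illustration:

```python
# Brier score: mean squared gap between forecast probabilities and outcomes.
# Lower is better. A hedger who says "50/50" about everything can never
# beat a forecaster who sticks their neck out and is usually right.
def brier(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]            # what actually happened (1 = yes)
vague    = [0.5, 0.5, 0.5, 0.5, 0.5]  # woo-style: always "maybe"
strict   = [0.9, 0.1, 0.8, 0.9, 0.2]  # specific, mostly correct claims
print(brier(vague, outcomes))   # 0.25
print(brier(strict, outcomes))  # about 0.022: the strict forecaster wins
```

The hedger never looks falsified, but also never beats 0.25 on yes/no questions; the tally exposes the difference that single predictions hide.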
Another strong part of science is, quite simply, Occam's razor. Another way to think of it is Leibniz's simplicity: a law is not a law if it is as complicated as the system it explains. In computer terms, you'd say that it has no compression. In total layman terms, you'd say the explanation is as bad as 'it happened because it happened'. 'A wizard did it' is not really an explanation, but it's not because scientists have some inherent bias against magic. It's just that once we explain it, it isn't magic anymore. If we see things in terms of predictive quality, then Occam's razor becomes, like falsification, something we care about as a subset of that, because we can predict successfully with the least number of pieces/open parameters, that is, the 'least wiggly' or 'strictest' model. We could use an infinite number of parameters, and that would technically be easier in that we could fit anything we want, but then we lose all actual predictive ability. There's a saying, attributed to von Neumann: with four parameters you can fit an elephant, and with five you can make it wiggle its trunk.
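The elephant saying can be demonstrated directly: give a model as many free parameters as there are data points and it will 'explain' everything while predicting nothing. Below, a degree-4 polynomial (five parameters) threads exactly through five noisy observations of a strict one-parameter law; the data values are made up for the sketch:

```python
# A polynomial with as many parameters as data points "fits" anything:
# the degree-4 interpolant threads exactly through all five points.
def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Made-up observations of a strict one-parameter law, y = 2x, plus noise.
data = [(0, 0.3), (1, 1.8), (2, 4.4), (3, 5.7), (4, 8.2)]

# The five-parameter fit 'explains' every observation perfectly...
print(all(abs(lagrange_eval(data, x) - y) < 1e-9 for x, y in data))  # True
# ...but extrapolates wildly: at x = 6 it gives ~51, while y = 2x gives 12.
print(lagrange_eval(data, 6), 2 * 6)
```

The wiggly model beats the strict one on every observation it has already seen, and loses badly on the first one it hasn't: compression, Occam, and prediction are the same complaint.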
There are a number of things today that in the past would be seen as magic. Flying vehicles. X-rays. Magnetism. TV. Cloning. Any number of medical cures. But we don't call them magic because we can explain why they work such that, with the right training and tools, hypothetically anyone could use them. They become mundane. But that doesn't make them any less amazing!
To me, they're more amazing for being part of a coherent, logical framework that we can understand and use to better our lives.
Now, one final objection may be that it is difficult in some fields to make a priori predictions. What about evolution, where most of it takes millions of years? Or historical trends in general? But we do not actually have the full set of data for that domain, so it is actually quite possible to make predictions, like that one will find a missing link between an ancestral species and a modern species that displays transitional features. This happens all the time. Any hypothesis that wanted to replace the theory of evolution would have to not only do better at such prediction, but also deal with the incredible amount of laboratory and genetic evidence, as we can evolve microbes fairly quickly. (And no, real scientists do not tend to use the terms 'microevolution' or 'macroevolution'; it's evolution no matter what time scale it occurs over, as long as there are individuals producing offspring.) In fact, evolutionary theory is needed to figure out how often to roll out new vaccines, because viruses will periodically evolve into new variants which can be more or less infectious and potentially more deadly.
Yes, one cannot always perform an experiment in some fields. That does not mean one cannot try one's best to do strict prediction matching with as few open parameters as possible.
Proving a Negative: The Antiswan
It's said that you cannot prove a negative. I've previously gone into one method (exhaustive total domain searching) whereby you can actually do this, although it is very limited in how much you can prove this way. (Basically, if I claim my grandmother is a normal human being, with no ability to shrink or go invisible, and I go into an empty room and search it, I can be pretty damn confident she is not, in fact, in the room that I am searching.) Now I'd like to go into another, stranger one.
Imagine that you want to prove that there are no black swans on a given planet, but you do not want to search the entire planet for swans. This task would be very easy if you could do one thing: show that the entire surface is covered in magma, or even better, that the entire planet is made of antimatter.
If the entire planet were made of antimatter, then there could definitely be no swans on it, even though one did not conduct an exhaustive domain search for swans. There could be antiswans, but no normal swans, because they would explode upon touching the planet. To see an antiswan is thus to conduct an indirect domain search. It basically flips the formula of proving a negative on its head: instead of proving a negative, you are proving a positive which necessarily implies a negative.
Now, antiswans are not exactly common, so this is not a method that can be employed very often. But when it can be employed, it must be taken very seriously. One can thus fight 'Are there any black swans?' with 'Are there any black antiswans?', although with the familiar limitation from last time that there is still a given domain to consider. The absolute ideal version would be an 'antiversion' that is somehow completely incompatible with its counterpart being anywhere in the universe: an 'anti-swan-ideal', rather than just an antiswan. This is, you'll note, very difficult to obtain and rare at best.
However, it may actually have some application. Consider the current cosmological dark matter crisis, and suppose a scenario where people keep proposing dark matter that is just out of range of detection. This would threaten to make dark matter completely unfalsifiable and render it increasingly unscientific.
If one could make a modified gravity theory that was completely incompatible with dark matter models and made a very clear prediction that dark matter models could not match, it would be strong evidence for falsification. The problem, of course, is that dark matter models can be arbitrarily fitted to match a lot of things, so this is not exactly easy. Ironically, dark matter has a much easier time playing 'anti-ideal' to modified gravity than the reverse: we need only find one particle to throw out a whole host of modified gravity theories, leaving basically just the poorly understood singularities or the quantum realm to fiddle in.
Thus the modified gravitationalist's lament: 'Please find these dark matter particles so I do not have to keep lecturing on this anymore!'
Final note: anyone going around claiming that a paper they couldn't even get published in a legitimate journal 'falsifies' the big bang theory or GR or quantum physics is a crank who doesn't understand what falsification in science is (or the fact that we don't generally aim at it exactly, but 'close' to it, since the real world is very messy and rarely hands us clean-cut antiswans on a platter or easily searched domains). Ultimately, it will always come down to evidence and experiment, not the writings of some weirdo on a blog who looked at some experiments and theory and creatively interpreted them. Even on the most generous interpretation, assuming they meant a mathematical disproof, it is very unlikely that they did anything to handle an infinite number of axiomatic systems at once, which is the bare minimum* of what you'd need if you wanted a hope in hell of armchair-falsifying something without an experiment, because we do not know what the base axioms of the world are; we just have evidence and our best-fitted hypotheses.
*You could go through an infinite number of things and still not find all of them. Gotta love mathematics.
Questions everyone should ask themselves before examining an issue
Suppose you have just encountered an issue under debate, or are about to debate an issue. Before you begin, you should always ask yourself the following:
When was the last time I acknowledged I was wrong about something? (My answer: I thought my cat's sickness would turn out to be something easy and cheap to cure with antibiotics; instead, it was a urinary blockage, and liver failure was making him vomit. The only procedure we could afford had a less than 50% chance of success, and he died. However, that's a bit of an extreme example, so I'll go with a different one: when I spotted an invasive lanternfly, I thought it was a cicada. It was actually a distant relative.)
Was I bitter and in denial about it until the last possible moment? (I was a bit bitter, but not in denial. I was a bit confused about how they could look so much like cicadas, and felt better when I found a logical answer that resolved this confusion.)
Am I a sore loser? (I feel grateful my brother corrected me. I then set traps for the bugs, which I otherwise would not have done.)
Do I suffer from apophenia? (To some degree, all humans do. I've learned the hard way not to gauge small samples as significant, which is something I kept doing when I was younger. For example, I was baffled when I caught three wild female mice in my house in a row. When I finally caught a male, it clicked that I was just suffering from some relative of the gambler's fallacy, making me feel like the ratio should have been 50/50, when in reality the sample was far, far too small to expect any kind of balance.)
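As a small worked check (the 50/50 sex ratio is the assumption at issue), a run of three same-sex catches isn't even slightly surprising:

```python
# Assuming an idealized 50/50 sex ratio and independent catches,
# the chance that three catches in a row are all female:
p_three_female = 0.5 ** 3  # 0.125

# The chance of *some* same-sex run of three (all female or all male):
p_same_sex_run = 2 * p_three_female  # 0.25, i.e. one in four

print(p_three_female, p_same_sex_run)  # prints 0.125 0.25
```

A one-in-four event is nothing to build a hypothesis on, which is the whole point about small samples.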
Do I actually think I'm better than other people, and if so, have I actually based this assessment on my performance in the field related to the issue in question? (That is, being excellent at basketball does not make you a champion at dog racing or an expert in photography, being a good engineer doesn't make you an expert on primes, human medical expertise does not make you a veterinarian, etc.) (My answer: I think I am better in some areas where other people have demonstrated incompetence and it is extremely easy to verify their information is wrong, but this is not the same as being better than experts at the subject.)
Am I capable of remaining relatively neutral for a period to allow me time to actually assess the available evidence and accounts from both sides? (It's been a while since this has been relevant to me, but, hm, I guess the last one would be 'how much does masking help' / 'would natural herd immunity fix the corona problem'? I was content to let the evidence stock up, but also take preventative 'just in case' measures, and I'm very, very glad I did considering how bad things turned out...)
Do I have, or have I had, a gut feeling that one side must be wrong before I have even examined the evidence? (I did have a feeling that simply going for herd immunity by letting everyone get sick and letting the unfortunate just die would not be effective due to mutations, but I'm not sure I can call that a gut feeling, as it was really knowledge-based. Although, yes, the fact that it is also callous as hell could have been a gut feeling influencing me at the time. When the evidence did come out, it favored my position in a rather unfortunate way, as corona is still raging pretty badly...)
Do I get most of my information from a small bubble of people? (No. I look for the results of large numbers of studies. I do sometimes take the time to read comments from the other side. It's just that this has always resulted in my learning they don't even understand the basic positions they are opposed to and seem to be operating on some alternate reality that cherry-picks studies / is blatantly contrary to easily looked up facts or simply outright ignores them.)
Have I ever actually listened to the opposition without interrupting? (Yes.)
Do I have a high reading comprehension level and decent mathematical ability? Note: you can easily take some online tests; if you find yourself making errors on 'advanced' reading comprehension, which typically isn't very advanced at all, that is a bad sign. Mathematical ability is more difficult to fully measure, because at the higher levels it isn't just about brute computation but also creativity and a sense of beauty. One simple test I like is whether the equation e^(iπ) + 1 = 0 makes intuitive sense to you and you immediately comprehend what it means. If it looks like gibberish, your math skills are very bad and you need to study more. If you understand what it means but it doesn't feel intuitive, they are OK in a 'basic ideas' sense, but you probably still need more study or to spend more time visualizing relationships. And this is still pretty 'beginner' stuff, in a sense, so if you do get it you should not immediately get cocky. Math is work, not just raw talent.
(Answer to the last one: my abilities are OK; I am prone to losing focus and skim-reading if bored, at which point I tend to make small, easily rectifiable errors. I try not to do this with things I am interested in. I would like to say I have decent mathematical ability, but there is always room to learn more, and I am wary of how easy it is to make errors.)
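For reference, the identity mentioned above unpacks via Euler's formula, which reads e^(iθ) as rotation by angle θ around the unit circle:

```latex
e^{i\theta} = \cos\theta + i\sin\theta
\quad\Longrightarrow\quad
e^{i\pi} = \cos\pi + i\sin\pi = -1
\quad\Longrightarrow\quad
e^{i\pi} + 1 = 0
```

The 'intuitive' reading is just that rotating halfway around the circle lands you at -1.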
Additional question: what do you mean by evidence?
[link to an article titled 'the trouble with the title big bang']
I find myself thinking about various arguments I've read, and people don't really seem to use 'evidence' in a fully standardized way. Some people seem to use it as 'any support you can offer for an argument', others something more specific, such as direct observation.
Sometimes in particularly bad discussions people even mix up 'evidence' with 'proof', which definitely should be regarded as very different things, as the separate concepts they usually correspond to are too useful to just mush together.
Another thing that comes to mind is conspiracy theorists who like to shout they have proof/evidence of something when they really, really don't and are in fact really gullible.
So consideration of just what one means by evidence is an important step everyone should take when thinking about, well, literally anything of importance.
I personally find it useful to categorize some things as weak evidence/support, but very far from proof.
Take two scenarios, in both of which the math is used to make observable predictions: an equation with multiple independent solutions, one of which is unobservable; and a single solution that additionally entails some unobservables. To me, the latter scenario is stronger evidence for the unobservables existing: say, quarks, which have never been observed independently and may be permanently bound to each other. Neither of these is the same as absolute proof, but they don't need to be.
Similarly, I find the strong support for GR and the expansion to be weak evidence for the singularity, but, given that we can't actually view the earliest past, also weak evidence for a whole host of other theories of what it looked like, provided they give the same results without convoluted parameter fitting or 'over-explaining'. (For example, say you imagined a magic leprechaun who just snapped their fingers to put the universe into its last viewable configuration. That 'over-explains', because it could just as easily have made any other configuration, and thus doesn't really explain at all. Ideally, you'd want a theory with just a few free parameters, or even none, predicting exactly the universe we see today and nothing else.) It is strong evidence /against/ a bunch of possible starting states, such as the universe always looking exactly the same. And none of it says anything whatsoever about /how/ this starting state came to be. Yet.
(As a side note, I always used 'big bang' to refer to 'expansion from a smaller universe that we can extrapolate backward to a singularity', with the understanding that pretty much no one takes the singularity part seriously, so it's really about the expansion, which has plenty of evidence going for it. The conflation of 'expansion' with 'how the universe actually started' has always struck me as odd, because even if you go with 'it was a point at the start', that doesn't tell you /how/ it originated, only what it looked like when it originated. And I can't regard a single point as a 'bang', so referring to just the point as the 'big bang' makes no sense to me.
But the author advocates using the term for the origin, not the expansion. So that makes me think people talking about stuff should always just clarify which version they're talking about.)
An Artist Teaches Her Student to Define The Color Blue; My True-By-Definition View of Objective Truth
Once there was an artist who was teaching her student colors. She pointed at the blue sky and said, "We can define that color as 'blue'." Then she pointed at her dress. "And we can define this as 'yellow'. If we mix them, then we get a color 'green' that we can prove we get as a mix of the two. Similarly, we can mix red and blue to get purple." She demonstrated this. "But we can also do something different, and define blue in terms of other colors. We can take 'purple' and 'red' as our givens, and get blue as just purple with red taken out of it, even if that's harder in practice to do."
"That's circular," complained the student. "You've defined a thing in terms of other things that are defined in terms of the first thing."
"You've misunderstood," said the artist, exasperated. "It's two different systems that are complements to one another, proving what the other cannot. Blue is only taken as a given in one of them, not the other."
"Teacher, my blue could be different from your blue."
"Ah, but what if by 'blue' I just mean what we experience when we look at the sky at this particular moment? I merely need to step up a level away from the subjective, to the shared invariant between our systems. The fact we both have experiences is not relative, is it? If you define or experience blue in a different way and I define blue in a different way, that doesn't mean there can't be a communication of objective, invariant fact that holds true for the both of us. We just need to build a bridge."
And with that, the student abruptly understood one of the most beautiful pieces of art of all.
---
This story was 'inspired' by stupid people who hate math and don't get dictionaries. Thus, it is intentionally rather simple, but hopefully not so simple as to make the depth disappear. A common argument I hear shouted by math-phobes is that all math is circular. That's only really true if no piece of it makes contact with or resembles the real world / our experiences, but even the most abstract mathematical objects, like points, are only imagined because of things in the real world (like us).
Mathematics is the art of speaking precisely: hating it is like hating someone who, on seeing a squirrel, says "That squirrel is fuzzy", and lo and behold, the squirrel is fuzzy when you look at it. Maybe on very rare occasions the squirrel will turn out to be a hologram and not fuzzy, but that was a failure of data collection, not a failure of language to communicate concepts that have a high likelihood of being true, or are true in some fashion (the hologram squirrel still /looks/ fuzzy, so it is a type of fuzziness).
Maybe the squirrel isn't uniformly fuzzy and has bare paws, or is only fuzzy on one side; that doesn't signal a breakdown of the 'squirrel is fuzzy' hypothesis requiring a complete re-think, but only a minor modification to create a new statement that captures everything 'the squirrel is fuzzy' did right.
If by 'fuzzy' you mean 'like that squirrel', then your statement can never be wrong, even if the squirrel is just an illusion, hallucination or the 'fuzziness' turns out to be scales, but you still need to take care to convey your definitions when actually talking to other people and can't magically win an argument by declaring your definition the 'best' one. (Ergo, basing your definition of fuzziness off of deformations/equivalences to a base image of a hallucinated squirrel may not be the best idea.)
This seems to be a point many people don't grasp, or even treat as something morally suspect, like post-modernists who would try to make 'truth' cultural. You might define something that way, but then you may as well be speaking an alien language, because the 'truth' I am interested in isn't something that can be decided by group consensus, or by a dictator killing everyone who disagrees until there are no other cultures left, and it doesn't come in different flavors based on feelings, unless the truth in question is 'what does everyone feel about this'.
A rose is a rose by any other name, but if you start calling a tulip a rose, I'm not obligated to play with you.
The kind of truth I'm interested in is objective truth. It is what is, even if zero people believe in it or have the ability to acquire it. Some people try to define away objective truth as not existing because our perceptions will influence our ability to get truth - but they are confusing the process with the definition. There is nothing stopping me from defining a 'truth' that is true even if there is no process for getting at it. You can then attempt to claim that if there is no process for getting at it, then that object can't have ever actually existed (if someone swears in a room, immediately dies, and someone comes in the room, the swearing never existed in this worldview), but then that's just the statement that all objective truths are 'trivial' and concern NULL / empty sets; you still can't just get rid of my truth by definition, because it's a truth by definition, love.
There's another kind of mathematics, beyond the 'spoken' version / art-form, which I'm even more interested in: the structure of truth in all its raw detail. It necessarily requires the art-form, but should never be mistaken for it. When I use English to say 'one plus one equals two', I'm being vaguer than the underlying structure, which, if I listed it in full, would make clear whether I'm talking about clock arithmetic or not, or about all forms of 1s, both clock and non-clock. Truth relativists love to bring up modulo clock arithmetic as if it's some 'Aha! Gotcha!' to absolutists. It's not.
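The clock-arithmetic point can be made concrete with a minimal sketch (Python's `%` operator plays the role of the modulus):

```python
# Ordinary integer arithmetic: 1 + 1 = 2.
assert 1 + 1 == 2

# Mod-12 "clock" arithmetic: 11 o'clock plus 2 hours is 1 o'clock.
assert (11 + 2) % 12 == 1

# Mod-2 arithmetic: 1 + 1 wraps around to 0.
assert (1 + 1) % 2 == 0

# All three statements are unambiguously true; they are about *different*
# structures. That is exactly why "1 + 1 = 2 is relative" is no gotcha:
# specify the structure, and each claim is absolute within it.
```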
Relative systems are special because of their invariants. This is the most important lesson of general relativity that goes straight over people's heads. Without saying what your invariants are, you risk being needlessly confusing. Pretty much any relative system is going to have invariants, even if that invariant is a 'trivial' invariant.
The best example of this is how moral values are relative to different people and without the existence of life would be entirely meaningless - what would murder mean to a rock? What is an invariant is that different life forms get enjoyment from parts of life that would not be possible if they were dead, and that cooperation is useful, and that some of us are capable of empathizing with other life even if their goals aren't identical to ours. Maybe you find that trivial and dissatisfying and wish the invariant were a little stronger. I find it perfectly sufficient.
Without going to some radical hypothesis such as solipsism or conspiracy, this is an objective truth: many people have empathy and a lot of people like being alive.
If we do go to solipsism and ask to derive the universe from only the tools available to a solipsist, then we have to deal with the 'garden'/many forms of 0s, since even a solipsist will acknowledge 0 and 1 as numbers, and ask whether the possible logically consistent solutions to setting the universe to the most zero-like possible state (one of the 0-forms) produce as their first result a consciousness. If they do not, then we must conclude that objective truth is nontrivial.
Is there a truth nobody knows? A proof.
One person asked me to show an example of what a 'truth nobody knows' looks like, after I told them the version/definition of truth I prefer is the one where something can be true even if zero people know it. (I was expressing my dislike for an article suggesting we should have multiple kinds of 'truth' since we update our scientific understanding, which I felt misunderstood how many scientists themselves use the word. It's a mis-caricature unfortunately well motivated by some very loudmouthed scientists misrepresenting how science works to the public, such as idiots declaring dark matter the 'scientific truth'. What's truth is a measured acceleration discrepancy; dark matter is a hypothesis for this discrepancy, tentatively supported by the fact that you can contort the models to match data, although the most basic versions of DM all don't work and no dark matter particle has ever actually been measured. That part bothers me less than the failure to predict ahead of time, and the bad sportsmanship of being willing to update dark matter models when something like halo failure occurs but not extending the same courtesy to MoND. As far as I can tell, both need to 'mime' the other in some way, so more is needed for a tie-breaker, like a deeper theory demanding one or the other, getting observational successes, and fixing their mistakes.)
This demonstration is actually quite easy:
Suppose I throw a coin or a die and do not look at the result. Does the result exist even though I don't look at it and don't know what it is?
1. If it does not exist, then this fact it doesn't exist is a truth I don't know.
2. If it does exist, then this result is a truth I do not know.
You see that no matter what, you get such a truth in the system. This is an invariant that holds true even if we swap out radically different axioms! That makes it ("Truth is what is factual about the world* even if we don't know it personally") seem like a fairly good definition to me. A version of truth that actually requires us to have multiple different versions for every axiom system on the other hand sounds needlessly complicated and confusing!
*World here includes yourself, or, if you want to be uber duper skeptical, the 'illusion' of yourself. As you may have noticed from the fact that solipsism can make the 'if I do not look at it, it does not exist' version true.
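For what it's worth, the dichotomy above is just the law of excluded middle, and can be sketched in one line of Lean 4 (the proposition name is my own placeholder):

```lean
-- For any proposition P ("the coin's result exists"), classical logic
-- gives P ∨ ¬P, and either branch hands us a truth, known or not.
example (P : Prop) : P ∨ ¬P := Classical.em P
```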
As I like to say: the world is a meaningful place that values you, if in a rather trivial way, because we exist in the world as part of the world - it is not something separate from us, so if we value, then a part of the world (us) values.
The principle of kind caution
[written 2021]
So by now, we have enough information about the Sweden experiment to know that their anti-mask, anti-'anything but business as usual' policy worked so badly that they've walked it back a bit and are now encouraging masking in very crowded conditions and are discouraging visits to elderly homes. Which they should have done from the beginning, but hey.
Note that policy and the behavior of actual Swedes are two different things: plenty of people voluntarily distanced and tried to stay at home as much as possible even though it wasn't mandated, and thus their economy ended up taking a hit, the very scenario they hoped their 'do very little' policy would prevent. They also didn't actually do nothing: their schools went online. So the pandemic didn't hit them quite as badly as it could have, but when you do a comparison to other countries similar to them, it's still horrific. And their hope that they would hit herd immunity and prevent a second wave did not pay off, either. It's still ongoing. [2021]
https://www.newyorker.com/news/dispatch/swedens-pandemic-experiment
Now, on to the subject of the post: when is it appropriate to act on a policy when one has insufficient evidence to say whether it actually helps? When the pandemic began, it was true that there really weren't many studies on masks. Since then, of course, there have been far more, and masks have repeatedly been shown to have definite benefits. That, however, is the benefit of hindsight. So what is a person to do when they do not possess such future knowledge?
My opinion is that we should act according to 'kind caution': err on the side of the policy that is kindest, not the policy most convenient for our short-sighted self-interest. This would have meant wearing masks, a low-cost endeavor that is, at most, a mild inconvenience, since the potential benefits (saving numerous lives) well outweigh it.
This principle can be applied to many scenarios. Global warming, for instance. Kind caution would dictate we should try to limit pollution, especially air pollution, which would have the side effect of saving many lives, as this pollution kills many people even without the potential for causing climate change factored in. It might hurt the wallets of some major corporations a bit, but there have been many episodes of regulatory change and none of them has ever caused the world order to collapse. It's helpful to remember we had another climate crisis which we did successfully deal with: regulation of the pollutants that eat away at the ozone layer. This is a giant non-issue today.
If we weigh the 'worst case scenarios' (neither of which I think is likely, but we're viewing this from a perspective of ignorance: what policy should we choose when we genuinely don't know much of anything?), one of them being the extinction of the entire human race and most other species, the other being the economy collapsing for a little while, we can see that one of these things is not like the other. One of them contains the other scenario inside itself and is much, much, much worse. We must emphatically agree that the principle of kind caution demands we err on the side of avoiding the absolute cruelest scenario.
When we do weigh evidence, it becomes pretty clear there is in fact actual climate change going on. Even most skeptics have switched from denying it to denying that it is human-caused, and when you look at why exactly they are so frantic, you see their motivation is that they really don't like regulating corporations and that this scares them. Yet we could actually create a number of green jobs by heavily investing in green technology to improve it further and gain greater energy independence. Energy independence would have the nice side effect of lowering tensions between countries over fossil fuels and making it harder to disrupt our military's supply lines.
Indeed, if the human race does go extinct, I don't think it'll be from climate change alone (we're extreme generalists and we're everywhere, we've got better odds than most species even if we're a bit big), but from plastic pollutants driving male sterility. [edit: I changed my mind. That study on male sterility was not well substantiated, but people keep being absolutely stupid about global warming.] And even that could work out okay with new fertility treatments to turn eggs into sperm being developed, although we might end up losing the male half of the species...
So that's another issue the principle of kind caution would help with. We generally assume a new substance won't be harmful, unless it's medicine, wherein we usually demand checks and proof that it actually works. But we've seen time and again that new substances often are harmful. So when in a state of ignorance, it would probably be better to assume the substance may be harmful than to assume it isn't, as this could save us millions or billions in clean-up costs later on and prevent countless deaths and birth deformities.
A good example is lead. We used to add it to everything before we realized just how bad it is. Another example is radioactive substances!
Humans have a very long, very bad history of adding when maybe we should have subtracted.
However, in some circumstances there aren't really any other options. If someone is going to die no matter what unless something changes, then it may be more ethical to offer them something of unproven but tentative benefit than nothing, while being plain with them about just how little the drug may actually help. After all, even the placebo effect helps. A good example is the story of the $40,000 drug that was canceled, likely because it was of little profit interest to the company and not because it necessarily didn't work; the trial was set up poorly, but the doctors involved thought they were seeing real benefits. In a better society, competent officials would be running a non-profit trial of the drug to see if it really works.
People who argue that acting kindly will cause the collapse of society as we know it don't know what the fuck they're talking about.
Consider using the principle of kind caution in your own life. If we had been using it from the beginning, it would have saved countless lives.
Blindsight and the nature of consciousness
https://aeon.co/essays/how-blindsight-answers-the-hard-problem-of-consciousness
I really like this article: it agrees with an intuition I already had that consciousness is something 'extra' beyond just processing raw signals into data, an extra that has to do with the 'value' this has for the organism. This is obvious if you think about how we don't process light as going from low values to high (red to violet/ultraviolet) but as a loop: violet/purple looks more like red to me than green or blue does. It's a color wheel, not a color line. We transpose the raw data onto a different model of three 'dimensions', blueness, greenness/yellowness, and redness, one that has significance to us: red often means danger, so it is a very catchy color and looks more energetic to me than blue of the same brightness. Green is sedate. Yellow of the same brightness somehow looks brighter than blue of the same brightness. If black/white were the original model and other colors were developed by splitting them into new kinds, my guess would be that yellow/red are 'closer' to white-the-qualia than blue/green (or cyan).
Blindsight has to be one of the most amazing things ever discovered, but it's less surprising if you were already aware some people think without qualic imagery.
So, I do like the basic idea, being that it aligns with mine.
However, I have some strong disagreements when it starts talking about frogs and lizards and how cold blooded animals don't pleasure seek.
Firstly, frogs and lizards didn't stop evolving when they split off from us, and some lizards ARE in fact known to play. So do some fish, and some species even pass the mirror test!
We can't judge based on evolutionary groups; we have to judge on a case-by-case basis, because every animal kept evolving after it split off from us. Even if we attain consciousness in a particular way, that doesn't mean a different structure couldn't evolve that happens to produce the same data structure and processing. The very fact that a lot of it is probably a 'software' issue means much of it will only be incidentally notable in the 'hardware', the way you can guess someone does a lot of gaming from their computer's graphics card, but you can't know for sure.
There are some arthropods I think must not be conscious: there's a solitary wasp that will repeat a 'check things out' behavior in response to a stimulus even if it has already done so, and there are stories of insects walking on damaged limbs and of crabs ripping off their own limbs - not something you do if that hurts!
But then you get some really sophisticated insects with complex social lives like bees, which can do things like recognize 0. It's possible they aren't conscious, but I would be a lot more reluctant to assume so. Of course, I don't think anyone has ever seen a bee play, which is a mark against them. But they have a need to recognize kin and some social insects (I heard of an ant doing it, dunno about bees) do do things like fight over who gets the preferred job of tending the nursery: that's a preference behavior. And they have color discrimination and learning abilities, which is very suggestive of having a mental color map, although what would be more conclusive is if they ever showed a tendency to 'loop' colors, like seeing pure violet (or ultraviolet? hm.) as closer to red than yellow.
To my knowledge nobody has done a 'looping color' test on any animal. Although tests have shown color discrimination:
'Menzel's results showed that bees do not learn to discriminate between all color pairs equally well. Bees learned the fastest when violet light was rewarded, and the slowest when the light was green; the other colors fell somewhere in between. This evidence of inherent bias is evolutionarily reasonable, given that bees forage for differently-colored nectar-bearing flowers, many of which are to be found in green foliage which does not signal reward' - wikipedia
The fact that different colors have different evolutionary meanings for other animals than for us, I suppose, makes any sort of 'looping test' harder to carry out meaningfully. But in the case of bees there is some similarity: for us too, green is usually a less important color. Green foods tend to be of less value than other equally brightly colored ones. So in animals that internally process what a color means to them, I would expect green to have a 'sedative' psychological effect even when they're 'full', rather than just triggering a stimulus-response to forage.
Do bees get excited when they have a full stock of nectar in their guts but they spot a particularly enticing reward for them to come back to? That's what I want to know.
EDIT:
It was discovered that bumblebees do play! I strongly suspected bees would be the first animals discovered to have a sense of fun, and now I've been proven right, haha.
Difference between cherry picking and appropriate noise handling
[2019]
Cherry picking is when you pick out only the data that favors your theory, never anything that might falsify it.
That said, there are situations where it is actually appropriate to toss out some of your data. I'll choose a simple, unambiguous example, then build off this:
Imagine you have camera traps and a hypothesis about a certain wild animal. Your traps also pick up lots of other animals, but those have no relevance to the hypothesis, so you toss them out. You have one blurry picture that can't be identified either way, so you discard that one too; even though it might have bearing on whether your hypothesis is right or not, it's just not high enough quality to be considered. Noise from things that are very clearly other species, and noise from systematic errors expected as part of how the machine itself works, are clearly OK to toss out; you can't get anything useful and clear from them about your hypothesis.
A more difficult case is if you have two wild animals that look very much alike, say two species of shrew. Your hypothesis is about only one of them. Clearly, the more your 'noise'/error looks like the 'real thing', the less accurately you can speak about it. If Shrew Species B is our desired one, and we want to know about its parental care and have a hypothesis it engages in it for long periods, if Shrew Species A looks like it but engages in it for a long time as well it could give us a false positive if we do not correctly remove Shrew A from our samples. Alternatively, it could also give a false negative, if it engages in the opposite behavior and we see young Shrew A's wandering about without their mother and confuse them for Shrew Bs. Both cases are undesirable.
Now here's a more advanced example, where the objects are more abstract than animals and it's harder to see whether something is a different 'species': waves.
You have two different kinds of waves, and you also know your equipment is prone to a little bit of fluctuation, so you have the two kinds of 'error' mentioned here: different 'species' and equipment-expected error. You are already very familiar with wave type A, but it is B you want.
You want, then, to clarify two things. First, how often the error from the machine will give a false positive. (Keep in mind that modern medical machines for breast cancer detection give false positives often compared to true positives, with both still having rather low rates compared to the times the machine correctly says you have no cancer; that does not mean we should stop screening for breast cancer!) Second, how well you can isolate Wave A from the data without accidentally removing Wave Bs or, more crucially, making it look like Wave Bs are there that aren't. You can then try to estimate with confidence from your data how many positives are true and how many are false; this will hugely rely on correctly figuring out the overlap between Wave A's appearance as a 'species' and Wave B's. At some point, you may have to add the caveat 'If this theory, which so far has never been falsified, continues to be correct, we did have a detection', if the two are so similar there is no wiggle room for deformation or small failures of the parent mathematical theory. That is, if they are shrews rather than cats and dogs. This kind of analysis is obviously going to be fairly sophisticated, but there should be no real doubt that the people doing it are not cherry picking any more than someone collecting data on dogs is when they throw out data on cats in dog costumes, especially if they bother to output the expected range of error from cats in dog costumes - err, I mean, Wave As.
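To make the false-positive point concrete, here's a small sketch of the arithmetic. All three rates below are made-up illustration numbers, not real screening statistics; the point is only how a low base rate makes false positives dominate even with a decent machine.

```python
# Hypothetical screening numbers, chosen only for illustration:
base_rate = 0.005           # fraction of those screened who truly have the condition
sensitivity = 0.90          # P(test positive | condition present)
false_positive_rate = 0.07  # P(test positive | condition absent)

# Total probability of a positive reading, true or false:
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive_rate

# Fraction of positives that are true positives (Bayes' rule):
p_true_given_positive = (base_rate * sensitivity) / p_positive

print(f"P(positive result)      = {p_positive:.4f}")
print(f"P(condition | positive) = {p_true_given_positive:.3f}")
```

With these invented numbers, only about 6% of positives are real, yet the machine is still doing useful work compared to not screening at all.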
Now here is a clear, unambiguous example of cherry picking:
A guy takes a bunch of people and tests them for psychic ability, and throws out any data disagreeable to his hypothesis, like every time they guess the card wrong. His excuse is the 'shrew' that psychic ability varies from time to time, so he claims this variation is just what we would expect to see. But he does not bother to quantify the range of error expected from this, nor the overlap with the rival shrew, 'There is no psychic ability at all' (which should also occasionally throw up false positives, and this needs to be considered as well). The two are not actually distinguishable, and thus one can draw no actual conclusions in support of psychic ability. All the data has to be thrown out if the two shrew species in our traps cannot be distinguished (particularly if the theory was that Shrew A behaves one way and Shrew B behaves the opposite, which is the correct analogue here, as one shrew has psychic abilities and the other doesn't), and no firm favorable conclusions drawn on our exciting shrew hypothesis.
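Here's a toy simulation of why that kind of discarding is fatal. All the numbers are invented for illustration: a guesser with zero psychic ability picks among five cards, and simply keeping only their best runs manufactures an above-chance hit rate.

```python
import random

random.seed(0)       # fixed seed so the sketch is reproducible
N_TRIALS = 10_000
N_CARDS = 5          # guessing among 5 card types; pure chance = 20%

guesses = [random.randrange(N_CARDS) for _ in range(N_TRIALS)]
cards   = [random.randrange(N_CARDS) for _ in range(N_TRIALS)]
hits = [g == c for g, c in zip(guesses, cards)]

honest_rate = sum(hits) / N_TRIALS  # should sit near 0.20

# Cherry picking: split into 100-trial blocks, keep only the best 10%
# ('psychic ability is variable, these were the good days').
blocks = [hits[i:i + 100] for i in range(0, N_TRIALS, 100)]
blocks.sort(key=sum, reverse=True)
kept = [h for block in blocks[:10] for h in block]
cherry_rate = sum(kept) / len(kept)

print(f"honest hit rate:        {honest_rate:.3f}")
print(f"cherry-picked hit rate: {cherry_rate:.3f}")
```

The 'psychic effect' in the second number is produced entirely by the selection procedure, which is exactly why the variability excuse has to be quantified rather than invoked.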
Now, the shrew metaphor is not the absolute greatest one, since you could still draw conclusions about shrews in general since the two species will be relatives of each other, and often your two 'shrews' will be completely different phenomena that nonetheless look alike: a better example then would be a mouse-like marsupial and an actual mouse, or a shrew-mimicking tenrec.
You, hopefully, get my point now about the difference between appropriate noise handling and cherry picking.
When no foundations are actually appropriate rather than circular
[2019]
It is easy to wield as an attack that something has no deeper foundations, that it is in some sense circular.
However, there is a loophole in that there are places we might expect no foundations. That is the case if reality is not fractal or infinitely divisible and there is indeed a deepest level: at some point, you cannot dig any further, because we hit the number 0 or a 1 that lacks sub-components and is thus basically for most purposes a 0 (in other words: If I start counting from 1 instead of 0, that doesn't really change the structural relationships of my ordinal line, only the starting entity name). Reductionism hits an endpoint, and the only thing you can do is lay out rules for construction of rules so you can start building up again. There is one save in this game, and it is that, ultimately, our descriptions of reality have to actually refer to reality.
So even if there is not a 'deeper' foundation at some point, there is an authority, and that is the outcome of experiment. At that point, one asks why we care about the outcome of experiments, why we care about matching to reality. Surely it would be very circular if the system came up with an answer to justify itself here, such as performing an experiment to try and find a justification for performing an experiment?
The answer is no one is forcing you to (Well, maybe your boss, parent or angry spouse). Caring about truth and the nature of reality is a choice. There is no hardbound rule-set here, just a freedom you may not wish people had, since ignoring reality is incredibly damaging. But no one can force you to care about self-damage or damage to others.
Science and logic can do a lot. They can suggest strategies to make people not act like shitweasels. But they cannot make you into not-a-shitweasel all by themselves. You have to do that yourself, or, alternatively, someone else has to figure out a strategy that will work on you; but in both cases someone is making a value judgment that they did not get by running an experiment. There is no particularly deep logic in liking the flavor of peanut butter over jelly; it is something you mutated to do, so there is a logical reason it exists in the sense that there is a non-contradictory explanation, but if you had to choose from scratch whether to re-wire your brain to like peanut butter or jelly more, there would be no real reason to choose one over the other.
I don't go running around murdering people because I fundamentally do not want to do that. I could make this slightly less foundational by deriving it from an axiomatic given of valuing people or wanting to be valued myself as a person, but ultimately, there is nothing really 'underneath' valuing people: either you have a capacity to appreciate other humans or you don't.
If you do want to murder random strangers, and some asinine reason is the only thing stopping you, like believing angry sky faeries will punish you by no longer giving your children infinite tooth fairy money when they die, or by pulling out all your teeth, then by all means continue to delude yourself; but do not tell me that everyone else must share your delusion, or that you should be able to mix it with your governmental or food service job that affects normal people. If you think it is actual fact backed by evidence and not just a belief, but also that you yourself do not want to murder and it is just that everyone else does, and thus think I am a liar for claiming not to want to mass-murder everyone, then I am not quite sure what to say at this point, since clearly you'd think everything I say is lies anyway. If you think it is fact, but also that no one wants to murder except a few bad cases, then that seems a very odd thing to add to the situation: having most people not want to murder seems perfectly sufficient to catch and stem many of the bad cases to a degree that society can function, even if not perfectly. And since society clearly is not perfect, your magic sky faeries have clearly not done a better job than this 'hypothetical' scenario where most people are not murdered walking down the street.
---
I'm pretty sure most of this has been said before, except maybe the part about foundations; I'm pretty sure the point about the base foundation being equivalent to zero is original to me. I could not tell you the original thinkers of the rest of it, however; it may be one of those things that gets re-thought repeatedly once the time is ripe for it.
Why deductive proof breaks down if we can have 'true by definitions'
In logic, one can simply define things to be true: this object is object A.
One can also do the same for things in reality. Take the meter. We can define the meter length to be exactly the length of a specific metal bar, and this is not something that will ever alter with empirical investigation: it will always be true, by definition.
Following from simple definitions, we can also state a number of other mathematical truths that must be true about this bar.
So one may wonder: if we can do this, why does it not work all the time?
Equivalence, rather than equality, is a big part of it. There is no reality where adding 2+2 does not equal 4, because we can bake that relationship into the definition of 4. But if, instead of the 2s already being present, we 'physically' add them by dragging them together rather than just counting them, this operation is not equal but equivalent. This physical 'addition' could have entirely different results, like exploding when they meet, or breeding if the 2s are rabbits. In reality, there was a lot of mathematical manipulation that was not actually conveyed when you condensed physical addition and counting: namely, the elements of both time and space gave us two sets to hold equalities over, not just one. If we count seconds, using a clock in our own frame (so relativity does not come into it), we can define 2+2=4 seconds just as readily. But when we combine spatial manipulations and time manipulations, representing this by 2+2=4 actually mushes 'different' objects together, the spatial and the temporal, and the transformations that go on to move those objects physically close were not part of our original definition of 4 as a composite of 2s at all. If I drag you to my house, did I perform an addition operation on you? There aren't two of you now, so if I did, it's an addition to a different 'part' of you. The house enjoys an additional thing inside it, that's true, but there are not now two houses, either. The equality is isomorphic, or equivalent, not an exact equality to all 1+1 additions: there are multiple kinds of things to consider now, with multiple subcomponents.
The Hilbertian concept that we should be able to replace points with tables and lines with chairs is, in my book, also incredibly wrong. [edit: My past self was too bombastic. I'd say slightly wrong, because you can't just make a table obey all the same axioms you'd want a point to obey.] We can't just ignore substructure/relationships (and here, I do not necessarily mean division in space, which makes little sense for a point and would go against the definition of a spatial point; having a time component, for instance, makes our spatial point a spacetime line or curve, and the more dimensions we add on, the more different our 'point' on the x-axis can be) and the isomorphisms that result from this substructure's relationships, if we wish to be precise. 'Points' in nature act absolutely nothing like chairs, and in fact very little like classical points at all, but they do sure seem to love their symmetry groups, a form of isomorphism, which is just another way of saying equivalence.
The other part is that humans are really bad at choosing 'truths' that are actually self-evidently true, and these axioms are typically not actually 'definitions': they aren't things that must necessarily be true, just things we take for granted for simplicity. When you do this, you should not be terribly surprised when they break down; you made a guess, not a true-by-definition statement.
The map is not the territory, but that does not mean there is no territory
So there's this really common confusion I see. People will start talking about how our mathematics is not the same as the real world, and they'll manage to mush several concepts together and ask how we can ever know what the territory is, and start remarking how 'weird' it is that reality seems to have logical relationships in it.
First, labels. Math and logic notation is just a label. If I have two doors I can open, and can only go through one at a time and have noticed this empirically, I can label that relationship OR. If I can go through both, I can label that AND. I could also label one relationship FLOOGLE, and the underlying behavior would be exactly the same.
Second, maps. My understanding of these relationships, regardless of how I label them, is a map. The labels themselves, once I agree on a meaning, are also a map once I put them together in order. The actual physical reality is a territory. If I am secretly in a video game rather than reality, provided I was not too strict with my mapping this change in information should not actually be important to this mapping to the territory: I can still only go through one door or the other. Thus, the common refrain of 'what if you are a brain in a jar?' is actually pretty irrelevant to most map-to-territory behavior. You don't need to know if you are a brain in a jar to figure out if, given the rules reality as you know it has displayed so far, jumping off a roof will hurt.
Third, territories. The difficulty is not that we can never have one to one maps to territories. The difficulty is that we live in a changing world, so when we start with a few things we carefully define as just themselves and their sub-components, and then try to guess about components that we cannot see - say because they are in the past or future - we will make mistakes.
For instance, let's suppose the relationship I have observed about being able to go through only one of the doors suddenly changes. Rather than conclude there was never such a relationship, it would often be better to say it was only valid under the domain that I observed. Perhaps in a video game, you unlock two characters instead of one after a certain level and this allows you to go through both doors at once. We would not say that there was any particularly shocking revelation about reality that two characters can go through two doors. It would be more shocking if a normal character could figure out how to put himself in a superposition and have just one person go through two doors at once, but this would also presumably involve doing something the character has never done before: the relationship of the old actually-explored classical domain stays the same.
It is not weird that reality has logical relationships: logical relationships are things we observed and then labeled. They are the original territory, not something we just invented wholesale and then mapped onto the territory. It is just that we often over-extend past domains we have actually explored to make guesses about ones we have not, so naturally, we make mistakes. There is nothing really especially shocking or bizarre about any of this. Humans are fundamentally just not that creative, in my opinion. We evolved a 'good enough' logical intuition about everyday situations to usually survive, and if there had been some other logical system that dictated survival we would have evolved something else as our notion of common sense, provided intelligent life was still capable of existence under such deformation.
My philosophy is not held by everyone. Some people think the only numbers that naturally 'exist' are integers and everything else is a human invention, but that does not make much sense to me. Imaginary numbers, for instance, are basically just vectors: 'sideways 1s' with a notion of orientation which, when applied to itself, rotates back to the more familiar axis. They're not imaginary at all. Most 1s we encounter just aren't vectors; 'how many cats' is not a question that cares about their orientation. Similarly with negative numbers: if you want to see a negative amount of something, try 1-meter hills and 1-meter holes, not cats.
(You could try to build an antimatter cat but that would be a very terrible idea.)
Direct representation, axioms, and a hypothetical disproof of solipsism
[a newer post after the one above; 2020]
People like to say mathematics is a language. It's easy to forget when you work with mathematical abstractions that are indirect references (words/symbols) that one can still do mathematics with direct representation.
It may look childish to count on your fingers, but if I ask you how many fingers you have on your hand, and you hold up 5, this is technically a correct answer! In this case, the map and the territory, '5 fingers', are exactly the same in the answer, as one did not even verbally say 5 fingers. If you 'subtract' 2 fingers by folding them, so you now have 3, this also expresses an equation non-verbally in a direct representation.
This also holds true for things that we may think of as indirect references that may not necessarily have to be: machines or calculation aids can operate in a 'move fingers' type manner to give answers.
Now, to get to the point: axioms can be thought of as just 'definitions', not true or false by themselves. I was a big fan of that school of thought for a while, even before I knew other people had thought of it too. But there's a crucial modification that I think needs to be made. A dictionary is useless unless, at some point, one stops the game of having words endlessly reference other words. In short, you either need to point at a meter stick and say 'Meter', or you need to have an object 'mean itself' (such as a word meaning 'word', perhaps, in our dictionary). Preferably use both strategies as appropriate, but the second one is slightly intriguing: can we build a map of meanings from the relations things have with themselves and others? If one ran through the dictionary and observed only the word 'word' mapped to itself, could we conclude the meaning of any other words simply through knowing 'word'?
We might define 2 by mapping 'Word word' to '2 words'. From there, we could get a simple numeric system!
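As a toy sketch of that idea: the representation below is entirely my own invention for illustration, building numerals out of nothing but the self-referential token 'word' and getting counting back out.

```python
# Numerals built purely from the self-referential token 'word':
# 'word' -> 1, 'word word' -> 2, and so on. This encoding is an
# assumption of the sketch, not a claim about any standard system.
def numeral(n):
    """Represent the number n as n repetitions of 'word'."""
    return ' '.join(['word'] * n)

def count(phrase):
    """Recover the number from a numeral by counting tokens."""
    return len(phrase.split())

print(numeral(2))           # the representation of 2
print(count(numeral(5)))    # counting recovers 5
```

It's a unary system, so hopelessly inefficient, but it shows that one self-mapped object is enough to bootstrap a simple numeric structure.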
As to the first one, pointing to an object and labeling, while I don't think it is true we have 'a priori' knowledge of Euclid or even of the existence of anything outside of our own thought (though, note: we can distinguish between conscious thought and unconscious thought, so a solipsist could deduce at least one type of thing outside of their conscious self), we do have basic default knowledge of qualia.
Our ideal base axiomatic system should be one that even a solipsistic person could agree on: we take it as a given that we have a 1, and that we understand what to increment it higher means or adding 0 to it means. We do not make any assumptions about curvature or lack thereof of space. Ideally, we'd like to deduce, using only the basic principles, that a certain universe with certain curvature (and more than one particle / entity!) must necessarily exist. That runs us smack into the nothingness problem of course, which shows that this hypothetical axiomatic system has two difficulties:
It must be pretty simple.
It must also be sophisticated enough you can build pretty much all of mathematics out of it, since by its very simplicity, it takes very few things axiomatically and they will have to be deduced.
But if it exists it would have immense payoff, because it would suggest that many sophisticated, currently difficult things are actually on their lowest level incredibly simple, including the nothingness problem itself. (Caveat: 'Simple' here does not necessarily mean 'intuitive to human minds', but rather of being composed of very few pieces.)
A scientific consensus is not a consensus of scientists per se
People often confuse these two; here's a simple example to tell the difference.
Admittedly the following example is fairly silly...
Consensus of People Who Happen To Be Scientists:
A group of scientists come together, and share pizza. They agree this is THE BEST PIZZA OF ALL TIME!!!!1111 and anyone who disagrees is full of shame and madness. They also admit this is just their opinion.
Scientific Consensus:
A group of scientific studies are conducted which all agree with each other to a high degree of accuracy (the pizza is found to be both highly nutritious and to outrank all other pizzas in deliciousness when eaten by people with almost no one ever saying it is mediocre), and theoretical analysis by scientists (say the pizza was found to be laced with drugs) that gets published agrees as well.
If a bunch of experiments agree with each other but there is no mathematical/theoretical analysis, the only consensus will be that the phenomenon is happening, not why (the pizza is delicious, but no one can track down the pizza maker to ask their secrets, and analysis doesn't turn up major differences from other pizzas). Conversely, if there is an analysis but no experiment conducted yet (a hypothetical most-delicious pizza laced with drugs, or a delicious pizza with hypothetical dark-matter deliciousness particles), the only consensus possible is that if the experiment is conducted it will likely confirm or disconfirm these results, and nothing firmer than that 'maybe'.
Re-written to be shorter and less goofy, it really is just this:
Consensus of scientists is a bunch of scientists having an opinion.
A scientific consensus is a consensus of studies and experiments, as in, actual measured facts and how theories measure up to these facts within certain, ideally controlled for, ranges of error.
Creativity as a methodical process
Some people view creativity as something you just have, like an in-born talent, and not something that can be learnt or applied as a procedure. I disagree: I think a person really can train themselves to be a bit more creative, even if some people may have a greater starting inclination to creative thinking.
First, though, we need to define what we mean by creativity.
I think of creativity as 'the ability to come up with novel responses or ideas', so that makes a good starting attempt at a definition.
Thus, that gives us an immediate idea of how to perform it methodically: simply iterate through currently accepted 'uncreative' solutions and generate a new one, say by mashing pieces together. It's the exact details of 'generate a new one' that provide the hang-up. Most would argue that if one simply mashed other solutions together to make a new one, it might be novel, but it wouldn't be very creative. It would also be disastrous if one actually tried to iterate over really trivial but nonsensical math proofs and had to do an infinite number of them before finding anything actually interesting.
Unfortunately, this means our starting definition did not perfectly capture the way people think of creativity, since technically our solution 'mash answers together' led to a novel answer, yet it wasn't that creative despite obeying our definition. A simple adjustment is to state that creativity is layered: things can be more or less novel, and we expect a certain threshold of novelty to be met before we gauge something as genuinely creative.
With that in mind, we can now go back to our hypothetical 'generate novel answers' procedure and adjust it to throw out new ideas that are too similar to old ones, and to just look at what all acceptable answers have in common or what the problem states acceptable solutions are like, and try to find something new in that general category, rather than mashing answers together. (Example: "What is your favorite food?" 'Uncreative' old answers: "Pizza." "Steak.", and a new answer of "Steak pizza" as a mashup 'barely creative' procedure, versus "Eggplant.", an answer drawn from the general category and somewhat more 'creative', being unlike the previous two answers.)
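A toy sketch of that adjusted procedure might look like the following. The word-overlap similarity measure and the 0.5 threshold are arbitrary stand-ins, not a serious novelty metric; the candidate answers reuse the favorite-food example above.

```python
# Filter candidate answers, rejecting any too similar to the known
# 'uncreative' answers. Similarity here is crude shared-word overlap.
def too_similar(candidate, known, threshold=0.5):
    cand_words = set(candidate.lower().split())
    for old in known:
        old_words = set(old.lower().split())
        overlap = len(cand_words & old_words) / max(len(cand_words), 1)
        if overlap >= threshold:
            return True
    return False

known_answers = ["pizza", "steak", "steak pizza"]
candidates = ["pepperoni pizza", "eggplant", "grilled steak", "pho"]

creative = [c for c in candidates if not too_similar(c, known_answers)]
print(creative)  # the mash-ups are rejected; "eggplant" and "pho" survive
```

Note that the filter only enforces the novelty threshold; deciding which surviving answers are actually good still belongs to the problem itself, as discussed below.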
Or, alternatively and more powerfully, we could ask ourselves: What do we want for the problem we are facing? Do we merely want to be novel, or do we want to capture the solution to the problem which none of the previous answers managed? Or do we want to capture as many possible solutions to the problem as possible, and while the old ones are adequate, we'd still like to know if there are any more that we haven't found yet?
I think we can agree that if one generated every possible solution, any creative idea to be had there would have to be within, even if the process itself didn't 'seem' terribly creative. That gets us back to the layers idea: it is possible for different parts of an idea to be more or less novel, such as the creation process, or subsections of the idea. (What is your favorite food? Everyone answers pizza. You answer 'blue pizza'. Part A 'blue' is creative, part B 'pizza' is not.)
It's also helpful to remember that creativity is ultimately a tool, and that what one may want in the end is to get the correct answer or a useful solution, rather than novelty for the sake of novelty. I think we can all agree if someone asks you what your favorite food is, you don't actually win points for answering 'The fabric of spacetime in an energetic configuration' even though that's a very novel answer.
Another way to think of it is, creativity isn't so much about generating new ideas, but new ideas with significant differences: we don't just iterate over acceptable solutions (ideas), we iterate at a higher level over concepts (types of solutions/ideas).
Thinking ___ the Box (A creativity problem)
Let's do an example problem. We've all heard the phrase 'think outside the box' to characterize creativity.
How many variants can you think up on this phrase?
If you went 'inside, outside, uh... edgewise?' there's a better way than just guessing at random!
First, we can do the 'minimally creative' solution of coming up with variations on the initial answer: iterate through all directions and loop through 'Thinking _direction_ box.'
Next, we can ask how we can broaden this approach. Is it permissible to change other words in the phrase? We could do '_Verb_ inside the box', or 'Thinking inside the _noun_', or even methodically blank all except one word which we vary each time, '___ ____ ___ box', 'Thinking ___ ___', '___ inside ___'; our problem has become one of asking what the boundaries of the problem are.
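The iteration above can be sketched mechanically. The direction list is just an assumed sample, and the blanking step produces the slot templates described in the text:

```python
# Level 1: vary only the direction word in 'thinking ___ the box'.
directions = ["inside", "outside", "above", "below", "beside", "around"]
variants = [f"thinking {d} the box" for d in directions]

# Level 2: methodically blank any single word, leaving a slot to fill
# from whatever category of substitutions we decide is permissible.
phrase = "thinking outside the box".split()
slots = []
for i in range(len(phrase)):
    slot = phrase.copy()
    slot[i] = "___"
    slots.append(' '.join(slot))

print(variants[:3])
print(slots)
```

The second loop is where the problem quietly changes character: enumerating slot templates is really enumerating possible boundaries of the problem, which is the point made above.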
But we can go another step and ask if the original problem as stated or implied was even the problem most worth solving or the one the asker really wanted to solve. The asker wanted an exercise of creativity involving the phrase, yes? So do all the solutions really have to themselves be phrases? Could we post, say, pictures of things in boxes instead, or on top of them, or pictures of things in all possible positions around the box, or pictures of things surrounded by boxes? Or could we post a story about thinking creatively, or a movie about boxes, etc?
"I thought outside the box once, but the box had friends, and it had surrounded me. Doom was nigh."
This is a great habit to get yourself into: thinking about the problem itself rather than just the possible solutions. Often in the real world the actual problem is not a word problem, so if you learned about it through someone else's description, that description may not be the most useful way to think about the problem itself, or there may be a more interesting problem lurking right behind it that is better worth your time to try and solve. The 'box' is often accepting too quickly the implied phrasing or solution of a problem, although note that there will be times you actually WANT to think 'in the box', because the uncreative solution is actually the correct one. A good skill to have is to gauge when a situation actually calls for creativity and to be flexible, using it as needed.
If you understand the components that go into making something creative or not and how to iterate over the space of problem solutions (or the space of interesting problems to solve related to your original problem!), then you can easily switch between creative and uncreative solutions as needed. Master the box, make it work for you.*
*(the box is societal norms or implications, if you needed that metaphor spelled out.)
The oddest part of all this is that creativity can become a guessing game of what other people are thinking, rather than about generating novelty from known solutions, if the game is to generate novel solutions when the 'uncreative solutions' are not even given to you.
The Lottery Ticket Problem (Example Creativity Problem 2)
Suppose you play the lottery and want to share the prize with as few people as possible if you win, by choosing a number nobody else chooses. What should you choose? Chances are good the first number you think of will have been thought of by someone else.
Humans are very bad at this problem. Many people choose their birthdays. Another bad impulse I've seen is the response 'Other people will be clever and try to avoid such-and-such number because they think everyone else will choose it': all it takes is one other person being 'clever about being clever', or one person being uncaring and lazy, to make it a less-than-optimal number. Even if you haughtily think only someone of your own IQ level would think of such a number, if there are enough players there may be hundreds or thousands of people at that IQ level. Even if you reason that usually only idiotic people play the lottery, because the chances of winning are so low it effectively becomes a tax, that 'usually' is still going to bite you in the ass. 'Smart' people act stupidly all the time, or people may be buying the 'hope' of the lottery and the fluttering feeling that gives rather than actually expecting to win. A single rational strategy can only exist if all parties actually have the same values or goals; what is irrational to one party may not be to another!
One of your best options is simply to generate a completely random number, but not from your own head. Humans are actually quite bad at inventing random sequences: we tend to accidentally incorporate patterns, such as avoiding repetition, when real randomness may repeat numbers many times. Then check the generated number against dates and other famous numbers like the digits of Pi or e; if it matches one, throw it out and try again.
This procedure ensures that even if other people are 'uncreative' and 'steal' your solution (or rather, happen to think of it themselves by accident), they are nonetheless unlikely to settle upon the same number you do, if the range of numbers to choose is quite large.
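This rejection procedure can be sketched in Python. Everything here is illustrative: the `FAMOUS` blacklist, the `is_datelike` check, and the six-digit ticket range are stand-ins for whatever 'human-flavored' patterns your actual lottery would need to filter out.

```python
import secrets

# Numbers many humans gravitate toward. This blacklist is only a
# stand-in; a real one would be much longer.
FAMOUS = {314159, 271828, 123456, 111111, 777777}

def is_datelike(n: int) -> bool:
    """Reject 6-digit numbers that parse as DDMMYY or MMDDYY dates."""
    s = f"{n:06d}"
    a, b = int(s[:2]), int(s[2:4])
    return (1 <= a <= 31 and 1 <= b <= 12) or (1 <= a <= 12 and 1 <= b <= 31)

def pick_ticket(upper: int = 1_000_000) -> int:
    """Draw uniformly at random, rejecting 'human-flavored' numbers."""
    while True:
        n = secrets.randbelow(upper)  # cryptographic source, not your head
        if n not in FAMOUS and not is_datelike(n):
            return n

print(pick_ticket())
```

Only a few percent of candidates get rejected, so the retry loop finishes almost immediately.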
Heaps and certainty
The heap problem is a really old one, but I think it's only really challenging if you're stuck trying to describe it with the math you learned in preschool.
When is a set of objects big enough to be a 'heap'? It's not well defined, but it does follow a pattern. 2 objects really aren't a heap, but they're more like a heap than 1 object. 3 objects isn't much of a heap. Infinite objects would definitely count as a heap. Basically, it's a nonbinary value, heap-ness.
It might be tempting to relate heapness to uncertainty, but there isn't really any uncertainty about the number of objects, just laziness in our counting. We might then call a heap whatever a given individual, driven by culture and evolution, finds the tipping point where they can't be bothered to count individuals because it isn't worth the cost. There's thus nothing particularly deep about heaps or related concepts like when someone gets dubbed 'bald'.
However, suppose we really did want to have a kind of concept that was fuzzy in its boundaries, like heaps, yet in its limits (0 or infinity) spit out clear answers. Can we do this? Yes, yes we can.
An obvious solution would simply be to make a function that accepts inputs from 0 to infinity and outputs numbers from 0 to 1, gradually increasing as the input does. In the case of heaps, logarithmic increase would probably be more appropriate than linear, as 'heapness' shoots up pretty fast after 3.
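One hypothetical such function, as a sketch: the exact formula below is my arbitrary choice, and any monotone map from [0, infinity) into [0, 1) with a fast logarithmic early rise would serve equally well.

```python
import math

def heapness(n: int) -> float:
    """Map a count of objects to a degree of heap-ness in [0, 1).

    0 objects -> 0.0; rises quickly for small counts (logarithmically),
    then flattens out, approaching 1.0 as n goes to infinity.
    """
    if n < 0:
        raise ValueError("cannot have a negative number of objects")
    return 1.0 - 1.0 / (1.0 + math.log1p(n))

# Heapness climbs steeply after just a few objects, then saturates.
for count in (0, 1, 3, 10, 1000):
    print(count, round(heapness(count), 3))
```

The limits behave as the text asks: the function spits out a clean 0 for nothing, and tends to a clean 1 for infinitely many objects, with fuzz everywhere in between.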
Another way to do it is to find some way to say rigorously, 'heaps are a collection of many objects, where many is relative rather than absolute'. Then feed in two or more inputs: the standard (I am used to seeing groups called 'heaps' at around such-and-such size) and the thing to be evaluated as a heap. If it is a partial match, it is a bit heapish; if it is a total match, it is definitely a heap.
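A minimal sketch of this relative version. The default standard of 10 and the linear ramp toward it are my own arbitrary choices for illustration; the point is only the two-input shape of the function.

```python
def heapish(count: int, standard: int = 10) -> float:
    """Degree to which `count` objects match this speaker's standard
    for a 'heap'. 0.0 = not a heap at all, 1.0 = definitely a heap."""
    if count <= 1:
        return 0.0  # a single object is never a heap
    # Partial matches ramp up toward the standard; total matches cap at 1.
    return min(1.0, count / standard)

print(heapish(5))    # partway to this speaker's standard
print(heapish(100))  # well past it: definitely a heap
```

Two speakers with different standards simply pass different second arguments, which captures the 'relative rather than absolute' part directly.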
A third way is to try to capture the idea that heapness is about having quantity in a 'deformable' manner. We can think of a heap not as a strict numeric set but as a category: something with a set of transformations under which it stays the same, a sort of abstract symmetry under rotation. We could then introduce conditions under which the symmetry breaks completely, if we wanted to make it more complicated or just wanted a natural breaking point where there is no more heapness. Since we're adding along the number line, a natural easy first guess would be to bend the line into a sort of near-circle that breaks down at one 'end', so that it is almost but not quite symmetric under rotation to a new spot on the number-line-circle; after all, almost anywhere you choose among infinity is guaranteed to be a heap.
A cautionary tale of jumping to favor a hypothesis
Alzheimer's: what causes it? Quanta has an overview of the theories.
Without testing, people jumped on the amyloid hypothesis and neglected other approaches. Now the amyloid hypothesis has failed to bear much fruit, and other approaches, including the idea that there might not be a single cause, are finally getting more attention.
Science only works when you properly treat it as science, with uncertain hypotheses. But there is a troubling tendency for people to leap to conclusions and treat things as firmer than they are even when they should know better as scientists. Another example of this would be string theory.
Often, this is because of a messed-up incentive system created by nonscientists. A system that demanded a hypothesis actually reach a certain level of evidential support before it receives a disproportionate share of resources and funding, compared to the other available hypotheses about a problem, would help fix this. Unfortunately, that's not the system we have.
The public funds a lot of medical research, but pharmaceutical and other companies disproportionately benefit from it and choose a lot of the research directions. Actually, because of how the American health system is set up, even when existing research is fairly overwhelming, what is most effective and what insurance actually covers (and thus what is incentivized for further study even without additional funding, since it's easier to study a large sample of the population) will often be very different things. Take for example exercise and massage for hip and spinal pain, versus surgery and opiates.
However, there's possibly another factor in why things tend to get ignored. The Pareto principle says that 20% of contributors tend to produce 80% of the output. I think in science this sometimes gets even more extreme (only sometimes, because lots of studies and replication of results are important). So even if things were structured much better, I think you'd still see something like 1% of scientists producing superior output while the others chase geese, simply because science is hard.
That doesn't mean we should make it harder on ourselves with a shitty incentive system.
An old post on determinism
[August 2019]
(Other posts on determinism, qualia and free will can be found in my autovaluism posts)
This one is about both math and philosophy, mostly philosophy grounded in math for inspiration. It's a thought I've had for a while but kept forgetting to write down.
If you have a procedure to determine things, that does not mean those things are already determined, only that your procedure to determine them is. For example, imagine there were no procedure to determine anything. We can clearly agree this would leave everything indeterminate forever; thus, even in a universe where things have not yet been determined, there needs to be a method or procedure for determining them. If this procedure itself is already determined, as it pretty much must be if we are ever to get past that first state of uncertainty, this does not mean the results of the procedure have already been carried out and thus determined! I don't like to assume an observer 'outside of time' if I can help it, nor am I a big 'illusory time' fan: for something to be an illusion, you have to clarify what about it is not real, and to say change is illusory because it all exists 'at once' posits that it is possible to be such an 'outside of time' observer, which makes very little sense: observers change.
This means, for those who don't want to parse that paragraph, that one could have a deterministic universe in which your will is not yet determined, since the procedure to determine it has not actually been performed. Whether or not this is true does not change the fact that you could, hypothetically, have this. For a given definition of free will, free will and determinism are not actually incompatible, if part of the procedure that determines things is in fact your will.
Note that this assumption (that it is possible to have a determination procedure that has not yet been carried out) inherently contradicts the concept of a Platonic ideal mathematical space where all possible calculations already 'exist'. In that case, not only have your actions been determined, but so have the actions you didn't take, so long as they are also mathematically valid, which is quite contrary to what most people mean when they imagine something is determined (we generally mean just one action, not all of them in every possible world). And if one thinks that reality actually is mathematics, it leads to the possible conclusion that everything has already happened and time is just an illusion, since we would necessarily be living in that Platonic space. This would easily, but not necessarily*, be a variant of Many Worlds, which avoids having any determination procedure in the first place (for wavefunction collapse, specifically) simply by taking every possible path and making a different world for each one. One has no free will in this Many Worlds picture, and presumably has some alternate reality somewhere in which they are Hitler's clone, as well as another in which they punch Hitler's clone, unless that is literally mathematically impossible, since with infinity anything that can happen will.
It does not, however, contradict the idea that we live in a 'Platonic mathematical space' where the Platonic mathematical entities are not things like line segments but ourselves, and the calculation is still going on dynamically. This kind of universe would require a rather different mathematics than what we (or you, I should say) are taught in high school. Dynamics and an arrow of time do not bake into that mathematics very well as hard requirements, since space and time are easily exchanged there, and it seems unfriendly to traditional points and triangles: we don't exactly see a lot of perfect, traditionally-Platonic ideal triangles floating around, for one; our universe seems to be quantized and non-Euclidean, and if that is a necessary condition, the mathematics should say why.
*One could make the wavefunction fully deterministic, for instance, and make the moment of 'choice' happen elsewhere down the line of calculation, for a non-Many-Worlds version of this Platonic Ideal universe. This really doesn't give a drastically different scenario from the MWI, since one still has every mathematically coherent world already calculated out; unless there is only one**, there will be multiple worlds, just for a different reason than in the quantum-mechanical Many Worlds Interpretation.
**(Which I admit is something I actually suspect. When one demands that there be physical states, especially finite ones, Gödelian logic about consistency vs. completeness does not fare so well, since it assumes infinity; first-order logic doesn't demand real physical states as an axiom and is comfortable with infinities, and it is very difficult to have a physical inconsistency or incompleteness in a traversable physical world where you can double-check things for yourself. Physics is extremely demanding mathematically: it asks for specific symmetries, and hints at demanding that certain laws be the way they are for mathematical reasons, like energy conservation being a form of symmetry, or relativity falling out of electrodynamics. Tiny deviations give drastically wrong results, like putting cubes instead of squares into your equations. And primes and quantum mechanics seem to have a strange connection: if the Riemann hypothesis is true, the symmetry of the zeroes of the zeta function acts like the demand that energy eigenvalues take on only real values.
This means it is not wholly unreasonable to guess that there may be a single nontrivial consistent and complete mathematics of physics, despite Gödel's infamy, and that this mathematics deals well with questions that are extremely hard in conventional first-order logic systems, such as how primes behave, or how to formulate quantum mechanics without renormalization troubles like badly behaved infinities popping up in your gravity.)
What mathematical proofs do and do not prove
It's annoying that I have to write this one, but some people are really damn ignorant (the types who shout 'prove it!' usually don't understand what 'prove it' really means in the full context of what is and isn't possible or reasonable), so I figured it was best. I could have sworn I'd already written it, but I don't see it, so guh, whatever.
Mathematical proofs are based on axioms, rules that are taken as givens. The resulting proofs basically say: IF these axioms are true, THEN this necessarily follows. This makes mathematics a really powerful tool if you can find a good set of axioms to map to your system, since then you'll have a very reliable set of results for as long as those axioms are indeed a good approximation to reality. If you can find a pair of axioms where one or the other must be true, say the idea that space is Euclidean, then you can learn a great deal from experiment by looking for contradictions of one's predictions. There is nothing especially deep about this, despite all the philosophical bloviating you might hear over the majesty of math: that we can do this just indicates that reality isn't an incoherent contradictory mess, which is great, since I'd like to not spontaneously be furiously dreaming of green gangly melodies in the timeless llama futures that have already happened, as pretty as that might sound.
The IF part is pretty damn important.
It is quite common for something to be true in one mathematical system but not true in another.
Gödelian incompleteness is a really good example of something that talks about a lot of axiom systems, but not all of them, and people sometimes end up conflating this with 'some guy proved reality will always have unproven truths'. That is, frankly, not what it says at all, and it is completely unrelated to whether that proposition is true or not. For one, it invokes infinity, and we don't even have an observably infinite universe. For another, it's a lot more like 'some statements are neither provable nor disprovable in a given axiom system that is rich enough to have certain kinds of numbers'; they correspond to some number that isn't on your countable line at all, but one in the uncountables that you would have to alter your setup to compute, which would result in other entities not being computed, so you'll always miss some.
"Gödel's second incompleteness theorem states that in any consistent effective theory T containing Peano arithmetic (PA), a formula CT like CT{\displaystyle =\neg (0=1)} expressing the consistency of T cannot be proven within T. " - ye ol wikipedia
Note the 'Peano arithmetic'. Reality is under no obligation to use the same axiom systems we like, and frankly we humans haven't been at this very long. Therefore, it's a little premature to be awed at how good we are at using math to map to reality, or troubled that our math sometimes breaks down a bit when faced with especially difficult problems in extreme conditions that are hard to experiment with directly, like black holes. Our math worked well because we mapped axioms that looked like they applied to reality and which we could test for experimental contradictions, so we should be less surprised when a part of reality we can't observe directly proves more difficult to wrangle; our ability to do that mapping has been impeded!
All this said, if someone proved something that other people said should be impossible, like coming up with an axiomatic system that solves the nothingness problem and successfully gives U(1)xSU(2)xSU(3) and the rest of physics, that'd be pretty fucking major. If someone says something is impossible without X, they're often implying they think it's impossible in every possible axiomatic system, after all. If someone came up with an axiomatic system that was logic-symbol and math-only (so no having 'God exists' or 'Perfect things must exist' or 'This seems ludicrous to me so it cannot be so, which leaves this as the alternative' among your axioms, because those are bullshit; it is not a proof if you assume the thing you wanted to prove!) and created a god out of it, it wouldn't be an actual proof that god must exist, but I'd still be pretty damn impressed that you looked at God's underwear like that and confirmed it has exactly 42 polkadots and Pi leg-holes because your integral says so. If it used only axioms similar to those used in physics today and successfully generated predictions of the features of the universe, I'd be even more impressed, if also a little disturbed that God(dess) has no free will and apparently had to make the universe this way, but, eh, free will is over-rated. Alternatively, I'd be confused why all this math that doesn't invoke god for 99% of it has to produce one, but, hey, maybe Almighty Dog only created the planet of Arthlfarkl or the State of Texas.
If nothing else, I'd find it pretty funny.
Is Probability About Information? The Monty Hall Problem
[tldr: (a) replace the scenario's people with a computer AI which 'knows' everything but acts as if it doesn't; or (b) replace the guy running the game with a bomb, triggered by a choice, that blows up a poor goat. The bomb doesn't have knowledge, but you can make a contorted definition of information that the bomb 'carries'.]
The Monty Hall problem is a famously perplexing problem that shows just how counter-intuitive probability can be. It became infamous for people sending in 'dis-proofs' of it, which is a little sad because it is not that difficult to demonstrate its truth by writing up a table of possibilities and seeing what happens when certain actions cross some of them off.
Suppose you are on a game show with three doors: behind two are goats, behind one is a car. You choose a door. The game show host then opens one of the other doors, revealing a goat, and offers you the chance to switch doors. If you switch, do your odds improve? Intuition says the odds should be 50/50, but in fact switching improves your chances from 1/3 to 2/3. This is because the game show host's choice of door is not random: he always avoids the door with the car as well as the door you originally chose, so there is a hidden correlation between doors.
This seems to indicate that probability, and thus our physical reality, actually does care about information, and that the theory that probability is just a reflection of our ignorance is the correct one.
However, this is not necessarily correct, and it is easy to demonstrate why.
Suppose all the doors are open and you can see exactly which door has what, but you are forced to behave as if you do not know, say by flipping a coin or by going with a strategy you planned ahead of time. If you choose the exact same actions, will the mere fact that you know (but do not allow this knowledge to influence your actions) improve your odds? Of course not. It's even easier to see if we choose machines, which are completely brainless, to act out the Monty Hall problem.
Make the first choice of door always random. Then make one machine always switch doors and a different one always stay, and the machine that switches will do better.
In fact, go back to the scenario where you choose, change it to having all the doors thrown open when the showman removes a door from 'play', and keep track of how often you switch when you know which door has which. Since switching has better odds, that means under the perfect information after an initial-random-choice scenario, you will end up switching more often!
|G|G|C| -> |X|G|C|, |G|X|C| <- If you chose door 3, staying wins. If you chose door 1 or 2, the other goat door (the one X'd out) is thrown open. There are more scenarios where you initially chose a goat door than where you chose the car door.
|G|C|G| -> |X|C|G|, |G|C|X|
|C|G|G| -> |C|X|G|, |C|G|X|
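The table and the two 'machine' strategies can be checked with a quick simulation. This is only a sketch: each 'machine' is just a fixed switch-or-stay policy, and the host is modeled as always opening a goat door that isn't the player's pick.

```python
import random

def monty_trial(switch: bool) -> bool:
    """One round of Monty Hall. Returns True if the player wins the car."""
    doors = ["goat", "goat", "car"]
    random.shuffle(doors)
    choice = random.randrange(3)
    # Host opens a door that is neither the player's pick nor the car.
    opened = next(i for i in range(3) if i != choice and doors[i] == "goat")
    if switch:
        # Switch to the one remaining unopened door.
        choice = next(i for i in range(3) if i not in (choice, opened))
    return doors[choice] == "car"

trials = 100_000
stay_wins = sum(monty_trial(switch=False) for _ in range(trials))
switch_wins = sum(monty_trial(switch=True) for _ in range(trials))
# Staying wins about 1/3 of the time; switching wins about 2/3.
print(stay_wins / trials, switch_wins / trials)
```

The brainless switching machine reliably beats the staying machine, with no knowledge anywhere in the loop: the advantage lives entirely in the structure of which doors get removed from play.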
To make the problem more intuitively obvious, imagine that the showman always throws open the door with the car, instead of a door with a goat, after you make your selection. What are the odds that the door he opened is the one you happened to choose the first time among the three doors? Lower than the odds you chose a goat, since there are two goats to one car, right?
What this teaches us is that probability is not about information per se, but can be influenced by information when that information changes the behaviors that lead to certain options being selected or removed from play. In our case, that means initially choosing a door randomly among the three (a change from what we would've chosen in the 'know from the start' scenario), and then the showman's knowledge ensuring only a goat door gets revealed before we are given the chance to switch. Our knowledge matters in influencing our actions in the first stage, but matters far less in the second, as the relationship between the three doors is already fixed.
'Random' systems where information is completely knowable
It may seem odd, but one can still speak meaningfully about probability even for completely deterministic systems where everything is known, in the sense of averages and limits: as the number of rolls approaches infinity, how many times will one 'side' of our die come up versus the others? One can also ask that the system avoid patterns as much as possible, without avoiding them completely: a genuinely random system can still have streaks that look misleadingly meaningful. The idea is to avoid long-term patterns, not short-term ones. A string of 1,0,1,0 would give us a completely fair coin with sides [0,1], but we could hardly call it random. Likewise, if a sequence always carefully avoided having 10 zeroes in a row, or even 1000, it could have been produced by a random die, but we wouldn't say it had a very random distribution; we'd say it was quite patterned, since in a long enough run (like infinity) we'd expect to eventually get some zeroes strung together. So we wouldn't necessarily be able to 'get back' a nice fair coin from that sequence alone, because that kind of avoidance can itself be a bias (in fact, clump-avoidance is one of the easiest ways to tell a sequence was made by a human pretending to be random). The flipside: if we had a sequence of only zeroes, we definitely couldn't recover a fair coin from examining it alone; even though it might have been produced by a fair coin, the distribution looks very non-random.
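The point about streaks is easy to check numerically: a genuinely fair coin produces long runs of identical results that a human faking randomness would almost never write down. A quick sketch, with an arbitrary fixed seed so the run is reproducible:

```python
import random

def longest_run(bits):
    """Length of the longest run of identical consecutive values."""
    best = cur = 1
    for a, b in zip(bits, bits[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

rng = random.Random(0)  # fixed seed, purely for reproducibility
flips = [rng.randrange(2) for _ in range(10_000)]
# In 10,000 fair flips the longest run of identical results is typically
# around 13 or so (roughly log2 of the length) -- far clumpier than the
# clump-avoiding sequences humans invent.
print(longest_run(flips))
```

Note that the 1,0,1,0 sequence from the text scores a longest run of exactly 1, the signature of a perfectly 'fair' but perfectly patterned coin.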
This does become difficult to ensure for a pseudo-random series, for the simple reason that it is actually analogous to the Halting problem. If the 'permanent patterns' (which would make the series not random) you want to avoid are all combinations of algebraics (numbers formed finitely under addition, subtraction, multiplication, division, and roots), you need to be able to compute/determine the algebraics and then go one step further to 'distort' them without nudging them onto another algebraic; this puts you into the uncountables. One would thus need to, oh, I don't know, find some clever axiom that can get past current axiomatic difficulties with the Halting problem and Gödel's incompleteness (they are two sides of the same coin; here is exactly one of those truths we might like to find out but can't prove, because of Cantor's different-sized infinities not playing well together) if one wanted to fully conquer probability and randomness in the deterministic sense... something that would look very different from Peano arithmetic and all current axiomatic systems that aren't trivial to complete.
Don't confuse aesthetics for mathematics
I've noticed that many times people will describe an argument as 'mathematically motivated' when actually it is nothing of the sort. For instance, the statement that 'Planck was mathematically motivated; he wanted to create as few fundamental units as possible', when talking about the Planck length and mass, and relatedly c and G.
That's not a math argument. That's an aesthetics argument. Math doesn't give two fucks whether something is mathematically beautiful or really complex with lots of parameters, except that really complex with lots of parameters is harder for humans to handle.
Numerology is also accused of being 'just math', but the thing about numerology is that it is just picking out relations humans like in a really arbitrary way. That is to say, it's not clear that math is actually the motivator in any fashion, and it definitely seems like aesthetics is; there's no deep mathematical principle behind someone picking their favorite numbers out of some random data.
Take also the idea that 'string theory gives lots of interesting and appealing math, so it must be on the right track physically'. This isn't a math argument; it's an argument dealing with bits of math humans find appealing, which means it's actually an aesthetics argument that happens to involve math but isn't itself math. Well, okay, that's a bit of a strawman: usually the argument goes 'beautiful math worked in the past, so it'll work in the future', so it's actually aesthetics plus vague notions of probability. The probability part could be made into an actual math argument, hypothetically: we could look up all the times people reasoned 'thing X worked in the past so it will work in the future', tally whether it did or did not work out for them, and come up with a ratio. Frankly, though, it's not clear this ratio would be useful in any way, since the outcome clearly depends on how good the person's model was of why thing X was working. A turkey who thinks the farmer will treat it well just because he always has is not in a good position, but a sheep who realizes why the farmer cares (for its wool) is in a better one.
So a better argument would be why 'beautiful math' of a particular aesthetic is necessary. Then the aesthetics argument could be turned into a proper math argument. It frankly seems likely that whatever the final math is, someone somewhere will call it beautiful.
What separates great math from mediocre?
A cavewoman looks at a squirrel, and makes a profound statement about her pet trash-loving wolf 'Dog' who is currently unleashed and seconds away from spotting the squirrel:
"Dog like chase squirrels. Dog will chase squirrel." This was the first time in history this statement was ever uttered.
Dog chases the squirrel.
The caveman gazes at the cave-woman, his eyes wide at her wisdom. "So profound. Prediction dog chase squirrel came true! How can be possible, we make statement about things, and then things come true?"
Math is language made precise, or alternatively, it is the property of objects that we can stick arbitrary labels on them and their properties and have the universe stay non-contradictory when we proceed to talk about them, provided we didn't fuck up our observation of those properties too badly (such as hallucinating something that isn't there, or not allowing enough broadness in the definition to cover our uncertainties). It really isn't magic; we can talk about anything mathematically in a pretty trivial way, simply by saying it is true by definition that A isn't B, or some other relationship. For instance, 'non-cats' are defined as things that aren't a cat. (If this seems strange for the lack of any mention of numbers, recall that binary true/false maps to 0 and 1, and similar relationships exist for nonbinary systems.) To say the world is mathematical is really just to say that the world is non-paradoxical and that we can put labels on things and relationships, and not much more than that. Math is pretty broad.
But this broadness means that math encompasses both epicycles and Newton's gravity.
A common argument is that math should be beautiful, and Newton's work in this case is seen as more beautiful. But whose concept of beauty do we use? Beauty is notoriously culturally fickle: just look at how fat was once considered beautiful, and now it usually is not. And an infinite symmetry group might be really beautiful, but there's no known reason to think the universe uses such a thing.
Math can both elucidate and obscure. Consider the misuse of statistics to lead to unsupported conclusions, or the gambler who knows in the very long run the odds are 1/2 for heads and tails and so assumes if he waits long enough he'll get all his money back (even if that were true, 'very long run' in math often means 'at infinity', and I doubt any gambler wants to wait that long).
Math can simplify, and people choose models with simplifications. But one should not mistake this as saying that math is only about simplified models. One can easily point to math problems that no one knows how to solve, which are ridiculously difficult and often more complicated rather than simplified versions of something, yet are still recognized as math. Nothing stops me from trying to model my system with an infinite polynomial, except that this will give me a much more horrible time than if I'd chosen f(x) = y. So this tendency toward simplified models should be understood as an artifact of people selectively choosing problems they can solve, rather than as something inherent to mathematics itself! Done right, a model should be as simple as one can get away with, and no simpler: revealing as many features of the system as possible, without introducing extraneous features that likely don't exist, aren't needed, and just confuse.
Great math is what elucidates, rather than obscures.
A good scientist focuses on the truth, not about being right
There are few things, I think, that better encapsulate the difference between a (good) scientist and your typical nonscientist, and it is often poorly understood by said nonscientists.
Being 'Right' is not the most important thing. Carrying out procedures so we can figure out what is actually truthful is.
The pseudoscientist comes up with an idea. They then proceed to look for anything whatsoever that will support it, and if they are vindicated, they crow loudly about how good at science and logic they are.
However, if you throw out thousands of ideas, one of them may be right by sheer chance. And if you shift goalposts, you can make yourself look right when what you've really done is made your hypothesis unfalsifiable, unpredictive, and unscientific.
For me, some of the greatest scientists are names we will less often hear of, because they realized they were wrong and they self-corrected.
German physical chemist Friedrich Wilhelm Ostwald (1853-1932) was originally against atomic theory, and is said to have even denied the existence of matter, but later (1908) came around, convinced that experiment had vindicated the existence of particulate matter.
Max Planck was another person initially skeptical of atomic theory, despite having helped start off quantum mechanics in the first place by coining the term 'quanta'. He was at first influenced by Mach, who said atoms were just a convenient way of describing nature and that we should stick to what we experience, but Planck later became vehemently opposed to Mach's notions: to Planck, laws like the conservation of energy were definitely real and not fiction.
Michelson and Morley were not relativists; they performed their experiment to try to measure the aether. They expected to see their light beams shifted by the motion of the earth through space. But when that did not happen, they still published their results! This is an interesting example because it is actually a pretty famous experiment, so it shows you don't have to get everything right to become a famous scientist respected for your work.
Rather, it is the pursuit of actual truth, subject to peer review and replication, not one's ego, that is most important.
Now, here's an example of a very BAD scientist:
Lysenko, the man who starved millions and probably killed more people than any other single scientist, is for some fucking reason (actually, it's simple: he hated the right people) regaining some popularity.
1. The man had scientists who disagreed with him arrested. This is the exact OPPOSITE of how you do science. Having people who are skeptical of you try to replicate your work is how science becomes actual science and not pseudoscience.
2. He ignored all possibilities his idea might be bad, and decided to carry it out on millions of people instead of testing it on smaller samples and trying to falsify his own idea.
3. He murdered a fuckton of people with his ideas and still didn't back down despite the clear falsification. In fact, he may have faked the results of his original experiments in order to get the output he wanted.
I have seen a defense of him on the grounds that, well, Mendelian genetics isn't the sole explanation for how real genetics works, but only one part of a whole that also includes epigenetics, therefore Lysenko was a good scientist. THAT IS NOT HOW IT WORKS.
'My opponent was slightly wrong' does not mean you are right! The fact is, Mendelian genetics is real and many genes do obey it. Lysenko's position was that this was wrong, and he was so sure of it he got fellow Russians killed.
Science does not come in 'East' and 'West' versions, there is just one science. It does not care about your damn ideology. If you really want to celebrate a Russian scientist, why not Vavilov, a truly amazing man who wanted to end world hunger? Further, the idea that you can 'educate' a crop by exposing it to freezing water isn't even a concept of genetics that originates in Russia, it's just the old 'A giraffe got a long neck by stretching it and its offspring inherited that neck' yarn that has been around for ages. You can even find a version of it dating back to just-so stories in Africa, with the baby elephant who gets a long nose from a crocodile biting it and the leopard getting its spots from a man painting spots on it. It is ridiculously stupid, but people have believed in this kind of 'evolution' for a really long time. It's not original.
He killed at least 37 million people (many of them in China from the communist party adopting his methods) from a bad idea that no one was willing to let go of. That is staggering.
That is as anti-science as you can get.
https://www.encyclopedia.com/history/historians-and-chronicles/historians-miscellaneous-biographies/wilhelm-ostwald
https://www.theatlantic.com/science/archive/2017/12/trofim-lysenko-soviet-union-russia/548786/
A scientist who murdered people for profit, not by accident
So there's this interesting video on youtube which is mooostly good, but I have one very strong objection to the title.
It's called 'The man who accidentally killed the most people in history', except it wasn't a fucking accident. Midgley frikkin' knew lead caused problems. He deliberately poured tetraethyl lead over his hands and inhaled its fumes in front of an audience, even while he was having health problems from lead exposure, to prove how 'safe' it was when he knew there were reasons to think it wasn't.
'The toxicity of concentrated TEL was recognized early on, as lead had been recognized since the 19th century as a dangerous substance that could cause lead poisoning. In 1924, a public controversy arose over the "loony gas", after five workers died, and many others were severely injured, in Standard Oil refineries in New Jersey. There had also been a private controversy for two years prior to this controversy; several public health experts, including Alice Hamilton and Yandell Henderson, engaged Midgley and Kettering with letters warning of the dangers to public health.
[wiki ref: https://en.wikipedia.org/wiki/Tetraethyllead#cite_note-Kovarik2005-17 the actual ref might not exist there any more, being wiki and all; I probably should have been more careful when I grabbed the link.]
After the death of the workers, dozens of newspapers reported on the issue. The New York Times editorialized in 1924 that the deaths should not interfere with the production of more powerful fuel.
To settle the issue, the U.S. Public Health Service conducted a conference in 1925, and the sales of TEL were voluntarily suspended for one year to conduct a hazard assessment. The conference was initially expected to last for several days, but reportedly the conference decided that evaluating presentations on alternative anti-knock agents was not "its province", so it lasted a single day. Kettering and Midgley stated that no alternatives for anti-knocking were available, although private memos showed discussion of such agents. One commonly discussed agent was ethanol. The Public Health Service created a committee that reviewed a government-sponsored study of workers and an Ethyl lab test, and concluded that while leaded gasoline should not be banned, it should continue to be investigated. The low concentrations present in gasoline and exhaust were not perceived as immediately dangerous. A U.S. Surgeon General committee issued a report in 1926 that concluded there was no real evidence that the sale of TEL was hazardous to human health but urged further study. In the years that followed, research was heavily funded by the lead industry; in 1943, Randolph Byers found children with lead poisoning had behavior problems, but the Lead Industries Association threatened him with a lawsuit and the research ended'
- From wikipedia but you can check out the references yourself. I don't have any reason to think they've invented it since I've seen the same story elsewhere. Were this a scientific paper I'd be more thorough obviously, but it is not.
Health measures to get lead out of our gasoline led to possibly the most massive one-generation drop in crime in history, and an increase in IQ as fewer children were exposed. Never, ever think deregulation of goods and removal of safety checks is a good idea.
This is why it's important to ask where funding is coming from, and to pay attention to studies and not confuse them with the opinions of scientists. Look at the fossil fuel industries, or the 'alternative medicine' folks, who are actually a multi-billion dollar industry trying to tear down regulations, despite what all their crying about 'big pharma' would have you believe.
The Spock Fallacy and the Maui's Fishhook Fallacy
I caught a snippet of the Moana movie the other week during the "You're Welcome" song, where the demigod Maui sings about how he can 'explain anything' and how humanity should thank him for all the things he did, and it made me think of a pretty common fallacious impulse in people. Namely, a failure to appropriately use Occam's razor: thinking that just because something can explain everything 'simply', that makes it a good explanation. People making this mistake may even believe they are correctly using said razor.
Sadly, the statement 'you are not using Occam's Razor' does not have the same impact on people who think it just means 'simplest explanation' as it does on people who actually understand what it means. So I thought it would be nice to give this particular failure a name. Maui's Fish-hook, in contrast to the razor, is an object that can explain everything and predicts absolutely nothing. That is, you could use it to explain things that do not even exist or aren't true, and it doesn't cover all the available facts. For example, let's say you see a new island that was not there a year ago, just a tiny patch of land, and you exclaim 'Maui must have dug it up with a fish-hook!', but when you actually investigate, it turns out there was a steadily growing underwater volcano that only just now broke the surface. If you ignore the evidence of the volcano and insist Maui's fish-hook is still a good explanation, because it can explain why the island is there more simply than geological processes can (as well as explain everything else, not just islands), you've completely failed to use Occam's razor.
For another example, let's say someone tells you as a prank that fish are raining out of the sky, and you exclaim that Maui must have flipped them into the air with his fish-hook; you've just explained something that isn't even true. This also fails to use Occam's razor properly, because a good explanation should not account equally well for things that don't even happen: the razor shaves off the excess and predicts only what is actually true.
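One way to make the fish-hook/razor contrast precise is Bayesian: a hypothesis that 'explains' every possible outcome equally well gains no support from any observation, while one that risks a sharp prediction gains a lot when the prediction comes true. Here's a toy sketch of that idea; the outcomes and all the probabilities are entirely invented for illustration:

```python
# Toy Bayesian comparison: a 'fishhook' hypothesis that explains any outcome
# equally well vs. a 'volcano' hypothesis that makes a sharp prediction.
# All numbers here are invented purely for illustration.

# P(outcome | hypothesis) over three possible observations
fishhook = {"island appears": 1/3, "no island": 1/3, "island sinks": 1/3}
volcano  = {"island appears": 0.9, "no island": 0.09, "island sinks": 0.01}

def posterior_odds(observation, prior_odds=1.0):
    """Odds of volcano vs. fishhook after seeing one observation."""
    return prior_odds * volcano[observation] / fishhook[observation]

# Seeing the island strongly favors the hypothesis that risked a prediction:
odds = posterior_odds("island appears")
print(f"odds volcano:fishhook = {odds:.2f}")  # 0.9 / (1/3) = 2.70
```

Because the fishhook spreads its probability evenly over everything, every observation leaves it exactly where it started; the hypothesis that concentrated its probability on what actually happened is the one that gets the credit.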
There's another fallacious concept that one can readily name after a popular fictional character. Mr. Spock is famous for pitting logic against emotion, but there is a crucial mistake in assuming these two are always opposed in the first place. In reality, without emotion there would be no reason to do anything at all. Emotion and logic are not wholly opposed, although one emotional desire can overwhelm the logical thinking that would help satisfy a different emotional desire. For instance, the fearful desire to run away can oppose the desire to stop being afraid by no longer having an enemy, which would require facing that enemy; but this opposition is not a logic-versus-emotion one as commonly depicted ("Fear is the mind killer", quoth Dune). Both desires are based on emotions.
You can see this all the time in overly simplified game theory and economic models that assume rational agents are perfectly selfish and forget to add the qualifier that real people aren't. Calling selfish play 'rational' is only justified if one assumes that getting as many points/goods as possible is an inherently correct and rational goal, i.e. that one actually values those things. If one does not, pursuing them is actually quite irrational. For someone who gets more pleasure out of cooperation, it makes no sense to defect in a single round of Prisoner's Dilemma, but it is not uncommon for people who choose to always cooperate to get labeled irrational, without consideration that they may have no real interest in the reward in the first place. If one asks the player to pretend to be an extremely selfish person and asks what that person would do to guarantee no loss, one might very well get a different answer! I think most of us would intuitively expect such a person to defect when defection carries no grievous cost to the defector, only a suboptimal result, or when the grievous cost is less bad than what the opponent suffers, because selfish people generally have no problem being spiteful. Nor would we be terribly surprised at someone we knew was selfish choosing to always defect over many 'one round games' with new people each time, because this is how grifters act in real life.
There is also something decidedly funny about a game where the method of getting most points isn't considered the winning strategy when your opponent is supposedly rational as well. This is where having a concept of goals is important even for a selfish agent: do you want to tie or better with your opponent, or do you value points for their own sake? Someone who wants as many points as possible, say because points are food and they need a certain amount to survive the winter and they don't know how much that is, someone who experiences the emotion of hunger, might be better off choosing cooperation in a single one-off round even though it is risky.
Once you start adding on additional complexity to Prisoner's dilemma, such as extra rounds that let you punish defectors, or motivators like a randomly harsh winter that requires a certain amount to survive, human emotional considerations look much more rational. But even if you did not, there would still be no reason to accrue points unless one actually wanted them.
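The point that the 'rational' move depends on what the agent actually values can be sketched in a few lines. The payoff matrix below is the standard Prisoner's Dilemma one; the 'warm glow' bonus for mutual cooperation is an invented number purely for illustration:

```python
# Sketch: the 'rational' move in a one-shot Prisoner's Dilemma depends on the
# agent's utility function, not just the raw point payoffs.

# Standard payoffs: (my points, their points), indexed by (my move, their move)
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def selfish_utility(my_move, their_move):
    return PAYOFF[(my_move, their_move)][0]  # cares only about own points

def cooperator_utility(my_move, their_move):
    mine = PAYOFF[(my_move, their_move)][0]
    # Invented 'warm glow' bonus: this agent enjoys mutual cooperation itself.
    return mine + (3 if my_move == their_move == "C" else 0)

def best_move(utility, their_move):
    return max(["C", "D"], key=lambda m: utility(m, their_move))

print(best_move(selfish_utility, "C"))     # D: defection dominates on raw points
print(best_move(cooperator_utility, "C"))  # C: mutual cooperation is worth more to this agent
```

Both agents are maximizing expected utility perfectly rationally; they just aren't maximizing the same thing, which is the whole point.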
The nonexistence of clowns fallacy, aka reducto ad absurdum, and the role of ridicule in arguments
[this is an older post. I had trouble keeping my temper toward the end; I don't like the 'maliciously foolish', where you can't tell foolishness apart from malice, or the act of malfoolery as I like to call it. This post is not about science per se, but about arguments that hopefully would try to make use of it instead of pulling ideas out of one's ass.]
To add another goofy name to my list of names for silly logic mistakes, I thought of something that I see a bit too often:
"You believe in talking snakes and just slavery / the universe poofed into existence / the earth is round? / the earth is flat? / That we shouldn't enslave people even if they commit crimes?
How could you believe in something so ridiculous!"
Just because something is absurd does not mean it cannot be true. Clowns are absurd. Yet clowns exist.
"You believe in clowns? How could you believe in something so ridiculous!"
One should ideally save their calls of 'ridiculous', and their ridicule, for failures of behavior, like failing to accept overwhelming scientific evidence and basic observable fact (call it failing to believe in evidence, if you like, it doesn't matter unless you think calling it belief rather than acceptance somehow puts belief in forks and spoons on the same level as belief in the Goddess Bastet).
That said, there is some argument for calling things ridiculous when they truly are, and if ridicule were proven in a study (or a series of studies - I wouldn't trust a single study without replication unless it had very high sample sizes or methodology crafted to make up for lower ones) to be effective on someone who did not yet understand how basic science works, I might support a limited amount of the less careful kind. However, I have seen no reason why this method could not simply be turned around against you: you may have noticed that what seems ridiculous is often in the eye of the beholder. One person may think it obvious that we all choose our personality and anything else is completely ridiculous; another may think the opposite is obvious.
So I think that, even if a study came out tomorrow showing ridicule to be very effective rather than just causing doubling-down, I would still prefer to reserve ridicule for behaviors like being a jackass and cherry-picking after being told not to cherry-pick.
I also can't help but forgive, to a degree, tit-for-tat ridicule: someone who employs absurdity to swear you must be wrong while ignoring their own absurdities could use the correction, and nobody has a perfect temper.
But one should probably try to resist making that the main argument: even if the other person is being profoundly stupid, you don't really want to give them the ammo that 'the only response my opponents have is to be mean and call me stupid and silly!'. No one is perfect, so I must admit that in the past, particularly as a teen (teens are not known for their great self-control generally), I have gotten frustrated with dimwitted bullheadedness and called people stupid for it, but in general I really do not think this is effective, especially if the other person is deliberately trying to flame you into giving such a response so they can feel like a hero for it. Don't let the other person taunt you into a battlefield of their choice; force them to debate on your terms with high standards, which may mean they must first be willing to write like a professional and cite actual work, accept the results of repeatedly reproduced experiments, and use mostly proper spelling and grammar - that will get rid of all the people who either refuse to use capitalization or use only capitalization (hiii CAPSLOCKSSSS) and who tend to pull things out of their rear with no source. You know the kind. If someone is not willing to engage in what you consider a high-quality polite debate with professionals judging the results (such as via submission of a counter-study to an academic journal), only in what they consider a 'debate' - especially if they want to do it in front of an audience of their personal fans who already agree with them and will see them as the winner no matter what they say - they are probably not worth your time.
Ridicule is appropriate in the context of a comedy show, where one implicitly understands that beliefs will be mocked and that one is potentially drooling their brain into an echo chamber for an hour when they choose a comedian who echoes their pre-existing beliefs. (Interestingly, however, I recall reading a study that found NPR was number one for accurate news and comedy news shows were number two for accurate information, so if the comedy is a news show specifically, one actually has fairly good odds of getting accurate information from it; I would be less certain of the source if one is getting the news as a side tidbit in a general comedy that isn't actually devoted to news.)
For instance, I know some comedians (our existent clowns) hate, or at least have mixed feelings about, the concept of social justice, and plenty of others love it. (The ones who do comedy news tend to love it, in my less-than-professional overview of the ones I'm aware of, and comedy news is apparently an accurate news source, so make of it what you will that social justice is associated with accuracy here.)
I'm fine with comedians making jokes, but I'm also fine with them being critiqued for the implications of their jokes; you probably won't see me among the critiquers, mind you, because I just don't care that much for repeating the same arguments that have already been made ten thousand times. Someone who cares more has almost certainly been far more eloquent about the issue already than someone who didn't even watch the show in the first place. Given the choice I would watch documentaries most of the time over other things, though I do enjoy the occasional bit of comedy, especially the 'news comedy' genre that is becoming popular with the failure of news media to actually use the Fairness Doctrine and give equal time to the 'actual left' rather than the center. I do admit it can be very frustrating to see a great deal of one-sidedness in the willingness to 'listen to both sides': frequently, you see atheists, feminists, and left-leaning people (these groups intersect but are not 1-to-1) being far more well versed than their counterparts, sometimes even in the opposing side's own literature! ('Atheists read the Bible more than Christians do' is a pretty common joke in atheist circles, for example.) The very fact that you'll see left-leaning circles even talk about issues like 'could we get stuck in our own echo chambers', while you don't see the same self-directed discussion on the far right or in the center - anecdotally, I've seen the center direct 'echo chamber' insults at the far left and far right equally, but never at themselves - reflects the lopsidedness in the concern.
Very sadly, there is no Math and Logic channel where one might escape the inanity of moments like people who believe in woo crying out for others to be more open-minded while ignoring their own closed-mindedness to the idea that their woo doesn't work. More often these days, what's on is just drivel I couldn't care less about, like Ancient Aliens - because ancient people were too stupid to drag rocks without aliens telling them how. Apparently they didn't even have 'idiot savants' back then either. But it makes sense; we couldn't even figure out how to make fire without Rainbow Crow getting it for us and their good friend Prometheus. Ancient peoples were astoundingly incompetent, hence we must conclude we are all actually secretly descendants of alien reptile people, as actual humans were probably too dumb to live.
But I'm getting really off topic, which is ridicule in argument. ...Or am I?
Sometimes you can be really effective with ridicule by letting a person draw the conclusion themselves about just how ridiculous they are being: heavily imply it, show how the ideas are wrong, and let them shame themselves, without ever directly calling them stupid or deluded. You would much rather have them think the ideas are stupid than that they are stupid, because stupidity is often implied to be innate, and calling someone stupid signals that you don't want that person around, rather than that you want them to come over to your side.
That said, the book The God Delusion has reportedly been quite successful at de-converting people, despite weighing higher on the ridicule side than I am completely comfortable with. I cannot say how well it would have worked had it been titled, say, The Ridiculousness of God instead - which would ridicule the idea more than the followers, yet still seem very outrageous to many a believer - but that would be my first preference.
Now, caveats. Just because something leans quite close to invoking a fallacy does not mean it is ineffective, or even that it renders a person's argument false (that would be the fallacy fallacy); a stopped clock is correct twice a day. It's possibly about as common to see implied fallacies as directly stated ones. Very few people will do the classic ad hominem of 'you are stupid and therefore no one should listen to you', where they actually invoke the idiocy as a reason why the argument is false, but many people will call their opponent an idiot with a strong implication of the same. And something could be read in one way that is fallacious while a second interpretation is non-fallacious.
This is why being careful about more than just goalpost shifting is important; it's important in general to be clear about what you actually meant from the start, for basic reading-comprehension reasons. Being unclear may confuse your opponent and make you feel superior because they are now easy to ridicule for getting you wrong (though I have noted this often backfires when every single other person says they also thought the unclear person meant something else, making it fairly clear the problem was with their post), but it does not make you a superior arguer. I have a pretty strong rule that you should avoid ridiculing a person whenever they misread a statement of yours unless, at a minimum, it is clear that everyone else has read it correctly; if it is very clear they are deliberately misquoting you, that's another thing, but that's better argued against by supplying the correct quote, not by calling them a moron. (I know, not everyone has the patience for that. Most platforms have a block button. Some people just aren't worth your time, I think. However, if a study showed engaging every troll in existence was more effective, I suppose I might support that, if not for the sheer time and energy sink. That's a lot of people to insult. You'd run out of virtual breath...)
I have noted many instances of someone claiming they were misquoted when the other person quoted them word for word, and the supposedly misquoted person could not produce a 'corrected' word-for-word version either. Just today, I saw some twit claim they made no inference that 'thing a is thing b' when they literally wrote 'thing a is thing b', and got mad when everyone started issuing corrections. Perhaps they are technically correct: they did not infer or imply it, they directly stated it outright!
Now, do I even need to ridicule that, or does it fall over for itself pretty clearly?
links:
https://www.journalism.org/2016/07/07/trust-and-accuracy/
https://www.forbes.com/sites/quora/2016/07/21/a-rigorous-scientific-look-into-the-fox-news-effect/#648fa54f12ab
about the Fox News effect; it's partly a correlation rather than pure causation: Fox News attracts the kind of people to watch it who simply don't know much about things like country capitals and then it focuses on stories that never end up conveying that kind of information.
Things that look like fallacies that aren't
Authority fallacy fallacy: when someone points out that a doctor has more expertise on brain surgery, and another person with no medical training cries 'That is an appeal to authority, which is fallacious!' and demands to be allowed to do brain surgery / peddle their flu cure / etcetera.
Faux ad hominem: Someone making an argument, and then an insult.
This one is interesting because people often do make logical errors that revolve around insults which aren't ad hominem fallacies, and insulting your adversary does have a poisoning-the-well effect: it blows up tensions and potentially makes people pick sides based on who they are more attached to, or who they think looks like less of a jackass, which is the last thing you want in a logical argument. The thing is, as with the expertise example above, someone who has repeatedly shown they are terrible at judgment, or malicious, doesn't really inspire any desire to listen to them. If one's time is limited, it can be perfectly nonfallacious to simply toss out arguments from certain actors as coming in bad faith. But this can lead to an unfortunate bubble effect where one ends up never hearing needed critique of one's own group, so one shouldn't do this all the time.
An efficient method to avoid both of these is to, gasp, set up people as experts whose job it is to sift through the garbage and find the gems, and spend the years necessary to get a full grasp of the issues. You know, peer reviewers.
You see a lot of assholes who spent a day googling a subject and consider themselves experts. Let me tell you: if the subject involves math at all, and most things worth knowing do (even if it's 'just' statistics), a day is definitely not sufficient time to become acquainted. The thing you should be asking yourself is: how do I most efficiently compensate for my own ignorance? Is it (a) google the evidence and 'form my own opinion, weighing the pros and cons', or is it (b) identify experts in the field and multiple high quality scientific papers - not just one paper; beware the crappy, cherry-picked study that people only cite in order to debunk it (the exception being self-contained proofs in pure mathematics) - with the second ideally weighted above the first, but the first able to readily point you to said papers if you need them?
Actually, the best is (c): gain enough self-knowledge that you can notice when one side does bullshit like blatantly lying or cherry-picking, can evaluate bits and pieces of papers in general and full papers within your own field of expertise, and then do (b). But if you don't have the time or ability for that, (b) is your best option.
Why people aren't paying attention to your idea
Stop me if you've heard this one before. This could probably be generalized to more cases, but I'll focus on the one I encounter most often: some fruitcake claiming their physics theory is the best one. This is often paired with some sort of insult or a lament about the 'establishment'.
First:
Did you try to get it peer reviewed? What did the reviewers say?
Does it blatantly ignore or insult well established results without providing any new additional experimental evidence for this violation or explanation for why old theories did as well as they do within their respective domains?
If you managed to get the point above correct (as in, you didn't idiotically bash Einstein or declare that local point particles Are The Best despite, y'know, Bell's inequality results, and you correctly cited people in the field instead of ignoring your predecessors), how exactly did you claim to unite general relativity and quantum mechanics, or whatever your theory does?
Did you do so in a way that produces predictions?
That's really the biggest point. If your theory doesn't produce predictions, absolutely no one is going to take you very seriously.
Did you claim that it 'predicts the standard model'? Did you get it published, and people are still ignoring it? If so, the problem may be you didn't make any new predictions in experimentally accessible regimes, something that can actually be tested. If you make a multiverse, that's nice, but it's not exactly easy to test that. If you really have a theory of everything, there should be at least some path to deriving the coupling constants and the mixing angles. If you have zero clue how to do that, that's probably one reason nobody is paying attention to your 'theory'. If you just say 'well, it is anthropic, since otherwise we would not be here', that's not much of a prediction. (I read someone's post that they had 'united GR and Standard, and all anyone asked was what dark matter it predicted.' I find this really dubious, but I find it notable that they didn't link to a formally published paper with actual novel predictions, nor did they claim to derive all the coupling constants.)
Does it explain why this axiomatic system and not another?
This dives into meta-mathematics, a neglected area, so I can't really blame anyone for not including this, but I note it here because something like a proof that certain axiomatic systems are the only ones compatible with, say, the standard model plus general relativity would be fairly impressive if it was correct.
Did it have a conflict with experiment pointed out by the reviewers?
If so, you can't really blame people for ignoring it until that conflict is resolved. (As big a fan as I am of MOND, I think the number one thing it should do to get more attention is resolve its issues; while it is unfair that dark matter gets a pass for its issues and MOND doesn't, complaining won't fix anything.)
Did it do better and take more effort than this dumb example I came up with in 10 seconds of effort?
To prove that simply recovering the standard model and general relativity really isn't enough to make your hypothesis worth respecting, I will now produce in 10 seconds an outline for a bullshit hypothesis that does both. (OK, not literally 10 seconds; it took me 10 seconds to think of but will take a minute to actually write out.)
First, make it a string theory, and claim string theory has general relativity, as string theorists so often do, because it has a spin 2 graviton.
Second, find some excuse to use octonions. Or alternatively, a really big group that contains the standard model gauge group as a subgroup. Make sure that whatever new particles get predicted, if you choose the group method, are high energy so they aren't observable in today's regimes, and make sure to put the proton decay rate low enough that we won't observe it any time soon either.
Voila! You have united GR and the standard model, and done almost absolutely nothing else. Do you see how absolutely useless and cheap this is?
Were you utilizing basic civility?
I once saw someone pro-string call loop quantum gravity trash. There was a video of a loop theorist arguing with a string theorist, and while string theory usually enjoys more popularity, in this case the majority who watched the video thought the loop theorist 'won'. Why? Because the loop theorist was perceived as more civil and as actually addressing the points of both sides. Meanwhile, the string theorist bulldozed over the loop theorist with interruptions and mostly talked about why string theory was great, whilst kinda ignoring the loop theorist's attempts to raise possible weak points (or making it hard for them to even state those points in the first place).
It is a strange fact that basic civility actually helps you make better arguments, because it forces you to actually consider both sides. Further, basic civility is incredibly important to the basic operation of science.
Science has to both update itself, and keep important parts fixed in place if they do well (make predictions). This means that the challenge to orthodoxy is necessarily highly ritualized, because throwing out the orthodoxy too quickly would be a bad thing.
Part of this ritual process is, yes, making predictions, but also acknowledging that scientists are humans and humans like to be respected. You make zero friends by declaring everyone in orthodoxy an idiot, when they are exactly the ones you need to convince.
Remember: it actually is part of scientific orthodoxy to change the paradigm under certain conditions! So to convince the orthodox, you need to use their established methods and rituals for this change. This means citations, and this means showing a respect for why previous theories worked as well as they did, but also showing, via prediction, exactly how your theory encompasses both the realm those previous theories did so well on and the new region where they fall flat.
I've noticed a general tendency for kinder people to be, on average, more correct (not all the time, just slightly more than average). I'm not quite sure why this is so, but I suspect it's the simple fact that kinder, humbler people are more likely to self-correct (or at least consider that they are wrong and their opponent is right). If we assume kind and unkind people get things wrong on their first guess at the same rate, then on future guesses kind people will do better on average, because they actually confront their mistakes.
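That argument can be put as a toy simulation, with all the rates invented purely for illustration: both agents start with the same hit rate, but one of them revisits and sometimes corrects its mistakes.

```python
import random

# Toy model: two agents guess with the same initial accuracy, but the 'humble'
# agent re-checks a wrong guess and corrects it with some probability.
# All rates here are invented for illustration.

def accuracy(initial_correct=0.5, correction_rate=0.0, trials=100_000, seed=0):
    rng = random.Random(seed)
    right = 0
    for _ in range(trials):
        correct = rng.random() < initial_correct
        if not correct and rng.random() < correction_rate:
            correct = True  # confronted the mistake and fixed it
        right += correct
    return right / trials

stubborn = accuracy(correction_rate=0.0)  # never revisits a guess: ~0.50
humble = accuracy(correction_rate=0.6)    # ~0.50 + 0.50 * 0.6 = ~0.80
```

Same starting point, very different endpoint; the entire gap comes from the willingness to confront one's own mistakes, which is the mechanism being suggested above.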
Once you've gotten to the phase of calling everyone else an idiot, it's hard to walk that back without feeling really embarrassed. If, however, you said 'maybe this is true and the other person is wrong', it's much easier to say 'well, that turned out to be wrong and they were actually right' without extreme embarrassment.
Re-deriving equivalence of points, lines, tables and chairs in a typed system
Hilbert famously said you could replace 'points' and 'lines' in an axiomatic system with 'tables' and 'chairs', and everything should work the same. Indeed, it can be a useful mathematical technique to switch points and lines with one another.
However, one is not actually guaranteed this behavior in all mathematical systems. In a system that is genuinely sensitive to the internal structure of points versus tables, one obviously could not swap them. So how does one regain this equivalence? By finding out where points, lines and tables differ and ripping out that structure.
One obvious difference is typing, relationships, and substructure: a set is not the same thing as its superset. So if we simply rip out all substructure and all embedded-in-higher-structure relationships (indeed, all relationships whatsoever), then shove our objects into our axiomatic system with only the new relationships/axioms we demanded, then of course we will be able to freely exchange points and chairs, since at that point it's basically a labeling change.
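To make the 'labeling change' point concrete, here's a minimal sketch (the labels and the incidence relation are made up for illustration): an axiom system recorded as a relation over opaque, structureless labels is preserved under any bijective renaming, so 'point' and 'chair' become interchangeable once all internal structure is gone.

```python
# A toy 'axiom system': an incidence relation over opaque, structureless
# labels. (Hypothetical labels chosen purely for illustration.)
incidence = {("point1", "line1"), ("point2", "line1"), ("point1", "line2")}

# A pure relabeling: swap 'points' for 'chairs' and 'lines' for 'tables'.
rename = {"point1": "chair1", "point2": "chair2",
          "line1": "table1", "line2": "table2"}

renamed = {(rename[a], rename[b]) for a, b in incidence}

# Undoing the relabeling recovers the original system exactly, so nothing
# the axioms can 'see' has changed - only the names.
undo = {v: k for k, v in rename.items()}
restored = {(undo[a], undo[b]) for a, b in renamed}
print(restored == incidence)  # True
```

Because the labels carry no substructure, every statement the axioms can express holds of `renamed` exactly when it holds of `incidence`.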
Treat things as tested hypotheses, not opinions or facts
So I was pretty irritated to see someone repeating the notion that 'historical humans could not physically see blue' as if this were a fact.
It's not.
It's a hypothesis, and not a particularly great one, if you consider that there are cultures with more color words than our own (Russian, if I remember right, has two basic words for what we'd both call blue), yet speakers of either language can physically distinguish the hues just fine; the distinctions just aren't culturally significant everywhere. If you asked me whether 'darker blue' and 'cyan blue' are two different colors, I would say you could justify that if you liked; it just depends on how big a hue difference something has to have before you declare it a new color, in my book. If we go with a really low hue difference, then purple and blue, green and yellow, and red and orange are all same-color pairs that just happen to have different names. That's not my favorite scheme, but you could justify it.
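As a toy illustration of the 'how big a hue difference' point (the threshold values are arbitrary, chosen just for the demo): Python's standard colorsys module puts pure blue at a hue of 240° and pure cyan at 180°, so whether they count as one color or two depends entirely on where you set the cutoff.

```python
import colorsys

def hue_deg(r, g, b):
    # Hue in degrees of an RGB color (components in [0, 1]).
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return h * 360

blue = hue_deg(0.0, 0.0, 1.0)   # 240.0
cyan = hue_deg(0.0, 1.0, 1.0)   # 180.0

# An arbitrary naming threshold: hues closer than this count as 'the same
# color'. The blue/cyan gap is 60 degrees.
THRESHOLD = 90  # degrees

same_color = abs(blue - cyan) < THRESHOLD
print(same_color)  # True with a 90-degree cutoff, False with a 30-degree one
```

Nothing about perception forces one threshold over another, which is exactly why 'could historical humans see blue' can't be settled by vocabulary alone.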
People have a fairly poor idea of science as just a collection of facts. Other people treat it like a set of opinions that we can simply pick and choose from as we like. In reality, neither of these is the best way to go about it.
Rather, one should say there is a list of facts (observations and evidence), which is separate from the science itself (but an important starting point); hypotheses that are better or worse at explaining those facts; and models which, in explaining one thing, imply others even if we don't directly observe them. We do experiments to get more observations and evidence, and to see whether what we find next matches our predictions or gives clues to a better model, but the theory should not be treated as a fact itself; it's something separate. It may be so well supported that it's practically a fact, but we could always find a slightly better theory that explains why the old one does so well within its domain (example: Newtonian gravity at sub-light speeds, away from the extreme masses of black holes) and additionally explains things outside that domain.
This is one reason why I've found the back-and-forth guidance on mask wearing during this pandemic so frustrating. We're back to advocating masks for everyone, but arguably we should never have advocated taking them off in large groups in the first place. For one, masking lowers transmission of and death from other diseases, so it's a beneficial habit in general; two, this is a really new disease and we're constantly revising our understanding of it; and three, it is still mutating in a population not vaccinated to herd-immunity levels, because some assholes are refusing a free jab and you can't trust people that selfish not to lie about being vaccinated.
The gods forbid you be mildly inconvenienced about something that might save other people's lives.
This country is pretty ableist, so it's not surprising some people say 'let Grandma die.' Never mind that it might kill you too, or your kids.
That's getting a bit off topic, so, I just want to say the science reporting here could be a lot better. Scientists often phrase things much more cautiously in their papers than the reporters writing click-bait lines, so it's always helpful to notice if the article linked to the actual source and to at least skim the abstract and note if there are any big differences. If someone doesn't cite at all, I generally take that as a bad sign.
Although sometimes, when I'm sleep deprived, I trust people not to repeat shit as fact that they don't actually know for certain (fuck knows why I would, because non-scientists pull shit out of their ass all the time) and don't check, which is my bad. Then I get confused when something ends up making little sense, as often happens, and then, if I'm lucky, someone points out the original person was ignorant, cherry-picking, or not citing in the first place, and I go 'oh yeah, that's why'.
I find it interesting that when I'm confused, it frequently correlates with the other person being a moron; when someone actually knows what the fuck they're talking about, like a scientist talking about their own profession, it's generally not confusing at all to me. The exception is some specialist starting to talk in jargon without explaining their definitions first, but foreign languages would confuse anyone; a genuine professional actually trying to communicate with a broader audience knows this.
The contradiction of saying there are no absolute objective statements
Sometimes you see people say 'There are no objectively true statements'.
But think about it. If this were true, it would have to be true objectively all of the time, or it would be completely false. It's an absolute statement stating there are no absolute statements.
It's inherently self contradicting!
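The self-application can be checked mechanically. A tiny sketch (the modeling choice is mine): give the statement S = 'there are no objectively true statements' each possible truth value and see which ones survive S being applied to itself.

```python
def self_consistent(s_is_true):
    # S claims: "no statement is objectively true".
    # If S is true, then S - being a statement - must itself be not-true,
    # which contradicts the assumption.
    if s_is_true:
        return not s_is_true
    # A false S makes no claim about itself that it violates.
    return True

print([v for v in (True, False) if self_consistent(v)])  # [False]
```

The only surviving assignment is False: the statement can never be true, which means at least one objective truth exists (namely, that S is false).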
Star Wars Misquote:
Obi-wan Kenobi: Only the Sith deal in absolutes!
Anakin: I didn't know you were a Sith, Obi-wan.
This may not seem very important, but it actually is really frikkin' important. It means there can be an 'automatic axiomatic set' that we get if we try to strip away all axioms: rules that can and MUST apply even in a world of nothingness.
That's a big deal, because that's the only possible way to solve the nothingness problem.
Brain chemistry myth-busting: Cuddly testosterone, and serotonin imbalance not causing depression
https://www.sciencedaily.com/releases/2022/08/220812114019.htm
Testosterone can make individuals more cuddly, not just aggressive.
'In one experiment, a male gerbil was introduced to a female gerbil. After they formed a pair bond and the female became pregnant, the males displayed the usual cuddling behaviors toward their partners. The researchers then gave the male subjects an injection of testosterone. They expected that the resulting acute rise in a male's testosterone level would lessen his cuddling behaviors if testosterone generally acts as an antisocial molecule.
"Instead, we were surprised that a male gerbil became even more cuddly and prosocial with his partner," Kelly says. "He became like 'super partner.'"
In a follow-up experiment a week later, the researchers conducted a resident-intruder test. The females were removed from the cages so that each male gerbil that had previously received a testosterone injection was alone in his home cage. An unknown male was then introduced into the cage.
"Normally, a male would chase another male that came into its cage, or try to avoid it," Kelly says. "Instead, the resident males that had previously been injected with testosterone were more friendly to the intruder."
The friendly behavior abruptly changed, however, when the original male subjects were given another injection of testosterone. They then began exhibiting normal chasing and/or avoidance behaviors with the intruder. "It was like they suddenly woke up and realized they weren't supposed to be friendly in that context," Kelly says.'
And there is no strong evidence that chemical imbalance is the main cause of depression.
https://www.sciencedaily.com/releases/2022/07/220720080145.htm
https://www.psychologytoday.com/us/blog/neuroscience-in-everyday-life/202006/the-brain-under-the-influence-power
Also interesting, and very terrifying, are some studies showing evidence that power may literally make you act more psychopathic/brain-damaged: less able to accurately judge what those 'lower' than you are feeling, more likely to engage in stereotypes (say, judging someone for wearing a hoodie instead of a suit), and prone to failing simple perspective-taking tasks like writing an 'E' on your forehead so that someone facing you can read it.
I'm not sure how good these actually are though.
Unrelated, but while we're on the topic of myth-busting: some folks like to claim Neanderthal genes are what make white people 'special'. However, Asians actually carry more Neanderthal DNA than white people do, and Black people carry those genes as well. White people are kind of middle of the road about it - nothing special at all.
https://cosmosmagazine.com/history/palaeontology/why-asians-carry-more-neanderthal-dna-than-others/
No scientist really believes in solipsism even if the apparatus suggests it, and are atheists assholes?
1. I saw someone accuse atheists who think most of humanity are fools of being 'arrogant assholes'.
Eh. My opinion is that all humans are by default fools, and that you must actively work to make yourself unfoolish, by utilizing a neutral standard such as 'ability to predict the future' for truth statements instead of gut feelings.
That is the exact opposite of arrogance.
2.
I saw multiple people get confused by the latest 'loophole free' test of contextuality in quantum mechanics, some of them accusing scientists of putting humans too much into it, which is an amusing accusation.
I think it's important to put how science usually works into context, and to compare to a totally different scenario: scientists trying to avoid talking about consciousness of other beings and emotions.
Scientists like to define things by what they can rigorously measure in the laboratory. This has the result of potentially denying the consciousness of all other beings that aren't the scientist while simultaneously making consciousness seem necessary in quantum mechanics.
I know, you're probably going 'Ehh?? How on earth does that work?'
We can't measure consciousness. We also can only describe multi-particle systems in terms of their relationships to each other over 'measurements', which happen to be carried out by conscious scientists.
It's important to realize here that no scientist is seriously suggesting a solipsistic universe where only that scientist exists and they cause everything to exist by measuring it. This is entirely a result of trying to stick to rigor. Thus, it's easy to read more into it than is really there. Another thing that is basically defined by measurement is our concept of time: Einstein defined time as what you measure with a clock, because that was the simplest rigorous way to do it. That doesn't mean that if we destroyed all clocks tomorrow, time would cease to exist.
Experiment and what we can measure are important, but so is basic logic. A measurement is ultimately just an interaction. Thus, we could and probably should replace in QM the word 'measurement' with 'strong measurement-type interaction' and 'weak measurement interaction' (it depends on whether you view 'interaction free measurements' as actually interaction free) and this would be a lot less confusing. In vanilla QM, there is absolutely nothing about whether you need a conscious observer or not, it's totally agnostic about it. That's why there's a bunch of different interpretations of QM, some of which need conscious observers, but others don't.
My personal favorite explanation for contextuality is relative locality, with properties beyond just position being relatively local.
A similarity between referential logic breakdown and nonlocality, the barber paradox
[this post is an example of how to solve one kind of paradox]
The reader may be familiar with logic breakdowns such as 'A [male] barber shaves all men who do not shave themselves, and only those men.'
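A minimal sketch of why the rule breaks (taking the rule in its strict 'all and only' form): ask whether the barber shaves himself, and check both possible answers against the rule.

```python
# Let b = "the barber shaves himself".
# The rule 'the barber shaves exactly those men who do not shave themselves',
# applied to the barber himself, demands: b == (not b).
consistent_answers = [b for b in (True, False) if b == (not b)]
print(consistent_answers)  # [] - no assignment satisfies the rule
```

Neither answer works, which is the whole paradox: the rule has no model once the barber is in its own domain.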
And they may also be familiar with the behavior of vectors transported along curved paths: on returning to their starting point, they may no longer match their original orientation. An infinitely curved space might give us infinitesimal loops that seem like they should give back the same vector orientation, but don't.
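A concrete instance of that loop behavior, sketched with numpy (the path and starting vector are chosen for convenience): parallel-transport a tangent vector around one octant of the unit sphere, where transport along each great-circle arc is just an ambient rotation about that arc's axis. The vector comes back rotated by 90°, matching the octant's enclosed solid angle of π/2.

```python
import numpy as np

def rot(axis, theta):
    # Ambient rotation about a coordinate axis; parallel transport along a
    # great-circle arc on the unit sphere is exactly such a rotation.
    c, s = np.cos(theta), np.sin(theta)
    R = {"x": [[1, 0, 0], [0, c, -s], [0, s, c]],
         "y": [[c, 0, s], [0, 1, 0], [-s, 0, c]],
         "z": [[c, -s, 0], [s, c, 0], [0, 0, 1]]}
    return np.array(R[axis])

# Loop: (1,0,0) -> (0,1,0) along the equator, -> (0,0,1) up a meridian,
# -> (1,0,0) back down another meridian. Each leg is a 90-degree arc.
transport = rot("y", np.pi / 2) @ rot("x", np.pi / 2) @ rot("z", np.pi / 2)

v0 = np.array([0.0, 1.0, 0.0])   # tangent at (1,0,0), pointing 'east'
v1 = transport @ v0              # the vector after going around the loop

angle = np.degrees(np.arccos(np.clip(v0 @ v1, -1, 1)))
print(np.round(v1, 6), angle)    # v1 ends up pointing 'north': a 90-degree turn
```

The holonomy angle equals the enclosed solid angle (π/2 for an octant), which is why smaller loops on a more sharply curved surface can still produce a large mismatch.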
This works nicely as an analogue to self-referential logic breakdowns, because these too engage in loops. And in the above example, we can clearly see the description of the barber is not localized: we are talking about all men, which is definitely a non-local thing in the 'space of people' (rather than physical space, where we might imagine making all men into Russian dolls).
So at least in that case, they actually could be made into almost the same thing, except that barbers are not vectors. (They might be very complicated tensors, though.)
This suggests a method of repairing the logical breakdown, which is to pay attention to the identity of the object as we transport it and to decide what features are actually most critical.
Let's say we tried this in practice. A barber decides he wants to shave all men who don't shave themselves, and starts shaving men, going around until finally, the only man left is himself, whereupon he is stuck. Now he is 'the barber who used to shave all men who do not shave themselves, until he looked at himself in the mirror', a transformation.
Or, he could decide that he is no longer the man he once was, and hop into a time travel machine, and shave his past self's hair, and have his past self shave his hair in return once he regrows it out a bit.
Or he could get a gender change.
The original nonsense statement can be satisfied if we vary the quantity in two different ways, the path continuation (shaving) and the guy himself, so that a combined quantity is an invariant.
Is the universe solvable? Here is why the answer is probably yes
[august 2022]
I've seen quotes from Smolin and Sabine Hossenfelder that science may not even be able to solve the nothingness problem, with Smolin implying this is a good thing since it means science and religion can reconcile.
Stephen Hawking once claimed a theory of everything was just around the corner, but in 2010 he changed his mind and abandoned the quest altogether, arguing that all scientific theories are models and that we would never be able to arrive at a single account.
According to Woit, the mood among many string theorists seems to be a mixture of denial (string theory is still super useful! insert talk of toy models that fail to capture reality but are nice because they are 'simple'!) and acknowledgement that, yes, string theory didn't quite work out as the theory of everything as hoped.
Even when I was very young and encountered strings for the first time, I didn't really like string theory, tbh. It didn't have the feeling of depth that relativity did, or even quantum mechanics. It really felt like 'well, maybe it is strings for some reason, with conveniently tiny curled-up dimensions that don't match any actual evidence...' was the motivator, which pales next to the other, actually deep stuff, or even the Higgs (not yet discovered at the time), which at least had some concreteness to go with it.
That said, there's apparently a conference addressing just this issue, and when you have a debate, there's guaranteed to be at least some people on the other side. So just because I haven't heard of them doesn't mean they don't exist.
And of course, I'm on the side that the problem can be solved. That said, I'll be nice and bring up one of the more decent objections, rather than 'it is too hard', and then one that I don't think is actually a good objection but it 'looks' like a good objection if you don't think it through:
1. Axioms.
Even if you solve the problem, how do you know the universe chose those axioms? However, this actually is not the deal breaker it first appears. For one, if we actually had a theory making ridiculously good predictions, not being able to absolutely prove it would be kinda irrelevant - you don't actually prove beyond all possible doubt someone committed a crime in order to convict them or we'd never put anyone in jail, just beyond all reasonable doubt. Similarly, I don't try to prove beyond all possible doubt tables exist before putting my chocolate milk down on one.
Secondly, this is not impossible to deal with. It is entirely possible to make statements about entire axiomatic systems, rather than just your starting axiom system. If someone could prove other kinds of axiomatic systems are actually logically inconsistent or trivial (mathematicians would hate this, since they'd like the logic systems they use the most often to actually just be incomplete) in a Godel-like proof, that'd be pretty convincing!
2. Is the question of how we exist even possible to pose in a logically coherent and rigorous manner?
Uh, we exist, so I'm going to say 'yes'. Logically coherent doesn't mean easy. It just means non-paradoxical and consistent. And you can bend a system a lot before it actually becomes paradoxical.
Suppose the over-arching law is to change the 'subordinate laws of physics' every Sunday morning. It might look inconsistent, but as long as the system is really obeying that over-law, this isn't a paradox. In fact, it's probably only possible for referential systems to be paradoxical, and a physical system is not referential (except perhaps for the humans, but that's very indirect - my thinking I will assign you the color blue doesn't change the physical light coming off of your body).
Why, you may ask?
If a thing means itself, and never anything else, there's no chance for A to be assigned NOT!B and then later assigned B by accident. If it changes to B via a physical process, that isn't a paradox. If it gains two attributes, B and Not!B, that also isn't a paradox any more than it is a paradox for the earth to experience night and day at the same time - the Earth is a big place.
Now, whether humans, who do use referential systems, can come up with a rigorous system to describe physics is a very different question, and if the answer is 'no' there, it probably says more about us than about the universe. But if it did say something solely about the universe, that would actually be rather strange, because it would imply something funny goes on between the morphism 'object A' and 'symbol A that represents object A'. You can label anything you want, so hypothetically you should be able to label that funny thing and describe what it does and where it breaks down - which implies the 'funny thing that cannot be described referentially' can't actually exist, because I literally just labeled it 'funny thing that cannot be described referentially', a reference.
You get what I'm saying?
3. Acknowledging that we can pose the question per above, can we solve it?
That is probably so, because again, the physical universe is not a self-referential system like the ones Godel showed were so problematic for axiom systems. If it was, it might be a genuine worry that the universe is 'incomplete'. But if you think about it, it doesn't make much sense for a physical system to be incomplete, especially if all things stand for themselves (non-referential) - that would mean the universe (the totality of all things) has pieces missing, which would mean the totality is not the totality - a paradox that would mean it can't be incomplete, only complete or inconsistent. (Well, ok, I guess it can be both incomplete AND inconsistent.)
And physical systems probably can't be logically inconsistent, for the reasons I listed above. (Of course, if the universe is actually illogical, it won't care about my logic...)
4. What if humans are too stupid?
This is the only one that actually worries me. We are pretty fucking stupid most of the time. But not all of the time. That said, humans aren't the only life forms in the universe. Maybe a billion years from now an alien will be born with the intelligence to realize the answer pretty quickly.
Or maybe a human somewhere already figured it out... I for one don't think the problem is as difficult as people think it is. There are some really intriguing clues, like the conformal transform between flat space and a single point, and NULL being different from zero. And it's actually not that hard to identify a crucial feature that any solution must have: a nonclassical ability of the lesser parts solo to imply the whole, so that 0 can imply 1, or the past the future (interestingly, this would seem to say the future can imply/demand the past - so no Last Tuesday hypothesis where the universe only started last Tuesday; maybe that sounds too 'obvious' to be interesting but it is interesting to me).
It's still intimidating as fuck though which is why I haven't tried to publish yet... Although part of that intimidation is 'some people will probably want to murder you if you figure it out'.
Smullyan's "Hardest" Logic Problem: an original solution of my own
[2023]
So you're wondering - how smart is this guy? Can he show his chops so I know how seriously to take his ideas?
Well, let's tackle the so-called 'hardest logic problem'. It really isn't the hardest, but it's still moderately fiendish.
Three gods A, B, and C are called, in some order, True, False, and Random. True always speaks truly, False always speaks falsely, but whether Random speaks truly or falsely is a completely random matter. Your task is to determine the identities of A, B, and C by asking three yes-no questions; each question must be put to exactly one god. The gods understand English, but will answer all questions in their own language, in which the words for “yes” and “no” are “da” and “ja,” in some order. You do not know which word means which.
This puzzle was posed by Smullyan and answered by Boolos. I will give you a novel answer which IS NOT the one Boolos gave, and which I came up with before continuing the article and reading Boolos's. (from this article: https://getpocket.com/explore/item/how-to-solve-the-hardest-logic-puzzle-ever)
I spent an hour coming up with my own solution before reading the rest.
I was slightly surprised when I read the rest of the article and saw it was quite different from the official one; I had expected it would use the halting problem in some way, I guess because thinking about binaries always primes me to think about it.
Anyway, the official version apparently involves compound questions, and trying to figure out a question that will ascertain one of the three for certain. In that, our solutions are alike.
My first part of the solution wasn't to trick an answer about 'ja' and 'da', but to make 'ja' and 'da' mostly irrelevant and utilize the fact it is binary, and, if necessary, to add 'if you were answering as a person to whom ja means yes and da means no', the last of which is what the original used. So partially like the original.
My second part was to utilize the halt problem to effectively get a third form of answer. Even if you can ask only yes or no questions, that doesn't mean the answer has a determined yes or no. In this, our solutions are completely different.
For instance, you can ask the question: "What does the god next to you answer for the question I am about to pose to them, treating currently-undefined as 0/false ONLY if you are the liar?" (Actually, a slightly better version of this would be 'What is the combination of both gods' answers in binary?', as the next god could be the randomizer; why this is important should be clear in a moment.)
And then, whatever it answers, you can flip that answer by altering your next question, forcing that god to be wrong. If it is the truth-teller god you are asking this first question, then it is impossible for them to answer, because you can force them to lie. That means if it fails to crash into a non-answer, you have either the liar or the randomizer.
For the second question, after you've gotten the truth teller, you just need to ask a question that will cause the liar to halt, but not the randomizer, like 'What would invert the previous God's answer' or a variation of that first question, made to cause the liar to halt and not the truth teller or randomizer. The liar cannot flip a non-answer!
You can effectively re-use a question by making a slight variation on it, like 'What is the answer to the question I asked before, inverted?' or 'what would the truth god say to the question I just asked'.
If you ask questions designed to make the truth teller halt and get failures to halt both times, then you know the final answerer is the truth teller. Then, you ask it "What would a speaker of a language where ja means yes say to the question that the first God is the randomizer?" which will tell you which one is the randomizer and which one is the liar.
In the normal solution, you are also meant to ask about the other gods - whether they are Random or not - in a double conditional:
Ask A “Does ‘da’ mean ‘yes’ if and only if you are True if and only if B is Random?”
then Ask B, “Does ‘da’ mean ‘yes’ if and only if Pluto is defined as a dwarf planet?”
then ask the one that must be truthful.
I'll be honest with you, this is really awkward phrasing.
The first sounds like it's asking whether the meaning of 'da' changes depending on whether B is random and whether you are True, which accidentally could work, in that you could argue the 'yes' of a liar really means no and the 'yes' of a randomizer means nothing, but that's not what the question is supposed to be asking (I think); it's trying to string conditionals together. If you want to set the meaning of 'da', it would be less confusing to state 'Imagine da means yes if and only if Pluto is a dwarf planet'; if you are trying to figure out the meaning of 'da' itself, then 'does da mean yes AND is Pluto a dwarf planet, using AND as a logic gate to combine the two into a single true/false statement?' would be a better way to phrase it.
But we don't need to figure out the meaning of da, we can just set it.
I actually misread the starting challenge as saying 'each question has to go to a different god', somehow, but the original solution makes it really clear it doesn't have that requirement:
1. To god A: “What is the binary true/false evaluation with ‘da’ meaning ‘yes’ of 'Are you True and B is Random'?”
(Let's suppose A said, “ja,” making B True or False; if it answered otherwise, C would be true or false. To see why, note a Random A can answer anything, but then the other two are true and false, so we can essentially almost ignore that Random exists for this question).
2. To god B: “Evaluating with ‘da’ meaning ‘yes’: is Pluto a dwarf planet?” (We suppose B said, “da,” making B True.)
3. And to god B (True) again: “What is the output of 'Evaluate ‘da’ meaning ‘yes’ if and only if A is Random, else output ja?'” Since B’s True, he must say “da,” which means A is Random, leaving C to be False.
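The biconditional trick underlying questions like these can be brute-force checked in a few lines (the encoding of gods and answers is mine): ask a non-Random god "Does 'da' mean yes if and only if (you are True if and only if X)?", and the answer is "da" exactly when X is true, no matter which god you asked or what 'da' means.

```python
from itertools import product

def word(truthful, da_yes, statement_truth):
    # The god evaluates the statement, answers truly or falsely,
    # then voices that yes/no answer in god-language.
    spoken = statement_truth if truthful else not statement_truth
    return "da" if spoken == da_yes else "ja"

# Question to god G: "Does 'da' mean yes iff (you are True iff X)?"
# Check every combination: G truthful or lying, either meaning of 'da',
# and X true or false - 8 cases in all.
for truthful, da_yes, x in product([True, False], repeat=3):
    statement = (da_yes == (truthful == x))
    assert (word(truthful, da_yes, statement) == "da") == x

print("all 8 cases: the answer is 'da' exactly when X is true")
```

The double biconditional makes the liar's inversion and the unknown meaning of 'da' cancel each other out, which is why the official solution can extract information without ever learning which word means yes.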
Since I misread it, I never would have found the older solution on my own, hah. But on the plus side, it means mine is in a sense 'stronger', since it remains a solution under stricter conditions.
A small potential problem with my solution is that it matters whether the gods are really all-knowing. Do they know the future? If they don't, then both True and False can get confused by the first question, since it asks about the answer to the second question, and be unable to answer; in principle you might notice this by them delaying their answer until you had posed the second question and it had been answered. Also, while they presumably know how the other gods would answer, and can read minds and thus guess what you are planning to ask next and how it will be answered, they could still be stymied by the fact that you could change your mind, being a fickle mortal and all; if they don't know the future, they don't know whether you'll do that.
I think being gods it is fair to assume they know the future, but not every god in every mythology does, in fact most of them don't, so one will need to be careful about that.
I am far from a flawless human being, but I do have half a brain and can compensate for my mistakes. You could argue that I bent the rules on the 'only yes/no questions' idea, and that any halt-problem question shouldn't really be regarded as a yes/no question. But the original problem didn't give any consequences for improper questioning beyond the gods not answering anything other than ja or da, so I figured it should be alright. If a god /does/ curse you for asking a question that god can't answer, that too would provide you information, since some of the gods would be able to answer and others would only be able to curse you. In such a case, you could gamble with a tri-conditional designed to make the truth-teller halt if the next god B is random and the final god C is false, and technically solve it anyway by getting cursed.
Playing with definitions to make the universe definitely solipsist
Let's play a game. Can we use a definition or view of solipsism that makes solipsism 'necessarily true'?
First, the usual definition: solipsism is the belief or idea that you are the only thing in the universe. The motivator is that one cannot actually prove one didn't just imagine everything else.
But note that even a solipsist will notice some things aren't consciously controlled by them. If they are mere thoughts, they are inadvertent thoughts.
So that gives us an amusing way of broadening the definition and solving this puzzle. If our solipsist recognizes a difference between unconscious and conscious, they merely have to classify all things that they aren't directly conscious of and don't have conscious control over as also themselves.
From there, it becomes impossible for the universe to not be solipsist, as we've basically defined every possible state as a solipsism state. The trick, of course, is that we've neglected that there may be other consciousnesses which a more traditional definition of solipsism would say don't exist.
If you define yourself as everything, then you've also lost the ability to talk easily about what most people would actually define as yourself, which is your conscious part, and that needs a new word, like 'moogle' or something. This demonstrates one of the hazards of changing definitions.
Scientism and autovaluism
See my autovaluism article; this is one of the posts there, I won't repeat it.
Math as invented or not depends on definitions
I thiiink ditto for this one as well.