Understanding your health is hard

This post was originally published at https://drmaciver.substack.com/p/understanding-your-health-is-hard.

Hi everyone,

Today didn’t quite gel into a coherent narrative, so here are a bunch of loosely related fragments about the intersection between physical and mental health, the nature of knowledge, and the difficulty of doing science to everything we’d like to.

Can you train your vision?

An exchange went by on Twitter a little while ago that I keep thinking about, because I realised that the fact it sounded plausible to me was new.

Here’s the relevant discussion:

@imperialauditor Esoteric health thing: a lot of eyesight problems are caused by muscle tension behind and around the eyes.

Glasses can lock in and worsen the tension overall.

I was skeptical at first but my progress is unequivocal, if I can relax enough and let my eyes refocus I see way better.

- Prince Vogelfrei (@PrinceVogel), https://twitter.com/PrinceVogel/status/1374806789884104704

There’s a bunch of discussion up and down the thread about how you can improve your vision with training, and Vogel cites the book “The Art of Seeing” by Aldous Huxley among other things.

I was vaguely curious about this, so I looked into it. Here’s the Wikipedia entry on The Art of Seeing:

The Art of Seeing: An Adventure in Re-education is a 1942 book by Aldous Huxley, which details his experience with and views on the discredited Bates method, which according to Huxley improved his eyesight.

Ah, I see. Later on in the article:

The established ophthalmological and optometric professions have not been convinced. For example, Stewart Duke-Elder wrote

Whatever be the value of the exercises, it is quite unintelligent of Huxley to have confused their advocacy with so many misstatements regarding known scientific facts. It has been shown that the hypothesis upon which these methods of treatment are based is wrong; but Huxley, while admitting he is ignorant of the matter and unqualified to speak, contends that this is of no importance because the method works in practice and gives good results: it comes into the category of “art” not of “science.” The argument is perfectly allowable, for in other spheres than medicine empirical methods have often produced effective results the rationale of which may be mysterious. The most stupid feature about his book, however, is that he insists throughout on the physiological mechanism whereby these exercises are supposed to work. It would at least have been logical if he had continued to allow the reader to assume that he was speaking in ignorance of anything except results…

There would appear to be no doubt that these exercises have done Aldous Huxley himself a great deal of good. Every ophthalmologist knows that they have made quite a number of people with a similar functional affliction happy. And every ophthalmologist equally knows that his consulting-room has long been haunted by people whom they have not helped at all.

This is one of the more positive reviews of the book. Later:

Martin Gardner described The Art of Seeing as “a book destined to rank beside Bishop Berkeley’s famous treatise on the medicinal properties of ‘tar-water’”.

Philip Pollack commented

Huxley sounds in his book like Bates out of Oxford with a major in psychology and metaphysics. Bates wrote of relaxation but Huxley brings in transcendentalism. Tension and poor vision are caused by the refusal of the individual ego to surrender to Nature.

It would be easy to paint a story of how big ophthalmology doesn’t want you to know this one weird trick to improve your vision. It would be easy to paint a story of Huxley as some total kook who has taken far too much mescaline. I suspect that the truth is more complicated: They’re both wrong, and possibly they’re both right.

I mentally flagged this as “probably wrong, maybe worth following up on” as my vision has been gradually degrading over the last ten years and also I really don’t enjoy wearing glasses any more, but I didn’t think much of it past that point.

Then I was reading Peak recently (highly recommended) and ran across the following passage:

Researchers are just beginning to explore the various ways that this plasticity can be put to work. One of the most striking results to date could have implications for anyone who suffers from age-related farsightedness—which is just about everyone over the age of fifty. The study, which was carried out by American and Israeli neuroscientists and vision researchers, was reported in 2012. Those scientists assembled a group of middle-aged volunteers, all of whom had difficulty focusing on nearby objects. The official name of the condition is presbyopia, and it results from a problem with the eye itself, which loses elasticity in its lens, making it more difficult to focus well enough to make out small details. There is also an associated difficulty in detecting contrasts between light and dark areas, which exacerbates the difficulty in focusing. The consequences are a boon for optometrists and opticians and a bother for the over-fifty crowd, nearly all of whom need glasses to read or perform close-up work.

The researchers had their subjects come into the lab three or so times a week for three months and spend thirty minutes each visit training their vision. The subjects were asked to spot a small image against a background that was very similar in shade to the spot; that is, there was very little contrast between the image and the background. Spotting these images required intense concentration and effort. Over time the subjects learned to more quickly and accurately determine the presence of these images. At the end of three months the subjects were tested to see what size type they could read. On average they were able to read letters that were 60 percent smaller than they could at the beginning of the training, and every single subject had improved. Furthermore, after the training every subject was able to read a newspaper without glasses, something a majority of them couldn’t do beforehand. They also were able to read faster than before.

Surprisingly, none of this improvement was caused by changes in the eyes, which had the same stiffness and difficulty focusing as before. Instead, the improvement was due to changes in the part of the brain that interprets visual signals from the eye. Although the researchers couldn’t pinpoint exactly what those changes were, they believe that the brain learned to “de-blur” images. Blurry images result from a combination of two different weaknesses in vision—an inability to see small details and difficulties in detecting differences in contrast—and both of these issues can be helped by the image processing carried out in the brain, in much the same way that image-processing software in a computer or a camera can sharpen an image by such techniques as manipulating the contrast. The researchers who carried out the study believe that their training exercises taught the subjects’ brains to do a better job of processing, which in turn allowed the subjects to discern smaller details without any improvement in the signal from the eyes.
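As an aside, the “image-processing software” comparison in that passage is a real technique. Here’s a minimal sketch of unsharp masking, one classic way software sharpens a picture by manipulating local contrast. This is my own illustration of the analogy, not anything from the study, and it assumes a grayscale image stored as a NumPy array of floats in [0, 1]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Sharpen by adding back the difference between the image and a
    blurred copy of it. This boosts contrast exactly where fine detail
    lives (edges) and leaves smooth regions mostly untouched."""
    blurred = gaussian_filter(image, sigma=sigma)
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)
```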

So at this point I’m leaning towards the theory that yes, vision is just a skill you can practice.

As best as I can tell, the accepted medical wisdom is still that this is not the case. There are not a lot of studies on this, and some casual googling reveals things like The lowdown on eye exercises, where the position seems to be “There’s no evidence that they work, therefore we should assume that they don’t work”. After the last year or two I have a certain amount of learned healthy scepticism towards this attitude.

Fortunately the nice thing about trying to come to a personal understanding of the world is that you can just try things and see what happens. We are bricoleurs, not scientists. So maybe I’ll have a read of the Huxley book and see what happens. I doubt it will work, but it might be interesting?

Holding beliefs lightly

By a complete coincidence, I recently read a book with the subtitle “Why some people see things clearly and others don’t”.

The answer isn’t “because the others need glasses and/or eye muscle strengthening exercises”. “See things clearly” is a metaphor here. The book is Julia Galef’s “The Scout Mindset”, which is about how and why to believe true things.

It’s a good book and I about 80% endorse its contents and 100% endorse reading it.

But one problem I have with it is that it’s mostly not actually about believing true things, it’s about not believing false things, and that’s not actually quite the same. It’s perfectly reasonable to have an entire book about how not to believe false things, and Scout Mindset is a good book about that, but it’s also important not to conflate the two, because Scout Mindset is mostly silent on the question of how to discover true things in the first place.

In order to understand why this is, please consider the totally wrong but in this context useful model where a person’s beliefs can be accurately summed up by a list of statements that they believe.

Suppose this person wants to be a truth seeker. Which of the following do they optimise for?

  1. The number of true statements in their list of beliefs.

  2. The fraction of their beliefs that are true.

This is a trick question: there is no reasonable model of truth seeking in which either of these things is what you optimise for, because for any non-omniscient agent with finite resources, these correspond to the following strategies:

  1. Believe literally everything.

  2. Believe literally nothing.

As I wrote about in There’s no single error rate, in any process of decision making you have multiple error rates (at least two, more if there are more possible outcomes to the decision). Each outcome you can decide on has an associated error rate: the fraction of times when you made that decision and shouldn’t have. Here’s an example from that article:

Suppose you are part of the quality assurance team at a car manufacturer. Your job is to certify the finished cars as safe or not (please note that I know nothing about car manufacture and this example is entirely an illustrative just-so story).

Each time a car is presented to you, you can make two types of error:

  • You can accept an unsafe car as safe

  • You can reject a safe car as unsafe

Both of these errors are bad, but they are not the same sort of thing. Rejecting a safe car is expensive, but passing an unsafe car is a potentially fatal error (and also very expensive if you care more about that sort of thing).

Although you cannot ensure you never make any errors, there are two strategies you can adopt that will let you reduce one of these error rates to zero:

  • To ensure that you never pass an unsafe car, never accept any cars.

  • To ensure that you never reject a safe car, accept every car.

That is, in order to reduce one error rate to 0% you have to make the other error rate 100%: when you reject every unsafe car, you also reject every safe car. When you accept every safe car, you also accept every unsafe car.

This is a general principle: for almost all nontrivial decisions, the only way to avoid ever making one type of error is to always make the other. You can increase the amount of effort you’re spending to decrease both error rates at once, and you can spend a bunch of up-front effort on skill development to maybe improve both at once, but for a fixed amount of effort and skill any attempt to lower one error rate will raise the other.

To see the relevance to the problem of beliefs, let us extend our totally wrong model of belief with a totally wrong model of belief acquisition: you encounter a statement, and then you decide whether to believe it by adding it to your list of beliefs.

One error rate is how often you disbelieve a true statement (refuse to add it to your list - not the same thing as adding its negation to your list!), and another is how often you believe a false statement. In order to maximise the number of true things you believe, you should minimise the former. In order to maximise the fraction of your beliefs that are true, you should minimise the latter.
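To make the trade-off concrete, here’s a minimal simulation of that totally wrong model (all numbers invented): each statement arrives with a noisy plausibility signal, you believe anything above a credence threshold, and sweeping the threshold moves the two error rates in opposite directions - with the extreme thresholds recovering the “believe everything” and “believe nothing” strategies from earlier.

```python
import random

random.seed(0)

# Toy belief acquisition: half of incoming statements are true, and all
# you observe is a noisy plausibility signal (true statements merely
# *tend* to look more plausible). Believe whatever clears the threshold.
N = 100_000
truths = [random.random() < 0.5 for _ in range(N)]
signals = [(0.6 if t else 0.0) + random.gauss(0.3, 0.3) for t in truths]

for threshold in (-9.0, 0.3, 0.6, 0.9, 9.0):
    beliefs = [t for t, s in zip(truths, signals) if s > threshold]
    n_true = sum(beliefs)                                  # true beliefs acquired
    reject_true = 1 - n_true / sum(truths)                 # error rate 1
    accept_false = (len(beliefs) - n_true) / (N - sum(truths))  # error rate 2
    frac = n_true / len(beliefs) if beliefs else float("nan")
    print(f"threshold {threshold:+5.1f}: rejected {reject_true:6.1%} of "
          f"truths, accepted {accept_false:6.1%} of falsehoods, "
          f"{n_true} true beliefs ({frac:.0%} of beliefs)")
```

At one extreme you hold the most true beliefs possible (and an enormous pile of false ones); at the other you hold no false beliefs (and none at all). Everything interesting happens in between.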

In reality, of course, you need to keep both in reasonable regions. You shouldn’t believe too many false things, but you also shouldn’t reject too many true things. Both are failure modes.

Exactly what the right balance is depends on the relevant trade-offs. In the car manufacturing example, selling an unsafe car is really quite bad, so you want to keep that error rate very low, even at the cost of rejecting more safe cars. But if you were manufacturing, say, paper cups, it doesn’t matter so much if you occasionally sell a leaky cup - you still want to keep that rate low, but it can reasonably be much higher than for cars - and because it’s important to keep the cost of production low (paper cups are cheap), you can’t let the other error rate, rejecting good cups, grow as high.

Similarly with beliefs, at a fixed amount of effort, in order to believe more true things (i.e. reduce the rate at which you reject incoming true beliefs) you need to believe more false things (i.e. increase the rate at which you accept incoming false beliefs), and in order for that to be a good idea you have to reduce the cost of having false beliefs.

One of the ways of doing this, which I think Scout Mindset is very good at encouraging, is making it easier to change your mind. It’s OK to have a false belief if, when that belief is tested, you are able to get rid of it later.

Not proven to work

I really enjoy Tim Minchin’s poem, Storm.

(Video: Tim Minchin performing Storm - https://www.youtube.com/watch?v=HhGuXCuDb1U)

In it there’s a great line:

“By definition,” I begin,
“Alternative medicine,” I continue,
“Has either not been proved to work, or been proved not to work.
Do you know what they call
Alternative medicine that’s been proved to work?
Medicine.”

I have historically been very much a fan of this line. Ha ha, good one, Tim. You really owned those alternative medicine fans.

Anyway, then over the last couple of years I got into reading about therapy and similar methodologies as a way to try to figure out my life and emotional state a bit better. A lot of it has been extremely effective.

There’s an interesting thing about therapy. You know what they call therapy that’s been proven to work? CBT.

Cognitive-Behavioural Therapy is the most “evidence based” therapy. This is why it’s so popular in the UK, and is the main form of therapy offered by the NHS. It’s also not very good.

Don’t get me wrong, CBT is probably going to be better than nothing for most people, and for some people CBT is probably exactly the therapeutic tool they need.

But the reason that CBT is the most evidence based therapy is mostly that it’s reasonably easy to run experiments on CBT. It’s easy to provide a routine set of instructions for how to do it, and it has relatively measurable goals, so it’s not that hard to set up a large scale experiment to test the hypothesis that CBT more or less works.

CBT is the most evidence based therapy not because it’s the therapy that is the most effective, but because it’s the therapy that it’s easiest to gather evidence around.

There are a lot of other therapies that we “know” work, but that have not got an evidence base meeting the standard of scientific rigour that would “prove” it. Chances are some of them work and some of them don’t, but also chances are that we’re better off just muddling along and trying them and seeing if it helps.

Science, when done properly, is biased heavily in favour of not believing false things rather than believing true things, and sometimes this is far too cautious and does more harm than good.

Science, it works

(In this contrived lab experiment, p < 0.05)

One recurring argument during my PhD was that there was a lot about the scientific publishing process that I hated. My supervisor had the, not unreasonable, criticism that my objections were often to the part that was science rather than the part that was publishing. I guiltily agreed with him, but said that I still found it very unpleasant.

He was right, I did hate doing science to a lot of things that I wanted to write about. I’m not sure I was wrong to hate it. In a lot of cases this was because they weren’t really worth doing science to.

The problem is that one of the core processes of constructing scientific knowledge looks like this:

  1. You have some general principle you want to test.

  2. You say “Well if this is true in general, it should be true in this specific case”

  3. You repeat the specific case enough times in order to determine whether it is probably true in that case.

The two problems with this approach are that it often doesn’t generalise as well as you claim it does, because there are particular choices in how you construct the specific case - e.g. something might be true only if you test it on white middle-class college students in a particular city in the USA - and that step 3 is really expensive.
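To put a rough number on “really expensive”, here’s a back-of-envelope power calculation (a sketch using the standard normal-approximation sample size formula; the effect sizes are Cohen’s conventional benchmarks, not from any particular study). The required sample size grows with the inverse square of the effect size, and subtle effects are exactly what studies on people tend to chase:

```python
from scipy.stats import norm

def subjects_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate n per group for a two-group comparison of means,
    via the usual normal-approximation sample size formula."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance
    z_power = norm.ppf(power)
    return 2 * (z_alpha + z_power) ** 2 / effect_size ** 2

for d in (0.8, 0.5, 0.2):  # "large", "medium", "small" standardised effects
    print(f"effect size {d}: ~{subjects_per_group(d):.0f} subjects per group")
# effect size 0.8: ~25; 0.5: ~63; 0.2: ~392 subjects per group
```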

A lot of the things I wanted - and to some degree still want - to write about are more to the tune of “Here’s a thing that might under some circumstances be useful”.

I’m not actually against the use of experiments for this - actually validating your ideas is important - but I think the way you use empirical data in engineering is much more specific. You’re not testing a generalisable claim, you’re just keeping an eye on the thing you’re doing to tell whether it works in this particular case - and trying to turn that process into science always felt dishonest to me.

This problem is even worse when you’re doing science with people rather than computers. There are far more choices you need to make in step 2 in going from the general to the specific, and also the experiments you need to run in step 3 are far more expensive. There’s a reason the replication crisis is centred on psychology.

None of which is to say that we shouldn’t do science in some generalised sense, only that science is really hard, and there will often be questions we want to know the answer to but that are not worth doing science to.

The pandemic has had so many instances of good science and bad science, and I don’t plan to cover all of them here (mostly because I don’t know most of them and am not qualified to comment on most of what I do know!), but two interesting ones stand out for me.

One is that among the things that were “not proven to work” were face masks (I’m not actually sure what the current status of this is). We had no scientific studies showing that face masks were an effective protective measure against COVID.

But come on, we could make a pretty good guess. Face masks are and were obviously a good idea, and many of the people responsible for demonising or belittling their use early on in the panic are probably statistical murderers.

The decision of whether to wear a face mask was one of the areas where waiting for science to catch up with common sense had a literal body count.

Vaccine trials are also, perhaps, a case of us being overcautious on the science. I’m more ambivalent about this - I think the case is stronger that we should have been doing better science, certainly (e.g. Human Challenge Trials). On this one I’ll defer to people with more expertise than me. It sure feels like taking an entire year to roll out a vaccine developed in two days is being a bit over cautious under the circumstances, but I don’t really know the details of how that went down.

Vaccine development on the other hand was a stellar example of why despite all of the reservations about science I have in this post, science is great and we should do more of it. Vaccine development is more engineering than science, but it builds heavily on a very large body of previous science.

It’s probably not Lyme disease

One of the reasons why I am very interested in the intersection between science, epistemology, and healthcare is that a couple of years ago I spent about six months thinking it was quite likely that I had spent a decade with undiagnosed Lyme disease.

I won’t bore you with the detailed chain of reasoning, but in short:

  1. There was plenty of evidence that this was a plausible hypothesis, though nothing conclusive.

  2. I had spent about a decade with low-grade chronic fatigue, weird health issues, and generally feeling quite bad, and these would all be well explained as Lyme disease.

I got a negative blood test, but did some research on the internet and concluded that this probably wasn’t 100% conclusive and there were plenty of reasons the blood tests could come back negative even if I had Lyme disease. It was evidence against the hypothesis, but I couldn’t rule it out. There were expensive labs I could send blood to for other, more refined tests, but I never quite got around to doing that. Instead I hit upon a perfectly reasonable idea: why not just do the course of antibiotics that we’d use if it were Lyme disease? It’s an easy experiment to run, and either it is Lyme disease and we fix it, or it’s not Lyme disease and we don’t.

My GP was politely sceptical about this but, after a brief cautionary note, was happy to go along with it. (I now realise he was humouring me, although I still think my reasoning at the time wasn’t wrong.)

Anyway, the antibiotics didn’t help, so I did something that I think shocked my GP (and another medical specialist I was talking to at the time about some breathing issues).

I said “OK, I guess it wasn’t Lyme disease after all.”

This is, I think, a good example of Julia’s “Scout Mindset” - changing your mind when the evidence suggests you are wrong, no matter how appealing the hypothesis was.

I didn’t fully realise this at the time, but it turns out that thinking you have Lyme disease despite all evidence to the contrary is this whole big thing on the internet, and that much of the “research” I was doing was getting information from people with about the scientific credibility of Storm or a snake-oil salesman.

And the thing is, I’m not sure I can really blame them. As I said, I had spent about a decade with low-grade chronic fatigue and weird health issues at that point, and it’s really hard to get doctors to take you seriously. I know through the grapevine that they see this sort of thing so often that they’ve got an internal slightly dismissive term for it: “Tired all the time”. Doctors do not particularly like it when a patient comes in with this complaint, because there’s not much they can do about it once they’ve done the obvious blood tests and ruled out the obvious things, and patients do not like the way they get treated in these circumstances, because they feel like the doctors are telling them that one of the big problems of their life “isn’t real”.

There’s a general maxim that if you have chronic health problems (to some degree you have to even if you don’t. Being human is a chronic health problem), you have to become an expert in your own healthcare, because no doctor will do it for you: They are experts in healthcare in general, but they aren’t experts in you.

The problem is that when you couple this with a total failure of the medical establishment to help you, or indeed to acknowledge your problems as valid, it’s very easy to get uncoupled from reality. I don’t blame the community of people who think they have Lyme disease and don’t - I’m not even 100% sure they don’t, although I’m probably somewhere in the region of 99.9% sure - but I don’t think highly of them either.

It might be depression

My current explain-everything theory to replace “It’s probably Lyme disease” as a way to explain all my problems is “It might be depression”. Or, more generally, a mental health issue.

One of my current lightly held beliefs is that there’s a much stronger link between mental and physical health than we necessarily give credit for, and a lot of chronic health problems are really mostly mental health problems.

This isn’t to say that there’s no underlying physical cause - I’m sure there often is - but that there’s a major feedback loop between mental and physical health where bad mental health can wreck your physical health and bad physical health can wreck your mental health.

Can I prove this scientifically? No, absolutely not.

But it seems to be a useful point of view for me, so far. I’ve made huge strides in my mental health over the last few years and increasingly I’m not tired all the time. I have bad days and good days (and bad weeks and good weeks), but it would no longer feel at all accurate to describe myself as having chronic fatigue.

Unfortunately this means that I often find myself thinking the equivalent of “Have you tried yoga?” at people. I don’t say it, because it would be rude, unhelpful, and violate my policy of No Backseating, but I do think almost everyone with a chronic health problem would benefit from spending a lot more time on weird therapeutic practices than they do. It probably won’t fix things entirely, but it also will probably fix a lot more than you think.

On the other hand, I’m aware that this is a dangerous direction to go in - both politically and personally. I’m reasonably OK with the politics - “Have you tried undergoing a years-long process of personal exploration and emotional healing? It’s really hard work and quite painful, so I don’t blame you if you haven’t or don’t want to.” is rather less victim-blamey than most variants of this sentiment.

I do also worry about becoming one of the Lyme disease people, though. The extreme versions of this sort of belief system are pretty crazy, and I do read material from people with those extreme versions. For example, I often recommend the therapeutic practice of “Feeding Your Demons”, which is mostly a useful exercise for interacting with negative emotional states, but it comes from a book in which the author explains how this sometimes cures extremely physical diseases like AIDS and cancer, and I very much do not want to be that person.

As long as I treat these sources with an appropriate degree of caution, I don’t think this is necessarily bad - they have some useful ideas in them - but it scares me a little that I don’t find the idea that a therapeutic practice can send your cancer into remission utterly out of the question. It’s significantly less plausible than eye training exercises fixing your vision, but the effect of stress on the body is quite strong, and I can’t completely rule out interventions on mental health having dramatic physical effects too.

Living life well is hard

A criticism that I’ve occasionally received about my writing is that it tends to be a bit “Here is a problem, sure is hard, isn’t it?” and not offer any solutions.

Well, here is a problem, sure is hard, isn’t it?

This is especially true of the paid issues - sorry! - because here you’re mostly getting things that are a bit private, on the cutting edge of the bits I’m still figuring out.

But I can maybe offer a talking points level solution of the view I’m trying to get across here:

  1. Be good at changing your mind when you’re wrong (and maybe read Scout Mindset to learn how to do that)

  2. Be pretty relaxed about being wrong for a while until you do.

  3. Get comfortable with the fact that everything is far more complicated than we can possibly deal with, and in many cases you will never know if you were right or wrong.

  4. Try stuff anyway, and learn from it whether it works or not.

Perhaps not the most satisfying answer, but here we are.