## Friday, January 18, 2013

### Treatment > Cure

There is a particular flavor of conspiracy theory surrounding disease cures that can be stated like this: private interests actively suppress disease cures because they can make more money by "milking" the diseased through ongoing treatments that last a lifetime instead of cures, which only earn money until the disease is mostly gone. Historical evidence does not favor this theory, but putting that aside, I am interested in the underlying incentive problem.

Suppose it is extremely costly to research and develop a mechanism that totally kills all of the roaches, ants, and termites living in or around your home. Such a panacea insecticide could exist; it would just be very costly to develop it. On the other hand, suppose that common insect repellents are relatively cheap and easy to produce, or at least the chemists consulting with would-be insecticide company founders can give convincing reasons to expect it to be cheap.

Then the business making the cheaper insecticides, with its forecast income stream, is more likely to succeed, more likely to get needed venture capital funding, and more likely to actually produce insect repellents that improve the lives of consumers. The savvy business investor will be incented to make products that solve a problem. Yes, an insect "cure" would be better than an ongoing insect "treatment," and we could grumble that the evil business investor is diverting funds that could otherwise be exhaustively spent on a cure search. But then in the meantime we might not have the useful stopgap repellents that, while not a cure, sure make life better and are more assured given their lower production burden.
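The incentive comparison above can be sketched as a toy expected-value calculation. Every number here (success probabilities, R&D costs, profits, the discount rate) is invented purely for illustration and describes no real market:

```python
# A toy expected-value comparison of funding a one-shot "cure" versus an
# ongoing "treatment" (repellent) product. All numbers are invented for
# illustration only.

def expected_npv(success_prob, rnd_cost, annual_profit, years, discount=0.05):
    """Pay the R&D cost up front; if the project succeeds, collect a
    discounted profit stream for `years` years."""
    stream = sum(annual_profit / (1 + discount) ** t for t in range(1, years + 1))
    return success_prob * stream - rnd_cost

# The cure: costly, a long shot, and sales stop once the pests are gone.
cure = expected_npv(success_prob=0.05, rnd_cost=500, annual_profit=400, years=5)

# The repellent: cheap, likely to work, and sells indefinitely (30-year horizon).
repellent = expected_npv(success_prob=0.60, rnd_cost=50, annual_profit=60, years=30)

print(f"cure:      {cure:+.0f}")
print(f"repellent: {repellent:+.0f}")
```

Under these made-up numbers the long-shot cure has negative expected value while the modest repellent business is comfortably positive, which is exactly the incentive gradient described above.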

Why doesn't similar reasoning apply to disease? I sure don't want to get cancer. It looks like a pretty difficult disease to understand and treat, much more so to cure. But smart people have already devoted a lot of time to exploring potential cures, most of which haven't shown signs of working. So we should expect a full cure to be very, very difficult and expensive. So should I desire folks to go Indiana Jones style after that cure, or praise them if they do? Their highly risky and expensive research efforts may fail to produce the "cancer repellent" equivalent along the way, leaving us with no cure, no treatment, and lost wealth.

Instead, perhaps a pharmaceutical company might look at the business implications of a long-term revenue stream from ongoing cancer treatments... the better the treatment (less pain, no hair loss, less weakness or morbidity), the more money people will pay for the treatment stream. That sounds like a good world for future me to live in. Yes, I'd like a flat-out cure more than that, but if the incentives can more reliably steer people towards a treatment that also improves my life, I'll happily take it.

To summarize, ongoing cancer treatments may very well be "better" than cancer cures, in the sense that treatments could be more cheaply and reliably achieved and actually offer investors an attractive revenue stream that will act as an incentive to get them to solve the problem. Going for the cure only, and sneering at treatments as if they are "greedy" ways to solve the problem, is disingenuous: if a treatment improves lives reliably, then it's a good thing, even if it's not the hard-to-get best thing.

I suspect some folks will not like this and will see it as "giving up" on a cure when, because of the high value of human life, we should give up practically anything to find a bulletproof cure. Economic behavior suggests that people only pay lip service to such an idea. But if you're more committed to your convictions than most, an excellent avenue to explore is prize-based charity, where you directly incent people to hit specific milestones of achievement. If it's true that people really want expensive cancer cures instead of just the cheaper cancer repellent, we ought to see a lot more private prize donations.

## Wednesday, December 26, 2012

### Why Loss Lingering?

I just re-watched 'Mother Simpson' (season 7, episode 8 of The Simpsons) and was more struck this time by the end of the episode than the other dozen or so times that I've seen it. In it, Homer's long-presumed-dead mother makes a dramatic reappearance and her past as a law-breaking activist is revealed. At the end, the law is after her again and she must suddenly flee, leaving Homer motherless again. Homer says goodbye as she gets into a van that speeds into the distance. Then Homer remains right there, on the side of a rural road, until long after nightfall. It's that last part that made me think.

I've had many times in life where I've had to make dramatic goodbyes, most often with my family. One thing I often notice is that I retrace steps or linger in places where now-gone people once were. I walk my family to their car or a cab, wave goodbye, then go back to my living room and feel a bit sad to think that they were just in that very living room. I've had these experiences when retracing my steps after saying goodbye at my own apartment, at airports, at train stations, and at various points of interest where paths diverge. I've had this feeling on a small scale when I know it will only be weeks or months before seeing the person again; I've had this feeling on a large scale when I am unsure if I will ever see the person again.

Of course it fades and it is not the same feeling as grief over death or other separation emotions. But it makes me curious about why physical surroundings and retracing steps combine like this to amplify separation loss feelings. It's very easy to come up with simple/obvious answers for this, but are they really good explanations?

Here are a few (very speculative) ideas:

1. Sensory surprise. When you are around family, friends, or other loved ones, you are more attentive to things that would usually just be cursory elements of your setting. The specific sensory experiences that relate to your family, friends, or loved ones become familiar very fast and then their absence later is very stark, providing a depressive environment for previously-heightened senses. For example, it's well-known that peer and mate bonding has a lot to do with oxytocin levels in the brain, and things like eye-to-eye contact can increase this. If you become accustomed to receiving an oxytocin trigger in a given setting and then the trigger goes away, it could be like a mild form of withdrawal.
2. Immediacy of loss. If the impact of separation falls off according to some power law, then the moments right after a separation would be relatively more difficult than distant moments. Thus, we might expect to blow small details out of proportion, like the significance of someone having just been with you at a certain spot, more when separation is extremely recent.
3. Milestone effects. Culturally, communities and societies choose occasions and events that have "intrinsic" meaning. Sure, some have other valuable meanings too (like a college graduation conferring certification of a certain level of hard work or knowledge, or the birth of a child implying all sorts of emotional and physical lifestyle changes). But many things, like birthdays, vacations, retirement, rituals, holidays, or reunions, have meaning that more or less is dreamed up out of thin air and persists only as long as cultural pressures make it persist. But even so, we grow up embedded in audio, video, and tactile sense streams that reinforce the importance of milestone events all the time. Most goodbyes accompany milestone events, and therefore we might be prone to contrast a milestone-setting with a non-milestone-setting.
4. Maybe we don't. It could be that I perceive this affect to be more common than it is. Maybe most people don't linger in post-separation places or states of mind. Perhaps this is more strongly felt the more "emo" or whiny that someone is? What other personality traits does linger-after-loss correlate with? Why would such a trait appear in people? Does it indicate brooding? Is it a signal of commitment to others? What ways have we evolved to detect fake grief, so that such brooding or loyalty signals could be relied upon?

What other explanations help model this linger-after-loss behavior? If it was never very advantageous for social species to develop quick, robust, elastic happiness feelings that pop back into place after a loss, why not?
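The power-law idea in (2) can be made concrete with a tiny sketch. The functional form and exponent are arbitrary assumptions, chosen only to show how steep the early drop-off would be:

```python
# A toy sketch of idea (2): if the felt impact of a separation decays as a
# power law, the drop-off right after the goodbye is far steeper than the
# drop-off a day later. The form and exponent are arbitrary assumptions.

def power_law(t, alpha=1.0):
    """Felt impact t hours after the goodbye (normalized to 1.0 at t=0)."""
    return (1 + t) ** -alpha

def relative_drop(decay, t0, t1):
    """Fraction of the remaining impact lost between times t0 and t1."""
    return (decay(t0) - decay(t1)) / decay(t0)

first_hour = relative_drop(power_law, 0, 1)    # the hour right after the goodbye
later_hour = relative_drop(power_law, 24, 25)  # one hour, a day later

print(f"impact lost in hour 0-1:   {first_hour:.0%}")   # 50%
print(f"impact lost in hour 24-25: {later_hour:.0%}")   # 4%
```

Under any decay of this shape, the change is concentrated in the moments right after the separation, which would predict exactly the kind of acute, short-lived lingering described above.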

## Saturday, September 22, 2012

### Why I Am Not A Szaszian

Thomas Szasz recently died. While I respect the man as an intellectual a great deal, I disagree strongly with some of his claims about mentally disabled people. A fairly comprehensive defense of Szasz's ideas is given by economist Bryan Caplan in The Economics of Szasz. When Caplan writes something it is usually very well thought out and very difficult to successfully disagree with, regardless of how contrarian it sounds at first blush. Even so, I find many things in The Economics of Szasz to be just blatantly incorrect.

I don't have the time to format all of this into a polished essay, but my goal is to do that sometime in the future. For now, I'm just posting various bits of thoughts below. These won't make much sense unless you also read the Caplan link above (which I highly, highly recommend to anyone even remotely interested in philosophy).

Here goes:

First, as a side note, I cannot tell from reading the paper whether Caplan and Szasz would agree with the following statement about schizophrenia: There is mounting evidence that it is a neurological disease with obvious physical indicators and that the severity of the disease can be accurately predicted from these indicators (duh). (I added the 'duh' because computationalism is a highly accurate model of conscious beings.) Now, Caplan and Szasz may agree to that; I'm just saying it really is unclear where they stand from reading Caplan's piece.

Szaszian views attempt to take normative judgments of sanity and sequester them into a magisterium untouchable by neuroscience. But just because no one has yet articulated a genuine empirical distinction between mentally healthy and mentally ill patients does not mean that such a distinction cannot exist. I think the alternative definition offered, that of convincing a randomly selected jury that you are capable of genuinely investigating decision space and truly holding a counterfactual in mind, is very robust against many of Caplan's arguments.

Consider the problem of aging. Smart folks like those at the SENS foundation view aging as a genuine disease, but the FDA and most people do not. Normative judgments like this will always be part of regulatory policy. There is always some line drawn in the sand, even for directly causal physical ailments. The telomeres of my DNA degrade over time causing me to age, yet many officials don't consider this a disease like cancer, which has a qualitatively similar effect at the genetic level but happens to be very spatially correlated, looks ominous on x-ray results, and affects lots of people's frail family elders.

I don't like this about Caplan's analysis because he's acting like we apply normative judgments to some things but not others. The problem is that new things always become exposed for consideration as our theories improve and our measuring devices become more accurate and computers let us churn more data. We can make well-defined physiological claims about mental illness now with functional MRI to a degree that makes diagnosing some mental illnesses about as routine as diagnosing a broken bone. In terms of the well-being of an individual we can clearly say that the state of having a broken bone is worse than the state of having a mended bone (all else equal). Why can't we say the same about incontrovertible physiological brain states? And why is it bad to use a Turing-style test for counterfactual robustness to check for this, replacing such a test with fancy neuronal imaging when we can (perhaps eventually replacing it all with a scan of the connectome)?

This gets at the point. Caplan complains, "Yes you can measure some difference in a mentally retarded person's brain, but who are you to decide that such a measurement 'matters' or 'is bad'?" But by the same token, why can't I say, "Yes, you can measure some difference in the x-ray of an injury victim's wrist bones, but who are you to decide that such a measurement 'matters' or 'is bad'? Why isn't this person just expressing a masochistic preference for bone pain?"

Caplan suggests that alcoholism cannot be viewed as shifting your constraints on actions inward. But I think it can: if you are genetically an alcoholic (as opposed to choosing to be an alcoholic while genuinely aware of the tradeoff space you're operating in), then your brain physiologically cannot generate the processes responsible for simulating what it's like to disprefer alcohol and then base a decision to drink alcohol upon what you discover when you simulate alternatives.

This inability to simulate alternatives would be just like a broken-ankle victim's inability to run at their full speed. There are physical configurations of their spatio-temporal self that they cannot cause to be. Whether it arises gradually from long-term brain degradation, or suddenly from brain injury, etc., the analogy carries just fine. So if we make the normative judgment that a person born without ankles is in a worse state than if they had ankles, surely the same norms tell us that arriving at a brain state that cannot fully entertain the alternative of not drinking is demonstrably worse (a physical impairment) than a state in which one can.

Under Caplan's reasoning: Who are you to tell a Pica sufferer that their extreme economic preference for eating lead plaster "is bad"? It's consistent with choice theory, after all. But the Pica-haver is not expressing a preference to eat lead plaster. They are expressing a physical inability to adequately account for the counterfactual idea of "what will my life be like if I do not eat this lead plaster?"

Section 5.1 on delusions is especially bad. If someone suffers a delusion that communists are going to get them, say, it becomes a delusion precisely when the person is no longer able to convince others that he is capable of genuinely entertaining the counterfactual belief that communists are not after him, at least for the purposes of rational thought. Mainstream religious beliefs are completely absurd, as absurd as any mental illness. But it's not just the having of a belief that matters; it's the condition of the inference mechanism that led to the belief. Most Christians can truly, viscerally imagine what it would be like to truly believe in atheism, and they can give a conscious account of simulated atheism and why they reject it. But the deluded communist-phobe cannot viscerally imagine what it would be like to believe that communists were not after him, and certainly is not able to use language to communicate the capacity for such visceral imagination to a wide array of other people. That's a clear and real difference that clearly affects the well-being of the communist-phobe just as much as a broken ankle or missing limb affects an injured athlete or amputee.

Caplan treats almost any degree of induced preference change (such as changing behavior to avoid a consequence) as confirmation that choice underlies the behavior. But just because there exist some extreme tradeoffs that the person is willing to consider, it does not mean they are capable of entering counterfactual examination mode without the extreme environmental factors (threat of committal to a hospital, say) as stimuli. This certainly wouldn't pass the Turing-test version of sanity, which requires the ability to enter counterfactual exam mode on demand and to convince listeners that you have done so.

Consider the following (hypothetical) logical extensions of Caplan's approach:

A one-legged man cannot run very fast, but I once saw such a man trying to escape a burning building. He altered his preference to avoid tumbling down flights of stairs and launched himself down several flights, acquiring many bruises and broken bones, but escaped the fire. Therefore he has perfectly sufficient ability to escape fires compared with everyone else; you just have to push him to an extreme corner of trade-off space to elicit his fire-escaping ability. We should offer no disability benefits to such a person, such as handicapped-accessible fire escapes or preferential placement on the building's first floor.

A mentally handicapped man cannot keep from screaming when you say his name, but I once saw such a man trying to win a desired cupcake from his therapist in a group session. He had previously avoided hearing his name by jamming Q-tips in his ears, but he altered that preference and set the Q-tips aside as the therapist asked him questions stating his name. He winced and expressed lots of contorted facial expressions each time he heard his name, but ultimately abstained from screaming and received the cupcake. Therefore, he has sufficient ability to respond to the sound of his own name as compared with everyone else; you just have to push him to an extreme corner of trade-off space to elicit his able-to-hear-name-without-screaming ability. We should offer no disability benefits to such a person.

Again, Caplan seems to suggest that because good definitions haven't been put forward, they cannot ever be invented:

"What this inference overlooks, however, is that brain science and behavioral genetics usually ask questions orthogonal to Szasz’s thesis. Return to the case of homosexuality. Does evidence of a strong genetic component raise the probability that homosexuality is a disease after all? It is hard to see how it would."

The problem is that a (sane) homosexual or transgendered person can easily demonstrate that they are capable of counterfactual robustness. Their altered state does not induce a decline in well-being (except perhaps socially) in a conscious creature. (The original basis for stigma against both was religious and tribal, and of course we don't accept alleged metaphysical harms as an adequate basis for declaring illness. Our great-grandfathers did, which is an embarrassment and a shame, but this is basically just a sophisticated straw man.)

Let me be clear: Szasz (and Caplan) has contributed many great ideas that often rub people the wrong way and cut right at false premises underlying many inefficient egalitarian beliefs we have evolved. You should read Szasz and probably you should uncomfortably agree with much of what he says. I just don't think this business about choice theory explaining mental illness is part of Szasz's (or Caplan's) set of good or useful ideas.

## Saturday, September 8, 2012

### How do we reach these kids?

Most folks feel that government spending on education is "good". Yet just as we spend at historic highs on educational assistance programs for poor families, we are seeing a more pronounced shortfall in the later-in-life incomes achieved by children raised in poverty.

Basically, if you think education-specific government assistance spending is helping poor children have an increased shot at higher incomes, the data are very much against you.

A researcher makes the point (link):

"Responding to Ladd’s claim that the gap in reading achievement between students from families in the lowest and highest income deciles is larger for those born in 2001 than for those born in earlier decades, Peterson points out that the achievement gap between income groups was growing at exactly the same time the federal government was rapidly expanding services to the poor – Medicaid, food stamps, Head Start, housing subsidies, and many other programs.

"A better case can be made that any increase in the achievement gap between high- and low-income groups is more the result of changing family structure than of inadequate medical services or preschool education," Peterson says. In 1969, 85 percent of children under the age of 18 were living with two married parents; by 2010, that percentage had declined to 65 percent. The median income level of a single-parent family is just over $27,000 (using 1992 dollars), compared to more than $61,000 for a two-parent family; and the risk of dropping out of high school increases from 11 percent to 28 percent if a white student comes from a single-parent family instead of a two-parent family. For blacks, the increment is from 17 percent to 30 percent, and for Hispanics, the risk rises from 25 percent to 49 percent."

That is, the data suggest that whether a low-income child reaches higher income brackets later in life depends more on parental income, which is higher for two-parent families than for single parents, and less on overall government spending levels. Since the proportion of poor children living in single-parent homes has risen, one would expect negative returns on federal educational investment for any assistance money that is not directed at clearing up the cultural problem of raising kids in single-parent environments.
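To make the quoted figures easier to compare, here is the same data expressed as relative risks. This is a minimal re-derivation of the percentages in the Peterson quote above; no new data is introduced:

```python
# Re-express the dropout figures quoted above as relative risks: how much the
# risk of dropping out of high school rises when moving from a two-parent to
# a single-parent family. Percentages are taken directly from the quote.

dropout_risk = {                    # (two-parent %, single-parent %)
    "white":    (11, 28),
    "black":    (17, 30),
    "hispanic": (25, 49),
}

for group, (two_parent, single_parent) in dropout_risk.items():
    ratio = single_parent / two_parent
    print(f"{group:9s} {two_parent:2d}% -> {single_parent:2d}%  ({ratio:.1f}x)")
# Relative risk ranges from about 1.8x (black) to about 2.5x (white).
```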

More succinctly, you can give the child of a poor single parent lots of free medicine and educational opportunities, but culturally they will be less inclined to take advantage of those opportunities, and for every dollar you spend assisting them in that manner, you pay the opportunity cost of positive changes that could have been made by diverting that money elsewhere (to Planned Parenthood, for example).

## Tuesday, August 21, 2012

### The Semblance of Sustainability

I saw this post linked from Cafe Hayek today. The post makes some valid points. It is hard to approach global warming with true scientific skepticism without being idiotically vilified as a "denier." I worry that we will not think through the cost-benefit analysis of our options, and so I think the post does a good job of taking a stand for skepticism.

But I take issue with a very specific point of the (otherwise very useful) post: the claim that "if warming is not human-caused, then it’s not clear how much we can do to reduce it."

This is a mistake in reasoning that I see often. The ultimate cause of the warming has little to do with whether we can or should fix it. If there is problematic warming, whatever the cause, then we should innovate, as humans are wont to do, and locate a workable solution for our survival and our desired progress.

I often see folks claiming that if the warming weren’t man-induced, then we’re somehow "off the hook" or that we won't have to undergo any unpleasantness like curbing industrial emissions.

But how can this make sense? If an asteroid were bearing down on our planet, it surely would not be human-caused. Whether or not an asteroid impact threat was anthropogenic would not be of concern as NASA prepared whatever best defense we could muster. So why are we stopping to babble on and on about the anthropogenic status of warming instead of just innovating to solve the problem?

The evidence suggests that there is warming and that it will be a problem. This is neither counterintuitive nor unexpected. The warming we have (probably) caused has brought us tons of benefits that we wanted. Probably, even knowing what we know now, most of us would prefer to have our modern innovations, medicines, and comforts if we had the chance to re-do history, even at the expense of animal species and nature.

But we’ve also known all along that we can’t go on using resources in the same ways forever, in any endeavor. At some point there will always be a scarcity limit and humanity will need to invent itself out of a jam.

Humanity is always just MacGyver, barely escaping another jam that we find ourselves in.

Thus, it makes no difference where the warming comes from. If warming comes from something yet unknown, such as unobserved solar trends, and not human industry, that is no reason to remain lax on curbing industrial emissions. Perhaps we must reduce emissions to just barely survive those non-human-caused solar patterns.

Or perhaps there would be good arguments for leaving industrial emissions alone... but the mere blame of the warming trend wouldn’t be among them. In either case, the goal is to locate and invent a solution, not to assign blame.

And to boot, it could very well be the case that humanity did cause the warming but that humanity cannot reverse its effects quickly or effectively enough to keep the earth habitable in the sense that we know it now. I find this possibility to be unlikely, but it’s nonetheless plausible. And if it were true, we would nonetheless have to innovate to bring about our own survival, regardless of our deserved blame.

Yet this fallacy of implication, “if warming is not human-caused, then it’s not clear how much we can do to reduce it,” denies all of these different interpretations. Why should the clarity of a solution be related to who caused the problem?

Human solutions to existential threats rarely depend on the threats’ causes. Think of disease and medicine. Imagine a shaman saying something like this:

If it wasn't your moral transgressions that have upset the gods which caused our recent epidemic of disease, then I am afraid it is unclear whether we can even do anything to prevent the disease at all.

This parochial kind of thinking has always been purely obstructionist.

I credit David Deutsch with the main thrust of this idea, which you can read about in the chapter called “Unsustainable” from his recent book “The Beginning of Infinity”. Here is an excerpt from this section of the book:

Optimistic opponents of Malthusian arguments are often - rightly - keen to stress that all evils are due to lack of knowledge, and that problems are soluble. Prophecies of disaster such as the ones I have described do illustrate the fact that the prophetic mode of thinking, no matter how plausible it seems prospectively, is fallacious and inherently biased. However, to expect that problems will always be solved in time to avert disasters would be the same fallacy. And, indeed, the deeper and more dangerous mistake made by Malthusians is that they claim to have a way of averting resource-allocation disasters (namely, sustainability). Thus they also deny that other great truth that I suggested we engrave in stone: problems are inevitable.

A solution may be problem-free for a period, and in a parochial application, but there is no way of identifying in advance which problems will have such a solution. Hence there is no way, short of stasis, to avoid unforeseen problems arising from new solutions. But stasis itself is unsustainable, as witness every static society in history. Malthus could not have known that the obscure element uranium, which had just been discovered, would eventually become relevant to the survival of civilization, just as my colleague [who argued that anything beyond a black and white TV was just wasteful, unnecessary consumption that depleted rare materials] could not have known that, within his lifetime, colour televisions would be saving lives every day.

So there is no resource-management strategy that can prevent disasters, just as there is no political system that provides only good leaders and good policies, nor a scientific method that provides only true theories. But there are ideas that reliably cause disasters, and one of them is, notoriously, the idea that the future can be scientifically planned. The only rational policy, in all three cases, is to judge institutions, plans, and ways of life according to how good they are at correcting mistakes: removing bad policies and leaders, superseding bad explanations, and recovering from disasters.

For example, one of the triumphs of twentieth-century progress was the discovery of antibiotics, which ended many of the plagues and endemic illnesses that had caused suffering and death since time immemorial. However, it has been pointed out almost from the outset by critics of 'so-called progress' that this triumph may only be temporary, because of the evolution of antibiotic-resistant pathogens. This is often held up as an indictment of - to give it its broad context - Enlightenment hubris. We need lose only one battle in this war of science against bacteria and their weapon, evolution (so the argument goes), to be doomed, because our other 'so-called progress' - such as cheap world-wide air travel, global trade, enormous cities - makes us more vulnerable than ever before to a global pandemic that could exceed the Black Death in destructiveness and even cause our extinction.

But all triumphs are temporary. So to use this fact to reinterpret progress as 'so-called progress' is bad philosophy. The fact that reliance on specific antibiotics is unsustainable is only an indictment from the point of view of someone who expects a sustainable lifestyle. But in reality there is no such thing. Only progress is sustainable.

The prophetic approach can see only what one might do to postpone disaster, namely improve sustainability: drastically reduce and disperse the population, make travel difficult, suppress contact between different geographical areas. A society which did this would not be able to afford the kind of scientific research that would lead to new antibiotics. Its members would hope that their lifestyle would protect them instead. But note that this lifestyle did not, when it was tried, prevent the Black Death. Nor would it cure cancer.

Prevention and delaying tactics are useful, but they can be no more than a minor part of a viable strategy for the future. Problems are inevitable, and sooner or later survival will depend on being able to cope when prevention and delaying tactics have failed. Obviously we need to work towards cures. But we can do that only for diseases we already know about. So we need the capacity to deal with unforeseen, unforeseeable failures. For this we need a large and vibrant research community, interested in explanation and problem-solving. We need the wealth to fund it, and the technological capacity to implement what it discovers.

This is also true of the problem of climate change, about which there is currently great controversy. We face the prospect that carbon-dioxide emissions from technology will cause an increase in the average temperature of the atmosphere, with harmful effects such as droughts, sea-level rises, disruption to agriculture, and the extinction of some species. These are forecast to outweigh the beneficial effects, such as an increase in crop yields, a general boost to plant life, and a reduction in the number of people dying of hypothermia in the winter. Trillions of dollars, and a great deal of legislation and institutional change, intended to reduce those emissions, currently hang on the outcomes of simulations of the planet's climate by the most powerful supercomputers, and on projections by economists about what those computations imply about the economy in the next century. In the light of the above discussion, we should notice several things about the controversy and about the underlying problem.

First, we have been lucky so far. Regardless of how accurate the prevailing climate models are, it is uncontroversial from the laws of physics, without any need for supercomputers or sophisticated modelling, that such emissions must, eventually, increase the temperature, which must, eventually, be harmful. Consider therefore: what if the relevant parameters had been just slightly different and the moment of disaster had been, say, 1902 - Veblen's time - when carbon-dioxide emissions were already orders of magnitude above their pre-Enlightenment values? Then the disaster would have happened before anyone could have predicted it or known what was happening. Sea levels would have risen, agriculture would have been disrupted, millions would have begun to die, with worse to come. And the great issue of the day would have been not how to prevent it but what could be done about it.

They had no supercomputers then. Because of Babbage's failures and the scientific community's misjudgements - and, perhaps most importantly, their lack of wealth - they lacked the vital technology of automated computing altogether. Mechanical calculators and roomfuls of clerks would have been insufficient. But much worse: they had almost no atmospheric physicists. In fact the total number of physicists of all kinds was a small fraction of the number who today work on climate change alone. From society's point of view, physicists were a luxury in 1902, like colour televisions in the 1970s. Yet, to recover from the disaster, society would have needed more scientific knowledge, and better technology, and more of it - that is to say, more wealth. For instance, in 1900, building a sea wall to protect the coast of a low-lying island would have required resources so enormous that the only islands that could have afforded it would have been those with either large concentrations of cheap labor or exceptional wealth, as in the Netherlands, much of whose population already lived below sea level thanks to the technology of dyke-building.

...

It is not yet accurately known how sensitive the atmosphere's temperature is to the concentration of carbon dioxide - that is, how much a given increase in concentration increases the temperature. This number is important politically, because it affects how urgent the problem is: high sensitivity means high urgency; low sensitivity means the opposite. Unfortunately, this has led to the political debate being dominated by the side issue of how 'anthropogenic' (human-caused) the increase in temperature to date has been. It is as if people were arguing about how best to prepare for the next hurricane while all agreeing that the only hurricanes one should prepare for are human-induced ones. All sides seem to assume that if it turns out that a random fluctuation in the temperature is about to raise sea levels, disrupt agriculture, wipe out species and so on, our best plan would be simply to grin and bear it. Or that, if two-thirds of the increase is anthropogenic, we need not mitigate the effects of the other third.

...

The world is currently buzzing with plans to force reductions in gas emissions at almost any cost. But it ought to be buzzing much more with plans to reduce the temperature, or for how to thrive at a higher temperature. And not at all costs, but efficiently and cheaply. Some such plans exist - for instance to remove carbon dioxide from the atmosphere by a variety of methods; and to generate clouds over the oceans to reflect sunlight; and to encourage aquatic organisms to absorb more carbon dioxide. But at the moment these are very minor research efforts. Neither supercomputers nor international treaties nor vast sums of money are devoted to them. They are not central to the human effort to face this problem, or problems like it.

This is dangerous. There is as yet no serious sign of retreat into a sustainable lifestyle (which would really mean achieving only the semblance of sustainability), but even the aspiration is dangerous. For what would we be aspiring to? To forcing the future world into our image, endlessly reproducing our lifestyle, our misconceptions, and our mistakes. But if we choose instead to embark on an open-ended journey of creation and exploration whose every step is unsustainable until it is redeemed by the next - if this becomes the prevailing ethic and aspiration of our society - then the ascent of man, the beginning of infinity, will have become, if not secure, then at least sustainable.

--David Deutsch, from The Beginning of Infinity, pages 435-441.