Interpretation: Young adults at genetic risk for autosomal dominant Alzheimer’s disease have functional and structural MRI findings and CSF and plasma biomarker findings consistent with Aβ1–42 overproduction. Although the extent to which the underlying brain changes are either neurodegenerative or developmental remains to be determined, this study shows the earliest known biomarker changes in cognitively normal people at genetic risk for autosomal dominant Alzheimer’s disease.
Here’s an article from the New York Times
The brain, as it traverses middle age, gets better at recognizing the central idea, the big picture. If kept in good shape, the brain can continue to build pathways that help its owner recognize patterns and, as a consequence, see significance and even solutions much faster than a young person can.
Meet Paro. He’s a robotic seal developed by Japanese researchers to give dementia patients a sense of companionship and security, without the responsibilities of a living pet. (Thanks to Suzie Katz for alerting me to this story.)
Made to emulate a live pet as much as possible, he can cuddle, nod and blink his big black eyes. Paro is currently being tested with patients in Baden-Baden, and 1,000 robot seals are already deployed in long-term care homes in Japan.
There is an excellent collection of papers comprising the latest issue of Topoi (Volume 28, Number 1 / March, 2009). Judging by the introduction, “Mind Embodied, Embedded, Enacted: One Church or Many?”, this issue was pulled together by Julian Kiverstein and Andy Clark. They set up the issue by posing the following two questions, questions that I’ve been wrestling with of late given the issue I’m currently putting together:
[w]hat, if anything, forms the deep theoretical core of the embodied, embedded approach? Equally importantly, we may ask to what extent the various projects pursued under the single umbrella are in fact harmonious?
I’m pasting it in since access may well be denied.
My Genome, My Self
ONE OF THE PERKS of being a psychologist is access to tools that allow you to carry out the injunction to know thyself. I have been tested for vocational interest (closest match: psychologist), intelligence (above average), personality (open, conscientious, agreeable, average in extraversion, not too neurotic) and political orientation (neither leftist nor rightist, more libertarian than authoritarian). I have M.R.I. pictures of my brain (no obvious holes or bulges) and soon will undergo the ultimate test of marital love: my brain will be scanned while my wife’s name is subliminally flashed before my eyes.
Last fall I submitted to the latest high-tech way to bare your soul. I had my genome sequenced and am allowing it to be posted on the Internet, along with my medical history. The opportunity arose when the biologist George Church sought 10 volunteers to kick off his audacious Personal Genome Project. The P.G.P. has created a public database that will contain the genomes and traits of 100,000 people. Tapping the magic of crowd sourcing that gave us Wikipedia and Google rankings, the project seeks to engage geneticists in a worldwide effort to sift through the genetic and environmental predictors of medical, physical and behavioral traits.
The Personal Genome Project is an initiative in basic research, not personal discovery. Yet the technological advance making it possible — the plunging cost of genome sequencing — will soon give people an unprecedented opportunity to contemplate their own biological and even psychological makeups. We have entered the era of consumer genetics. At one end of the price range you can get a complete sequence and analysis of your genome from Knome (often pronounced “know me”) for $99,500. At the other you can get a sample of traits, disease risks and ancestry data from 23andMe for $399. The science journal Nature listed “Personal Genomics Goes Mainstream” as a top news story of 2008.
Like the early days of the Internet, the dawn of personal genomics promises benefits and pitfalls that no one can foresee. It could usher in an era of personalized medicine, in which drug regimens are customized for a patient’s biochemistry rather than juggled through trial and error, and screening and prevention measures are aimed at those who are most at risk. It opens up a niche for bottom-feeding companies to terrify hypochondriacs by turning dubious probabilities into Genes of Doom. Depending on who has access to the information, personal genomics could bring about national health insurance, leapfrogging decades of debate, because piecemeal insurance is not viable in a world in which insurers can cherry-pick the most risk-free customers, or in which at-risk customers can load up on lavish insurance.
The pitfalls of personal genomics have already made it a subject of government attention. Last year President Bush signed the Genetic Information Nondiscrimination Act, outlawing discrimination in employment and health insurance based on genetic data. And the states of California and New York took action against the direct-to-consumer companies, arguing that what they provide are medical tests and thus can be ordered only by a doctor.
With the genome no less than with the Internet, information wants to be free, and I doubt that paternalistic measures can stifle the industry for long (but then, I have a libertarian temperament). For better or for worse, people will want to know about their genomes. The human mind is prone to essentialism — the intuition that living things house some hidden substance that gives them their form and determines their powers. Over the past century, this essence has become increasingly concrete. Growing out of the early, vague idea that traits are “in the blood,” the essence became identified with the abstractions discovered by Gregor Mendel called genes, and then with the iconic double helix of DNA. But DNA has long been an invisible molecule accessible only to a white-coated priesthood. Today, for the price of a flat-screen TV, people can read their essence as a printout detailing their very own A’s, C’s, T’s and G’s.
A firsthand familiarity with the code of life is bound to confront us with the emotional, moral and political baggage associated with the idea of our essential nature. People have long been familiar with tests for heritable diseases, and the use of genetics to trace ancestry — the new “Roots” — is becoming familiar as well. But we are only beginning to recognize that our genome also contains information about our temperaments and abilities. Affordable genotyping may offer new kinds of answers to the question “Who am I?” — to ruminations about our ancestry, our vulnerabilities, our character and our choices in life.
Over the years I have come to appreciate how elusive the answers to those questions can be. During my first book tour 15 years ago, an interviewer noted that the paleontologist Stephen Jay Gould had dedicated his first book to his father, who took him to see the dinosaurs when he was 5. What was the event that made me become a cognitive psychologist who studies language? I was dumbstruck. The only thing that came to mind was that the human mind is uniquely interesting and that as soon as I learned you could study it for a living, I knew that that was what I wanted to do. But that response would not just have been charmless; it would also have failed to answer the question. Millions of people are exposed to cognitive psychology in college but have no interest in making a career of it. What made it so attractive to me?
As I stared blankly, the interviewer suggested that perhaps it was because I grew up in Quebec in the 1970s when language, our pre-eminent cognitive capacity, figured so prominently in debates about the future of the province. I quickly agreed — and silently vowed to come up with something better for the next time. Now I say that my formative years were a time of raging debates about the political implications of human nature, or that my parents subscribed to a Time-Life series of science books, and my eye was caught by the one called “The Mind,” or that one day a friend took me to hear a lecture by the great Canadian psychologist D. O. Hebb, and I was hooked. But it is all humbug. The very fact that I had to think so hard brought home what scholars of autobiography and memoir have long recognized. None of us know what made us what we are, and when we have to say something, we make up a good story.
An obvious candidate for the real answer is that we are shaped by our genes in ways that none of us can directly know. Of course genes can’t pull the levers of our behavior directly. But they affect the wiring and workings of the brain, and the brain is the seat of our drives, temperaments and patterns of thought. Each of us is dealt a unique hand of tastes and aptitudes, like curiosity, ambition, empathy, a thirst for novelty or for security, a comfort level with the social or the mechanical or the abstract. Some opportunities we come across click with our constitutions and set us along a path in life.
This hardly seems radical — any parent of more than one child will tell you that babies come into the world with distinct personalities. But what can anyone say about how the baby got to be that way? Until recently, the only portents on offer were traits that ran in the family, and even they conflated genetic tendencies with family traditions. Now, at least in theory, personal genomics can offer a more precise explanation. We might be able to identify the actual genes that incline a person to being nasty or nice, an egghead or a doer, a sad sack or a blithe spirit.
Looking to the genome for the nature of the person is far from innocuous. In the 20th century, many intellectuals embraced the idea that babies are blank slates that are inscribed by parents and society. It allowed them to distance themselves from toxic doctrines like that of a superior race, the eugenic breeding of a better species or a genetic version of the Twinkie Defense in which individuals or society could evade responsibility by saying that it’s all in the genes. When it came to human behavior, the attitude toward genetics was “Don’t go there.” Those who did go there found themselves picketed, tarred as Nazis and genetic determinists or, in the case of the biologist E. O. Wilson, doused with a pitcher of ice water at a scientific conference.
Today, as the lessons of history have become clearer, the taboo is fading. Though the 20th century saw horrific genocides inspired by Nazi pseudoscience about genetics and race, it also saw horrific genocides inspired by Marxist pseudoscience about the malleability of human nature. The real threat to humanity comes from totalizing ideologies and the denial of human rights, rather than a curiosity about nature and nurture. Today it is the humane democracies of Scandinavia that are hotbeds of research in behavioral genetics, and two of the groups who were historically most victimized by racial pseudoscience — Jews and African-Americans — are among the most avid consumers of information about their genes.
Nor should the scare word “determinism” get in the way of understanding our genetic roots. For some conditions, like Huntington’s disease, genetic determinism is simply correct: everyone with the defective gene who lives long enough will develop the condition. But for most other traits, any influence of the genes will be probabilistic. Having a version of a gene may change the odds, making you more or less likely to have a trait, all things being equal, but as we shall see, the actual outcome depends on a tangle of other circumstances as well.
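The probabilistic framing above can be made concrete. As a minimal sketch (the baseline risk and odds ratio below are hypothetical placeholders, not figures from any study), here is how a gene variant that "changes the odds" translates into a shift in absolute risk:

```python
def shift_risk(baseline_prob: float, odds_ratio: float) -> float:
    """Apply a variant's odds ratio to a baseline probability of a trait.

    Genes rarely determine outcomes; a variant typically multiplies the
    *odds* of a trait, and the resulting probability still depends on
    the baseline rate and everything else going on.
    """
    odds = baseline_prob / (1.0 - baseline_prob)  # probability -> odds
    new_odds = odds * odds_ratio                  # the variant scales the odds
    return new_odds / (1.0 + new_odds)            # odds -> probability

# Hypothetical numbers: a 20% baseline risk and a variant with odds ratio 1.5
print(round(shift_risk(0.20, 1.5), 3))  # 0.273 -- a nudge, not a verdict
```

The point of the sketch is that even a variant that raises the odds by half moves a 20 percent risk only to about 27 percent, all else being equal.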
With personal genomics in its infancy, we can’t know whether it will deliver usable information about our psychological traits. But evidence from old-fashioned behavioral genetics — studies of twins, adoptees and other kinds of relatives — suggests that those genes are in there somewhere. Though once vilified as fraud-infested crypto-eugenics, behavioral genetics has accumulated sophisticated methodologies and replicable findings, which can tell us how much we can ever expect to learn about ourselves from personal genomics.
To study something scientifically, you first have to measure it, and psychologists have developed tests for many mental traits. And contrary to popular opinion, the tests work pretty well: they give a similar measurement of a person every time they are administered, and they statistically predict life outcomes like school and job performance, psychiatric diagnoses and marital stability. Tests for intelligence might ask people to recite a string of digits backward, define a word like “predicament,” identify what an egg and a seed have in common or assemble four triangles into a square. Personality tests ask people to agree or disagree with statements like “Often I cross the street in order not to meet someone I know,” “I often was in trouble in school,” “Before I do something I try to consider how my friends will react to it” and “People say insulting and vulgar things about me.” People’s answers to a large set of these questions tend to vary in five major ways: openness to experience, conscientiousness, extraversion, agreeableness (as opposed to antagonism) and neuroticism. The scores can then be compared with those of relatives who vary in relatedness and family backgrounds.
The most prominent finding of behavioral genetics has been summarized by the psychologist Eric Turkheimer: “The nature-nurture debate is over. . . . All human behavioral traits are heritable.” By this he meant that a substantial fraction of the variation among individuals within a culture can be linked to variation in their genes. Whether you measure intelligence or personality, religiosity or political orientation, television watching or cigarette smoking, the outcome is the same. Identical twins (who share all their genes) are more similar than fraternal twins (who share half their genes that vary among people). Biological siblings (who share half those genes too) are more similar than adopted siblings (who share no more genes than do strangers). And identical twins separated at birth and raised in different adoptive homes (who share their genes but not their environments) are uncannily similar.
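The twin-comparison logic can be sketched numerically. A classic (and deliberately simplified) estimator is Falconer's formula, which doubles the gap between identical-twin and fraternal-twin correlations; the correlations below are hypothetical placeholders, not results from any particular study:

```python
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Falconer's rough estimate of heritability from twin correlations.

    r_mz: trait correlation between identical (monozygotic) twins
    r_dz: trait correlation between fraternal (dizygotic) twins

    Identical twins share all their segregating genes, fraternal twins
    about half, so twice the difference between the two correlations
    approximates the genetic share of the trait's variance.
    """
    return 2.0 * (r_mz - r_dz)

# Hypothetical correlations for some trait: 0.75 for identical twins,
# 0.45 for fraternal twins -> an estimated heritability of about 0.60
print(falconer_heritability(0.75, 0.45))
```

Real behavioral-genetic models are far more elaborate, but the core inference is this comparison between classes of relatives.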
Behavioral geneticists like Turkheimer are quick to add that many of the differences among people cannot be attributed to their genes. First among these are the effects of culture, which cannot be measured by these studies because all the participants come from the same culture, typically middle-class European or American. The importance of culture is obvious from the study of history and anthropology. The reason that most of us don’t challenge each other to duels or worship our ancestors or chug down a nice warm glass of cow urine has nothing to do with genes and everything to do with the milieu in which we grew up. But this still leaves the question of why people in the same culture differ from one another.
At this point behavioral geneticists will point to data showing that even within a single culture, individuals are shaped by their environments. This is another way of saying that a large fraction of the differences among individuals in any trait you care to measure do not correlate with differences among their genes. But a look at these nongenetic causes of our psychological differences shows that it’s far from clear what this “environment” is.
Behavioral genetics has repeatedly found that the “shared environment” — everything that siblings growing up in the same home have in common, including their parents, their neighborhood, their home, their peer group and their school — has less of an influence on the way they turn out than their genes. In many studies, the shared environment has no measurable influence on the adult at all. Siblings reared together end up no more similar than siblings reared apart, and adoptive siblings reared in the same family end up not similar at all. A large chunk of the variation among people in intelligence and personality is not predictable from any obvious feature of the world of their childhood.
Think of a pair of identical twins you know. They are probably highly similar, but they are certainly not indistinguishable. They clearly have their own personalities, and in some cases one twin can be gay and the other straight, or one schizophrenic and the other not. But where could these differences have come from? Not from their genes, which are identical. And not from their parents or siblings or neighborhood or school either, which were also, in most cases, identical. Behavioral geneticists attribute this mysterious variation to the “nonshared” or “unique” environment, but that is just a fudge factor introduced to make the numbers add up to 100 percent.
No one knows what the nongenetic causes of individuality are. Perhaps people are shaped by modifications of genes that take place after conception, or by haphazard fluctuations in the chemical soup in the womb or the wiring up of the brain or the expression of the genes themselves. Even in the simplest organisms, genes are not turned on and off like clockwork but are subject to a lot of random noise, which is why genetically identical fruit flies bred in controlled laboratory conditions can end up with unpredictable differences in their anatomy. This genetic roulette must be even more significant in an organism as complex as a human, and it tells us that the two traditional shapers of a person, nature and nurture, must be augmented by a third one, brute chance.
The discoveries of behavioral genetics call for another adjustment to our traditional conception of a nature-nurture cocktail. A common finding is that the effects of being brought up in a given family are sometimes detectable in childhood, but that they tend to peter out by the time the child has grown up. That is, the reach of the genes appears to get stronger as we age, not weaker. Perhaps our genes affect our environments, which in turn affect ourselves. Young children are at the mercy of parents and have to adapt to a world that is not of their choosing. As they get older, however, they can gravitate to the microenvironments that best suit their natures. Some children naturally lose themselves in the library or the local woods or the nearest computer; others ingratiate themselves with the jocks or the goths or the church youth group. Whatever genetic quirks incline a youth toward one niche or another will be magnified over time as they develop the parts of themselves that allow them to flourish in their chosen worlds. Also magnified are the accidents of life (catching or dropping a ball, acing or flubbing a test), which, according to the psychologist Judith Rich Harris, may help explain the seemingly random component of personality variation. The environment, then, is not a stamping machine that pounds us into a shape but a cafeteria of options from which our genes and our histories incline us to choose.
All this sets the stage for what we can expect from personal genomics. Our genes are a big part of what we are. But even knowing the totality of genetic predictors, there will be many things about ourselves that no genome scan — and for that matter, no demographic checklist — will ever reveal. With these bookends in mind, I rolled up my sleeve, drooled into a couple of vials and awaited the results of three analyses of my DNA.
The output of a complete genome scan would be a list of six billion A’s, C’s, G’s and T’s — a multigigabyte file that is still prohibitively expensive to generate and that, by itself, will always be perfectly useless. That is why most personal genomics ventures are starting with smaller portions of the genome that promise to contain nuggets of interpretable information.
The Personal Genome Project is beginning with the exome: the 1 percent of our genome that is translated into strings of amino acids that assemble themselves into proteins. Proteins make up our physical structure, catalyze the chemical reactions that keep us alive and regulate the expression of other genes. The vast majority of heritable diseases that we currently understand involve tiny differences in one of the exons that collectively make up the exome, so it’s a logical place to start.
Only a portion of my exome has been sequenced by the P.G.P. so far, none of it terribly interesting. But I did face a decision that will confront every genome consumer. Most genes linked to disease nudge the odds of developing the illness up or down a bit, and when the odds are increased, there is a recommended course of action, like more frequent testing or a preventive drug or a lifestyle change. But a few genes are perfect storms of bad news: high odds of developing a horrible condition that you can do nothing about. Huntington’s disease is one example, and many people whose family histories put them at risk (like Arlo Guthrie, whose father, Woody, died of the disease) choose not to learn whether they carry the gene.
Another example is the apolipoprotein E gene (APOE). Nearly a quarter of the population carries one copy of the E4 variant, which triples their risk of developing Alzheimer’s disease. Two percent of people carry two copies of the variant (one from each parent), which increases their risk fifteenfold. James Watson, who with Francis Crick discovered the structure of DNA and who was one of the first two humans to have his genome sequenced, asked not to see which variant he had.
As it turns out, we know what happens to people who do get the worst news. According to preliminary findings by the epidemiologist Robert C. Green, they don’t sink into despair or throw themselves off bridges; they handle it perfectly well. This should not be terribly surprising. All of us already live with the knowledge that we have the fatal genetic condition called mortality, and most of us cope using some combination of denial, resignation and religion. Still, I figured that my current burden of existential dread is just about right, so I followed Watson’s lead and asked for a line-item veto of my APOE gene information when the P.G.P. sequencer gets to it.
The genes analyzed by a new company called Counsyl are more actionable, as they say in the trade. Their “universal carrier screen” is meant to tell prospective parents whether they carry genes that put their potential children at risk for more than a hundred serious diseases like cystic fibrosis and alpha thalassemia. If both parents have a copy of a recessive disease gene, there is a one-in-four chance that any child they conceive will develop the disease. With this knowledge they can choose to adopt a child instead or to undergo in-vitro fertilization and screen the embryos for the dangerous genes. It’s a scaled-up version of the Tay-Sachs test that Ashkenazi Jews have undergone for decades.
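The one-in-four figure follows from simple Mendelian bookkeeping, which can be enumerated directly. A minimal sketch (the allele labels are generic, not drawn from Counsyl's actual panel): each carrier parent has one normal allele "A" and one recessive disease allele "a", and passes one of the two at random.

```python
from itertools import product

def offspring_odds(parent1: str, parent2: str) -> dict:
    """Enumerate the Punnett square for a single gene.

    Each parent is a two-letter genotype string; each letter is one
    allele, and the child receives one allele from each parent.
    """
    combos = [''.join(sorted(pair)) for pair in product(parent1, parent2)]
    return {g: combos.count(g) / len(combos) for g in set(combos)}

# Two carriers (genotype 'Aa') conceiving a child:
odds = offspring_odds("Aa", "Aa")
print(odds["aa"])  # 0.25 -> the one-in-four chance the child is affected
print(odds["Aa"])  # 0.5  -> half the children are carriers like the parents
```

The same enumeration shows why the disease skips generations: carrier-by-noncarrier pairings ("Aa" with "AA") produce no affected children at all.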
I have known since 1972 that I am clean for Tay-Sachs, but the Counsyl screen showed that I carry one copy of a gene for familial dysautonomia, an incurable disorder of the autonomic nervous system that causes a number of unpleasant symptoms and a high chance of premature death. A well-meaning colleague tried to console me, but I was pleased to gain the knowledge. Children are not in my cards, but my nieces and nephews, who have a 25 percent chance of being carriers, will know to get tested. And I can shut the door to whatever wistfulness I may have had about my childlessness. The gene was not discovered until 2001, well after the choice confronted me, so my road not taken could have led to tragedy. But perhaps that’s the way you think if you are open to experience and not too neurotic.
Familial dysautonomia is found almost exclusively among Ashkenazi Jews, and 23andMe provided additional clues to that ancestry in my genome. My mitochondrial DNA (which is passed intact from mother to offspring) is specific to Ashkenazi populations and is similar to ones found in Sephardic and Oriental Jews and in Druze and Kurds. My Y chromosome (which is passed intact from father to son) is also Levantine, common among Ashkenazi, Sephardic and Oriental Jews and also sprinkled across the eastern Mediterranean. Both variants arose in the Middle East more than 2,000 years ago and were probably carried to regions in Italy by Jewish exiles after the Roman destruction of Jerusalem, then to the Rhine Valley in the Middle Ages and eastward to the Pale of Settlement in Poland and Moldova, ending up in my father’s father and my mother’s mother a century ago.
It’s thrilling to find yourself so tangibly connected to two millenniums of history. And even this secular, ecumenical Jew experienced a primitive tribal stirring in learning of a deep genealogy that coincides with the handing down of traditions I grew up with. But my blue eyes remind me not to get carried away with delusions about a Semitic essence. Mitochondrial DNA, and the Y chromosome, do not literally tell you about “your ancestry” but only half of your ancestry a generation ago, a quarter two generations ago and so on, shrinking exponentially the further back you go. In fact, since the further back you go the more ancestors you theoretically have (eight great-grandparents, sixteen great-great-grandparents and so on), at some point there aren’t enough ancestors to go around, everyone’s ancestors overlap with everyone else’s, and the very concept of personal ancestry becomes meaningless. I found it just as thrilling to zoom outward in the diagrams of my genetic lineage and see my place in a family tree that embraces all of humanity.
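The arithmetic behind that collapse of personal ancestry is easy to check. A minimal sketch (using a rough present-day world population of seven billion as the bound; historical populations were far smaller, so in practice the overlap kicks in much sooner):

```python
def generations_until_overlap(population: int) -> int:
    """Smallest number of generations back at which your nominal
    ancestor count (2**n: two parents, four grandparents, ...)
    exceeds the given population, forcing ancestral lines to overlap.
    """
    n = 0
    while 2 ** n <= population:
        n += 1
    return n

print(generations_until_overlap(7_000_000_000))  # 33 generations
```

Thirty-three generations is less than a thousand years of human history, which is why family trees inevitably fold back on themselves.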
As fascinating as carrier screening and ancestry are, the really new feature offered by 23andMe is its genetic report card. The company directs you to a Web page that displays risk factors for 14 diseases and 10 traits, and links to pages for an additional 51 diseases and 21 traits for which the scientific evidence is more iffy. Curious users can browse a list of markers from the rest of their genomes with a third-party program that searches a wiki of gene-trait associations that have been reported in the scientific literature. I found the site user-friendly and scientifically responsible. This clarity, though, made it easy to see that personal genomics has a long way to go before it will be a significant tool of self-discovery.
The two biggest pieces of news I got about my disease risks were a 12.6 percent chance of getting prostate cancer before I turn 80 compared with the average risk for white men of 17.8 percent, and a 26.8 percent chance of getting Type 2 diabetes compared with the average risk of 21.9 percent. Most of the other outcomes involved even smaller departures from the norm. For a blessedly average person like me, it is completely unclear what to do with these odds. A one-in-four chance of developing diabetes should make any prudent person watch his weight and other risk factors. But then so should a one-in-five chance.
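To put those percentages side by side, a relative-risk calculation using the figures just reported makes the blessed averageness explicit:

```python
def relative_risk(personal: float, average: float) -> float:
    """Ratio of a personal absolute risk to the population average.

    A value near 1.0 means the genetic readout barely moves you
    off the baseline that applied before you spat in the vial.
    """
    return personal / average

# Figures reported above: prostate cancer 12.6% vs. an average of 17.8%,
# Type 2 diabetes 26.8% vs. an average of 21.9%
print(round(relative_risk(12.6, 17.8), 2))  # 0.71: below-average risk
print(round(relative_risk(26.8, 21.9), 2))  # 1.22: modestly elevated risk
```

Neither ratio strays far enough from 1.0 to change what a prudent person should do anyway.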
It became all the more confusing when I browsed for genes beyond those on the summary page. Both the P.G.P. and the genome browser turned up studies that linked various of my genes to an elevated risk of prostate cancer, deflating my initial relief at the lowered risk. Assessing risks from genomic data is not like using a pregnancy-test kit with its bright blue line. It’s more like writing a term paper on a topic with a huge and chaotic research literature. You are whipsawed by contradictory studies with different sample sizes, ages, sexes, ethnicities, selection criteria and levels of statistical significance. Geneticists working for 23andMe sift through the journals and make their best judgments of which associations are solid. But these judgments are necessarily subjective, and they can quickly become obsolete now that cheap genotyping techniques have opened the floodgates to new studies.
Direct-to-consumer companies are sometimes accused of peddling “recreational genetics,” and there’s no denying the horoscopelike fascination of learning about genes that predict your traits. Who wouldn’t be flattered to learn that he has two genes associated with higher I.Q. and one linked to a taste for novelty? It is also strangely validating to learn that I have genes for traits that I already know I have, like light skin and blue eyes. Then there are the genes for traits that seem plausible enough but make the wrong prediction about how I live my life, like my genes for tasting the bitterness in broccoli, beer and brussels sprouts (I consume them all), for lactose-intolerance (I seem to tolerate ice cream just fine) and for fast-twitch muscle fibers (I prefer hiking and cycling to basketball and squash). I also have genes that are nothing to brag about (like average memory performance and lower efficiency at learning from errors), ones whose meanings are a bit baffling (like a gene that gives me “typical odds” for having red hair, which I don’t have), and ones whose predictions are flat-out wrong (like a high risk of baldness).
For all the narcissistic pleasure that comes from poring over clues to my inner makeup, I soon realized that I was using my knowledge of myself to make sense of the genetic readout, not the other way around. My novelty-seeking gene, for example, has been associated with a cluster of traits that includes impulsivity. But I don’t think I’m particularly impulsive, so I interpret the gene as the cause of my openness to experience. But then it may be like that baldness gene, and say nothing about me at all.
Individual genes are just not very informative. Call it Geno’s Paradox. We know from classic medical and behavioral genetics that many physical and psychological traits are substantially heritable. But when scientists use the latest methods to fish for the responsible genes, the catch is paltry.
Take height. Though health and nutrition can affect stature, height is highly heritable: no one thinks that Kareem Abdul-Jabbar just ate more Wheaties growing up than Danny DeVito. Height should therefore be a target-rich area in the search for genes, and in 2007 a genomewide scan of nearly 16,000 people turned up a dozen of them. But these genes collectively accounted for just 2 percent of the variation in height, and a person who had most of the genes was barely an inch taller, on average, than a person who had few of them. If that’s the best we can do for height, which can be assessed with a tape measure, what can we expect for more elusive traits like intelligence or personality?
Geno’s Paradox entails that apart from carrier screening, personal genomics will be more recreational than diagnostic for some time to come. Some reasons are technological. The affordable genotyping services don’t actually sequence your entire genome but follow the time-honored scientific practice of looking for one’s keys under the lamppost because that’s where the light is best. They scan for half a million or so spots on the genome where a single nucleotide (half a rung on the DNA ladder) is likely to differ from one person to the next. These differences are called Single Nucleotide Polymorphisms, or SNPs (pronounced “snips”), and they can be cheaply identified en masse by putting a dollop of someone’s DNA on a device called a microarray or SNP chip. A SNP can be a variant of a gene, or can serve as a signpost for variants of a gene that are nearby.
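The SNP idea can be illustrated by comparing two short, made-up aligned sequences and listing the positions where a single letter differs; this is what a SNP chip's half-million probes are effectively doing in parallel:

```python
def snp_positions(seq1: str, seq2: str) -> list:
    """Positions where two aligned DNA sequences differ by one letter."""
    return [i for i, (a, b) in enumerate(zip(seq1, seq2)) if a != b]

# Two made-up aligned reads from different people:
person1 = "ACGTTAGC"
person2 = "ACATTAGT"
print(snp_positions(person1, person2))  # positions 2 and 7 differ
```

Each such position is a candidate marker: the differing letter may itself be a gene variant, or merely travel alongside one.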
But not all genetic variation comes in the form of these one-letter typos. A much larger portion of our genomes varies in other ways. A chunk of DNA may be missing or inverted or duplicated, or a tiny substring may be repeated different numbers of times — say, five times in one person and seven times in another. These variations are known to cause diseases and differences in personality, but unless they accompany a particular SNP, they will not turn up on a SNP chip.
As sequencing technology improves, more of our genomic variations will come into view. But determining what those variants mean is another matter. A good day for geneticists is one in which they look for genes that have nice big effects and that are found in many people. But remember the minuscule influence of each of the genes that affects stature. There may be hundreds of other such genes, each affecting height by an even smaller smidgen, but it is hard to discern the genes in this long tail of the distribution amid the cacophony of the entire genome. And so it may be for the hundreds or thousands of genes that make you a teensy bit smarter or duller, calmer or more jittery.
Another kind of headache for geneticists comes from gene variants that do have large effects but that are unique to you or to some tiny fraction of humanity. These, too, are hard to spot in genomewide scans. Say you have a unique genetic variant that gives you big ears. The problem is that you have other unique genes as well. Since it would be literally impossible to assemble a large sample of people who do and don’t have the crucial gene and who do and don’t have big ears, there is no way to know which of your proprietary genes is the culprit. If we understood the molecular assembly line by which ears were put together in the embryo, we could identify the gene by what it does rather than by what it correlates with. But with most traits, that’s not yet possible — not for ears, and certainly not for a sense of humor or a gift of gab or a sweet disposition. In fact, the road to discovery in biology often goes in the other direction. Biologists discover the genetic pathways that build an organ by spotting genes that correlate with different forms of it and then seeing what they do.
So how likely is it that future upgrades to consumer genomics kits will turn up markers for psychological traits? The answer depends on why we vary in the first place, an unsolved problem in behavioral genetics. And the answer may be different for different psychological traits.
In theory, we should hardly differ at all. Natural selection works like compound interest: a gene with even a 1 percent advantage in the number of surviving offspring it yields will expand geometrically over a few hundred generations and quickly crowd out its less fecund alternatives. Why didn’t this winnowing leave each of us with the best version of every gene, making each of us as vigorous, smart and well adjusted as human physiology allows? The world would be a duller place, but evolution doesn’t go out of its way to keep us entertained.
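The compound-interest arithmetic here is easy to check for yourself. Below is a minimal sketch using textbook haploid replicator dynamics; the 1 percent fitness edge, the 0.1 percent starting frequency, and the 99 percent "takeover" threshold are illustrative assumptions, not figures from the essay:

```python
# Selection as compound interest: an allele with a 1% reproductive advantage
# starts rare (0.1% of the gene pool) and its frequency is updated each
# generation by standard haploid replicator dynamics.

def generations_to_majority(advantage=0.01, start_freq=0.001, threshold=0.99):
    """Count generations until the favored allele reaches `threshold` frequency."""
    p = start_freq
    generations = 0
    while p < threshold:
        # Mean fitness of the population, with the favored allele at fitness 1+s.
        mean_fitness = p * (1 + advantage) + (1 - p) * 1.0
        p = p * (1 + advantage) / mean_fitness
        generations += 1
    return generations

print(generations_to_majority())  # roughly a thousand generations
```

Even with these conservative starting numbers, the favored allele crowds out its alternatives in on the order of a thousand generations, an evolutionary eyeblink, which is exactly why persistent variation needs explaining.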
It’s tempting to say that society as a whole prospers with a mixture of tinkers, tailors, soldiers, sailors and so on. But evolution selects among genes, not societies, and if the genes that make tinkers outreproduce the genes that make tailors, the tinker genes will become a monopoly. A better way of thinking about genetic diversity is that if everyone were a tinker, it would pay to have tailor genes, and the tailor genes would start to make an inroad, but then as society filled up with tailor genes, the advantage would shift back to the tinkers. A result would be an equilibrium with a certain proportion of tinkers and a certain proportion of tailors. Biologists call this process balancing selection: two designs for an organism are equally fit, but in different physical or social environments, including the environments that consist of other members of the species. Often the choice between versions of such a trait is governed by a single gene, or a few adjacent genes that are inherited together. If instead the trait were controlled by many genes, then during sexual reproduction those genes would get all mixed up with the genes from the other parent, who might have the alternative version of the trait. Over several generations the genes for the two designs would be thoroughly scrambled, and the species would be homogenized.
The psychologists Lars Penke, Jaap Denissen and Geoffrey Miller argue that personality differences arise from this process of balancing selection. Selfish people prosper in a world of nice guys, until they become so common that they start to swindle one another, whereupon nice guys who cooperate get the upper hand, until there are enough of them for the swindlers to exploit, and so on. The same balancing act can favor rebels in a world of conformists and vice versa, or doves in a world of hawks.
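For readers who like to watch the equilibrium emerge, the tinker/tailor (or hawk/dove) story can be simulated directly. This toy model uses the classic hawk-dove payoffs from evolutionary game theory; the payoff values V (the contested resource) and C (the cost of fighting) are illustrative assumptions:

```python
# Toy model of balancing selection via the hawk-dove game: each strategy's
# payoff falls as it becomes more common, so neither can take over and the
# population settles at a mixed equilibrium (hawk frequency V/C).

def hawk_dove_equilibrium(V=2.0, C=3.0, p=0.01, generations=2000):
    """Iterate replicator dynamics; returns the long-run hawk frequency."""
    for _ in range(generations):
        # Expected payoffs given the current mix (standard hawk-dove matrix).
        w_hawk = p * (V - C) / 2 + (1 - p) * V
        w_dove = (1 - p) * V / 2
        mean_w = p * w_hawk + (1 - p) * w_dove
        p = p * w_hawk / mean_w
    return p

print(round(hawk_dove_equilibrium(), 3))  # settles near V/C = 2/3
```

Start hawks at 1 percent or at 99 percent and the population drifts to the same mixed proportion, which is the signature of balancing selection: the rarer type always has the advantage.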
The optimal personality may also depend on the opportunities and risks presented by different environments. The early bird gets the worm, but the second mouse gets the cheese. An environment that has worms in some parts but mousetraps in others could select for a mixture of go-getters and nervous nellies. More plausibly, it selects for organisms that sniff out what kind of environment they are in and tune their boldness accordingly, with different individuals setting their danger threshold at different points.
But not all variation in nature arises from balancing selection. The other reason that genetic variation can persist is that rust never sleeps: new mutations creep into the genome faster than natural selection can weed them out. At any given moment, the population is laden with a portfolio of recent mutations, each of whose days are numbered. This Sisyphean struggle between selection and mutation is common with traits that depend on many genes, because there are so many things that can go wrong.
Penke, Denissen and Miller argue that a mutation-selection standoff is the explanation for why we differ in intelligence. Unlike personality, where it takes all kinds to make a world, with intelligence, smarter is simply better, so balancing selection is unlikely. But intelligence depends on a large network of brain areas, and it thrives in a body that is properly nourished and free of diseases and defects. Many genes are engaged in keeping this system going, and so there are many genes that, when mutated, can make us a little bit stupider.
At the same time there aren’t many mutations that can make us a whole lot smarter. Mutations in general are far more likely to be harmful than helpful, and the large, helpful ones were low-hanging fruit that were picked long ago in our evolutionary history and entrenched in the species. One reason for this can be explained with an analogy inspired by the mathematician Ronald Fisher. A large twist of a focusing knob has some chance of bringing a microscope into better focus when it is far from the best setting. But as the barrel gets closer to the target, smaller and smaller tweaks are needed to bring any further improvement.
The Penke/Denissen/Miller theory, which attributes variation in personality and intelligence to different evolutionary processes, is consistent with what we have learned so far about the genes for those two kinds of traits. The search for I.Q. genes calls to mind the cartoon in which a scientist with a smoldering test tube asks a colleague, “What’s the opposite of Eureka?” Though we know that genes for intelligence must exist, each is likely to be small in effect, found in only a few people, or both. In a recent study of 6,000 children, the gene with the biggest effect accounted for less than one-quarter of an I.Q. point. The quest for genes that underlie major disorders of cognition, like autism and schizophrenia, has been almost as frustrating. Both conditions are highly heritable, yet no one has identified genes that cause either condition across a wide range of people. Perhaps this is what we should expect for a high-maintenance trait like human cognition, which is vulnerable to many mutations.
The hunt for personality genes, though not yet Nobel-worthy, has had better fortunes. Several associations have been found between personality traits and genes that govern the breakdown, recycling or detection of neurotransmitters (the molecules that seep from neuron to neuron) in the brain systems underlying mood and motivation.
Dopamine is the molecular currency in several brain circuits associated with wanting, getting satisfaction and paying attention. The gene for one kind of dopamine receptor, DRD4, comes in several versions. Some of the variants (like the one I have) have been associated with “approach related” personality traits like novelty seeking, sensation seeking and extraversion. A gene for another kind of receptor, DRD2, comes in a version that makes its dopamine system function less effectively. It has been associated with impulsivity, obesity and substance abuse. Still another gene, COMT, produces an enzyme that breaks down dopamine in the prefrontal cortex, the home of higher cognitive functions like reasoning and planning. If your version of the gene produces less COMT, you may have better concentration but might also be more neurotic and jittery.
Behavioral geneticists have also trained their sights on serotonin, which is found in brain circuits that affect many moods and drives, including those affected by Prozac and similar drugs. SERT, the serotonin transporter, is a molecule that scoops up stray serotonin for recycling, reducing the amount available to act in the brain. The switch for the gene that makes SERT comes in long and short versions, and the short version has been linked to depression and anxiety. A 2003 study made headlines because it suggested that the gene may affect a person’s resilience to life’s stressors rather than giving them a tendency to be depressed or content across the board. People who had two short versions of the gene (one from each parent) were likely to have a major depressive episode only if they had undergone traumatic experiences; those who had a more placid history were fine. In contrast, people who had two long versions of the gene typically failed to report depression regardless of their life histories. In other words, the effects of the gene are sensitive to a person’s environment. Psychologists have long known that some people are resilient to life’s slings and arrows and others are more fragile, but they had never seen this interaction played out in the effects of individual genes.
Still other genes have been associated with trust and commitment, or with a tendency to antisocial outbursts. It’s still a messy science, with plenty of false alarms, contradictory results and tiny effects. But consumers will probably learn of genes linked to personality before they see any that are reliably connected to intelligence.
Personal genomics is here to stay. The science will improve as efforts like the Personal Genome Project amass huge samples, the price of sequencing sinks and biologists come to a better understanding of what genes do and why they vary. People who have grown up with the democratization of information will not tolerate paternalistic regulations that keep them from their own genomes, and early adopters will explore how this new information can best be used to manage our health. There are risks of misunderstandings, but there are also risks in much of the flimflam we tolerate in alternative medicine, and in the hunches and folklore that many doctors prefer to evidence-based medicine. And besides, personal genomics is just too much fun.
At the same time, there is nothing like perusing your genetic data to drive home its limitations as a source of insight into yourself. What should I make of the nonsensical news that I am “probably light-skinned” but have a “twofold risk of baldness”? These diagnoses, of course, are simply peeled off the data in a study: 40 percent of men with the C version of the rs2180439 SNP are bald, compared with 80 percent of men with the T version, and I have the T. But something strange happens when you take a number representing the proportion of people in a sample and apply it to a single individual. The first use of the number is perfectly respectable as an input into a policy that will optimize the costs and benefits of treating a large similar group in a particular way. But the second use of the number is just plain weird. Anyone who knows me can confirm that I’m not 80 percent bald, or even 80 percent likely to be bald; I’m 100 percent likely not to be bald. The most charitable interpretation of the number when applied to me is, “If you knew nothing else about me, your subjective confidence that I am bald, on a scale of 0 to 10, should be 8.” But that is a statement about your mental state, not my physical one. If you learned more clues about me (like seeing photographs of my father and grandfathers), that number would change, while not a hair on my head would be different. Some mathematicians say that “the probability of a single event” is a meaningless concept.
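The "most charitable interpretation" above is just Bayesian updating, and it can be made concrete. In this sketch the 80 percent figure from the baldness study is treated as a prior credence, then revised by one extra clue; the likelihoods for that clue (a full head of hair in the father) are invented for illustration:

```python
# The 80% figure is a group proportion that functions as a prior credence,
# and it moves as soon as other evidence arrives -- though not a hair on the
# person's head changes. Likelihoods below are made-up illustrative numbers.

def update(prior_bald, p_evidence_if_bald, p_evidence_if_not):
    """One step of Bayes' rule on the credence that someone is bald."""
    joint_bald = prior_bald * p_evidence_if_bald
    joint_not = (1 - prior_bald) * p_evidence_if_not
    return joint_bald / (joint_bald + joint_not)

prior = 0.80  # proportion of T-allele carriers who are bald, read as a credence
# Suppose an unbald father is four times likelier if the son stays unbald.
posterior = update(prior, p_evidence_if_bald=0.2, p_evidence_if_not=0.8)
print(round(posterior, 2))  # credence drops from 0.80 to 0.50
```

The number describes the observer's state of information, not the subject's scalp: each new clue moves the credence while the genome, and the head of hair, stay exactly as they were.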
Even when the effect of some gene is indubitable, the sheer complexity of the self will mean that it will not serve as an oracle on what the person will do. The gene that lets me taste propyl thiouracil, 23andMe suggests, might make me dislike tonic water, coffee and dark beer. Unlike the tenuous genes linked to personality or intelligence, this one codes for a single taste-bud receptor, and I don’t doubt that it lets me taste the bitterness. So why hasn’t it stopped me from enjoying those drinks? Presumably it’s because adults get a sophisticated pleasure from administering controlled doses of aversive stimuli to themselves. I’ve acquired a taste for Beck’s Dark; others enjoy saunas, rock-climbing, thrillers or dissonant music. Similarly, why don’t I conform to type and exploit those fast-twitch muscle fibers (thanks, ACTN3 genes!) in squash or basketball, rather than wasting them on hiking? A lack of coordination, a love of the outdoors, an inclination to daydream, all of the above? The self is a byzantine bureaucracy, and no gene can push the buttons of behavior by itself. You can attribute the ability to defy our genotypes to free will, whatever that means, but you can also attribute it to the fact that in a hundred-trillion-synapse human brain, any single influence can be outweighed by the product of all of the others.
Even if personal genomics someday delivers a detailed printout of psychological traits, it will probably not change everything, or even most things. It will give us deeper insight about the biological causes of individuality, and it may narrow the guesswork in assessing individual cases. But the issues about self and society that it brings into focus have always been with us. We have always known that people are liable, to varying degrees, to antisocial temptations and weakness of the will. We have always known that people should be encouraged to develop the parts of themselves that they can (“a man’s reach should exceed his grasp”) but that it’s foolish to expect that anyone can accomplish anything (“a man has got to know his limitations”). And we know that holding people responsible for their behavior will make it more likely that they behave responsibly. “My genes made me do it” is no better an excuse than “We’re depraved on account of we’re deprived.”
Many of the dystopian fears raised by personal genomics are simply out of touch with the complex and probabilistic nature of genes. Forget about the hyperparents who want to implant math genes in their unborn children, the “Gattaca” corporations that scan people’s DNA to assign them to castes, the employers or suitors who hack into your genome to find out what kind of worker or spouse you’d make. Let them try; they’d be wasting their time.
The real-life examples are almost as futile. When the connection between the ACTN3 gene and muscle type was discovered, parents and coaches started swabbing the cheeks of children so they could steer the ones with the fast-twitch variant into sprinting and football. Carl Foster, one of the scientists who uncovered the association, had a better idea: “Just line them up with their classmates for a race and see which ones are the fastest.” Good advice. The test for a gene can identify one of the contributors to a trait. A measurement of the trait itself will identify all of them: the other genes (many or few, discovered or undiscovered, understood or not understood), the way they interact, the effects of the environment and the child’s unique history of developmental quirks.
It’s our essentialist mind-set that makes the cheek swab feel as if it is somehow a deeper, truer, more authentic test of the child’s ability. It’s not that the mind-set is utterly misguided. Our genomes truly are a fundamental part of us. They are what make us human, including the distinctively human ability to learn and create culture. They account for at least half of what makes us different from our neighbors. And though we can change both inherited and acquired traits, changing the inherited ones is usually harder. It is a question of the most perspicuous level of analysis at which to understand a complex phenomenon. You can’t understand the stock market by studying a single trader, or a movie by putting a DVD under a microscope. The fallacy is not in thinking that the entire genome matters, but in thinking that an individual gene will matter, at least in a way that is large and intelligible enough for us to care about.
So if you are bitten by scientific or personal curiosity and can think in probabilities, by all means enjoy the fruits of personal genomics. But if you want to know whether you are at risk for high cholesterol, have your cholesterol measured; if you want to know whether you are good at math, take a math test. And if you really want to know yourself (and this will be the test of how much you do), consider the suggestion of François de La Rochefoucauld: “Our enemies’ opinion of us comes closer to the truth than our own.”
This is hot off the press. Jerry Fodor, you may recall, reviewed Andy Clark’s latest work Supersizing the Mind in the London Review of Books. In the latest issue, Clark uses the Letters section to respond. As this is a general link, I paste Clark’s letter below.
Where is my mind?
Jerry Fodor’s amusing, insightful, but fatally flawed review of my book, Supersizing the Mind, seems committed to the idea that states of the brain (and only states of the brain) actually manage to be ‘about things’: to ‘have content’ in some original and underived sense (LRB, 12 February). ‘Underived content,’ he says, ‘is what minds and only minds have.’ That’s why, as Fodor would have it, states of non-brainbound stuff (like iPhones, notebooks etc) cannot even form parts of the material systems that actually constitute the physical basis of a human mind. But just how far is he willing to go with this?
Let’s start small. There is a documented case (from the University of California’s Institute for Nonlinear Science) of a California spiny lobster, one of whose neurons was deliberately damaged and replaced by a silicon circuit that restored the original functionality: in this case, the control of rhythmic chewing. Does Fodor believe that, despite the restored functionality, there is still something missing here? Probably, he thinks the control of chewing insufficiently ‘mental’ to count. But now imagine a case in which a person (call her Diva) suffers minor brain damage and loses the ability to perform a simple task of arithmetic division using only her neural resources. An external silicon circuit is added that restores the previous functionality. Diva can now divide just as before, only some small part of the work is distributed across the brain and the silicon circuit: a genuinely mental process (division) is supported by a hybrid bio-technological system. That alone, if you accept it, establishes the key principle of Supersizing the Mind. It is that non-biological resources, if hooked appropriately into processes running in the human brain, can form parts of larger circuits that count as genuinely cognitive in their own right.
Fodor seems to believe that the only way the right kind of ‘hooking in’ can occur is by direct wiring to neural systems. But if you imagine a case, identical to Diva’s, but in which the restored (or even some novel) functionality is provided – as it easily could be – by a portable device communicating with the brain by wireless, it becomes apparent that actual wiring is not important. If you next gently alter the details so that the device communicates with Diva’s brain through Diva’s sense organs (piggybacking on existing sensory mechanisms as cheap way stations to the brain) you end up with what David Chalmers and I dubbed ‘extended minds’.
There is much more to say, of course, about the specific ways that non-implanted devices (iPhones and the like) might or might not then count, in respect of some enabled functionality, as being appropriately integrated into our overall cognitive profiles. Fodor seems to believe that such integration is impossible where parts of the extended process involve what he describes as the ‘consultation’ (and then the explicit interpretation) of an encoding, rather than the simple functioning of that encoding to bring about an effect. This kind of consideration, however, cannot distinguish the cases in the way Fodor requires. Think of the case where, to solve a problem, I first conjure a mental image, then inspect it to check or to read off a result. Imagining the overlapping circles of a Venn diagram while solving a set-theoretic puzzle, or imagining doing long division using pen and paper and then reading the result off from one’s own mental image, would be cases in point. In each case we have a process that, while fully internal, involves the careful construction, manipulation and subsequent consultation of representations whose meaning is a matter of convention.
As a final real-world illustration, consider the trials (at MIT Media Lab) of so-called ‘memory glasses’ as aids to recall for people with impaired visual recognition skills. These glasses work by matching the current scene (a face, for example) to stored information and cueing the subject with relevant information (a name, a relationship). The cue may be overt (consciously perceived by the subject) or covert (rapidly flashed and hence subliminally presented). Interestingly, in the covert case, functionality is improved without any process of conscious consultation on the part of the subject. Now imagine a case in which the same cueing is robustly achieved by means of a hard-wired connection to the brain. Presumably Fodor would allow the latter, but not the former, as a case of genuine cognitive augmentation. Yet it seems clear that the intervention of visual sensing in the former case marks merely an unimportant channel detail. The machinery that makes minds can outrun the bounds of skin and skull.
University of Edinburgh
If you’ve ever heard the term “extended mind” and thought it denoted some sort of hocus pocus, then this recording will set you straight. Zoe Drayson of Bristol University has recorded a superb overview of the notion and the ethical implications arising from it. Zoe’s motivation for coming to this multidisciplinary literature had resonance for me – Cartesian philosophy of mind seemed to be so tired and infertile.
Zoe’s piece starts 2:25 into the recording, so please don’t think you’ve got the wrong clip. The recording will remain available for only six more days.
Here’s a restrained and sensitive article from the Scotsman on Claude Wischik’s work on Alzheimer’s disease. The tone of the article matches Wischik’s low-key disposition and existential focus. Speaking regularly with a patient who has Alzheimer’s, I have often used synonyms for the metaphor of “tangles”:
Wischik has spent 24 years studying the neurofibrillary ‘tangles’ that first destroy nerve cells critical for memory and then neurons in other parts of the brain in those suffering from Alzheimer’s.