Myotonic dystrophy is a degenerative muscle disorder whose victims can grip but can’t let go. They may also suffer from such bewildering and seemingly disparate symptoms as cataracts, abnormal heartbeat, diabetes, and mental retardation. But what has most puzzled doctors about this inherited condition is that the disease gets worse with each generation: children tend to be more afflicted than their parents; grandchildren suffer even more.
This pattern of escalation has been an utter enigma to physicians. The genetic rules they learned in medical school simply can’t explain a disorder that gets more severe from parent to child. According to these rules children inherit two of every gene, one copy from each parent. More to the point, each of those individual genes (even an abnormal one) should pass from parent to child unchanged, barring the odd spontaneous mutation. But last February, when investigators discovered the gene that causes myotonic dystrophy, they found that it didn’t obey this pattern at all.
A gene, you’ll recall, is made up of a long sequence of chemical bases designated by the letters A, C, G, and T (for adenine, cytosine, guanine, and thymine). Now, in the normal counterpart of the gene that causes myotonic dystrophy, there’s a section that includes several repeats of the combination of letters CTG. But in people with the disease, these copies multiply. It’s as if some internal Xerox machine goes wild when the gene replicates, spewing out at least 50 CTG sequences in the first afflicted generation and as many as 2,000 repeats in their children and grandchildren. The more repeats there are, the longer the gene becomes and the worse the symptoms tend to get. We might call genes like this accordion genes because of their propensity to stretch with each new generation.
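To make the repeat counting concrete, here is a minimal Python sketch (an illustration of the idea, not anything from the gene hunters’ papers) that finds the longest uninterrupted run of CTG triplets in a stretch of DNA; the 50-repeat threshold comes from the figures above, and the sequences are invented.

```python
def longest_ctg_run(sequence, triplet="CTG"):
    """Return the longest uninterrupted run of `triplet` repeats in `sequence`."""
    best = 0
    # Try every possible starting position so no reading frame is missed.
    for start in range(len(sequence)):
        run = 0
        pos = start
        while sequence[pos:pos + 3] == triplet:
            run += 1
            pos += 3
        best = max(best, run)
    return best

# Invented example sequences: a normal-length run and an expanded one.
normal   = "GGAA" + "CTG" * 12 + "TTCC"
expanded = "GGAA" + "CTG" * 80 + "TTCC"

for label, seq in [("normal", normal), ("expanded", expanded)]:
    n = longest_ctg_run(seq)
    status = "expanded (50 repeats or more)" if n >= 50 else "within normal range"
    print(f"{label}: {n} CTG repeats -> {status}")
```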
Although they received a good deal less attention, two other examples of accordion genes emerged last year as well: one is responsible for a form of retardation called fragile X syndrome; the other inflicts a wasting illness known as Kennedy’s disease. In fragile X the disastrously repeated triplet is CGG. In Kennedy’s it’s CAG. Fired-up geneticists are now looking for more diseases that might be explained by gene expansion, and they are pretty sure they’re going to find them. In short, the phenomenon has given them a whole new way of thinking about genes, inheritance patterns, and human disease.
We have known that genes are the units of inheritance since 1900, when Gregor Mendel’s famous breeding experiments on peas were rediscovered. (The Austrian monk actually worked out the fundamental principles of heredity in the 1860s, but he died before his ideas were accepted.) Mendel was the first to show that inherited traits such as color, size, and shape are controlled by discrete factors (genes), with each individual inheriting two copies of every gene, one from each parent; the alternative forms a gene can take are called alleles. Since each parent in turn carries two alleles, the offspring has a 50 percent chance of getting any particular allele from a given parent and a 25 percent chance of getting any particular combination of alleles. And as Mendel’s pea-breeding experiments revealed, these alleles don’t blend or change--they retain their distinct identity from generation to generation.
Let’s take color, for example, and assume that each parent plant has one white and one red allele. Mendel found that two white alleles make white flowers, two red alleles make red flowers, and a combination of red and white makes red flowers also--red being a dominant trait. When he mated the mixed parents, three-quarters of their offspring had red flowers (a quarter had red-red alleles, and half had red-white), and the remaining quarter of the plants had white flowers (white-white alleles). There’s no way in this system that we can go from red to redder to ultrared, which would be roughly equivalent to what happens in myotonic dystrophy. Mendel knew a lot about peas, but he didn’t know his C’s, G’s, T’s, and A’s.
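Mendel’s 3-to-1 ratio falls out of simple enumeration. The toy sketch below (a Python illustration, not anything Mendel could have written) pairs every allele from one red-white parent with every allele from the other and tallies the flower colors.

```python
from itertools import product
from collections import Counter

# Each heterozygous parent carries one red allele ("R", dominant)
# and one white allele ("w", recessive).
parent1 = ("R", "w")
parent2 = ("R", "w")

def phenotype(pair):
    """Red is dominant: a single R allele is enough to make the flower red."""
    return "red" if "R" in pair else "white"

# The Punnett square is every pairing of one allele from each parent;
# each of the four combinations is equally likely (25 percent apiece).
square = list(product(parent1, parent2))
counts = Counter(phenotype(pair) for pair in square)

for color in ("red", "white"):
    print(f"{color}: {counts[color]} out of 4 offspring")   # red: 3, white: 1
```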
In fact, the chemical nature of genes, and the means by which they pass information from generation to generation, remained a mystery until 1953. That was when James Watson and Francis Crick discovered that genes consisted of two strands of winding DNA--the famous double helix, source of the many startling developments that have popped up in recent years. Its structure turned out to be the key to understanding how DNA works. When DNA replicates, for example, its two strands untwine to reveal the sequence of its chemical bases, A, C, G, and T. Each of those strands then acts as a template for a copy with a complementary sequence, forming an identical new helix.
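The pairing rules that make each strand a template are simple enough to spell out in a few lines. Here is a minimal sketch, with an invented sequence, of building the complementary strand (ignoring, for simplicity, the fact that the two real strands run in opposite directions).

```python
# Watson-Crick base pairing: A pairs with T, C pairs with G.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand):
    """Build the strand that would pair with `strand` during replication."""
    return "".join(PAIRS[base] for base in strand)

template = "ATGCTGCTGCCA"          # invented sequence
new_strand = complementary_strand(template)
print(template)      # ATGCTGCTGCCA
print(new_strand)    # TACGACGACGGT
```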
One reason that so many genetic surprises are popping out at us like jack-in-the-boxes is that early molecular studies of the gene were carried out mostly on the simplest organisms, single-celled bacteria that have no nucleus (known as prokaryotes). What was true for bacteria, we believed until the late 1970s, would also be true for the more complex organisms we call eukaryotes--fungi, plants, and animals--which have larger cells with nuclei and are often multicellular. In bacteria and eukaryotes alike DNA sequences encode the instructions for making proteins. And in both cases, they’re transcribed into messenger RNA, which in turn is translated into proteins by small organelles called ribosomes. In bacteria just about all the information packed into the DNA is used to make proteins. But that’s not the way it works with eukaryotes.
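In outline, that two-step flow can be sketched in code. The example below is a toy, with an invented sequence and only a handful of the 64 real codons: it transcribes a stretch of DNA into messenger RNA and then reads the message three letters at a time, as a ribosome would.

```python
# Toy central-dogma sketch: DNA -> mRNA -> protein.
# Only a few codons are listed; a real table has 64 entries.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "CUG": "Leu",
    "GGC": "Gly", "AAA": "Lys", "UAA": "STOP",
}

def transcribe(dna):
    """Messenger RNA carries the same message, with U in place of T."""
    return dna.replace("T", "U")

def translate(mrna):
    """Read the message three bases at a time, stopping at a STOP codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

dna = "ATGCTGGGCAAATAA"            # invented coding sequence
mrna = transcribe(dna)             # AUGCUGGGCAAAUAA
print(translate(mrna))             # Met-Leu-Gly-Lys
```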
Geneticists are still recovering from the shock their systems received in 1977, when it was discovered that the genes of chickens and rabbits (and ultimately all eukaryotes) are split by long stretches of DNA that encode no proteins. These intervening sequences, called introns, are faithfully transcribed into messenger RNA--but their nonsense message is promptly snipped out and cast on the cutting-room floor. The DNA sequences that do encode proteins are called exons. Most genes are made up of several exons separated by introns. Our copious genetic endowment is only about 10 percent message and, it would appear, 90 percent introns and other gobbledygook.
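As a rough picture of what the splicing machinery accomplishes, here is a toy sketch, with invented sequences, that drops the introns from a transcript and stitches the exons back together.

```python
# Toy splicing sketch: a "gene" laid out as alternating exons and introns.
# Real splicing is done on the RNA transcript by a large protein-RNA machine;
# here we simply discard the intron segments and join the exons.
segments = [
    ("exon",   "AUGGCU"),
    ("intron", "GUAAGUCCCCAG"),   # invented intron
    ("exon",   "GAAUUC"),
    ("intron", "GUAUGUUUUUAG"),   # invented intron
    ("exon",   "UGGUAA"),
]

pre_mrna = "".join(seq for _, seq in segments)
mature_mrna = "".join(seq for kind, seq in segments if kind == "exon")

print(len(pre_mrna), "bases before splicing")    # 42 bases
print(len(mature_mrna), "bases after splicing")  # 18 bases
print(mature_mrna)                               # AUGGCUGAAUUCUGGUAA
```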
Introns seem to be DNA parasites that have no other function than to reproduce themselves from generation to generation. They appear to violate a fundamental principle of natural selection: use it or lose it. Part of the pre-intron dogma of evolution was that the genetic apparatus is lean and mean, quick to discard any molecules that aren’t working for a living and earning their keep.
Stunned molecular biologists had to ask themselves: Why is our genome, like daytime television, nine-tenths junk? Why do we depend on a Rube Goldberg apparatus, a complicated set of dice-and-splice enzymes, to cut out the introns and sew the cut ends of exons together again just to make a single protein? Is this any way to run a genome? It’s not only inefficient, it could be positively dangerous! A splicing error could lead to a botched protein. A simple editing mistake in the process of copying such complicated DNA could cause a lethal mutation in some vital gene.
Take, for example, the gene for collagen, the main structural protein of the skin, bones, and teeth. It’s fragmented by no fewer than 50 introns--50 chances for error each time the gene is transcribed or copied. Such errors do in fact occur, resulting in defective collagen and a type of osteogenesis imperfecta. Individuals who inherit this disease, as the painter Toulouse-Lautrec reputedly did, have fragile bones and suffer from fractures and growth abnormalities.
In retaining such a shockingly cumbersome mechanism, nature must have discovered some compensatory advantage. Maybe introns are spacers that divide the whole gene into different functional units. Perhaps they speed up the process of evolution by making it possible to shuffle exons like cards, putting together new proteins with new functions. Such a deal might make it possible for evolving animals or plants to adapt more quickly to a new environment.
One new deal believed to have resulted from such exon shuffling is tissue plasminogen activator (t-PA)--a protein that helps dissolve blood clots and that’s now being used to treat heart attacks caused by artery-clogging clots. Different exons on the t-PA gene resemble exons from the genes for three other proteins: plasminogen, epidermal growth factor, and fibronectin. Apparently exon shuffling brought together pieces of three different genes to form the mosaic gene for t-PA.
Introns are only one example of the apparent--and surprising--wastefulness of the eukaryotic genome. In contrast with the tiny bacterial cell, where there is no space to spare, the larger eukaryotes seem profligate in their manufacture and retention of extra DNA. We have numerous copies of very similar genes--a whole slew of them just for various types of hemoglobin, the red pigment in blood cells that transports oxygen. Several different hemoglobins, for example, cater to our changing oxygen needs as we go from floating in the oxygen-poor milieu of the womb to much more demanding physical activity in the oxygen-rich world outside.
The genes for these various hemoglobins cluster on the DNA (lined up in chronological order of use, no less--from the hemoglobin we use as embryos to what we use as adults). But in 1980 researchers found there was something unreal about three of the genes in the clusters. They looked like hemoglobin genes and were replicated with them during cell division, but they could no longer be translated into proteins. They were defunct.
Another surprise! Not only does the genome have all that Styrofoam-like packing in the form of the introns, it also serves as a kind of graveyard for dead genes. These genetic fossils are reproduced from generation to generation, often for millions of years. They don’t do an organism any good, but like regular fossils, they have proved useful to students of evolution. Free of the usual evolutionary constraints--they no longer encode proteins, so their sequence can change without harming the organism--these pseudogenes evolve more rapidly than normal genes and can serve as fast clocks for timing rates of evolutionary change. When pseudogenes from different apes were compared in 1987, they helped confirm that our closest living primate relative is the chimpanzee. (Chimpanzee hemoglobin pseudogenes turned out to be more similar to our own than those of gorillas or orangutans are.)
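The fast-clock logic amounts to counting differences: the fewer the changes between two species’ copies of the same pseudogene, the more recently those species shared an ancestor. A toy sketch with made-up sequences (not the actual 1987 data) shows the idea.

```python
# Toy molecular-clock sketch: rank species by how few differences their
# copy of a pseudogene shows from the human copy. Sequences are invented.
def differences(seq_a, seq_b):
    """Count positions where two equal-length sequences disagree."""
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

human = "ACGTTGCAGGTACCAT"
apes = {
    "chimpanzee": "ACGTTGCAGGTACCTT",   # 1 difference from the human copy
    "gorilla":    "ACGATGCAGGAACCTT",   # 3 differences
    "orangutan":  "ACTATGCATGAACGTT",   # more differences still
}

for species, seq in sorted(apes.items(), key=lambda kv: differences(human, kv[1])):
    print(f"{species}: {differences(human, seq)} differences from the human copy")
```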
Incredibly, in some gene clusters, pseudogenes actually outnumber real genes by ten to one. So once again we’ve found that the processes of replication and evolution, so compact and efficient in bacteria, are much sloppier in higher organisms.
This very sloppiness may be an evolutionary mechanism. All the genes in these clusters originated as a single gene that was duplicated side by side to form multiple copies. That permitted the original gene to retain its function, while the copies evolved slightly different functions that were better adapted to changing conditions, as in the case of fetal versus adult hemoglobin. Indeed, it now looks as if all 100,000 genes in the human genome may have originated in a handful of genes (or perhaps only one). Nature, like Mozart, may have started with a few simple themes and elaborated them endlessly into the complex compositions we have now.
One of the most complex and peculiar compositions surely has to be the neurofibromatosis gene, which was identified the year before last on chromosome 17 by teams at the University of Michigan and the University of Utah. Mutations in this gene are responsible for a condition marked by pigmented spots and lumps on the body and tumors of the nervous system. It has been suggested that Quasimodo, the hunchback of Notre Dame, was created by Victor Hugo after he saw someone with neurofibromatosis. The famous Elephant Man was thought to have this disease, too, though experts now question the diagnosis.
Not only is the neurofibromatosis gene very large, spanning some 400,000 nucleotide bases, but it has three smaller genes embedded in it, all three of them nesting inside an intron. It was bad enough to find genes inhabited by introns. But finding genes tucked inside the introns of other genes, like so many Chinese boxes, was something else again. Nor was that the last of the surprises in store for neurofibromatosis researchers. Last year the Michigan group revealed that in one patient the disease had been caused by a transposon: a DNA sequence had moved lock, stock, and barrel from elsewhere in the genome and landed smack in his neurofibromatosis gene, disrupting its normal function and causing the disease.
Less than 40 years ago it would have been heresy to suggest that bits of DNA moved around, jumping from one parcel of DNA or chromosome to another. Yet even at that time Barbara McClintock, a plant geneticist, had begun to suspect as much. McClintock observed that pale ears of corn became mottled with dark patches as they grew. She cleverly deduced that the kernels were initially colorless because a transposable element was stuck in the middle of their color gene. But as the corncobs grew, the transposon would occasionally hop out of the gene, restoring color to the kernels. It was not until the sixties and seventies, though, that these mobile units were found in bacteria (and later eukaryotes) and their importance as agents of genetic change was appreciated.
Transposons can either disrupt the function of other genes or carry important functional genes of their own. (The latter are commonly referred to as jumping genes.) What’s more, in bacteria they not only jump around within the genome, they also make the leap from one organism to another. That’s a big concern right now because of a recent rise in antibiotic-resistant infections--physicians are increasingly worried about transposons ferrying resistance genes from one bacterium to another. Once thought to be relatively stable, the genome (even in humans) is now viewed as subject to constant remodeling from within by these self-propelled pieces of genetic furniture.
One of the most amazing and least expected violations of Mendelian genetics, however, is a newly recognized phenomenon called genomic imprinting. Until six years ago we thought that what mattered was what kind of genes and chromosomes we inherited, not which parent they came from. When sperm meets egg, the human embryo generally receives 23 chromosomes from each parent and so has 23 equivalent pairs (except for the sex chromosomes, which will be XX for females and XY for males).
Occasionally things get mixed up, though, and both chromosomes in a pair come from the same parent. If both chromosomes are healthy, it shouldn’t make any difference, according to our pre-imprinting ideas. But for some genes and chromosomes, it turns out, it matters very much indeed whether they come from the mother or the father.
Just last March, for example, in the New England Journal of Medicine, a team from the Netherlands reported that genomic imprinting can produce two distinctly different disorders of mental retardation and growth. If a baby has two chromosomes 15 from the mother (instead of one from each parent), it will have Prader-Willi syndrome and grow into a short, fat, sluggish kid whose compulsive eating may cause parents to put a lock on the fridge. If it has two chromosomes 15 from the father, it will have Angelman syndrome, a totally different disorder characterized by a small head, widely spaced teeth, clumsy, jerky movements, and paroxysms of inappropriate laughter. Observations like these give new meaning to the truism that a child needs the genetic contribution of both its parents.
Still, a vast amount of DNA reproduces itself without ever seeming to contribute anything to its host--that’s why it’s often called selfish DNA. And what could be more selfish than a gene that spreads by selectively killing the offspring of the organism that carries it? Sexual reproduction sets the scene for this bizarre result, as it does for genomic imprinting.
As we have seen, the genome contains pairs of genes, one version, or allele, from each parent. Normally such a pair coexists comfortably. But sometimes the two versions, like rival siblings, compete for parental favor--namely, the privilege of being passed on to the next generation.
Ordinarily each version would be transmitted to 50 percent of the offspring. But selfish alleles aren’t satisfied with half and seek to gain an advantage either by replicating themselves in larger numbers or by destroying their rival allele. Just such a hustler is the Medea gene, reported by a Kansas team last April in common flour beetles. Once this gene variant turns up in a particular population of flour beetles, it spreads inexorably.
Medea, remember, was the mythical Greek enchantress (and Jason’s main squeeze) who killed her own children. Euripides’ drama invariably induces horror in its audience, so perverse do we find a mother who murders her own progeny. But Greek tragedy has nothing on the flour beetles.
If a beetle mother has both a Medea allele and a non-Medea allele (Mn) and mates with a male who has two non-Medea alleles (nn), half her offspring will have the Medea allele (Mn) and half will not (nn). The offspring with the Medea allele do fine. Those without it die before pupation. The Medea mom apparently produces some kind of poison that the Medea allele protects the embryo against. Lacking this protection, the non-Medea embryo is killed by its own mother’s toxin. Thus the Medea allele eventually pervades the population, because whenever a mother carries it, any of her offspring that fail to inherit it won’t live to reproduce.
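To see why the allele spreads, it helps to simulate a few generations of random mating in which any mother carrying Medea loses the offspring that failed to inherit it. The sketch below is a deliberately simplified model of the scenario just described, not the Kansas team’s analysis; the starting frequency and population size are arbitrary.

```python
import random

def next_generation(population, size=1000):
    """Random mating; mothers carrying Medea ('M') lose offspring without the allele."""
    offspring = []
    while len(offspring) < size:
        mother = random.choice(population)
        father = random.choice(population)
        child = random.choice(mother) + random.choice(father)  # one allele from each parent
        # The maternal poison kills any embryo lacking a Medea allele,
        # but only when the mother herself carries one.
        if "M" in mother and "M" not in child:
            continue
        offspring.append(child)
    return offspring

random.seed(1)
# Start with 20 percent Medea carriers ("Mn") in a population of non-carriers ("nn"),
# which works out to a Medea allele frequency of 0.10.
population = ["Mn"] * 200 + ["nn"] * 800

for generation in range(10):
    freq = sum(genotype.count("M") for genotype in population) / (2 * len(population))
    print(f"generation {generation}: Medea allele frequency = {freq:.2f}")
    population = next_generation(population)
```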
At the other extreme, some other newfound oddities, the Methuselah genes, may lengthen the lives of their carriers. The genes (named for the longest-lived man in the Bible) were found in fruit flies by Michael Rose, a geneticist at the University of California at Irvine. Last February, Rose reported that one of these genes makes a souped-up version of superoxide dismutase, an enzyme that mops up highly reactive molecules called free radicals. Like acid rain, these radicals attack and damage any bodily structures they come in contact with. Biologists have suspected for at least two decades that they are largely responsible for the aging process.
Engineering extra copies of the superoxide dismutase gene into a fly’s DNA lengthens its life span by 10 percent, another California group has found. What works for flies could theoretically work for humans too, since we also have genes for this enzyme--and presumably other Methuselah genes as well. It would certainly be a pleasant surprise to live longer without falling apart.
In the weird world of the genome, Methuselah giveth life, and Medea taketh away.
What next? The Village Voice used to advertise EXPECT THE UNEXPECTED. After accordion genes, jumping genes, introns, genomic imprinting, Medea, and Methuselah, we might think that our capacity for surprise is nearly exhausted. But chances are that many more whoppers will spring forth from the cell nucleus before the last twisted tangle of DNA has been explored.