Harder Than Diamond Is Not Faster Than Light

Some perceived limits to our material world may not be limits at all. With the help of computers and the fundamental laws of physics, we can make exotic new substances almost any way we like.

By Paul Preuss | Monday, November 1, 1993
In 1989 Marvin Cohen, a professor of theoretical solid-state physics at the University of California at Berkeley, proposed a structure for a new crystalline compound that might be harder than diamond, the hardest of all known materials. Nobody questioned his sanity. In fact, half a dozen teams of experimenters were soon trying to create this purely theoretical substance in their laboratories.

Experimenters have not always been so willing to follow up the ideas of theorists, especially when it comes to materials science, which has been dominated by hands-on experience since someone fashioned the first flaked stone tool. As late as the 1960s, when Cohen was a graduate student at the University of Chicago, he had to look long and hard before he could find anyone willing to do the experiments that would try out his ideas. The difference now isn’t that Cohen is a professor instead of a lowly graduate student, nor even that he has compiled a record of accurate predictions over several decades. What Cohen and others have done is apply computer power to quantum theory; as a result, experimenters have gradually grown more willing to trust the sort of blue-sky ideas that once struck them as useless or even crazy.

In some ways, it’s surprising it took so long. After all, quantum mechanics, the basic recipe for creating matter in all its forms, has been with us since the 1920s. That’s when physicist Erwin Schrödinger came up with the equation that describes the motion of a particle through a force field--notably, the motion of an electron in the vicinity of an atomic nucleus--as a wave. And knowing what the electrons are doing tells you almost everything you need to know about a material: whether it’s shiny or dull, soft or hard, transparent or opaque, reactive or inert.

"A large part of physics and the whole of chemistry are thus completely known," wrote Paul Dirac, who later shared the Nobel Prize with Schrödinger for their work in quantum theory. What Dirac meant was "in principle," for he quickly added, "The difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble."

There’s the rub. To this day, complete solutions of the Schrödinger equation remain too complicated for even the most powerful computers--too complicated, at least, for all but the smallest assemblies of atoms. To apply theory to practice ab initio (from the beginning), Schrödinger’s equation must be solved for every electron in a system. And no two electrons in an atom or molecule occupy the same quantum state: they inhabit discrete energy levels, their spins are differently oriented, and the magnitudes and directions of their angular momenta differ. Furthermore, many of these values change under the influence of electric and magnetic fields, so that each electron is affected by the fields produced by its neighbors. The electrons are like small boats in a crowded harbor, bobbing about with a motion that affects and is affected by dozens or even many millions of other boats. The result is complexity of the most unmanageable kind.
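In standard textbook notation (not from the article), the many-electron Schrödinger equation makes the source of the trouble explicit: the last term, the electron-electron repulsion, couples every electron's coordinates to every other's, so the equation cannot be split into independent one-electron problems.

```latex
\hat{H}\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N) = E\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N),
\qquad
\hat{H} = -\frac{\hbar^2}{2m}\sum_{i=1}^{N}\nabla_i^2
\;-\;\sum_{i,I}\frac{Z_I e^2}{\lvert\mathbf{r}_i-\mathbf{R}_I\rvert}
\;+\;\sum_{i<j}\frac{e^2}{\lvert\mathbf{r}_i-\mathbf{r}_j\rvert}
```

That final sum over pairs is the crowded harbor: each of the N electrons feels all the others at once.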

For almost half a century, from Schrödinger’s equation until the still-tentative computer programs of the 1970s, quantum theory was more impressive to theorists than to experimental chemists. As far as lab chemists were concerned, the only thing about which theory had anything sensible to say was the hydrogen molecule--the simplest of all molecules, consisting of a pair of the simplest of all atoms, each with only a single electron to worry about. Theory could specify the bond length between two hydrogen atoms, the way hydrogen absorbed and radiated energy, and other basic properties of hydrogen gas. But the pertinent numbers had long been established in the laboratory. When trying to understand molecules that were a little more complicated than hydrogen--or when trying to understand even the simplest of chemical reactions--experimenters regarded theory as useless.

"I remember a joke," Cohen says of his early days, when he struggled to use computers to calculate properties of matter, "told to me so often it was irritating, about a physicist who goes to the computer and has this complicated integral running, and finally the answer comes out 3.14159 . . ."--that most basic and familiar of physically significant ratios, pi. "The point being, if this guy had used his mind instead of the computer, he would have gotten the answer at the beginning."

It took two parallel developments to turn the pipe dreams of theorists into practical tools that experimenters would take seriously: fast computers and clever approximations of the Schrödinger equation. The approximations had a long head start. As early as 1934, physicist Enrico Fermi (a theorist and experimenter both) devised a more workable way to use the Schrödinger equation. Realizing the futility of attempting to solve it for every electron, Fermi substituted two simplified entities: valence electrons--an atom’s outer electrons, those that interact with other atoms--and cores, consisting of the atomic nuclei and their tightly constrained inner electrons. With Fermi’s method, the Schrödinger equation had to be solved only for the valence electrons; their interaction with the cores was expressed as a pseudopotential--the effective electric force felt by each valence electron as the core acts upon it.
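Schematically (again in textbook form, not the article's notation), the pseudopotential idea collapses the full many-electron problem into a one-electron equation for each valence electron, with the nucleus and its core electrons acting together through a single effective potential:

```latex
\left[-\frac{\hbar^2}{2m}\nabla^2 + V_{\mathrm{pseudo}}(\mathbf{r})\right]\psi_v(\mathbf{r})
= \varepsilon_v\,\psi_v(\mathbf{r})
```

Only the valence states need be computed, so the cost of the calculation scales with the handful of outer electrons per atom rather than with all of them.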

This method works particularly well when applied to a crystalline solid, real or imaginary. All you need is the atomic number and a geometric structure. The atomic number tells you the number of positively charged protons, and thus the number of negatively charged electrons. Carbon, for example, has six electrons: two of them are bound tightly to the nucleus; the other four, the valence electrons, are the handles that make carbon such a flexible building block (for making harder-than-diamond compounds, or life). Knowing the atomic number already tells you a lot about the nature of the element.

The geometric shape of the crystal cells--hexagonal, say, or cubic--is important because different crystal structures can make for enormous differences in the properties of chemically identical solids. Take carbon again: if the atoms are arranged into flat sheets of hexagons, like chicken wire, the bonds between the sheets are weak, allowing them to slide over one another easily. This is the structure of graphite, one of the softest solids and an excellent lubricant. But if the atoms are arranged in tetrahedrons, with each atom bound to four others, the geometric units cannot move with respect to one another. The result is diamond, the hardest of materials.

Until the 1980s, however, even the best approximations--and several generations of variations followed Fermi’s--couldn’t be applied, because the calculations still required too much computing power. Cohen estimates that even with today’s supercomputers, the number of hours needed to calculate the properties of a solid varies roughly as the cube--or, if you’re infinitely clever, maybe the square--of the number of atoms in the solid’s most basic unit. In a solid made of a single element such as carbon, that number could be as low as two; thus to get a good description of the properties of diamond by the pseudopotential method might take somewhere between four and eight hours of supercomputer time just to run the calculations. In a solid made of more than one element, time on the supercomputer could add up to hundreds of hours.
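Cohen's scaling estimate can be turned into a back-of-envelope calculation. A minimal sketch (the function and its 4-hour baseline for a 2-atom diamond cell are illustrative assumptions drawn from the figures above, not a real benchmark):

```python
def estimated_hours(n_atoms, base_atoms=2, base_hours=4.0, exponent=3):
    """Rough supercomputer time for a pseudopotential calculation,
    assuming cost grows as (atoms per unit cell) ** exponent and that
    a 2-atom cell (e.g. diamond) takes about base_hours."""
    return base_hours * (n_atoms / base_atoms) ** exponent

print(estimated_hours(2))                        # 4.0 -- the diamond baseline
print(round(estimated_hours(7), 1))              # 171.5 -- a 7-atom cell (3 C + 4 N), cubic scaling
print(round(estimated_hours(7, exponent=2), 1))  # 49.0 -- the "infinitely clever" quadratic case
```

The jump from a few hours for diamond to hundreds of hours for multi-element solids follows directly from the cubic growth.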

Nevertheless, Cohen and his colleagues forged ahead. By the mid-1980s they had spent many thousands of hours on the fastest available computers and had studied dozens of materials. They’d made some profound theoretical discoveries. For example, they predicted that under great pressure the crystal structure of silicon would shift from a tetrahedral to a simple hexagonal arrangement. This rearrangement would change both the configuration of the electrons and their mobility. Ordinarily, of course, silicon is a semiconductor. But at very low temperatures, Cohen predicted, compressed silicon would become a superconducting metal--that is, it would carry an electric current without resistance. The pressure needed to get the predicted result was 150,000 times atmospheric pressure, at a temperature eight degrees Celsius above absolute zero. "That range is not hard to reach; it’s just that experimentalists didn’t believe it," says Cohen. "I couldn’t get anyone in the United States to do that measurement. Fortunately I had friends in Grenoble, France, who believed in what we were doing."

By squeezing a tiny sample of silicon between the tips of two diamonds, the French team confirmed that a hexagonal form of silicon is a superconductor at 150,000 times atmospheric pressure and 8.2 degrees above absolute zero. "We predicted the existence of the material before it was discovered in the laboratory, and we predicted that it would be a superconductor," Cohen says, "and we understood everything that went into that calculation, from scratch." Theory, with the aid of computers to do the tedious calculations, had pointed the way to discovery.

Alas, there aren’t a lot of uses for a material that can exist only at the extremely high pressures of Earth’s deep interior combined with the extremely low temperatures of outer space. So, Cohen says, "our next step was to try to think about making materials that might be useful."

With the assistance of a student, Amy Liu, who ran the computations, Cohen soon predicted that a compound of carbon and nitrogen arranged in a diamondlike structure might have extraordinarily useful properties. For one thing, it should, like diamond, have very high thermal conductivity--meaning heat would pass through it almost without resistance. For another, it would be very hard--possibly harder than diamond itself. "I talked to several of my metallurgy friends," says Cohen, "and of course they had grown up with the idea that diamond was the hardest material in the world--there wasn’t going to be anything harder. I had the feeling that they thought of the hardness of diamond the way we think of the speed of light--a maximum that you can’t exceed." Cohen hoped to persuade his friends that a theorist could have something useful to say, not just about the way the world is but about what the world might become. As the 1990s began, he was not the only theorist engaged in the struggle.

"Poor Marvin, he started ten years before me, and the experimentalists were abusive," says Henry Fritz Schaefer III, director of the Center for Computational Quantum Chemistry at the University of Georgia. The evolution of modern chemistry, as Schaefer tells it, is a tale of often strained relations between experimenters and theorists. "Obviously the only way theory could make its mark was by some spectacular clash with experiment," says Schaefer--in other words, by giving an example in which theory was right and experiment was wrong. In 1970 Schaefer himself provided that spectacular clash.

The prelude to the showdown had begun a decade earlier, when, after years of searching, experimentalists finally proved beyond a doubt the existence of the tiny methylene molecule--a carbon atom between two hydrogen atoms. "The first thing you want to know about a molecule is its structure," says Schaefer, and the structure of methylene soon became an object of controversy.

Despite the opinions of some theorists to the contrary, the experimentalist Gerhard Herzberg insisted that methylene had a linear structure, with its three atoms arranged like beads on a taut string. Herzberg cited as evidence his own pioneering spectroscopic studies, which measured the frequencies of light absorbed and transmitted by molecules; only by such methods could invisible molecules be observed at all, yet the uncertainties in interpreting 1960s-era spectroscopic data were great.

Meanwhile, theorist Schaefer, using a computer instead of a spectroscope, concluded that the methylene molecule must be bent--that in fact its two hydrogen atoms form a sharp angle with the central carbon atom. He calculated the allowed energy states of the molecule from the Schrödinger equation and concluded that the straight molecule was not a likely shape.

A year later, in 1971, Herzberg won the Nobel Prize; he richly deserved it, chemists agree, for his wide-ranging work on the structure of molecules. But even before winning the prize, Herzberg, confronted with Schaefer’s arguments, had already begun to retreat from the notion that methylene was linear; soon Herzberg reanalyzed his spectroscopic data and declared that methylene was indeed sharply bent--Schaefer was right.

Thus began what Schaefer and others call the second age of quantum chemistry, when experimentalists found themselves forced to take the predictions of computer-armed theorists seriously. Nowadays simulations that weren’t even dreamed of 20 years ago have become routine; more powerful computers and increasingly sophisticated approximations allow theorists to subject model systems to extremes of heat and pressure on-screen, to watch as atoms and molecules move about, undergo phase changes, and rearrange themselves according to quantum rules.

The success theoreticians have had in predicting molecular behavior has led scientists to put faith in theoretical results that can never be subjected to direct experiment. A recent example is the work of Giulia Galli and her colleague Richard Martin at the University of Illinois. The two were interested in seeing what happens to carbon under very high pressure. The idea was that the answer could shed light on the mysterious origins of natural diamonds, which geologic evidence suggests are formed at depths of more than 75 miles in Earth’s mantle, where the heat and pressure are extreme. One popular idea was that diamonds might crystallize out of liquid carbon, but nobody knew whether carbon was liquid or solid under such conditions.

So Galli and Martin set to work on a supercomputer, using simulations of 54 carbon atoms arranged in a diamond crystal. "If you get it hot enough, it’ll just melt," explains Martin. "We can look at every atom and compute the distance that it moves over time. If it’s still a crystal, the atom won’t get far from its site; it’ll just rattle around. But if it’s a liquid, it moves."
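Martin's solid-versus-liquid criterion can be sketched in a few lines. This is an illustrative toy, not Galli and Martin's actual code: the "solid" atoms merely rattle about fixed sites while the "liquid" atoms random-walk away from them, and the mean squared displacement from the starting sites tells the two apart.

```python
import numpy as np

def mean_squared_displacement(trajectory):
    """trajectory: array of shape (n_steps, n_atoms, 3).
    Mean squared distance of the atoms from their initial sites, per step."""
    disp = trajectory - trajectory[0]             # displacement from t = 0 sites
    return (disp ** 2).sum(axis=2).mean(axis=1)   # sum over x,y,z; average over atoms

rng = np.random.default_rng(0)
steps, atoms = 200, 54          # 54 atoms, as in Galli and Martin's simulation cell
# Solid: bounded rattling about the lattice sites.
solid = 0.05 * rng.normal(size=(steps, atoms, 3))
# Liquid: a random walk -- each atom drifts ever farther from its site.
liquid = np.cumsum(0.05 * rng.normal(size=(steps, atoms, 3)), axis=0)

print(mean_squared_displacement(solid)[-1] < mean_squared_displacement(liquid)[-1])  # True
```

The solid's displacement stays bounded while the liquid's keeps growing, which is exactly the signature Martin describes.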

Once they had diamond melting, they started pumping up the pressure. "You take the same number of atoms but reduce the volume they can live in," says Martin. Logically, you would expect high pressure to keep atoms locked in a crystal state, as it does in most materials. But carbon falls into the same family on the periodic table as silicon--a substance, like water, that actually melts more easily at higher pressure. (The fact that water melts at high pressure is what makes ice skating possible; actually, you float along on a thin layer of water.) Many people expected carbon to do the same.

To everyone’s surprise, Galli and Martin’s calculations showed that the melting point of carbon actually rises as pressure increases. Even at mid-Earth pressures and temperatures, it won’t melt. That means that most diamonds on Earth were probably produced not from liquid carbon, but from carbon that was squeezed and heated in volcanic eruptions.

Getting answers purely from the bottom up--analyzing simple molecular structures and simple chemical reactions by applying the Schrödinger equation--is only one way of using computers to understand the stuff of the real world. "Quantum chemistry is not everything. We’re the purists," Schaefer confesses. "By using more approximate theoretical descriptions, then correcting them from experiment, much more difficult problems can be taken on." The goal is what Schaefer calls the third age of computational chemistry, the age of pragmatism.

A pragmatic approach has been pioneered by molecular biologists and organic chemists, who have used computers not only to model some of nature’s biggest molecules but also to design novel molecules that mimic and even improve on nature’s. A decade ago molecular biologists began using computers to create intricate visual images of huge molecules like DNA and hemoglobin, and even of whole viruses and cells. The resulting spectacular computer pictures had nothing to do with the Schrödinger equation; they were derived entirely from experimental results. But just as bottom-up programs tend to choke when confronted with more than a few electrons, these top-down biological programs also had limits: although they gave researchers an idea of what a structure looked like, at first they didn’t provide a clue as to the way molecules moved.

Lately, though, with input from quantum theory, biology-based programs have shown the way to a rich new middle ground in the understanding of matter. The Materials and Molecular Simulation Center (MSC) at Caltech is a paradigm of the new style of research center devoted to the study of materials. The lavish facility is funded by both the Department of Energy and private industry, and its objectives are to match the tools and techniques of computational chemistry to the needs of industrial users such as Xerox, General Electric, and General Motors.

William Goddard III, MSC’s director, notes that about half the center’s 30 researchers study the molecular dynamics of diverse chemicals and materials, including a variety of polymers, composites, ceramics, superconductors, drugs, and the spheroid carbon structures known as buckyballs. Meanwhile the rest of the researchers are working on fundamental quantum-mechanical problems--strategies for approximating solutions of the Schrödinger equation--for ever larger assemblies of atoms. Goddard says their progress has been good. "A year or two ago, 10 carbons and 10 hydrogens was a big system. Now a big system is 100 carbons and 100 hydrogens."

That progress requires the combination of the best that both theory and experiment have to offer. As theoretical approximations have been getting better, and computers faster, experiment has also been improving at a rapid pace, providing a better and better reality check on the theorists’ work. "Experimentally, you can measure things more and more finely," explains Siddharth Dasgupta, MSC’s manager. "Theoretically, we can work with larger and larger systems." And as top-down and bottom-up programs draw closer together, the researchers are exploring various strategies for making them meet. Essentially, they all involve using experimental input to fine-tune the theoretical models.

For example, when Goddard’s graduate student Charles Musgrave worked on an exotic silicon surface--a puzzling configuration in which the atoms spontaneously arrange themselves into unexpected patterns--he did his calculations on clusters of just five silicon atoms. Properties of the theoretical model--the strength of the bonds, for example--were then fine-tuned on the basis of experimental findings in the lab. Then the improved model was applied to the larger system.

The same strategy guides Musgrave’s collaboration with Xerox researchers on building nanoscopic diamond bearings. The bearing could contain as few as 500 atoms, but the theoretical model is manipulated at an even more manageable size of 10 or 20 atoms at a time. "After I build it on a computer," Musgrave says, "I can tell how it will behave. Is it stable? Will it fall apart? Will it be rigid?"

In ten years, researchers hope to be able to answer such questions for systems of as many as 10 million atoms.

By the early 1990s most experimenters had long since stopped abusing theorists like Marvin Cohen; in the field of materials research, anyway, many had become receptive to theory. True, in 1989, when Cohen had described a material made of carbon and nitrogen atoms arranged in a structure not found in nature, he encountered lots of people who scoffed at his suggestion that this hard carbon nitride might be harder than diamond. But he also inspired several groups of experimenters to try to make the stuff. One, led by Eugene Haller at Lawrence Berkeley Laboratory, was close to home, and Cohen worked closely with Haller’s team. Among the others were groups led by Y. W. Chung at Northwestern University and Charles Lieber at Harvard; they had needed only the encouragement of reading Cohen’s ideas in the scientific journals.

Cohen was led to his discovery by his familiarity with Fermi’s pseudopotential method. To make a quick estimate of the promise of new ideas for solids, Cohen developed a simple formula that could be run on a hand calculator. When he tapped in the numbers for a crystal structure with three carbons and four nitrogens in each unit cell--C3N4--he got interesting results. The lengths of the bonds between the atoms looked to be so short that it would be difficult to squeeze the atoms any closer together. In other words, C3N4 looked very hard. It also looked light and strong and transparent.
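Cohen's hand-calculator formula is his empirical relation for tetrahedrally bonded solids, which estimates the bulk modulus (a stand-in for hardness) from the bond length and the bond's ionicity. A sketch, assuming the commonly cited form B ≈ (1971 − 220λ)/d^3.5 GPa with d in angstroms; the specific numbers tried below are illustrative, not Cohen's published C3N4 values:

```python
def bulk_modulus_gpa(bond_length_angstrom, ionicity=0.0):
    """Cohen's empirical bulk-modulus estimate for tetrahedral solids:
    B ~ (1971 - 220 * lam) / d**3.5 GPa, with bond length d in angstroms
    and lam = 0 for purely covalent (group IV) crystals."""
    return (1971 - 220 * ionicity) / bond_length_angstrom ** 3.5

# Diamond: C-C bond length of about 1.54 angstroms, purely covalent.
print(round(bulk_modulus_gpa(1.54)))   # 435 -- close to diamond's measured ~443 GPa
# A shorter (hypothetical) 1.47-angstrom covalent bond would be stiffer still.
print(round(bulk_modulus_gpa(1.47)))   # 512
```

Shorter bonds mean atoms that are harder to squeeze closer together, which is exactly the reasoning behind Cohen's interest in the short carbon-nitrogen bond.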

Cohen’s student Amy Liu performed tedious calculations on a mainframe, confirming and refining the approximations. Quite similar crystalline compounds are known, but only a few carbon-nitrogen solids, of uncertain structure, have ever been made in the laboratory, and with difficulty. The problem is getting the nitrogen to form strong bonds with the carbon.

At Lawrence Berkeley Laboratory, Haller decided to use high temperature in a steel vacuum chamber filled with pure nitrogen gas. With powerful radio waves he sputtered--or jarred loose--carbon atoms from a disk of graphite. The flying carbon joined gaseous nitrogen to make a thin film on a hot wafer of silicon or germanium substrate a few centimeters away, forming a few tiny crystals of carbon nitride.

Meanwhile Chung, at Northwestern, had set out to make hard carbon nitride using a different sputtering technique that involved somewhat higher power, lower gas pressures, and lower substrate temperatures. Chung’s group used zirconium as a substrate because both carbon and nitrogen bond strongly to it. They also used glass, to make it easier to study the carbon-nitrogen film with optical microscopes. They even used table salt as a substrate--it’s easily dissolved, leaving the overlying carbon-nitrogen film intact for electron-microscope studies.

When Haller and Chung independently studied what they had made, they both found evidence for Cohen’s hard carbon nitride. But their evidence differed as much as their experimental techniques.

Haller’s group based its claim on X-ray diffraction that produced distinctive patterns similar to those expected to be produced by crystalline C3N4. The U.S. patent office seemed satisfied, for in May 1992 Haller’s group was granted a patent for "hard carbon nitride and method for preparing same." Meanwhile, Chung and his group did practical tests as well. With steel and diamond tools they punched and scraped the 90-micron-wide carbon-nitride films laid down on zirconium. The surface showed no indentation when punched with a diamond point--meaning it was either very hard or very flexible. In friction tests, "the most remarkable observation is that there appears to be no wear," they reported.

Finally, just this past July, Lieber’s group at Harvard announced with some fanfare that it had made C3N4, and the evidence seems almost undeniable. The researchers blasted carbon atoms from graphite with a laser and mixed them with highly reactive atomic nitrogen--not nitrogen gas, which consists of molecules containing two atoms each, but nitrogen torn apart into single atoms. This technique allows the amount of nitrogen to be systematically adjusted, increasing the chances of finding the optimum recipe for combining carbon and nitrogen atoms in the proportion of three to four. Like the others, the Harvard researchers have not yet made enough of the new material for direct tests of its physical properties--although their flexible technique holds out the promise that they might soon produce much more.

Whatever Chung and Haller and Lieber have made out of carbon and nitrogen, it’s new, and apparently it’s very hard indeed. Marvin Cohen concedes that the pieces just aren’t big enough to do good experiments on. But he’s still hopeful: "I would like to make a material that scratches diamond," he says. And that will take a sizable chunk of the stuff.

Like Chuck Yeager approaching the sound barrier in his Bell X-1 rocket plane, Marvin Cohen flies his computer programs ever closer to the hardness barrier. It will be broken soon, if it hasn’t been broken already. The hardness of diamond, like the speed of sound, is no ultimate limit. It’s not the speed of light.

What will be the significance of hard carbon nitride, or its equivalent? We’ll have more than a light, strong, hard substance that can carve diamond itself. We’ll have the realization of an idea: that, using computers and programs of our own devising, we can remake the stuff of the universe any way that quantum theory allows--which is almost any way we want.