1997 Discover Awards: Sound: Sound Beams

July 1, 1997

WINNER: American Technology Corporation’s HyperSonic Sound System

INNOVATOR: Elwood Norris

Elwood Norris wants to make audio speakers obsolete. He would banish forever that cumbersome arrangement of woofer, tweeter, and midrange speaker that only an audiophile could love and substitute a single speaker the size of an Oreo cookie.

Norris calls it a HyperSonic Sound System, and it is no ordinary speaker. Instead of a vibrating membrane, it uses a crystal wafer that can project a beam of sound across a room like a spotlight. When the beam hits a wall or a ceiling, it bounces off and creates the impression that the sound originates at that spot, like a ventriloquist throwing his voice. To achieve a stereo effect, two beams can be trained on opposite sides of a room or theater. “You can focus each beam on a point, and that’s where the sound will be created,” Norris says. Equally important, he adds, is that his new way of generating sound has less distortion over the full range of human hearing than even the most expensive speakers and is five to ten times more efficient, so less power is needed.

It may almost seem magical, but Norris’s sound spotlight relies on a simple effect first explained 150 years ago by the physicist Hermann von Helmholtz. When playing two notes very loudly on an organ, he noticed that a third note, whose frequency was the difference between the frequencies of the other two notes, was also produced. Norris’s device does the same thing, but in place of the organ he uses a crystal that produces two powerful beams of sound so high-pitched that they are beyond human hearing. The beams interact in such a way as to produce a complicated wave, one component of which has a frequency equal to the difference between the two. That component is all you hear. For example, to produce a note of 440 hertz--A above middle C--Norris generates an ultrasonic wave containing tones of 200,000 hertz and 200,440 hertz. Only the difference between the tones--440 hertz--is audible.
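The arithmetic is easy to check numerically. The sketch below is an illustration only, not American Technology’s actual signal processing: it mixes tones at 200,000 and 200,440 hertz through a simple quadratic nonlinearity, standing in for the nonlinear response of air at high intensities, and confirms that the strongest audible component of the result falls at the 440-hertz difference frequency.

```python
import numpy as np

fs = 1_000_000                       # sample rate high enough to represent ~200 kHz tones
t = np.arange(fs) / fs               # one second of samples
f1, f2 = 200_000, 200_440            # two ultrasonic tones, both beyond human hearing

carrier = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Crude stand-in for the nonlinear behavior of air: a quadratic term
# generates components at the sum and difference of the two frequencies.
demodulated = carrier ** 2

spectrum = np.abs(np.fft.rfft(demodulated))
freqs = np.fft.rfftfreq(len(demodulated), d=1 / fs)

audible = (freqs > 20) & (freqs < 20_000)              # restrict to the audible band
peak = freqs[audible][np.argmax(spectrum[audible])]
print(f"strongest audible component: {peak:.0f} Hz")   # prints ~440 Hz
```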

Norris wasn’t the first to superimpose ultrasonic beams in this way, but he succeeded in mixing several signals electronically and sending them through a single crystal, which vibrates and sends out a beam of superimposed ultrasonic notes. The resulting sound retains a handy characteristic of ultrasound--it is directional, which means you can hear the sound coming from Norris’s speaker only if you are standing directly in front of it or if it is reflected off a flat surface, like a theater wall.

Norris, founder and chief technology officer of American Technology Corporation in Poway, California, spent four years trying to make the idea work. “It was basically a garage operation,” he says. He produced his first audible sounds early last year, and he is hoping to bring a low-fidelity product to market before the end of 1997.

Finalists

All’s Quiet

BBN Systems and Technologies’ QuietChip

INNOVATOR: James Barger

“It’s a very strange feeling to be driving along and to turn it on,” says James Barger of his invention, the QuietChip. “Your first reaction is that the car in the next lane suddenly got louder.” That, of course, is not the case. Rather, the sounds from your own car--the roar of the engine, the whine of the tires--have almost faded to inaudibility. The rest of the world seems noisier by comparison.

Just as two water waves will wipe each other out if the trough of one meets the crest of the other, two carefully matched sound waves can cancel one another and produce near-silence. Barger’s company, BBN Systems and Technologies of Cambridge, Massachusetts, has for years been making antinoise systems that do exactly that, but these were intended for the Navy’s ships and commercial airplanes. Using the technology to shush the family car would have cost $50,000--too much even for the luxury car market.

By 1994, however, electronic circuits had gotten small enough, Barger realized, to make an affordable antinoise system. He and his colleagues worked for two years to squeeze onto a single integrated-circuit chip everything needed to analyze the noise created by a car, boat, or small plane, calculate the proper antinoise, and send a canceling signal to a set of speakers. The result is the two-inch-square QuietChip.
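The loop described above--sense the noise, compute its opposite, play it back--can be sketched with a standard least-mean-squares (LMS) adaptive filter. The article does not say which algorithm the QuietChip uses, and a real system must also account for the acoustic path from the loudspeakers to the listener, so the code below is only a minimal, hypothetical model of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, seconds = 8_000, 2
n = fs * seconds
x = rng.standard_normal(n)              # reference signal from a sensor near the engine
h = np.array([0.6, 0.3, 0.1])           # hypothetical acoustic path into the cabin
d = np.convolve(x, h)[:n]               # engine noise as heard by the passenger

taps, mu = 8, 0.01                      # filter length and LMS step size
w = np.zeros(taps)
residual = np.zeros(n)
for i in range(taps, n):
    recent = x[i - taps + 1:i + 1][::-1]   # latest reference samples, newest first
    antinoise = w @ recent                 # signal sent to the cabin loudspeakers
    residual[i] = d[i] - antinoise         # what the passenger actually hears
    w += mu * residual[i] * recent         # nudge the filter toward cancellation

print(f"noise power before: {np.mean(d[-fs:] ** 2):.4f}")
print(f"noise power after:  {np.mean(residual[-fs:] ** 2):.4f}")
```

After a fraction of a second of adaptation, the residual power falls far below the original noise power, which is the effect Barger describes: the cabin noise fades and the rest of the world seems louder by comparison.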

Last December, Barger tested it in a Chevrolet Cavalier. Microphones and other sensors installed in the engine and the passenger compartment fed the noise to the QuietChip, which then fed the appropriate antinoise to the car’s audio speakers (it works even if the radio is on). BBN Systems and Technologies expects to start selling the chip to auto companies sometime next year. With luck, new cars will become dramatically quieter in a few years.

Bringing Music to the Web

MIT’s NetSound

INNOVATOR: Michael Casey

While building his home page on the World Wide Web, Michael Casey got frustrated. “I wanted to put a sound track on it,” he says, and since he had worked as a professional sound producer, he naturally wanted to make it of high quality. But the Internet is too slow for transmitting high-quality audio. Even with data-compression techniques and a fast modem, it takes ten minutes or so to download a five-minute audio clip.

So Casey, a graduate student at MIT’s Media Lab, set out to change the way computers handle sound. Under the current standard, everything from a clap of thunder to a Mozart sonata is represented as a digital recording, in which a computer samples the sound wave tens of thousands of times each second and records it as a string of numbers. Rather than having to assemble and transmit so much data in all cases, Casey thought, it would be more efficient to describe the sound and let the computer re-create it.

“You have to ask: What’s the most important information about a sound?” Casey says. “And that’s what you extract.” For example, creating a model of what footsteps sound like allows a computer to produce thousands of different footsteps--quick steps or slow ones, heavy or light, steps on wood or concrete or marble. Using this type of model, Casey can generate realistic sounds with surprisingly little data. To transmit a symphony, for example, he would send the musical score along with models of what each instrument sounds like; the computer on the other end would reconstruct the entire symphony from that information.
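A toy example makes the data savings concrete. The sketch below is not one of Casey’s models; it simply contrasts sending a short “score” of note events, plus a trivial instrument model, with sending the tens of thousands of samples per second a digital recording would need.

```python
import numpy as np

fs = 44_100                                          # CD-quality sample rate

# The "transmitted" data: a tiny score of (frequency in hertz, duration in seconds).
score = [(440.0, 0.5), (494.0, 0.5), (523.0, 1.0)]   # roughly A4, B4, C5

def toy_instrument(freq, duration):
    """A stand-in instrument model: a decaying sine tone."""
    t = np.arange(int(fs * duration)) / fs
    return np.exp(-3 * t) * np.sin(2 * np.pi * freq * t)

# The receiving computer rebuilds the audio from the score and the model.
audio = np.concatenate([toy_instrument(f, d) for f, d in score])

print(f"numbers transmitted: {len(score) * 2}")               # 6 values describe the clip
print(f"samples synthesized at the receiver: {len(audio)}")   # 88,200 samples
```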

In October, Casey assembled several different models of this sort into a proof-of-concept program called NetSound. Sending music between two computers equipped with the software takes at least a thousand times less data than conventional digital sampling methods--in theory, all classical music currently recorded could be represented on one compact disk. Casey admits, however, that the music, though note-perfect, would lack the warmth and personality of a human performance. For this reason, he thinks his invention will complement, rather than supplant, conventional recording methods. It might be useful, for instance, in supplying computer games with versatile and convincing sound effects. Or it could be used in a program that allows a person sitting at a keyboard to act as conductor, shaping the performance to his own taste. At present he’s trying to get his technique accepted as an industry standard.

Talk Like A Man

University of Chicago and Microsoft’s Speakeasy

INNOVATOR: John Goldsmith

“It was the best of times, it was the worst of times.” That is how John Goldsmith would have read the famous first sentence from A Tale of Two Cities, using a change of pitch to emphasize the two key words. His computer program, however, apparently still had a few bugs: it read the sentence back with the emphasis in the wrong places.

Still, it was a big advance over the utterly flat monotone that other computer programs have been able to achieve. “Although computers have been able to produce intelligible speech for several years,” says Goldsmith, “the intonation has been so flat and artificial that it is painful to listen to for more than two or three minutes at a time.”

In 1995, during a sabbatical from the linguistics department at the University of Chicago, Goldsmith set out to correct this deficiency and teach computers how to mimic the intonation patterns--the rising and falling pitch--of human speakers. He set down a series of rules that reflect how American English speakers vary their pitch as they’re speaking. For example, in a yes-no question such as “Have you had dinner yet?” the main accented word--dinner--is spoken with the lowest pitch. By contrast, in a why-what-where-when-how question such as “When is dinner?” the same key word has the highest pitch. At Microsoft’s research labs, where Goldsmith was spending his sabbatical, he turned the rules into a computer program he calls Speakeasy.
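Those two rules are simple enough to state in code. The sketch below is a toy illustration, not Goldsmith’s Speakeasy: it guesses the question type from the first word (the word lists and pitch labels are invented for the example) and assigns the accented word a low or high pitch target accordingly.

```python
WH_WORDS = {"who", "what", "where", "when", "why", "how", "which"}
AUX_WORDS = {"is", "are", "do", "does", "did", "have", "has", "can", "will", "would"}

def pitch_for_accented_word(question: str) -> str:
    """Assign a pitch target to the main accented word of a question.

    Toy version of two intonation rules: in a yes-no question the accented
    word takes the lowest pitch; in a wh- question it takes the highest.
    """
    first_word = question.strip().lower().split()[0]
    if first_word in WH_WORDS:
        return "highest pitch"
    if first_word in AUX_WORDS:
        return "lowest pitch"
    return "default pitch"

print(pitch_for_accented_word("Have you had dinner yet?"))  # lowest pitch
print(pitch_for_accented_word("When is dinner?"))           # highest pitch
```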

From there, development proceeded by trial and error. He would feed the program test sentences, such as the line from Dickens, and see what it did wrong. “You fix one thing, then another, then a third and a fourth,” he says. All the while, Speakeasy became more and more human in its intonation for more and more sentence constructions--even though, Goldsmith says, “you still know it’s a computer.” He finished the program last September, but Microsoft won’t say what plans it has for the technology, if any. Meanwhile, Goldsmith is trying to teach his program to put pauses into speech in the same way that humans do. Maybe someday he’ll even have it make Freudian slips.
