International Solid-State Circuits Conference
Gordon Moore
San Francisco, Calif., USA
February 10, 2003
ANNOUNCER: Please join me in welcoming Dr. Gordon Moore.
(Applause.)
GORDON MOORE: Well, thank you. It's a pleasure to be here.
This conference has grown a bit over the years. I didn't make the first few. In fact, I didn't get in the semiconductor industry, as you just heard, until 1956. And even then, I didn't go to solid-state circuits conferences.
Chemists didn't know an awful lot about circuits, but when chemists started making circuits in the early '60s, I did attend several of the conferences. At that time, they were all in Philadelphia. And Philadelphia in February, it was easy to convince your spouse it was really hard work; you weren't going on a boondoggle. I'm not quite so sure San Francisco is that easy to sell these days.
Well, many of the parameters related to the semiconductor and solid-state circuits industry have shown exponential dependences over the years. But no physical quantity can continue to change exponentially forever. There's always some kind of catastrophe if you project it far enough into the future.
What I want to do today is look at some of these exponentials and maybe give some idea where they might go and talk a little bit about how we're going to deal with the looming catastrophes that people seem to be projecting as they look further down the road.
The first thing I want to look at is the growth of revenues in the industry. This is a phenomenal growth industry. It's grown 80 fold over this 35-year period I've depicted here. A compound annual growth of some 14 percent, even with the flattening of the last few years.
The dips and bumps on this exponential curve may not look so severe. This one perhaps is a little more severe than the rest. But I look back here and realize that in 1974, and again in 1984-'85, Intel had to get rid of a third of its work force. The exponential scale tends to distort things that were experienced linearly, and much more severely, when you were actually there.
But while this is phenomenal growth, if you want to see the real underlying growth of the industry, look at the output. This is the number of transistors shipped per year, as near as I can estimate it, over the same period. And this has grown eight and a half orders of magnitude over that period. 300 million fold. Now, that's a growth industry, maintaining an average growth of about 80 percent per year over this whole time period. And that includes a significant period where it actually doubled every year. We had more electronics built in a year than existed at the beginning of that year. Truly phenomenal growth.
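As a quick check on the arithmetic behind these growth figures, here is a short sketch. The 35-year span, the 80-fold revenue growth, and the eight-and-a-half-orders-of-magnitude unit growth are the numbers quoted in the talk; the rest is just compounding:

```python
years = 35

# Revenue: 80-fold growth over 35 years.
revenue_cagr = 80 ** (1 / years) - 1        # ~0.13, "some 14 percent"

# Transistors shipped: 8.5 orders of magnitude, about 300-million-fold.
unit_growth = 10 ** 8.5                     # ~3.2e8
unit_cagr = unit_growth ** (1 / years) - 1  # ~0.75, roughly "80 percent per year"

print(f"revenue CAGR: {revenue_cagr:.1%}, unit CAGR: {unit_cagr:.1%}")
```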
I tried to illustrate this number, ten to the eighteenth, and I've used raindrops falling on California.
E.O. Wilson, a noted Harvard biologist and expert on ants, estimates that the number of ants in the world is between ten to the sixteenth and ten to the seventeenth. So for years I used that. Now each ant has to carry ten to a hundred transistors.
(Laughter.)
GORDON MOORE: Perhaps another way of looking at the size of this number... I estimate that the number of printed characters produced every year is between ten to the seventeenth and ten to the eighteenth. That's all the newspapers, books, magazines, Xerox copies, computer printouts, everything; it's the same order of magnitude as the number of transistors that the industry sells. Of course, we print a lot that we don't sell. Those are the ones with the red dots on them that get shipped.
This is phenomenal, and we sell them for about the same price as the printed character in your Sunday New York Times.
But to see the real power of the industry, divide one of these plots by the other: you get the average revenue per transistor over this period of time, dropping from about a dollar to two-tenths of a micro-buck. That's the average transistor cost during this time period.
If you look at DRAMs, it's an order of magnitude below this. You get 50 million transistors for a dollar these days. And not only transistors, they all have circuit design and interconnections thrown in free.
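The order-of-magnitude claim is easy to verify; a minimal sketch using only the two prices quoted above:

```python
avg_cost_per_transistor = 0.2e-6     # dollars: "two-tenths of a micro-buck"
dram_cost_per_transistor = 1 / 50e6  # dollars: 50 million transistors per dollar

ratio = avg_cost_per_transistor / dram_cost_per_transistor
print(ratio)  # 10.0: DRAM is indeed an order of magnitude below the average
```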
It really is a spectacular industry, completely unparalleled by anything else I can see.
In about 1960, Western Electric looked at the transistor and estimated that eventually it might cost about 60 cents to build one, looking at time and material studies and the way things were happening at that time.
This multiple million-fold reduction in cost requires some really special things from the industry. First of all, it requires a technology that has phenomenal capability. I think a unique technology underlying the industry. And a fantastically elastic market. It has to be able to consume ten to the eighteenth transistors a year and growing.
And also, in order to make it happen, it takes the contributions of a lot of people: circuit designers making clever extensions and developing the capability to continue it, and technologists keeping the technology moving rapidly.
And going back a period of time, this is what a transistor looked like about 1959. I didn't have one from '58, which was the first year transistors were manufactured on a whole wafer rather than one by one. But this was a one-inch wafer back in 1959. A real breakthrough at the time. One of my first contributions to the industry was proving that if we went above three-quarters of an inch in diameter, the yield went to zero because the material was so bad.
This shows one of my technical contributions, that flat part on the bottom of the wafer that lets you align it.
(Laughter.)
GORDON MOORE: And there were about 200 possible transistors on a wafer like this, and if we were lucky, something like 10 percent of them were good.
Now, of course, we get a lot more microprocessors on a chip than that on a wafer, each with millions of transistors and very, very much better yields.
If we take the next step from these early planar transistors to the early planar integrated circuits, this was one of the very first micrologic circuits made. The first integrated circuit was kludged together by Jack Kilby at T.I. Fairchild had the planar technology in place, which made these things practical, and, by the way, we saw how to extend that technology to get to something that would be worthwhile.
This was one of the first ones we built. This was a flip-flop consisting of four transistors and six resistors. It was one of the micrologic family.
The die was round, so we could bond it right to the header with little dabs of conductive epoxy. Since the yield on these early chips was going to be so low, we thought we needed something that didn't detract from the yield from the assembly point of view.
It's a terrible picture. I'm sorry that this is what we have left to show the very early days, but maybe it shows a bit about the problems we had.
Now, you might think integrated circuits were an obvious solution that were immediately accepted. This was not the case at all. This was a tough sell.
Our customer in those days, the technical interface with our customer, was typically the circuit designer. We were making transistors and things like that. And to go to the circuit designer and tell them, "Hey, we'll do the circuit design for you," wasn't something that was sold very easily.
The reliability people looked at these, and, being used to making transistors and measuring their parameters over a period of time looking for drifts, they said, "Gee, we can't measure the transistors here. We can't get hold of them. How will we know if this is reliable or not?"
I remember going to one aerospace company that said, gee, they used 16 different flip-flops in the circuits they built. They could never use a standard flip-flop. They had an expert for each of those and they really had to be specially designed.
And then Bob Noyce made another one of his major contributions to the industry. He said, okay, we'll sell you the circuit for less than you can buy the transistors and resistors to build it yourself. And that was a major breakthrough. The fact that it was also considerably lower than what it cost him to build the integrated circuit at the time was of little consequence. We had to develop a market for them.
And of course the solution the semiconductor industry developed was, whenever there's a problem, lower the price. That's the way they solved all of these things. Let the elasticity of the market bail you out.
Well, during that time of the early integrated circuit, I was the director of R & D at Fairchild and had a little more visibility than most people into where integrated circuits were taking us.
And I was asked to write a paper for the 35th anniversary issue of Electronics magazine, where I predicted what was going to happen over the next ten years in the component market; actually, semiconductor components. And that's when I plotted this curve that eventually became known as Moore's Law.
Extrapolating from about 60 components to 60,000 over ten years was pretty impressive. I never expected it to be precise. I was trying to get the message across that this was going to be the cheap way to make electronics, putting a lot of them on a chip, rather than building it up from individual components soldered together.
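The 1965 extrapolation he describes, from about 60 components to 60,000 in ten years, is exactly one doubling per year; a quick sketch:

```python
components_1965 = 60
years = 10                    # one doubling per year, the original slope

components_1975 = components_1965 * 2 ** years
print(components_1975)        # 61440, i.e. roughly the 60,000 predicted
```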
But in fact, it turned out to be more precise than I ever could have imagined. I put the data on it here. This is what happened: the few points over that time period are the most complex circuits available, and they fit amazingly well. My friend Carver Mead, a professor at Caltech, dubbed this Moore's Law.
In 1975, I updated it, adding the purple line, and I argued why the slope was going to change from doubling every year to doubling every two years.
My data said it ought to change right then, but the stuff I saw in the laboratory, which was generally CCD memories, suggested that there were a few generations that were going to continue doubling, so I figured it would double again for the next five years and then change to every two years.
And CCD memories never quite worked out that way. A problem called soft errors came along. CCDs are much better imaging devices than they are memories.
So instead of waiting five years to break, the actual data broke at about that time. I got the slope pretty close, but that five-year hiatus really made it so it didn't fit as well as it might have.
This shows what a wafer in the early production days, three-quarters of an inch, the size of a nickel, looked like compared with a modern wafer.
One of the other things I projected in 1975, a bit tongue in cheek, was what was going to happen to wafer size, again using my semi-log paper to extrapolate. One of my colleagues showed what actually happened.
(Laughter.)
GORDON MOORE: I had remembered that I had only predicted 56 inches, but he went back to the original data.
(Laughter.)
GORDON MOORE: And it's not only things like this that occurred. The structure has gotten complex. If we take a cross-section through a modern process, a modern seven-layer metal process with things like tungsten plugs down here at the bottom, all of the interfaces between one thing and another, this is amazing to me.
I remember when the principal argument to go from bipolar circuits to MOS circuits was process simplicity. You only needed five masks. We succeeded in driving that up to 25, and I don't know where it's going to stop.
This actually surprises me. One technology that made this possible was the idea of chemically-mechanically polishing after each layer, so you maintain a flat surface as you build up insulators and metal and develop the structure. Without that, the topography became so complex that you couldn't get more than two or three layers of metal before it was unmanageable.
IBM's invention of chemical mechanical polishing really allowed progress to continue. And I suppose if I plotted the number of metal layers versus time, it would also be close to an exponential, although I haven't plotted it in that direction.
Of course, the big driver for the improvements over this period has been our ability to make smaller and smaller features. The original planar transistor didn't push that too far, but starting with the early integrated circuits, we've been on a very constant trend, or were for many years, of cutting the dimension in half about every six years. That's two steps, generally .7 for each step, which allows you to double the density every three years. Every six years, the area per device went down by a factor of four.
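The geometry behind those numbers can be sketched directly: a 0.7 linear shrink halves the area of each device, so one step roughly doubles density and two steps quadruple it:

```python
shrink = 0.7                               # linear scale factor per step

density_per_step = 1 / shrink ** 2         # ~2.04: each step roughly doubles density

# Two steps, historically six years: dimensions halve, density quadruples.
linear_two_steps = shrink ** 2             # 0.49, about half the linear dimension
density_two_steps = density_per_step ** 2  # ~4.16, about a factor of four

print(f"per step: {density_per_step:.2f}x, per two steps: {density_two_steps:.2f}x")
```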
And of course, I would have expected that to start rolling off. Actually, the opposite has happened. For the last few generations, and projected going forward, the time period between generations has stepped down to two years rather than the three we've had historically. This is amazing, to see a curve like this accelerate rather than start to round out.
And as you see in some of the other comparisons you might hear, people always use a human hair. Well, a human hair is a few orders of magnitude away from where we are now. We're working at AIDS-virus sizes and moving on down toward single molecules. I picked a pretty big molecule here.
But there's still quite a ways to go before we get to that point.
Of course, I guess we're well into the realm now that's called nanotechnology. We're doing nanotechnology from the top down rather than from the bottom up. And doing it from the top down, at least with electronics we continue to connect everything.
The challenge for the people building single-electron transistors and the like from the bottom up is how to put a billion of them together and make them into an interesting, useful function.
It will take a while to see how that's going to happen.
For a more linear view, you'll get an exponential (inaudible) historic how we look at things from a human point of view.
If you just look at the qualitative progress over a period of time, taking an Intel 1978 technology and a modern technology, this is a six transistor static RAM cell compared with one contact opening in the 1978 technology.
The continued evolution of these things over a period of time really has a qualitative effect, not just a quantitative one.
I actually think we're breaking the laws of physics in some of this. You know, we're printing lines with 193 nanometer light at a quarter of its wavelength. This is something that I wouldn't have thought would be possible. And if I go back to that other curve, I remember thinking that one micron was probably about as far as we were ever going to go, a couple of wavelengths of visible light. We wouldn't have a way around that problem because, you know, it's hard to make anything smaller than the wavelength of the light you image with.
Then we moved to ultraviolet light, and I thought maybe a quarter of a micron, maybe we'll get there and that's about as far as we're going to go. And we blew through that. Now we're at sub-tenth-micron, 90 nanometer technology in production quantities, and we're looking at printing lines at a quarter of the wavelength of the light.
It really is amazing we've been able to do that. Lasers are great for improving some of these capabilities.
Of course, this requires that you use a low contrast optical image. Your contrast drops as you do this, but if you use a very high contrast photoresist, you can, with adequate controls, print these fine features.
For those of you who haven't been to our fab recently, this is a cut-away picture of what a modern (inaudible) scan production tool looks like. A several-million-dollar tool using an (inaudible), a variety of things. It's a far cry from the original systems that Bob Noyce and Jay Last put together at Fairchild to do our initial lithography.
Well, we've been working hard on the X and Y dimensions, but we haven't neglected the third dimension, the vertical one, either. The minimum insulator thickness has also stayed on a line that goes down exponentially.
This one probably surprises me more than the line width did. I remember doing a back-of-the-envelope calculation about the time Intel was formed, back here, and convincing myself that statistically, if you went to layers less than about a thousand angstroms, 100 nanometers, thick, you'd probably get enough fluctuation that things wouldn't be very good.
But I didn't realize, or I should have, I guess, that the force was with us. Chemical forces really helped. You don't get a statistical layer of atoms coming down when you oxidize the silicon. You get nice chemical reactions that really do maintain the integrity of the layer down to very, very thin layers.
Here we are a couple of nanometers thick physically. Electrically, it looks a few nanometers thicker than that, so I guess the electrons can't get all the way to where we see the edges.
And if you look with a transmission electron microscope at the silicon substrate, the insulator layer and the polycrystalline silicon on top, you see these structures really are getting down to a few molecular layers thick.
Well, you can't go much further than that, but you don't have to in this case. If we go to a material with a higher dielectric constant, we can actually get higher fields in the silicon with a thicker dielectric. This is an experimental structure, and the capacitance goes up something like 50 percent.
But the big deal is the leakage current. Here you're getting down to where you get a lot of (inaudible). The leakage current decreases a hundred fold.
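The high-k trade-off he is describing follows from the parallel-plate capacitor relation, C/A = eps0 * k / t: a higher-k film can be physically thicker, which suppresses tunneling leakage, and still give more capacitance. The particular k values and thicknesses below are illustrative assumptions, not the experimental structure on the slide:

```python
EPS0 = 8.854e-12  # F/m, vacuum permittivity

def cap_per_area(k, thickness_m):
    """Parallel-plate gate capacitance per unit area: C/A = eps0 * k / t."""
    return EPS0 * k / thickness_m

# Illustrative numbers (assumptions, not from the talk):
sio2 = cap_per_area(k=3.9, thickness_m=1.5e-9)     # very thin SiO2 gate oxide
high_k = cap_per_area(k=16.0, thickness_m=4.0e-9)  # much thicker high-k film

gain = high_k / sio2
print(f"capacitance gain: {gain:.2f}x")  # ~1.5x, with far lower tunneling leakage
```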
So there are things that can be done as we approach some of these limits that preserve our rate of evolution of the technology without getting off the trends we've been on historically.
These have all led to dramatic increases in the performance of the electronics that we build. Here I plotted processor performance, and I'll excuse myself for using only Intel data, but that's the easiest for me to get my hands on.
And you can see that over this time period, from the first microprocessor in 1971 to modern microprocessors, we've had about five or six orders of magnitude improvement in processing (inaudible). That's a compound annual growth rate of about 50 percent, or a doubling in performance every 20 months or so.
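The two figures quoted are consistent with each other; converting a 50 percent compound annual growth rate into a doubling time:

```python
import math

cagr = 0.5  # ~50 percent per year processor-performance growth

doubling_months = 12 * math.log(2) / math.log(1 + cagr)
print(f"doubling every {doubling_months:.1f} months")  # ~20.5 months
```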
Those of you who have heard Moore's Law quoted as doubling every 18 months, notice I never said 18 months. I said one year and two years. One of my colleagues at Intel, Dave House, translated that into processor performance and decided that it went a little faster than the number of components. So he was the guy who said 18 months, not me.
(Laughter.)
GORDON MOORE: But it's pretty close to what happened with processor performance over that period. And it shows, really, no sign of decelerating. In fact, if anything, over this part of the curve it has accelerated.
Now, we've got some problems coming along here. One of them, which I think is an important feature of this conference, is what is happening to the power.
And here I'm looking at two contributors to the power, the active power of the processors, which are getting up to the power dissipation of a fairly bright light bulb, and power densities that we used to strive to get in power transistors.
This is getting to be a problem. I don't want a kilowatt in my laptop. It would be very uncomfortable. So for practical reasons, the power is going to have to roll off there.
But the thing that is probably more disconcerting here is what is happening to the power contribution of leakage, not the active power of the device. That is a steep exponential.
We've been fighting the power for quite a while, and our best tool for fighting power has been the power supply voltage. You get a square dependence on it. I suppose I can call this an exponential also, although it's kind of a stepwise exponential.
We used to live with 12 volts, and we went to 5 volts. We were going to stay at 3 for a while, but we discovered that cutting the voltage was so nice for power, we just kept right on cutting. Now every new processor seems to pick an optimum voltage, and we continue to lower it. This is principally to get the lower power, but of course our much thinner dielectrics like lower voltage also.
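The square dependence he mentions comes from the CMOS switching-power relation, P = a * C * V^2 * f; a minimal sketch, where the capacitance, frequency, and example voltages are arbitrary illustrative values:

```python
def dynamic_power(cap_f, volts, freq_hz, activity=1.0):
    """CMOS switching power: P = activity * C * V^2 * f."""
    return activity * cap_f * volts ** 2 * freq_hz

# Dropping the supply from 5 V to 3.3 V at fixed capacitance and frequency:
p_5v = dynamic_power(1e-9, 5.0, 100e6)
p_3v3 = dynamic_power(1e-9, 3.3, 100e6)

ratio = p_3v3 / p_5v
print(f"power ratio: {ratio:.2f}")  # ~0.44: over half the power gone from voltage alone
```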
Now, again, this can't go on forever. You need at least a few hundred millivolts, I don't know exactly what, just to overcome some of the noise problems that will exist in digital devices.
I suspect something around one volt is going to be a limit, but I sure have been wrong on a lot of these other things that I suspected were going to be limits. And you folks probably know that a lot better than I do. We chemists don't understand this kind of stuff so well, especially when we don't look at it closely anymore.
But there are new things we can add that help with a lot of these problems. For example, moving from silicon to the silicide gates helps considerably.
We can continue to improve the performance of the transistor by purposely straining the silicon in the channel region. You strain it one way for the N-channel device and the other way for the P-channel device, and improve the performance of both transistors.
As we look a little further down the road, we see things like a high dielectric constant gate dielectric, which gives us the high fields in the silicon while still keeping the leakage current down, and new transistor structures, one of which is the tri-gate structure I've shown here.
This is an interesting device because it kind of turns the way I've always thought of transistors around completely.

As we've gotten to very, very narrow line widths, down here in the hundred nanometer or less range, all of a sudden it doesn't make sense to deal with thin films in the other direction anymore. We just make a relatively thick film and build the transistor in the other direction.
Here the gate wraps completely around the silicon, which is sitting on an insulator in this case. So you get a source and a drain and a gate that depletes the silicon from three sides. You can make a fully depleted transistor in this manner, cut the leakage current dramatically, and get very high performance.
It's the kind of thing you don't think about when you get down to the dimensions where it suddenly becomes possible to (inaudible) build this kind of structure.
This kind of transistor might carry us significantly farther than we can go with all the techniques we've had available in the past.
Of course, we have to be able to continue to print finer and finer lines. To do that, we eventually need shorter wavelengths. We can't continue printing at a smaller and smaller fraction of the wavelength of the light we're using.
But with the technology generations that we're seeing, even by the regular scaling that we've been doing, where we reduce the line width by a factor of about .7 with every technology node we go through, from today's volume production we can make conventional transistors down to the 30 nanometer range.
It takes two or three years between generations. Three years is the typical roadmap number that the SIA, the Semiconductor Industry Association, puts out. Two years seems to be what we've been doing over the last few years and expect to be doing for the next few generations. So we're talking ten, plus or minus two, years of conventional scaling.
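Those statements fit together arithmetically: getting from roughly 90 nanometer production (his earlier remark about production technology) down to the 30 nanometer conventional-device limit at a 0.7 shrink per node takes about three nodes, and at two to three years per node that is roughly the "ten, plus or minus two, years" he cites:

```python
import math

start_nm, limit_nm = 90, 30   # today's production node vs. conventional-device limit
shrink = 0.7                  # linear scale factor per technology node

nodes = math.log(limit_nm / start_nm) / math.log(shrink)  # ~3.1 nodes
print(f"{nodes:.1f} nodes -> {2 * nodes:.0f} to {3 * nodes:.0f} years")
```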
Below 30 nanometers, it's not clear that the conventional devices will work, but something like that thin transistor I showed on the last slide looks like a very realistic possibility.
To make these very narrow lines, though, we need a step in lithography. And this is a tough transition that we're going to have to go through. But we've been working on a technology using what is called extreme ultraviolet, EUV. This is a wavelength range that used to be called soft x-rays, but x-ray got kind of a bad name in the industry, so the name was changed.
But with the experimental system that currently exists, we can print 50 nanometer lines and spaces and small dots, and it continues to be improved.
It gets to be much more complicated optically. There are no materials that are really transparent in this range, so you can't use lenses, and you can't use transmission masks to put the radiation through. Everything has to become reflective optics. Even air is an important absorber, so you have to work in a vacuum.
Simple mirrors don't work, (inaudible) a metallic mirror. The mirrors you use here have something like 88 layers; alternating layers of materials give you the high reflectivity that we need.
So we're working with about 15 nanometer light, something less than a tenth of the 193 nanometer that's being used in production now.
And the industry has been envisioning this kind of system for quite a while. This is a prototype that exists at Sandia in Livermore. An industry consortium has been working with Sandia and with the Livermore National Laboratory to develop this into useful technology. We're working with them because the EUV technology generally came out of the old Star Wars program, and it seemed most productive to continue with the group that had been working with the technology rather than starting all over someplace else.
But as you see, this is a pretty complicated looking machine, and it's every bit as complicated as it looks.
The mirrors have to be figured significantly better than the Hubble telescope's, and we have to be able to make them on a production basis. So it's a real challenge. But one thing it will do: it will keep us on the exponential of the cost of the lithography tool.
(Laughter.)
GORDON MOORE: Well, as you see, a lot of things are changing exponentially. There's reason to believe that each of these has a problem. But I remember thinking a million dollar machine was going to be prohibitively expensive for the industry. But we've blown through $10 million per lithography (inaudible) and are looking at things that are significantly beyond that.
The secret is that the productivity of the tools has increased at the same time as the cost, again allowing us to continue to decrease the cost of the transistor and make cheaper and cheaper electronics.
But we ought to remember that no exponential is forever. Your job is delaying forever.
In the 40 years since the first commercial integrated circuits and the 50 years since the first commercial transistor and the first ISSCC, I think we've built a fantastic industry. It's the most complex processing industry that I can identify by a significant margin.
We manufacture the greatest number of items, if you take my item as the transistor rather than the product we typically sell. And I don't think you can count bits on disks. They're not individually manufactured. They're just an area that happens to get magnetized in a particular way.
We've had million-fold, actually ten-million-fold, cost reductions, and we've passed them on to the consumer. Again, something that no other industry I can identify has done.
I discovered recently, or I was told in Congress, that the semiconductor industry has become the largest manufacturing industry in the U.S. as measured by value added. Because, you know, we start with sand; a lot of value is added in there.
(Laughter.)
GORDON MOORE: And, of course, the electronics industry, taken worldwide, is the largest manufacturing industry in this era.
But there's still a lot to do. And I think there's a lot of life left in the technology we've been developing and a lot of clever ideas of how to extend it in some nonconventional ways.
And its ability to infiltrate everything society does is tremendous. It is really ubiquitous technology, and something that I think will continue to have a very important role for the foreseeable future, and beyond what I can foresee, certainly. And I am certainly honored to have been part of it. Thank you.
(Applause).
GORDON MOORE: They've been loosely connected, I suppose, but the technology goes on irrespective of what's happening in the economy short term.
But one thing the industry has discovered, as I've described, is that you never get well on the old product.
When there's an economic problem, the semiconductor industry's recessions are always price recessions; they're not a lack of demand. You notice the demand curve, the number of (inaudible), is fairly constant. The prices go up and down. And the prices never go back up enough to recover on the old product.
So you depend on getting the new generations and the new products done for the recovery. So the industry tends to just spend on development right through the recessions.
Now, if the economy is bad long enough, you can't continue to do that, I guess. But so far, they have not been closely correlated from that point of view.
But if Moore's Law hits a wall, will economic productivity (inaudible)?
GORDON MOORE: It should. Either we'll put less stuff on -- we'll either bend the number of transistors produced curve, and it has been gradually falling off, or we'll bend the revenue curve. One or the other, either one of which will slow it down, I guess.
But none of these things is an abrupt wall. I think that's the wrong way to look at it. There are a lot of challenges out there, and, you know, things will degrade rather gracefully rather than coming to an end. It won't be, whoops, we can't go any further.
Hi, Linda (inaudible) Magazine.
You were, in your discussion of power growing exponentially, you mentioned a couple of possible solutions. One was the high-k gate dielectric and the other was the tri-gate transistor, I believe. And I'm wondering whether you think that's enough to stem the increasing power consumption as we get up to a billion or two billion or ten billion transistors on an IC.
GORDON MOORE: It's certainly not. Circuit design is going to contribute very significantly to reduction in power. And you're going to see, I think, a lot of that at this conference. It's, as I understand, one of the principal themes.
But we all have to work together. We've been seeing the power problem looming ahead of us for some time, and we've been able to get by with it. I wish the battery on my laptop lasted a lot longer, but the compromise people have preferred has been relatively shorter battery life for higher performance.
We keep squeezing more performance out of a given amount of power, or more performance from more power. A lot of it is engineering compromises. But the new technology ideas, I think, will be quite significant.
It's very encouraging to me to see how much the leakage power can be lowered by a new transistor structure. That's a dramatic change.
(Inaudible) talking about radios and analog features on the same piece of silicon, and I'm wondering sort of what you think of that trend. Is there any limit to sort of mixing and matching different kinds of features on one piece of silicon?
GORDON MOORE: You know, there's a lot that can be done here now that would have been very difficult just a few years back. And I'm probably not as close to it as I ought to be to give you a definitive answer there, but from what I've seen, we're going to make tremendous progress on mixing all the signals on the chip.
It complicates the process a bit, and there are always a couple of components, like inductors, that are not too easy to put on a chip. But even there, for the high frequency communications, it seems something can be done.
So I believe there's going to be an economical way to do it, but you'd be better off talking to one of the guys closer to technology today. I'm sort of obsolete on this stuff. I can talk about the history a lot better than I can talk about exactly what's going on now.
Next question.
GORDON MOORE: Well, if you look back, you notice I changed the slope in 1975 and nobody cared.
(Laughter.)
GORDON MOORE: I think there's -- another decade is probably straightforward.
Essentially we can see as far in the future now as we've ever been able to. It gets complicated and expensive, but technical solutions seem to be there.
And of course even if it gets to the point where the technology can no longer squeeze more stuff in there, we'll be putting literally billions of transistors on a chip. And what that allows in flexibility for the circuit designer is absolutely phenomenal. So it's certainly not the end of creativity in the industry.
The industry has had a lot of different things going: the process technology on one end, and the design technology on the other, what we can design. I don't know which surprises me more, that we can build these things today or that we can design and test them.
But that technology has moved along at least equally fast, and it doesn't even have any apparent roadblocks.
It's kind of a bootstrap operation. The more powerful computers we build, the more complicated and powerful things people can design. So that keeps it going for some time.
And then the application of the technology to a variety of other systems I think is continuing to be important.
What other exponential items can I show? You can plot the growth of the (inaudible) revenue and the world's GDP, and they intersected in the year 2017 the last time I plotted them. So at that point we would take over the entire economy.
Now, clearly that can't happen, and I guess I'm not enough of an optimist to say, well, if something (inaudible) is going to increase the rate at which the world's gross products increase. So it has to slow down.
You know, the semiconductor industry is about one percent of the U.S. economy now; manufacturing is 17 percent. How big can you get?
But there's a lot of opportunity for innovation within that, so I hope a significant opportunity for growth for the industry overall.
(Inaudible) what's your view on research going on in areas like photonics and genetic computing to develop new generations of microprocessors, and do you feel like the big companies like Intel are active enough in participating in the research in those areas?
So DNA computing, photonic computing, all of these new....
GORDON MOORE: Well, the things I have seen about these alternative approaches to computing usually end up being aimed at a fairly specific kind of problem that's hard to calculate otherwise.
David Baltimore, the president of Caltech, who is a biologist, doesn't believe in DNA computing. Maybe quantum computing.
Quantum computing is very interesting. From my point of view, it's most interesting in showing how non-intuitive quantum mechanics really is.
It's a very long way from being a practical way to compute anything. And if it ever gets there, I'll be surprised.
You know, it's tough for something to compete with the technology that's developed around here. The semiconductor technology that's the basis of solid-state circuits is the cumulative result of well over $100 billion in R&D. And for something to come in and replace that, it has to get to the same level of capability sort of in one step. And that's extremely hard to do.
I remember the difficulty we had in the beginning replacing magnetic cores in memories, and eventually we had both cost and performance advantages. But it wasn't at all clear in the beginning.
In fact, I used to get invited to magnetic conferences in the early days of Intel so they could see the competition up there, you know, and kind of laugh that semiconductor memory was going to be another shot taken at cores. We finally won that one, but it was a huge investment to do it in an industry that hadn't had a big investment before that, if I can look at it that way.
Semiconductor technology is phenomenally complex, and rather than being replaced by something else, it's kind of going the other way. We're making these microfluidic devices, little laboratories on a chip now; you can do blood analyses and a variety of other things. It's a very powerful technology for making fine structures of materials. And to me, it's unlikely something is going to come out of the blue and replace it broadly.