Keynote Transcript


IDF Spring 1998

Andrew S. Grove
Chairman and CEO, Intel Corporation
San Jose, CA
February 17, 1998

ANDREW S. GROVE: Good morning. Welcome to the Intel Developer Forum. What I hope to do today is to give you a little bit of an overview of the conference starting with a view of the industry dynamics as we see it.

The two major forces that seem to be affecting our industry these days have to do with market growth and market segmentation. We find ourselves in a very fortunate situation: notwithstanding economic cycles and events taking place around the world, it appears that the worldwide PC industry continues its healthy growth. And through this growth, which has taken place over the last 15 years and is forecasted to go on for the next several, the PC has become one of the most common devices of modern life.

In the last several years, the biggest growth driver in our industry has been connectivity -- connected computing, also known as the Internet. This will continue to be the force that drives applications, both in business and in personal use, over the next several years.

An interesting statistic to look at is the worldwide census of connected computers. This includes both business and home computers connected in all different ways to the Internet or to corporate networks. Today's number -- the 1998 number -- is estimated to be someplace in the range of 150 to 200 million computers. The compounded annual growth rate of the population of connected computers continues to be in the vicinity of 35 percent a year. And it is a fairly safe bet to extrapolate that, at this growth rate, this number is going to grow to be a very, very substantial number.

In fact, I have a favorite way of looking at things in general, but particularly industry dynamics specifically. The best way to look at an industry is to describe its characteristics on a fortune cookie. And the fortune cookie that I like to look at today is that when you open it up it says we are heading toward a world of a billion -- a billion connected computers. That is a very substantial portion of the world's population, and it is a large enough number to shape everything that it touches.
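As a quick check of that extrapolation (my arithmetic, not a figure from the talk): compounding the 1998 installed base at the cited 35 percent annual rate reaches a billion in roughly six years.

```python
# Rough extrapolation of the connected-computer installed base using
# the figures cited above. The starting value is an assumed midpoint
# of the quoted 150-200 million range.
import math

installed_base = 175e6   # assumed midpoint of the 1998 estimate
growth_rate = 0.35       # ~35 percent compounded annual growth
target = 1e9

years = math.log(target / installed_base) / math.log(1 + growth_rate)
print(f"About {years:.1f} years to one billion, "
      f"i.e. around {1998 + math.ceil(years)}")
```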

The impact of this phenomenon -- the impact of the spread of connected computers, the impact of the applications that run on these connected computers -- is that the Intel Architecture is entering more and more segments of computing.

When you go back and take a look on a historical basis -- go up to the 50,000-foot elevation and take a look at what has happened in the history of personal computing -- it has really been a continued adaptation as personal computers have penetrated more and more segments. As we all remember, the personal computer was originally used as a personal desktop business tool. Subsequently, people took it home and started using it for the same type of tasks at home. Then the tasks proliferated, and this gave rise to the birth of the multimedia consumer personal computer.

The phenomenon that has happened here -- first the computer is used in a new way, and second, the computer is adapted to that new way -- has been the pattern, and continues to be the pattern, of the evolution of personal computing.

The next step in this evolution was the emergence of PCs turned on their side and used as servers. This, again following the same adaptation that I indicated in the case of multimedia computers, led to the birth of standard high-volume servers that brought PC economics to server hardware.

The current trends of use project the penetration of Intel Architecture-based computers into higher -- higher and even the highest segments of computing. And at the same time that the Intel Architecture is moving up into those segments, it is also moving downward through the emergence of what we call the phenomenon of basic computing.

And again, as you will see, the theme of this Developer Forum will basically be the adaptation of the personal computing platform to these reaches of the computing spectrum.

To put this in an overview -- the history of personal computing in one chart, if you wish, including what we just talked about -- the chart shows the growing segmentation in its history: starting from the original business desktop; going on to the continued presence of business desktops and the emergence of mobile computing, consumer computing, and enterprise computing; and arriving at the segments that we describe today. These range from high-end enterprise servers and standard high-volume servers on the server end; to engineering workstations, performance desktops, and the basic and lean client computers on the desktop; mobile, which continues its presence; and finally the consumer end, with the bifurcation of the consumer phenomenon into performance, multimedia, and gaming computers as well as basic consumer computing.

In the course of this week, you will have courses and tracks on design practices and principles for each of these segments: how to design for mobile, how to design servers and workstations, how to design for basic computing, for lean clients, and for the performance desktop -- all of them under the principle of technical adaptation to the different use patterns.

But as you step back and take a look at this, the two major departures of personal computing, in simplest form, are the computing phenomenon and its underlying architectural building blocks going high and going low, and correspondingly, the Intel product portfolio is following these trends. And the further principle of all of this -- the underlying factor of all of this -- is the use of the latest P6 microarchitecture as the foundation for a product line from top to bottom.

But even though the microarchitecture is the same, spanning the entire product spectrum, the products will be specifically designed for the application needs of each of the segments of computing that I described. That is the theme for this Developer Forum, and everything that I'm describing today is going to be described in real terms, tangible terms, and in detail by Albert Yu tomorrow when he presents our microprocessor roadmap.

So let's start at the low extreme by describing the phenomenon of basic computing. The principle here is what we call smart integration -- selective integration, integration that puts onto the chip containing the microprocessor those chipset and motherboard functions that make economic sense at a given level of technology, a given level of integration.

Accompanying this will be lower-cost designs, not just of the silicon chip but of the packaging as well. And the combination of all of these has become a very major thrust for Intel in the course of the last year: this basic segment is now serviced and supplied by the work of over 600 engineers, as compared to none a year ago.

The first product in the basic computing product line that we will be describing is code-named Covington. It is a P6 architecture-based product. It is a cost-effective foundation for basic computers and provides excellent performance for multimedia applications.

Just to give you a glimpse of the Covington processor: by comparison to the familiar Pentium® II package, which looks like this, the Covington product comes on the same slot 1 connection scheme, but in a substantially less expensive and less complex packaging form.

I would like to demonstrate the capabilities of this product and call Annie Lung to give us a tour. Annie.

ANNIE LUNG: Good morning, Andy.

ANDREW S. GROVE: Good morning. Show us the product.

ANNIE LUNG: Today, I'd like to show you a couple of examples of multimedia applications that are available for the consumer desktop PC, and we'll be running them on this basic PC platform. Let's start by showing you this game first.

So as you can see, this is a pretty high resolution gaming environment with a lot of special effects created with the intent to make the environment most realistic and interactive for the end user.

If you look at the explosions, they're alpha blended, and there are other special effects, with special lighting and other techniques, to make this a most interactive and fun experience for the consumer desktop.

ANDREW S. GROVE: Do we have any other non-shooting demos?

ANNIE LUNG: Sure. Let me get out of this.

(Laughter.)

ANNIE LUNG: We can also look at other types of applications that a consumer might want to use for reference or education purposes. This title is a medical library that is targeted at consumers, so that they can learn about different parts of the human body, the respiratory system, and so forth. It includes areas for education, with video and animation explaining different topics to the consumer. For example, if I'm interested in the skeletal system, there are links to different pages on the CD-ROM, and also a video and animation explaining this topic.

(Video Playing): You'll have an average of 206 bones. Your skull bones begin as 26 separate bones. As a baby --

ANNIE LUNG: So besides the video and animation, the user can also have an interactive session to find out more details about parts of the body that they're interested in learning about. For example, I have the human skeleton here, and if I'm more interested in finding out about the skull area, or particular sections of it, I can look at it in more detail in three dimensions and rotate it to the part I'm interested in. For example, I want to look at the teeth section. I can smoothly and quickly, with the high-performance capability of the basic PC platform, look at each area and get more information about the areas that I'm interested in.

ANDREW S. GROVE: A very interesting aside here. I've been looking at various versions of BodyWorks* for the last several years, and through more and more powerful platforms, more and more powerful processors, BodyWorks got revised and modernized and took on more and more multimedia data types, more and more capabilities like that. It's a perfect example of the software spiral that drives the dynamics of this industry, where software evolves to take advantage of the hardware capabilities, hardware rises to the occasion, and the cycle repeats. BodyWorks is a very good example of this. Thank you very much, Annie.

ANNIE LUNG: Thanks.

ANDREW S. GROVE: This, incidentally, was the first demonstration of Covington outside of Intel.

Now, the processor alone, however, is not enough. We must have motherboards and chipsets going with it to take advantage of it and deliver it in a cost-effective fashion. And one of the things that we are delivering for the Covington processor is a microATX motherboard that contains the entire innards of the computer on this small form factor motherboard.

What you will see in the course of the next several days, particularly in Pat Gelsinger's talk on Thursday, is a description of our plans in the chipset and motherboard area as we service the basic computing segment of the spectrum.

We also will be introducing this processor under a different brand name, in order to be able to highlight the differences between the different parts of the product line -- all having the same basic microarchitecture, but each designed for its segment.

So our brand name -- not particularly significant for a technical audience, but nevertheless as background -- our brand evolution going forward will include an as yet undetermined brand name, I'll call it the XYZ processor, that will incorporate the product line that we design for the basic PC segment of the market.

Going on -- looking at the opposite side of the spectrum, going high -- I want to talk about mobile computing, workstations, and servers in the higher-performance portions of the product spectrum. And the most important thing that we are going to be encountering here is that Pentium II processors will enter the mobile computing space in the course of the year.

Again, this required design modifications and packaging modifications, and the same Pentium II processor that you have been accustomed to seeing in the slot 1 form factor will come in a mobile form factor, maintaining the slot 1 electrical characteristics in a pin structure. Through the combination of design modifications and packaging technology, when you look at the comparison between the single edge contact cartridge (SECC) form factor Pentium II and the mobile Pentium II, you notice that at the same frequency we deliver the same performance level at half the power of the SECC Pentium II, and at something like one-fifth to one-sixth of its weight and volumetric size. And interestingly enough, as a tribute to the technological work implicit in both the chip technology and the packaging, this power dissipation of 8.6 watts compares with a Pentium processor with MMX™ technology at the same frequency operating at nine watts. So we are achieving this at the same or even slightly lower power dissipation than the previous generation.

Again, you will hear more about this in Albert's presentation.

Extending the performance upward first involves workstations. And here, to demonstrate the performance capabilities of the Pentium II processor at higher frequencies, I would like to ask Ramesh Subramonian from our microprocessor research lab to give us a demonstration involving data mining. Ramesh. Good morning.

RAMESH SUBRAMONIAN: Hi, Andy.

What I've got here today is a data mining application that we prototyped in Intel's research lab. Data mining is an emerging application area that allows you to find patterns hidden in massive databases. So instead of having your data sit around collecting electronic dust, you can make it work for you.

In this particular case, data mining poses a peculiar challenge to the platform because it's both data intensive and compute intensive. We handle the data-intensive side with some neat algorithms. When it comes to the compute side, what we're doing here is using a dual Pentium II machine, and it's a multithreaded application that will scale well beyond that.

Now, we have many tools in this tool kit, but the one I'm going to show you today is the concept of "diff" for databases. The concept is: you've collected data for different periods of time or different geographic regions, and what you want to find out is fundamentally what has changed. You want to detect trends.

In this case I'm going to use a census data set. It's got about 50,000 records and I'm going to ask the system to tell us what's different between males and females.

So you notice the moment I ask it to go ahead and rank, the performance meter cranks up, because it's going to use both CPUs to the max, and it's going to keep reporting results as it figures them out.

So we can go ahead and look at things even as the system is chugging away. Let's say we ask it to tell us the difference between men and women in terms of the number of years they've gone to school. And you notice there really isn't much difference. If we take a normalized view of this, it's extremely flat. However, if you look at things in conjunction, things spring out in multidimensional space which just aren't evident if you don't spend the computer resources to figure them out.

So what we have over here is: on the right, the women; on the left, the men. The axes here are age on this one and the number of years of schooling on this one. And a neat historical trend springs out. Notice this rather sharp fall-off over here. What it tells you is that women around 40 went to school significantly less than women around 20, whereas you don't see such a sharp fall-off among the men. I mean, it's relatively flat.
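A minimal sketch of the "diff for databases" idea being demonstrated here -- not Intel's prototype, which was never published. It assumes a census-like table with sex, age, and years-of-schooling columns (the column names and input file are hypothetical): normalize the joint distribution of each subpopulation and subtract, and the multidimensional differences spring out even when the one-dimensional views look flat.

```python
# Sketch of a "diff for databases": compare two subpopulations by
# their normalized joint distributions. Column names and the input
# file are hypothetical.
import pandas as pd

df = pd.read_csv("census.csv")   # ~50,000 records, as in the demo

def normalized_joint(group, rows="age", cols="education_years"):
    """2-D histogram over rows x cols, normalized so cells sum to 1."""
    table = pd.crosstab(group[rows], group[cols])
    return table / table.values.sum()

men = normalized_joint(df[df["sex"] == "Male"])
women = normalized_joint(df[df["sex"] == "Female"])

# The cells where the two distributions differ most are the "diff" --
# e.g., the schooling fall-off among women around 40.
diff = (women - men).fillna(0.0)
print(diff.abs().stack().nlargest(10))
```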

So these are -- you can imagine using this tool for other purposes: if you want to ask what's different between last quarter and this quarter, or what's different between California and Iowa.

I'm not sure we could tell you that. Let me show you another tool that we have, on a different data set.

The idea is, we're now going to switch gears. We're going to pick a different data set and a different tool, and I'm going to look at a bunch of cars now.

So instead of saying things like, you know, "the correlation between X and Y is .236," whatever that means, it's much more useful to put up a picture and show you -- to capture a much more complex phenomenon, but in a very simple way. What we have here is, I asked it to tell me the relationship between miles per gallon and engine size. This corresponds to what you'd expect: we find cars that have huge engines give you poor mileage, and as you come down this shape, you find cars that give you much better mileage with much smaller engines.

But it's not enough to just put up a picture. You want to play with this. You want to incorporate your intuition into this process. So that's where we go in and say let's interact with this, and what we have on the right is the same picture, and on the left is an aerial view of it.

Now, as a business user, what you want to say is: let's say I'm interested in the mainstream. And in that case you're going to set a threshold -- threshold your interest. You're going to say, show me the things that are more likely to happen.

So rather than writing a complex SQL query that says tell me what's between 40 and 50 here, you're telling the system: tell me what's of interest.

You can run this -- this is now a complex query. You can run that against the database. It will figure out what corresponds to it and it reports the results in a familiar spreadsheet format.

ANDREW S. GROVE: This eliminates the outlier, basically.

RAMESH SUBRAMONIAN: Yes. So in fact, it's a filter that always sits there, so every time a new data set comes in, it will figure out who the outliers are and it will figure out who you want to know about and let the others go through.

So this is figuring out the mainstream. One can say, well, I'm really more interested in, you know, the one sheep that strayed. You can say, well, we've got an outlier here. In fact, this puzzled us when we first saw it, but we can go ahead and ask the system to tell us who this is. When we did that, we found our culprit. The giveaway was in the name: we found that it was a diesel car. This is the kind of car that's going to have both a big engine and great mileage.

So to kind of conclude: we see this as an emerging application area which requires two things to be successful, to be democratized. One, it needs tremendous power to do the searching, and two, it needs great graphics abilities to show the results in an easy way.

ANDREW S. GROVE: Thank you very much, Ramesh.

RAMESH SUBRAMONIAN: Thank you, Andy.

ANDREW S. GROVE: As Ramesh said, one of the key obstacles that stood in the way of developing high-power workstation capabilities on Intel Architecture-based computers had to do with graphics performance. And the graphics solution that provided the breakthrough past the limitations of previous approaches to personal computer design was the introduction of the Accelerated Graphics Port, or AGP.

You see that even the first implementation of AGP delivered a doubling of graphics bandwidth relative to PCI. We are seeing expansions of the AGP technology to 2X AGP, and you will hear descriptions of 4X AGP, resulting in one gigabyte per second of graphics bandwidth -- eight times the peak bandwidth of PCI. Again, more details on 4X AGP in Albert Yu's presentation.
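The back-of-envelope arithmetic behind those multiples, using peak theoretical rates (my own check, not figures from the talk):

```python
# Peak theoretical bandwidths: PCI is a 32-bit bus at 33 MHz with one
# transfer per clock; AGP is 32 bits at 66 MHz with 1, 2, or 4
# transfers per clock for 1X, 2X, and 4X.
MHZ = 1_000_000
width_bytes = 4   # 32-bit data path for both PCI and AGP

rates = {
    "PCI":    33 * MHZ * width_bytes * 1,   # ~133 MB/s
    "AGP 1X": 66 * MHZ * width_bytes * 1,   # ~266 MB/s, 2x PCI
    "AGP 2X": 66 * MHZ * width_bytes * 2,   # ~533 MB/s
    "AGP 4X": 66 * MHZ * width_bytes * 4,   # ~1 GB/s, 8x PCI
}
for name, rate in rates.items():
    print(f"{name:7s} {rate / MHZ:5.0f} MB/s  ({rate / rates['PCI']:.0f}x PCI)")
```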

This year we're also going to pay attention to the internal bandwidth of our platforms. And in particular, the device that was used in Ramesh's demonstration is the Pentium II processor with the slot 2 architecture.

To give you a very brief overview of slot 2: it provides a 100 MHz system bus and, through the use of a full-speed cache bus accessing the level two cache, it enables higher-frequency processors, enables the cache bus to be completely synchronous with the processor speed, allows the scaling of cache speeds, and allows one- to N-way multiprocessing capability as a result of the slot.

The package itself is somewhat larger than the familiar Pentium II package. This is our slot 2 package. And you will have, in the course of the week, courses on designing with slot 2 and courses on creating multiprocessor architectures -- multiprocessor designs -- with slot 2.

The combination of the higher-frequency system bus and the slot 2 architecture results in a scaling and extension of the total system bandwidth: from the slot 1 implementation of the Pentium II processor's dual independent bus architecture to the Pentium II processor operating at 300 MHz -- continuing with the dual independent bus architecture, now in the slot 2 implementation -- more than doubling the system bandwidth as a result of these two parameters.
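A sketch of the arithmetic behind that claim, under assumed frequencies (the exact parts being compared aren't specified in the talk): the front-side bus goes from 66 to 100 MHz, and the backside cache bus goes from half core speed to full core speed.

```python
# Aggregate bandwidth of the dual independent buses: a 64-bit
# front-side bus plus a 64-bit backside cache bus. The frequencies
# below are illustrative assumptions, not figures from the talk.
MHZ = 1_000_000
bus_bytes = 8   # both buses are 64 bits wide

def dib_bandwidth(fsb_mhz, core_mhz, cache_ratio):
    return (fsb_mhz + core_mhz * cache_ratio) * MHZ * bus_bytes

slot1 = dib_bandwidth(66, 300, 0.5)    # slot 1: half-speed L2 cache bus
slot2 = dib_bandwidth(100, 400, 1.0)   # slot 2: full-speed L2 cache bus
print(f"slot 1 ~{slot1 / MHZ:.0f} MB/s, slot 2 ~{slot2 / MHZ:.0f} MB/s, "
      f"{slot2 / slot1:.1f}x")
```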

Now, what that allows us to do is deliver fantastic graphical performance in workstation applications, and I would like to ask Brad Peebler from NewTek, Inc. to come up here and give us a glimpse of what can be achieved with Intel-based workstations of this ilk. Brad, hi.

BRAD PEEBLER: Snuck up on you. We're going to show you LightWave 3D*. Before we get too much into the software, I want to show some clips so everyone can understand what we're talking about when we say 3-D animation. So if we can run that footage.

(Video running.)

BRAD PEEBLER: I love that stuff. Everybody loves that stuff. But it takes a long time to do, because there are three major phases you have to go through. There's the model building, which takes time to visualize your wire frames; there's texturing and animating those projects; and then there's the actual final rendering stage, which also takes time, especially now in the age of photo realism, where everything is trying to mimic reality. Here we have some shots -- you can see them on screen -- that are totally generated on the computer, completely rendered in 3-D.

And achieving that kind of photo realism involves a lot of ray tracing and photo mapping. Here we have a shot that uses a combination of real video and 3-D animation. And what we're going to do now is take a look on screen at how the slot 2 architecture helps us in those processes.

What we have here is the same exact shot you just saw, but this is pre-rendered, OK? This is actually on screen. We're looking at this in OpenGL* using a Dynamic Pictures 402* card. And one of the advantages with this card is that it uses a thing called Power Threads*, which allows it to take advantage of the threaded environment. OK.

Now, as I mentioned, slot 2 gives us a lot of advantages here, the first one being the 100 MHz bus. As I mentioned, this is a convergence of media between 3-D and 2-D. So if we just go to an interim frame -- that has pulled up a frame of video and composited the 3-D into the background. So you can see these jets; to show that they are 3-D, I'll zoom my camera in, and you can see how I can change that composite in realtime. If I want to change this, I can rotate around in realtime and change that.

So that enhances the ability of the effects creator to see that in realtime and do more shots and do it more quickly.

And of course the other thing about this architecture is the extended cache. We do a lot of things up front when we render a frame, such as calculating shadow maps and filtering images, and all of that can be done more quickly if we can store it in cache. And the final thing the new architecture allows us to do is go to a higher CPU speed, so the final renderings go more quickly.

Let's go ahead and take a look at a frame here. As I mentioned, this is in OpenGL, so your on-screen display happens in near realtime. But when you go to render, adding things like motion blur can take time. I can come over to my modeling application and continue to work on something in the background, using the threaded environment. Let's go ahead -- and we can see, in fact, the frame is finished back here.

ANDREW S. GROVE: How long does it take to render a frame?

BRAD PEEBLER: In this case it took 12 seconds. On an older machine, it might take up to a minute per frame. We rendered this actual sequence in our office yesterday, and it took a good portion of the day. We could have cut our render time by about four times, significantly reducing my stress load in preparing this.

So I mentioned the speed increases for the main stages, the pre-vis and the rendering. And then, for building your objects, allowing an animator to go in, see things in realtime, and move things around visually lets more people get into 3-D animation, so we're not just requiring technical people who can look at these wire frames and determine what that's going to look like in the final shot. They can come in here and actually work as if they're modeling with clay.

So if I want to come in here and just -- let's just go ahead and pinch this out, like I was squeezing it with my thumb there, and then we're going to pull this up a little bit, which again is like we're pulling up the clay, and pinch it back in, and you see it becomes very organic. And that's what we're trying to do: give people an environment they can work in where they feel like they're working with a natural medium instead of working on screen with wire frames.

We're going to make a character here. Character animation is probably one of the most difficult things to do and is among the more costly of the processes. And you can see that I'm very rapidly able to prototype out my little bunny rabbit here. I'm going to, as if I'm poking my fingers into the clay, push in some eye sockets. And if I want to go in here and give him a cute bunny nose, all I do is apply this little pug nose there.

So again, if I want to make this into -- say, for example, turn him into a coyote -- we can stretch these out a little bit, and there's your Wile E. Coyote. So the modeling process becomes more organic. We save time in the pre-vis, where we're setting up and animating our shot and seeing it in realtime, and putting out a full realtime image in 12 minutes is going to be a benefit.

ANDREW S. GROVE: I think this should allow us to do a sequel to "Titanic" in no time.

BRAD PEEBLER: The interesting thing about that is those special effects -- again, we talked about photo realism -- accomplish things where, when you see them on screen, you don't realize you're watching a special effect, other than you know that boat doesn't exist. But those things take a lot of time. So we'll see more and more, and better and better, in the future.

ANDREW S. GROVE: Thank you very much, Brad.

This gets us into the last bit of the presentation, which is servers. And the whole idea of standard high volume servers was to bring PC economics to servers. Volume economics, standard parts, multiple manufacturers.

Servers are a very major phenomenon. In terms of unit growth, they are forecasted to grow substantially faster than the PC market as a whole. And when you really think about the world of a billion connected computers, they will require tens of millions of servers to run them and fuel them.

Because of the PC economics that I mentioned, we are fortunate enough to see a fairly large and gradually growing share of the server market segment going to the Intel Architecture, and we continue to press the cost-performance characteristics of Intel Architecture-based servers by lowering the cost and improving the performance, leading to recent records -- as of last Monday, $36 per tpmC, arrived at using a Unisys* six-way Aquanta* server.

The technical evolution that lies behind these numbers is very similar in principle to what we described before. First, an ordinary personal computer was, so to say, repurposed for server use -- stood on its side. It occasionally used multiple processors, the processors being Pentium processors then, and used a PCI bus.

The current SHV server, which is mostly what is in use today, operates with one to eight Pentium Pro processors, up to one megabyte of level two cache, and up to 64 gigabytes of memory, and incorporates for the first time system management.

Next, what you will see -- as a result of the use of these servers in higher and higher echelons of applications -- is the adaptation of the standard high-volume server for enterprise applications, and correspondingly the evolution of the server platform: one- to N-way Deschutes processors (that's the processor with the slot 2 architecture), full-speed level two cache as a result of the slot 2 architecture, multiple 32- and 64-bit PCI buses, the introduction of intelligent I/O and server management, and lastly, system area network virtual interface, or SAN VI, clustering architecture. We'll hear more about that in a minute.

Here, too, we will differentiate the branding of the product servicing this market segment by calling it, with a sub-brand, the Pentium II ABC processor -- ABC to be determined, hopefully before the introduction of the product.

(Laughter.)

ANDREW S. GROVE: Sometimes this is the limiting step.

So you will see the segmentation of our brand road map into the Pentium II road map itself, the XYZ brand for basic computing at the bottom, and the Pentium II ABC processor for server applications.

The important ingredient that is required for the scalability of this architecture -- which is necessary, in turn, to get it into the highest-end applications -- has to do with clustering. And SAN VI, which I mentioned a moment ago, is an industry standard that defines low-latency messaging between cluster nodes and enables applications to write directly to the networking hardware without going through system bottlenecks, and consequently allows the scaling of transaction processing and database applications. I would like to call Jerry Peterson, a senior vice president of Tandem Computers*, to come up here and give us a SAN VI demonstration.
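Conceptually, what VI changes is the send path: the application posts work descriptors into queues it shares directly with the NIC, instead of trapping into the kernel on every message. A toy sketch of that idea, in Python for illustration only (the real VI Architecture is a C API driving hardware doorbells, and this class and its methods are hypothetical):

```python
# Illustrative-only model of a VI-style user-level send queue. No
# system call per message: the application enqueues a descriptor in
# user memory; on real hardware it would then ring a memory-mapped
# doorbell that the NIC polls.
from collections import deque

class VirtualInterface:
    """One connection's private send/receive work queues."""
    def __init__(self):
        self.send_queue = deque()
        self.recv_queue = deque()

    def post_send(self, buf: bytes):
        # Enqueue a descriptor; nothing here crosses into the kernel.
        self.send_queue.append({"addr": id(buf), "length": len(buf)})

vi = VirtualInterface()
vi.post_send(b"partial query result for node 3")
print(len(vi.send_queue), "descriptor(s) queued for the NIC to consume")
```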

JERRY PETERSON: Good to see you again.

ANDREW S. GROVE: Good to see you, too. And by way of recollection: the first time you and I met, I was on this stage at a different meeting and you were back in the shop; we were conversing over video conferencing while you were demonstrating a clustered application of Pentium Pro processors on a system that was called Tuton* -- which probably literally weighed two tons, and consequently was not very feasible to bring to the convention center here.

Things have progressed. That was about a year ago?

JERRY PETERSON: Yeah, about nine months ago. Now I've brought you a quarter ton, and we brought the whole system right here. So what we're going to show today is a demonstration -- really one of the first major demonstrations -- of the VI architecture.

And this system has been put together by a combination of folks from your server group, from Compaq's server group, from our Tandem ServerNet division, and also the IBM* software group with their DB2* database.

We've all worked together to build this system and show I think a pretty impressive cluster demo.

ANDREW S. GROVE: Let's see.

JERRY PETERSON: What I'd like to do, Andy, is actually get this demo underway because it takes a few minutes to run, and then we can talk about the hardware a little bit.

What we have here is a complex database sitting on this server cluster. Actually, four versions of it are sitting on this server cluster. We're going to run four different queries. It's the same -- same query will run four different times, but on four different configurations.

First we're going to run it on a single node of the server, and I'll get that started up. Then we're going to run it on two nodes; in other words, the same query will run but now the database will be partitioned across two nodes and using VI and servernet, the nodes will communicate with each other to split the load.

There you can see the first node -- two processors in it, two Pentium Pros -- working away on the initial query. I'll fire up the second query, and now you'll see four processors, four additional processors, kick in. And we'll take the last three nodes in the cluster and run a third instance of, again, the same query on the same database, but now the database is split across three nodes.

Now, the whole idea here, of course, is that if VI works as advertised on your slides and if servernet works, etc., so there's very low overhead messaging between all these servers, then these results should be very linear. In other words, three server nodes should be able to do this query three times as fast as a single server node.

ANDREW S. GROVE: What is in that big box? What is the hardware we have?

JERRY PETERSON: Well, let's go over here and take a look.

What we've got in here are, as I said, six nodes. These are Proliant 6500s* from Compaq, latest and greatest with all the Hot Plug* technology. We've got -- these are big machines. They've each got 256 megabytes of memory. They've got a megabyte of L2 cache. And as I said before, the combination of all the disk storage in here adds up to about a quarter terabyte of storage.

Hooking them together, up here in the top of the cabinets you can see the servernet switches. There are two switches here that interconnect. They're six port switches. They interconnect the six nodes of the demo. And they have an aggregate data rate of 4.8 gigabits per second. And that's important because, again, when you break this database across multiple nodes and you start doing sub-queries, you have to aggregate the results and move a lot of data between the nodes. So having low latency and low overhead which is what VI gives you and then having very high performance interconnect technology with servernet means you get rid of all the overhead. And so six nodes can act like one node. It's really the whole -- the whole idea.

Let's go back --

ANDREW S. GROVE: The question is do they?

JERRY PETERSON: Well, let's go back and see how our queries are doing.

We can see the queries with three nodes and two nodes have already finished and the single node query is still cranking away here. And it's probably going to take -- if the three nodes took just a bit over a minute, one minute and five seconds, then I suspect this is going to take about three minutes, if our linearity and scalability works as it should.

ANDREW S. GROVE: You mentioned you are running DB2 here.

JERRY PETERSON: Yes. DB2 is part of the equation of how to make all this hardware work together, because the VI architecture is very important as a foundation for making it very easy for one server to move data quickly and with low overhead into another server's memory, and vice versa.

The server interconnect is important in terms of high data rate, but you have to have software that uses it and calls it. And the IBM software group -- their database software group -- has done some real leading-edge work here, again with the VI folks, to take their new version of DB2, which they call the extended enterprise version for Windows NT*, and port it to VI. So this is one of the first demonstrations of a real mainframe-class database running over VI on Pentium Pro servers, with standard Windows NT Server 4.0.

So now we can see our single node query finished, and it ran in three minutes and three seconds versus our one minute and five seconds for the three nodes.

Now that all the processors are freed up, let's try it with all six nodes, and see if the scalability still holds true. So we'll fire off our --

ANDREW S. GROVE: It should be around 30 seconds?

JERRY PETERSON: Should be around 30 seconds. You can see all the processors firing up here.

Again, the whole concept here is how to take high-volume components -- off-the-shelf components, in this case Pentium Pros housed in Proliant* servers -- and off-the-shelf software like NT 4.0, and get it all to work together to deliver mainframe-class performance in clusters. And as you pointed out in your slides, the VI architecture effort, which Intel and Compaq and Microsoft* got underway and which the whole industry is now participating in, is the foundation for making that happen. So software developers like IBM with DB2 can write their code, port it to VI, and make it work.

Well, look at that. We got 34 seconds for all six nodes in the cluster and thankfully all of the queries add up to exactly the same number. So I think we have proved our point here that scalability can really be done right with VI architecture and with an extremely fast server interconnect.
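Reading the three timings off the demo, the speedup and parallel efficiency work out like this (a quick check of the linearity claim, nothing more):

```python
# Query times as reported in the demo: 3:03 on one node, 1:05 on
# three nodes, and 34 seconds on six nodes.
runs = {1: 183, 3: 65, 6: 34}   # nodes -> seconds
t1 = runs[1]
for nodes, t in runs.items():
    speedup = t1 / t
    print(f"{nodes} node(s): {t:3d} s  speedup {speedup:.2f}x  "
          f"efficiency {speedup / nodes:.0%}")
# -> roughly 2.8x on three nodes and 5.4x on six: close to linear.
```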

ANDREW S. GROVE: What would happen if we brought you even faster processors?

JERRY PETERSON: Andy, that would be terrific. I expect a year from now, 18 months from now -- you'll have to tell me -- we'll stand on this stage and do this same demo with Merced™ processors and ServerNet 2*, which will be two and a half times this speed, and those results are going to be mind-boggling.

ANDREW S. GROVE: Well, thank you, Jerry.

JERRY PETERSON: Thank you, Andy. Always a pleasure.

ANDREW S. GROVE: And with that, let's talk a little bit about the next step in the migration to the enterprise, which has to do with, as Jerry indicated, the migration of server and workstation applications to IA-64. Again, you're going to hear a whole lot more about the IA-64 architecture from Albert Yu.

Our intent is to carry forward two parallel paths: the IA-64 path, which we'll be introducing sometime in '99 for workstation and server applications -- for applications like the one Jerry Peterson demonstrated here, which soak up all the processing power we can throw at them -- while at the same time we continue to carry forward IA-32 for mass applications: for desktop applications, for basic computing applications, for mobile applications, indefinitely.

The question that I would like to ponder for the last minute is: what does all this display of technology and segments mean to developers whose lives are dedicated to developing products based on Intel Architecture microprocessors?

Again, looking back historically: when the PC was a single device, differentiation by means of different chipsets, different motherboards, and different implementations was an economically viable strategy. With the growing segmentation of the Intel Architecture-based computing industry, it is increasingly difficult to differentiate in the same fashion and remain competitive, given the number of segments in which this differentiation would have to be practiced.

The result, I think, is going to be an increased use of building block technologies -- building block chipsets, motherboards, graphics solutions, and the like -- which, of course, is what Intel, in terms of our corporate mission, is dedicated to supplying.

Our corporate mission says that we intend to be the preeminent building block supplier to the computing industry, and I might add, to all segments of the computing industry, and our hope is that we will be able to work with you as you proceed to adapt and design for all segments of the computing industry as well.

Thank you very much. Enjoy the conference.

(Applause.)

(9:54 a.m.)

* Other names and brands may be claimed as the property of others.