Keynote Transcript


Intel Developer Forum, Fall 2000

Craig Barrett
San Jose, Calif., USA
August 22, 2000

CRAIG BARRETT:

I have a very simple overview message to give to you. It's really to give to all of us here. And that is we're all combined in an effort to provide solutions to our customers. None of us does that by ourselves. We really all contribute and participate to provide modular building blocks to the Internet infrastructure solution. Our customers are only interested in solutions, typically. Therefore, they're interested in us working together.

So I've tried to choose a theme for today's presentation which was indicative of that. And the theme I came up with is something you ought to all be familiar with, which is e pluribus unum -- really, out of many, one. Or, as far as the Internet is concerned, out of the hard work of many of us individually contributing pieces of the solution comes one solution that the end customer is interested in.

So if you take something away from my comments today and use it, use it as a framework to interpret the other keynote speakers and all the sessions that you attend here: it's really that we're all providing modular building blocks, and those modular building blocks have to work together. And it's only when we cooperate and work together, with industry-accepted interfaces and building blocks so customers can easily craft a solution, that we're successful.

What I would like to do, then, in that context is spend a few minutes just talking about some of the Internet trends, and talk a little bit about e-Business front-to-back. And what we'll try to do is demonstrate, with a common architecture and these common modular building blocks, what you can do in the way of creating a total e-Business solution, and give you a simple demonstration of that.

Talk a bit about peer-to-peer computing as that's a hot topic in the press and it's a hot topic mostly from the music industry standpoint. But we think there are huge ramifications in the business arena in terms of peer-to-peer computing. I'll make a few comments about that, and Pat Gelsinger will come back on Thursday and talk about that in detail.

And then we'll talk about developing solutions for the Internet.

I want to just give two very simple examples of trends I think you're generally familiar with in terms of the growth of the Internet and its importance. If you look at just business-to-business commerce, this is the latest forecast of what's going to happen over the next several years. We all expect exponential growth, with just the business-to-business aspect reaching some $7 trillion by the year 2004. Perhaps the only interesting thing about these forecasts is they're always conservative and we revise them upward as we move forward.

Essentially every business is deeply involved in this. Companies like Intel, for example, have converted their entire business over to electronic Internet transactions. So we do roughly $30 billion a year over the Internet at our current run rate. So there's exponential growth in the importance of this to the business community. All these businesses are, again, looking for solutions.

I think the other interesting trend that we need to look at is the discussion that's ongoing in the press and in the technical circles about what's happening between wired Internet connections, that is, the so-called PC Internet access devices and mobile or wireless devices.

And I think much of the press has this wrong from the standpoint that they talk about competition between wireless and wired. If you forecast into the future, our vision is for a billion interconnected computers attached to the Internet. The mobile or wireless vision is a billion subscribers who can talk to each other and also have Internet access.

I think we're going to see not competition between wireless and wired connectivity to the Internet, but we're going to see cooperation and the fact that the two will make the Internet even more interesting, more exciting, and grow consumer and business interests in the Internet.

The real challenge here is to take the wired connectivity, with its standards for privacy, connectivity, protocols, security, et cetera, and have that same sort of infrastructure created on the wireless side, and then to have connectivity between the wireless and wired worlds.

I think this is a great challenge not only to our industry but to the wireless industry as we move forward. Clearly, multiple contact points to the Internet are going to make it more interesting, more valuable, more exciting to the end user. And just as we've seen the initial handheld devices come out and be adjuncts to the PC, I think as we go forward, the wireless and wired connectivity will work together as opposed to being in competition with each other for access points to the Internet.

And that concept of standards, interoperability standards between wireless and wired connectivity, I think is very important, especially when you look at today's infrastructure for the Internet as we know it.

That infrastructure is really a horizontal infrastructure, or a modular concept. And I've simplified that very much in this graphic. We can talk about hardware, networking, software and solutions, and in fact, each one of these horizontal layers is a topic of the keynotes and many of the sessions that we'll have at IDF this week.

We can talk about hardware, and hardware is, in fact, servers and server appliances built off a common infrastructure, common architecture with common interfaces. We can talk about networking, and networking is the ability to tie into the Internet or into any network anywhere, anytime, from anyplace. So you want interoperability and modular characteristics there with open interfaces.

We can talk about software and the necessity for software to play. My software has to play with your software, which has to play with her software. And so we have to have this open network with the capability of many different suppliers of software interacting together. The .NET initiative proposed by Microsoft is, I think, a good example of this.

And if we look at the solutions today, solutions are not crafted by any one company. They're crafted from software applications from a variety of vendors running on different hardware from a variety of vendors running on different networks.

So if we look at the Internet, it's really this modular array of building blocks. And our task, our collective task, is to make those building blocks play together simply, effectively, and seamlessly.

So back to my e-Pluribus Unum issue; out of many, increased end customer options. We really have a situation where the Internet is not driven by one company or one individual, but it's driven by the collection of companies. And I think our challenge going forward is to make our efforts more interactive, to speed bringing solutions to the end customer. Our horizontal nature lets all of us innovate, and that's the beauty of it. The complication, or the challenge for us, is, with all that innovation, to let that play together seamlessly as far as the customer is concerned.

Now, the challenge where this comes in is if we look at our individual companies, not just the products that we provide, but if we look at our companies and we say, "We want to become a 100 percent e-Company," what does that mean to us? The challenge is pretty substantial. If you want to become a 100 percent e-Company, you have a chart, or an infrastructure, that looks like this.

The blue egg-shaped symbol is really the computing environment, and off of that computing environment you want interactions with a variety of constituents. You clearly want to have the business-to-business interaction. You want to have that interaction with suppliers, customers, and also indirect suppliers and indirect customers.

A simple example of an indirect customer for a company like Intel is the end purchaser of a PC. That's the indirect customer for many of you, if you sell to computer OEMs, for example.

So we deal not only with computer OEMs as customers but their customers. We want to have connectivity and capability to communicate with them. We want to have the ability to deal with our co-travelers, and we also want to have the ability to deal with our employees.

So the connectivity and communication capability we have is vast, it's varied, and it's built up over a period of time. And this building up over a period of time is one of the real challenges that you have when you try to put an infrastructure in place.

You know, if you were just going to create your company tomorrow and say, "Fine, I need an e-Business infrastructure. Let's build it from scratch. Let's start and do everything absolutely right, buy it all at once," that's one challenge. If you want to do it in a piecemeal basis and have it grow with the company as most of us are forced to do, then it's another challenge.

So the issue of building an infrastructure, building an environment front to back which is scalable, expandable, can communicate with different networks, is a challenge. And this, in the past, has been, I think, probably an unnecessarily complex challenge to many of our end customers where they have to spend thousands of hours and millions of dollars to create this.

But I wanted to give you a simple example this morning to show you what you can do with modular building blocks from a variety of suppliers to build an e-Business or an e-Commerce infrastructure from scratch in a short period of time.

And I don't have quite enough time in my half-hour keynote this morning to do this from scratch, but what I thought I would do is show you a little bit of time-lapse photography on how you can build one of these infrastructures in the space of a couple of days.

So what we want to do now is roll the video to show you how you can build a three-tiered e-Business front-to-back infrastructure on a common architecture, Intel architecture, within the space of 48 hours. Please roll that video.

(Video playing.)

CRAIG BARRETT: Well, I just happened to have the result of that video shown behind me.

(Laughter.)

CRAIG BARRETT: We were going to build it during the presentation but I said it took a little too long.

What, in fact, I have behind me is a three-tiered infrastructure. It's basically got Web servers, application servers, and big database servers. They're all IA. They're from a variety of suppliers.

Behind this array of racks, we have another set of computer racks, basically, which are making this operate like an e-Business center. Basically, we're simulating currently about 5,000 concurrent users addressing our imaginary business that we have here. And so we're flexing the system as we speak.

But I want to give you a little bit more detail, and I want to bring out Vivek Sanji from our demo group, who was in the video, though you probably didn't recognize him. Hi, Vivek.

VIVEK SANJI: Hi, Craig, good morning.

CRAIG BARRETT: Why don't you tell the audience what we have here.

VIVEK SANJI: Sure. Before I get into the details of the architecture and infrastructure we have here, I want to give the audience a flavor of the business problem we're setting out to address here.

Imagine, if you will, that sportzmecca.com is an e-Marketplace for sporting goods and I am a sporting goods retailer, a fellow business. And, really, this e-Marketplace receives three broad categories of users. You know, the first set of users are those who come in and just browse. These are probably 55 to 60 percent of the traffic that comes in. They just browse through different, you know, aspects of the Web site.

The second category is users who come in and do some personalization. So I may look at my past history of purchases, and I may look at what the most popular shoes are. And then the third category is the one we really like, you know, the roughly 15 percent of users who will actually come in and buy the stuff, okay? So they will actually conduct commerce with a back-end server.

And so that's kind of the environment. And this site, as you said, right now is hosting about 5,000 simultaneous users.

Now let me give you a preview of the structure we have in place to address these three varieties of users. To begin with, we have our tier-one Web servers, comprised of a variety of OEM systems. You know, you've got IBM, Compaq, HP, and some Intel products as well providing the first-tier servers. And as you can see, and as you mentioned earlier, this infrastructure has been built over time, and so there's a variety of products in there.

CRAIG BARRETT: As I suspect most data centers have: this sort of heterogeneous supply from various OEMs.

VIVEK SANJI: Exactly, and they all work together seamlessly.

At the back end, we have an array of high-end servers running Microsoft SQL Server as the database, and most notably we've integrated the Itanium™ processor running Microsoft SQL Server, and we're supporting the tremendous transaction load that we're imposing on this structure. And finally, in the middle tier we have the (inaudible) servers running Vignette as the application-tier environment. And this entire center infrastructure is tied into the base of users through this network infrastructure rack, which includes a 6000-series gigabit switch, load balancers, SSL accelerators, and some switching gear.
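
A minimal Python sketch of the three-tier request flow described here: every visitor lands on a tier-one web server, personalization and purchases fall through to the application tier, and purchases reach the back-end database. The host names and routing rules are illustrative assumptions, not the demo's actual configuration.

    import itertools

    # Tier one: heterogeneous web servers from several OEMs (names are made up).
    WEB_TIER = ["web-ibm-1", "web-compaq-2", "web-hp-3", "web-intel-4"]
    # Tier two: application servers (the Vignette-style middle tier).
    APP_TIER = ["app-1", "app-2"]
    # Tier three: the high-end database servers.
    DB_TIER = ["db-itanium-1"]

    _web = itertools.cycle(WEB_TIER)
    _app = itertools.cycle(APP_TIER)

    def handle_request(kind):
        """Return the chain of servers a request of the given kind touches."""
        path = [next(_web)]                  # every visitor hits a web server
        if kind in ("personalize", "buy"):
            path.append(next(_app))          # dynamic pages go to the app tier
        if kind == "buy":
            path.append(DB_TIER[0])          # commerce transactions hit the database
        return " -> ".join(path)

    print(handle_request("browse"))
    print(handle_request("buy"))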

CRAIG BARRETT: So basically what you've got here, as you said, is a Cisco router coming into it, and you have the entire data center right there behind you; right?

VIVEK SANJI: Exactly.

CRAIG BARRETT: In four racks.

VIVEK SANJI: And right now, just for the purposes of the demonstration, only this first rack is engaged online. The other two are on hot standby.

CRAIG BARRETT: So you're going to try to tell me this is a scalable architecture to which we could just add capacity if we needed to.

VIVEK SANJI: Actually, I'm not going to try to but I'll hopefully convince you it is one.

CRAIG BARRETT: So let's say CNBC has discovered sportzmecca.com and you've got a holiday special and your number of users is going to jump from 5,000 to 15,000 instantaneously.

VIVEK SANJI: Absolutely. And before we start simulating that many users, let me just point out to the audience a couple of the graphics we have here. For instance, we've got transaction response time on the top left, we've got throughput below that, you've got hits per second on the top right, and transactions per second overall.

So what I'm going to do is, as you suggested, I'm going to try to increase the number of simultaneous users from 5,000 to the range of 15,000. And really what I did just now was I instructed the LoadRunner software, which is running on that bank of 32-odd servers, to add that additional load. And this is very real-world load. We are simulating users going to Web sites, waiting for a couple of seconds, and then browsing, making purchase transactions, so on and so forth.
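
A rough sketch of that kind of load generation: a weighted mix of browsing, personalization, and purchase sessions with a short think time between steps. The weights follow the roughly 60/25/15 split described earlier; everything else (the function names, the think time) is an illustrative assumption, not LoadRunner's actual interface.

    import random
    import time
    from collections import Counter

    # Approximate user mix from the demo: mostly browsers, some personalization,
    # roughly 15 percent actual buyers.
    USER_MIX = [("browse", 0.60), ("personalize", 0.25), ("buy", 0.15)]

    def pick_action():
        """Choose one action according to the weighted mix."""
        r = random.random()
        cumulative = 0.0
        for action, weight in USER_MIX:
            cumulative += weight
            if r < cumulative:
                return action
        return USER_MIX[-1][0]

    def simulate_visit(think_time=2.0):
        """One simulated visitor: wait a couple of seconds, then act."""
        time.sleep(think_time)
        return pick_action()

    # Summarize 1,000 simulated visitors (skipping the sleep for speed).
    print(Counter(pick_action() for _ in range(1000)))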

As you will notice, the most notable thing here is the user experience. And you can already see there is an upward trend in the response time. And so our users are now beginning to feel this additional load. You know, they are having to wait. We may actually turn away some users because we can't handle them.

CRAIG BARRETT: Then we should add more capacity, I think; right?

VIVEK SANJI: Yes. And even though you played the Mission Impossible theme, I think this is mission possible. In fact, we've made it so easy that even you can do it.

(Laughter.)

VIVEK SANJI: So --

CRAIG BARRETT: How long have you worked at Intel?

(Laughter.)

VIVEK SANJI: The question is how much longer I will continue to do so.

(Laughter.)

CRAIG BARRETT: If a CEO can add capacity in front of 5,000 people, anybody can do it.

VIVEK SANJI: Do you want to do the honors and plug those two sockets in?

CRAIG BARRETT: Let's see. We take these two sockets.

VIVEK SANJI: Yeah.

CRAIG BARRETT: Plug them in. Like that?

VIVEK SANJI: Like that.

CRAIG BARRETT: Like that.

VIVEK SANJI: You've got it.

CRAIG BARRETT: Now, you tell me I've just added this rack and this rack; right?

VIVEK SANJI: Yes. Basically what you accomplished there was you added these two racks, and the network infrastructure has reconfigured to adapt to this additional capacity and started routing traffic to them. There is a drastic improvement, so our customers instantaneously see the benefit of that. There is an upward trend already in the hits per second, and, you know, the throughput is beginning to go up, and this will, over time, you know, increase significantly. And likewise, our transactions per second are on an upswing.
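
Logically, what just happened can be pictured as new servers joining the load balancer's pool, after which traffic is spread across the larger set. A toy Python version, with hypothetical rack and host names standing in for the real switching and load-balancing gear:

    import itertools

    class WebPool:
        """A trivial round-robin pool standing in for the real load balancer."""

        def __init__(self, servers):
            self.servers = list(servers)
            self._cycle = itertools.cycle(self.servers)

        def add_rack(self, servers):
            # Extend the pool and rebuild the rotation so new capacity is used at once.
            self.servers.extend(servers)
            self._cycle = itertools.cycle(self.servers)

        def route(self):
            return next(self._cycle)

    pool = WebPool([f"rack1-web-{i}" for i in range(1, 5)])
    pool.add_rack([f"rack2-web-{i}" for i in range(1, 5)])   # first plug
    pool.add_rack([f"rack3-web-{i}" for i in range(1, 5)])   # second plug
    print([pool.route() for _ in range(6)])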

So really what we've done is very instantaneously, very seamlessly, we've scaled the situation and we have enough capacity to handle another round of upswing.

CRAIG BARRETT: Super. Thanks for that.

VIVEK SANJI: Thank you, Craig.

CRAIG BARRETT: If it was only that easy for most dot-coms to triple their number of customers.

The real point I want you to take away from this is what we have here is a front-to-back architecture, common architecture, heterogeneous in the fact that it's built from a variety of suppliers, could have been pieced together over a period of time, but simply scalable, can scale out as well as up in the back end.

So this is what we're all about at this conference. This is what our customers want us to provide to them. And not one of us by ourselves provides it, but it's really the combination of all of us that do this.

So customers want a solution. They want to just push a button or be able to do what I did, just plug in a plug, and add capacity, seamlessly, effortlessly, and have it work. That's our challenge. And to achieve that challenge, we have to, I think, rise to a new level of working together in a cooperative fashion. And conferences of this sort I think are an excellent start to that.

Another area that I think is kind of interesting and getting a lot of attention in the press today is this concept of peer-to-peer computing. And if we look at the fact that we have shared resources around the world, if you look at the example of Intel, for example, where we have perhaps 10- to 15,000 engineers who are doing detailed integrated circuit design, each with workstations, and those workstations are spread around the world, there's an immense amount of collective compute capacity there. But, in fact, unless you can tap that in a peer-to-peer and a cooperative fashion to use that compute capacity, that storage capability, then you're wasting a lot of that capacity, because, typically, those engineers are only working their standard 10- to 14-hour days. When we let them go home at night, their machines sit idle.

What you'd like to do is harness that capacity in a shared fashion to be more productive.
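
One way to picture that harvesting, as a minimal Python sketch: a coordinator keeps a queue of independent work units and hands them to whichever workstations report themselves idle. The workstation names and job types are invented for illustration; this is only the idea, not any particular product.

    import queue

    # Independent design jobs that could run anywhere.
    work = queue.Queue()
    for unit in range(12):
        work.put(("simulate_circuit_block", unit))

    # Machines whose owners have gone home for the night.
    idle_workstations = ["ws-site-a-07", "ws-site-b-03", "ws-site-c-11"]

    completed = []
    while not work.empty():
        for ws in idle_workstations:          # each idle machine pulls the next unit
            if work.empty():
                break
            task, unit = work.get()
            completed.append(f"{task}({unit}) ran on {ws}")

    print(len(completed), "work units finished on otherwise idle machines")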

And Pat Gelsinger's going to talk about this in great detail on Thursday, and talk about announcements on how the industry can work together to make peer-to-peer computing more exciting, more doable, and a more productive activity for businesses and enterprises.

What I want to do today is talk just a little bit about some examples of peer-to-peer. And Napster is perhaps the most obvious example of sharing music over the Internet. And I've just tried to list the users per day hitting their site, the total storage available to those users, and the number of servers which are directing traffic. So you can use either a Napster or a Gnutella type of example, with the latter case not having a central server directing the traffic, but just using, really, the overlay of the Net to direct the traffic.
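
The difference between the two models can be sketched in a few lines of Python: a Napster-style lookup asks a central index which peer holds a file, while a Gnutella-style lookup forwards the question from peer to peer until someone answers or the query expires. The peer names and files here are made up for illustration.

    # Napster-style: a central directory maps content to the peer that holds it.
    central_index = {"trackA.mp3": "peer-17", "trackB.mp3": "peer-42"}

    def central_lookup(filename):
        return central_index.get(filename)

    # Gnutella-style: no central server; a query is forwarded between peers.
    neighbors = {
        "peer-1": ["peer-2", "peer-3"],
        "peer-2": ["peer-4"],
        "peer-3": [],
        "peer-4": [],
    }
    peer_files = {"peer-4": {"trackA.mp3"}}

    def flood_lookup(start, filename, ttl=4, seen=None):
        seen = set() if seen is None else seen
        if start in seen or ttl == 0:
            return None
        seen.add(start)
        if filename in peer_files.get(start, set()):
            return start                      # this peer has the file
        for nxt in neighbors.get(start, []):
            hit = flood_lookup(nxt, filename, ttl - 1, seen)
            if hit:
                return hit
        return None

    print(central_lookup("trackA.mp3"), flood_lookup("peer-1", "trackA.mp3"))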

The National Partnership for Advanced Computational Infrastructure is another example, ganging together some 3,000 or so computers across four continents, with a couple of teraflops of compute capacity, to do computing-type applications as well.

And both of these are examples of peer-to-peer computing. One is really a consumer example. The other is more of a business or scientific computing example.

What I wanted to do is bring out someone for a few moments from a company that's involved in peer-to-peer computing, has been for some time, and has an interesting story to tell about an industrial example of where this technology works and of its future potential.

So I'd like you to welcome Andrew Grimshaw from Applied Meta Computing. He's the founder of this organization.

Welcome, Andy.

ANDREW GRIMSHAW: Good morning. May I have the mouse?

CRAIG BARRETT: You need that mouse?

With a great name like Applied Meta Computing, I think you ought to give the audience a little insight into what your company's about and what you do.

ANDREW GRIMSHAW: Well, Applied Meta Computing is a grid operating system company. And what that means is we provide an integrated, scalable platform for developing distributed applications and sharing resources in a wide-area environment securely. And those resources could be anything from PCs or toasters to machines like we have behind us here, all the way to high-end supercomputers, which we'll be talking about in just a second.

Specifically, let's see if the button works, let's take an example of a grid environment or a peer-to-peer environment. And what you want is a transparent environment where somebody can sit at a workstation, manipulate resources scattered throughout the country or, in fact, throughout the world and not need to know where any of those resources are, not have to deal with failures or security issues at all.

In fact, what you want is a transparent system that's fully integrated, manages the complexity of this wide-area environment, and is secure. It's got to be scalable if we're going to add millions and millions of hosts; fault tolerance is also important. And legacy support is, obviously, important as well.

The challenges in peer-to-peer computing, however: first off, complexity management. You can imagine thousands of machines of different types all over the place; that's going to be difficult to deal with. You have disjoint name spaces and file systems, and mutually distrustful organizations.

CRAIG BARRETT: Have you been dealing with the U.S. Forest Service, too?

(Laughter.)

ANDREW GRIMSHAW: Architecture and operating system heterogeneity, fault tolerance, security, and many other problems. So what I'd like to do is give you a short example, this one in an industrial setting. This is something we did with Boeing. There's a code called Overflow, developed at NASA, just up the road here. It's a CFD code to do modeling of aircraft and other bodies. If you look at that airplane -- I hope I'm never in a four-engine aircraft that is spinning like that.

(Laughter.)

ANDREW GRIMSHAW: Anyway, the basic idea is it's a very large numerical problem to solve. What we did in this particular example is we took the problem and scattered it across multiple supercomputing centers, two DoD centers, where security is strong. We scattered the problem over that area; all the pieces talked to each other, worked together, and solved the problem. The interesting thing from this perspective is that we provide the user transparent access to the supercomputing environment, transparent access to their data regardless of where it is; it's cross-platform, cross-site, parallel execution, which is a nontrivial problem in and of itself, with a single sign-on and complete data integrity, because the DoD has strong requirements with respect to data integrity.
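
Schematically, scattering one large numerical problem across sites looks like partitioning the domain, assigning the pieces to different centers, and gathering the results. A toy Python sketch follows; the site names, cell counts, and the trivially simple partitioning are illustrative assumptions, not how the Overflow code or the underlying grid runtime actually works.

    # Hypothetical compute sites participating in one run.
    sites = ["dod-center-1", "dod-center-2", "boeing-workstations"]

    def partition(total_cells, n_pieces):
        """Split a 1-D range of grid cells into contiguous blocks."""
        size = total_cells // n_pieces
        blocks = []
        for i in range(n_pieces):
            lo = i * size
            hi = total_cells if i == n_pieces - 1 else (i + 1) * size
            blocks.append((lo, hi))
        return blocks

    def solve_block(site, block):
        lo, hi = block
        return f"{site}: solved cells {lo}..{hi}"   # stand-in for the real CFD solve

    blocks = partition(total_cells=1_200_000, n_pieces=len(sites))
    for site, block in zip(sites, blocks):
        print(solve_block(site, block))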

CRAIG BARRETT: The peers are not necessarily equal; you have workstations at Boeing in Seattle that are communicating with and using supercomputers at different sites.

ANDREW GRIMSHAW: Right. The largest machines that money can buy at NAVO and ARL.

CRAIG BARRETT: We really run the gamut, from something like Napster, which is really PC-to-PC peer-to-peer -- and inside Intel we have workstations which are peer-to-peer -- to this case, where we have workstations, midsize machines, and supercomputers that are peer-to-peer.

ANDREW GRIMSHAW: All of them potentially mistrustful. And you have to handle the security problems.

CRAIG BARRETT: The issues you mentioned are substantial if this is to become a ubiquitous application. But those issues, though, don't strike me as very different from what most corporations have gone through in the past to get their current internal structure together.

ANDREW GRIMSHAW: Right. The difference now, though, is that you have multiple organizations that need to interact with each other. And you really have a problem; it's like a wooden puzzle ball. You have lots of different pieces, but if you don't get them all to work together, when you roll it down the hall, it'll break into a million pieces. And you want it to hang together.

CRAIG BARRETT: Do you have a final comment about what you think the future of this is?

ANDREW GRIMSHAW: The future of computing, I've believed this for a number of years, is large-scale distributed systems, peer-to-peer systems, where you have many computers talking to each other that don't necessarily trust each other. That's what we've been working on for seven years now.

CRAIG BARRETT: Great. Thanks, Andrew.

ANDREW GRIMSHAW: Thank you very much.

CRAIG BARRETT: You're supposed to take that back?

Thank you.

(Applause.)

CRAIG BARRETT: There are a number of challenges that Andrew was mentioning. And I think these are tractable. These are doable. These are the things that IT managers are going to have to worry about, but we, as an organization, are going to have to worry about solving: the issues of protocols and ease of use, standards, security, scalability. Again, I want to encourage you to be here on Thursday when Pat Gelsinger talks not only about the examples we've mentioned briefly in passing this morning on peer-to-peer computing, but also about other examples and some industry initiatives that we'd like to kick off. So I think there's a great deal of excitement, a great deal of opportunity here as this new wave of computing spreads throughout not only the consumer but also the business environment.

The concept here, obviously, is that by putting many computers together to act in concert, we can foster a new wave of innovation, a new wave of computing, and do things which were previously impossible, much as in the example that Andrew mentioned of Boeing, where they did not have enough internal resources to do the sort of simulation they were talking about. By ganging together computers from around the country, they can do new and different things and increase the speed of product innovation, because they can do this by computer design as opposed to manual testing.

If you look at this concept of the modular Internet and the fact that solutions come from many different suppliers and that these solutions have to be interoperable modules, I think you can see the need for closer cooperation between you and your counterparts in the industry. And we hope that this is what IDF fosters, this sort of communication, these joint industry efforts to provide interoperable capability so that you can build simple business infrastructure like we're showing behind us.

We firmly believe that this innovation and optimization occurs when you use standard building blocks. And this will be a theme for the presentations to follow mine, standard building blocks from a hardware standpoint, standard building blocks from a networking standpoint, standard building blocks in the wireless communications space. Using standard building blocks, innovation occurs faster, the interoperability of the modular building blocks is much more rapid.

So I want to urge you to cooperate not only with companies like Intel, but also with your competitors. Industry groups that can get together and set those standards, and then compete once the standards are set, will help to advance our industry much more than isolated development and then competing industry standards.

Our horizontal model, in fact, works beautifully from an innovation standpoint. The continuing challenge we have is to make that innovation work seamlessly from a vertical standpoint to provide solutions to the end user.

So this concept of providing a vertical solution requires us to cooperate even while we are separately developing our individual products in a horizontal fashion. And this concept of solutions is very, very important. Will Swope, on Thursday, along with Pat, will talk about solutions and what Intel is trying to do in working with you to create solutions in the marketplace. You know, simply stated, our customers are really interested in us providing them solutions. They're looking for life preservers. And, really, what they care about is that the life preserver floats, not necessarily who the manufacturer was. Our job is to make the best possible life preservers to give to our customers.

Let me just simply summarize. This concept that the Internet is driven by our collective efforts, not by any one of us, I think is important.

We develop the Internet in a modular fashion. Whether it's hardware, networking equipment, or software, the solutions have to play together in a seamless fashion. That's our collective task.

Customers want complete solutions. And this concept of peer-to-peer computing, I think, is a new wave which is going to have a material impact on our industry and will require a lot of hard work for us to put the infrastructure in place to achieve it. But there are many examples of where this has been done already. And, again, Pat Gelsinger will detail those for you and give a call to action a little bit later in the week.

And, lastly, this concept of standard building blocks increasing the rate of innovation. It is what has driven the PC and the server industry over the last ten to 15 years: the use of standard building blocks and open interfaces to rapidly innovate and rapidly bring new products into the marketplace has been wonderfully successful in the computing space. And I think it has equal opportunity in the networking and communications space as well, to accelerate the rate of innovation and bring new and exciting solutions to our end customers.

With that little bit of introduction to what you're going to see later in the week, I want to introduce our next speaker, Albert Yu, who is our senior vice president in the Intel Architecture Group. Albert is going to talk a little bit about some of the innovations on the hardware side and the client and server side of the business. The title of his speech is Intel Architecture Platform Leadership, and I'd like you to give a strong, rousing welcome to Dr. Albert Yu. Albert.

(Applause.)


* Other names and brands may be claimed as the property of others.