Keynote Transcript


Intel Developer Forum, Fall 1999

John Miner
Palm Springs, Calif., USA
September 1, 1999

JOHN MINER: Good morning. Welcome to Intel Developer Forum. I believe this is day 2.

It's a pleasure to be here, I have to admit, although it's more of a pleasure to be here in February than it is in September.

I left Portland yesterday around noon, and the temperature, on my drive to the airport, was 63 degrees. So one more time I come down to Palm Springs for a little thermal cycling.

This morning -- while my title, officially as of today, is vice president of the Communications Products Group, up until yesterday I was vice president of the Enterprise Server Group -- the majority of my presentation is going to be about servers. We're going to talk about servers in the Internet economy. And with that, we'll get started.

You've heard a variety of speakers come through Intel Developer Forum for the past several developer forums and talk about how the Internet economy is changing the way a business runs, changing the means and methods by which we conduct our business. And in addition to that -- and more relevant to this forum -- it is changing the way we develop and design our products.

Well, servers are no different. In regards to the marketplace, the explosion of demand in the Internet is creating an incredible business opportunity for those of us that are in the server business.

The more users there are, the more types of devices that attach to the Internet, the more commerce that is transacted on the Internet, and the more databases expand and grow as user and customer data and business data grows, the more demand there will be for servers.

I think Craig Barrett yesterday mentioned something along the lines that approximately one-twentieth of the number of servers required to build out the Internet infrastructure has been deployed to date. That's a pretty staggering number if you stop to think about it.

We are headed for a very visible time horizon where, as an industry, we will be shipping more than ten million servers per year.

Now there are some characteristics of this Internet economy which are relatively new, and new demands being placed on our products. Customers have come to think of the Internet as a utility. Just like they pick up the telephone and get a dial tone, just like they plug the toaster into the wall outlet and get electricity, they expect that when they log onto your Web site they will be able to access the information, the goods, and the services that they seek 100 percent of the time.

In addition to having no tolerance for downtime, these customers are pretty demanding from a service-level perspective. They are looking for instant response.

Remember this: Your competition is one click away. That's become a cliche in our industry, but it's a very real and very true cliche. If they can't get the service they need from your Web site, they will go somewhere else to find the information or goods or services they're seeking.

And last but not least, every single forecast and prediction about the Internet has been wrong. And it's been wrong on the low side. People have not been able to accurately predict on a six-month horizon what their workload requirements are going to be.

The IT managers that are supplying and supporting the lines of business around the world are constantly struggling to react to increases in capacity, increases in demand, and meet these requirements of always available and instant response.

In addition to that, the Internet is creating new opportunities for marketplaces and products and services that had not been anticipated in the past. All of these add up to impose a constraint on how the environment is developed and designed: we must be able to react to change very rapidly.

The only way that this can be accomplished is with modular, standards-based scalable products and platforms on which the Internet infrastructure gets built out.

So these are the constraints imposed on us by the Internet economy. Let's look at how we address these constraints.

There are some key enablers to meet the customers' requirements here. The first is designing for availability. But designing for availability is more than just building big, redundant, nonstop iron. There are easier ways to solve this problem. The problem has to be looked at from an end-to-end perspective -- from every node on the network to how the network itself is architected and implemented -- to assure application availability. Because what really counts is that when your customer logs onto your Web site, he gets the information that he's looking for. It doesn't matter whether two out of ten machines are working that day or not working that day. It matters that the customer gets the information he's looking for and gets a fast response.

We have to design for performance and scalability. The right platform architecture, the right processor architecture, the right I/O architecture, and the right complete solution stack implementation all have to be designed for these performance and scalability requirements.

As I mentioned earlier, one of the keys to agility and flexibility in this environment is taking advantage of high-volume, standards-based building blocks. Think of the concept of vertical scalability versus horizontal scalability. Horizontal scalability is much more affordable and much more practical than continuous vertical scaling in this environment.

These standards based building blocks enable this paradigm to be the practical means of implementation in the Internet environment.

And last but not least, all of this has to be affordable. If we were in an environment where large scale, nonstop systems were the only practical way to meet the customer requirements, we would have ten service providers around the world, because very few of them would have the capital to be able to provide and build out the Internet infrastructure on that kind of paradigm.

In summary, what's required to succeed in this environment is an end-to-end perspective that enables rapid application deployment by taking advantage of these standards-based building blocks.

Now, what I'd like to do is give you a real, live example of a customer problem -- with a real customer, by the way -- showing how they are approaching meeting the demands of always available and instant response in the Internet environment.

David Yeger, who is the enterprise architect with Merrill Lynch, is going to come up on the stage this morning and give us an example of an application that Merrill Lynch is developing for their Web-based environment.

Now, many of you may recall the headlines in the Wall Street Journal a couple of months ago when the CEO of Merrill Lynch drove a very aggressive stake in the ground, declaring that Merrill Lynch would become an online company by December 1st. That's the kind of thing that makes life exciting and challenging for David Yeger, and he's going to tell us how he's approaching that.

Thanks for joining us.

DAVID: My pleasure.

JOHN MINER: Why don't you tell us about what this application is supposed to do first and then how you're doing it.

DAVID: The application that we've built on our cluster is designed to deliver portfolio information out to the brokers in our company and to end users over the Internet. It's fully browser-based and Web-server-enabled, and it actually just went up on the screen.

JOHN MINER: Can we see it here?

DAVID: Yeah. And this is actually a live demo, John. We took a chance on going live into the cluster on a 56K line, and the cluster is currently generating about 300 transactions or so per second. And that's the response time that we're getting on the Internet.

JOHN MINER: Wow.

DAVID: So we've achieved our transaction rates and we've achieved response time as you can clearly tell.

JOHN MINER: Web-based application, customer portfolio information as well as portfolio transactions; is that correct?

DAVID: Yep.

JOHN MINER: The time is one second?

DAVID: One second in and out of the application.

JOHN MINER: For 300 transactions per second; right?

DAVID: Exactly.

JOHN MINER: Okay. So that kind of gives us an overall framework of what it is that Merrill Lynch is trying to accomplish with this particular application.
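Those two figures pin each other down. By Little's law -- a standard queueing identity, not a number quoted on stage -- a system sustaining 300 transactions per second at a one-second response time is carrying roughly 300 transactions in flight at any instant:

    in-flight transactions = throughput x response time
                           = 300 per second x 1 second = 300 concurrent transactions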

David, why don't we take a look at how this application is implemented. I believe we have a diagram we can bring up on the screens of a logical map of the architecture. Why don't you step us through this.

DAVID: What we have here is a standard three-tier application architecture, with a browser handling the presentation, a Web server, and an MTS transaction server sitting in the middle running on Windows NT. The middle tier is totally stateless to achieve the scalability.

The back-end database is the DB2 EEE database, which scales horizontally across all four nodes. The tables you saw hold somewhere between 42 and 44 million rows of position data that we're querying.
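For readers who want the shape of that horizontal partitioning, here is a minimal sketch of the scatter/gather idea in the spirit of DB2 EEE's partitioned back end. The node names and the run_query helper are hypothetical illustrations, not IBM's or Merrill Lynch's actual interfaces.

    NODES = ["db0", "db1", "db2", "db3"]   # four back-end database nodes

    def route(account_id: str) -> str:
        """Pick the node that owns this account's partition via hashing."""
        return NODES[hash(account_id) % len(NODES)]

    def scatter_gather(query, run_query):
        """Send the same query to every node and merge the partial results."""
        partials = [run_query(node, query) for node in NODES]
        return [row for part in partials for row in part]

A query on a single account routes to one node; a query that spans accounts fans out to all four and merges, which is why adding nodes grows both capacity and the rows each query can cover.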

JOHN MINER: What was the business motivation that drove you to this kind of an architecture versus all the alternatives out there for implementation?

DAVID: The architecture here and what drove us in this direction was the ability to scale wide and get high availability and get immediate scalability by adding more nodes into the processing environment in a very cost-effective fashion. So we can leave spare capacity on the floor and add it into the environment on demand.

JOHN MINER: So you were concerned with system availability, system scalability, and response time, and you wanted to be able to afford to deploy this system and grow it over time effectively. The horizontal scalability concept.

DAVID: Right. And we wanted to be able to grow it easily instead of having to go out and buy big boxes or big iron.

JOHN MINER: Fantastic.

This chart has more three-letter acronyms on it than an Intel org chart. In fact, I don't recognize any of these just like the other org chart.

We have a facsimile of the system David is working on over here. Each year I try to increase the tonnage I bring to IDF for demonstrations. This year I brought 10,000 pounds just to put together a facsimile of the system David is working on. We're going to look at a block diagram of the system. We're also going to come over here and talk about the system a little bit with some of the particulars. But the block diagram you can see up on the screen.

I know what these are. These are disks.

DAVID: Those are disks. And these are computers, John.

JOHN MINER: Great, great.

(Laughter.)

DAVID: And behind the front panel here we have chips and stuff like that.

JOHN MINER: Very good. Intel chips.

DAVID: Intel chips, yes.

So basically, this facsimile more or less represents the setup. We have four racks of computers. Our computers are 7U, and we're trying to move to 4U to get them down to two racks.

JOHN MINER: Today you're using 7U four-way servers, and that's the space from here to here.

DAVID: Yes.

JOHN MINER: In your block diagram you have Web servers, you have your application server and you have your back-end database server. Are you using the same server for all those applications?

DAVID: We're using exactly the same server for all of them. That gives us an easier way to manage it, by having the same OS build and the same hardware. We know what parts are in each box, and it also gives us the ability to switch boxes.

So for the computers that are maintaining state, we have the ability should we lose one of them to automatically fail over to a transaction processing box and have it immediately pick up the load.

JOHN MINER: So you have a great deal of flexibility by, in a sense, doing a copy exact -- the same hardware and operating system load -- across each of the different systems.

DAVID: Yes.
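A minimal sketch of the fail-over scheme David describes, assuming hypothetical is_alive and promote helpers; the point is that identical builds make every spare interchangeable.

    def fail_over(stateful_nodes, spare_pool, is_alive, promote):
        """Replace any dead stateful node with an identical spare."""
        for node in list(stateful_nodes):
            if not is_alive(node):            # missed heartbeat
                spare = spare_pool.pop()      # any spare works: same OS build,
                promote(spare, node["role"])  # same hardware, so it just picks
                stateful_nodes.remove(node)   # up the failed node's load
                stateful_nodes.append(spare)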

JOHN MINER: 7U, though, for a four-way system is probably not the most effective use of rack space. Is density important to you?

DAVID: Rack density is very important. Like everybody else, we're adding hundreds of servers and thousands of servers. And we're running out of machine room space. So rack density is very important, and we actually -- you've actually built these 4-U units, and we're very excited to be able to get four CPUs into this type of space.

JOHN MINER: You're going from four CPUs and seven units of rack space to four CPUs in four units of rack space, nearly doubling the processing density per rack that you can accomplish.

DAVID: Yep.

JOHN MINER: Now, when you go to this kind of paradigm of rack density, you have the need for external I/O and shared storage. I take it that's what all this is about?

DAVID: Absolutely. This slide says one to two terabytes; it's actually closer to four terabytes. We're using Dell's version of this, the PowerVault, and we have a number of terabytes hooked up, and we're getting very nice I/O rates out of that. We're seeing close to 70, 80 megabytes per second throughput.

JOHN MINER: The other thing I noticed on your block diagram is you're using VI architecture. Can you tell us why and what you expect to benefit?

DAVID: In a client/server environment, when you write these applications there's a significant amount of message passing going through the different tiers. In order to make the environment scale, we were looking for a very fast way for the messages to move back and forth, especially in the database tier, where the queries are being dynamically split up across the different nodes from the application server.

So we needed a very high-speed way of getting this through, and VI gives us that and gives us the scalability by the speed. So we're able to get more nodes on the network, on the system area network, and able to scale it much wider.
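As a rough illustration of why that helps, here is a toy model of the VI-style descriptor queues. The class is a hypothetical stand-in, not the real VI Provider Library API: the application posts sends and polls completions directly, with no per-message kernel transition.

    from collections import deque

    class VirtualInterface:
        """Toy model of VI-style queues; not the actual VIPL API."""
        def __init__(self):
            self.send_queue = deque()        # descriptors posted by the app
            self.completion_queue = deque()  # filled in by the NIC hardware

        def post_send(self, descriptor):
            # Posting is just a queue append: no kernel call per message,
            # which is where the latency savings come from.
            self.send_queue.append(descriptor)

        def poll_completion(self):
            # The application polls completions directly instead of
            # blocking in the kernel.
            return self.completion_queue.popleft() if self.completion_queue else None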

JOHN MINER: All right. Now, is there anything important that we've left out?

DAVID: I don't think so.

JOHN MINER: VI architecture, rack density. Right now, out here in the audience, you have the people that are responsible for building these systems, writing the operating systems, developing the applications, and developing the peripherals. What is your request of these guys?

DAVID: We want them fast, we want them easy to manage, and we want them cheap.

JOHN MINER: You want them fast, easy to manage and cheap. This guy doesn't ask for much. The financial industry is very demanding.

DAVID: We actually think we accomplished it with this project. We built an easy 250 transactions a second for under a million bucks.

JOHN MINER: For under a million bucks, 250 transactions per second. That was your goal, 250. You're actually achieving, as of yesterday by deploying VI, something higher than that.

DAVID: Yeah. We're actually achieving 320 as of yesterday.

JOHN MINER: 320 transactions per second?

DAVID: We got that 30 percent bump by deploying a year-and-a-half-old -- I'm going to call it a hack -- implementation of DCOM on top of VI, which I hope Microsoft will ship with Datacenter.

JOHN MINER: That's fantastic. I think we've covered all the ground here. A couple of key points I'd like to point out: this is an eight-way server, by the way, and it takes 7U of rack space -- and David was talking about how he's using 7U of rack space for a four-way server today. This is a four-way server in 4U of rack space, so you're seeing increasing density as the industry focuses more and more on the packaging requirements for this server farm concept that the Internet infrastructure is being built upon. And you can see the same thing happening here with two-way servers that fit in 2U of rack space. This is an old-fashioned pedestal server put into a rack; it occupies 7U of rack space for the same processing functionality.

So by focusing on the industry needs and the customer needs, the industry is getting closer and closer to what the customer is looking for in terms of rack density and horizontal scalability. And by taking advantage of some of the new technologies like the VI architecture, we're able to deliver on the promise of volume economies of scale in large-scale applications.

Thanks a lot, David.

DAVID: My pleasure.

JOHN MINER: It's kind of interesting to think back. What you're seeing here is an integration of all of the ingredients we've been talking about for the past five IDFs in servers. I believe at the first IDF I did, four IDFs ago, we talked about the VI architecture for system area networking and how clustering would be a key ingredient to scaling standard high-volume servers in the future. This is a great example of the progress we've made. I have another anecdote about progress. Cornell University decided to deploy a large-scale database, and they wanted to do it in a cluster environment. A combination of Intel, Microsoft, Dell, and IBM donated software and equipment to Cornell, and they built a 64-node cluster and had it completely operational -- including the database, all the management ingredients, and running queries -- in ten hours.

So the industry has come an incredibly long way in the period of four IDFs with regard to making this technology a reality.

Now what I'd like to do is spend the rest of this morning talking about how we design systems for high availability and how we design systems for high scalability and performance. So I'm going to start with designing in availability.

If you go to -- all the way back to the days of the Wired for Management initiative, Intel has been talking about the importance of board instrumentation.

Board instrumentation has become a mainstay of all the server designers in the industry, and in fact the industry has begun to standardize around a single instrumentation framework and communications bus known as IPMI, which has become widely adopted and allows systems to be managed in a very similar fashion by the management consoles that are available.
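To picture what a management console does with that instrumentation, here is a minimal sketch. The sensor names, thresholds, and read_sensor function are hypothetical illustrations, not the IPMI command set itself.

    THRESHOLDS = {
        "cpu_temp_c": (None, 85),    # alert above 85 degrees C
        "fan_rpm":    (2000, None),  # alert below 2,000 RPM
        "volt_3_3":   (3.1, 3.5),    # alert outside the rail's window
    }

    def check_server(server, read_sensor):
        """Poll each sensor and flag readings outside their windows."""
        alerts = []
        for sensor, (low, high) in THRESHOLDS.items():
            value = read_sensor(server, sensor)
            if (low is not None and value < low) or (
                    high is not None and value > high):
                alerts.append((server, sensor, value))
        return alerts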

We've talked about how I/O is important: having balanced I/O that matches the microprocessor performance capabilities of the system is a key ingredient in designing a highly available and reliable system. In addition to that, Hot-Plug I/O has become a mainstay of servers regardless of the number of processors that they have.

It's just as important for that two-way Web server to be serviceable online as it is for that eight-way database server to be serviceable online in this Internet economy.

System design is one of the other concepts that we've talked about. All of the hot swap and redundant technologies that were introduced as standards or as core technology several years ago have now become commonplace in all of the products that are being developed by the industry. Again, this is no longer the domain of high-end equipment. Effectively, we have taken the volume economies of scale and applied those to the standard building block technology framework that we have that this whole industry is based upon.

Network design is a third ingredient to high availability. A great example of how network design and implementation can impact the availability of the application the end user sees is fail-over technology such as clustering. In David Yeger's example, the ability to take a single system off-line, whether for repair or for software upgrades, without taking the entire application down is a byproduct of a clustered architecture.

In addition to that, you have load balancing technologies that allow the Internet infrastructure to direct the traffic to the systems that have available resources. Whether they're systems that have spare processing power and can provide the highest performance response to the end user or whether it's the systems that are off-line versus systems that are online, load balancing technology becomes a key ingredient in how networks are implemented for high availability.
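A minimal sketch of that idea, with hypothetical backend records: take systems that are off-line out of rotation, and send each request to the machine with the most spare capacity.

    def pick_backend(backends):
        """Route a request to the online server with the most spare capacity."""
        online = [b for b in backends if b["online"]]
        if not online:
            raise RuntimeError("no servers available")
        return min(online, key=lambda b: b["load"])  # least-loaded wins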

Another key ingredient is application design. The VI architecture is being deployed in shrink-wrap, off-the-shelf, volume production applications such as database environments, and is being supported in the operating systems as well via standard interfaces.

The VI architecture enables a clustered environment with very high performance and low latency, providing both scalable performance as well as fail-over type of performance. Again, supporting a system availability environment.

And last but not least, we've discovered that solution deployment is one of the key ingredients in a successful system implementation. There is a set of best practices that the industry has proven over and over again; if they're followed, they will ensure system availability.

Companies like Intel run their infrastructure on Intel-based servers and have very high availability and very high application uptime because of the disciplined, copy-exact type of process that we use to stabilize a solution and deploy it effectively.

Management software plays into this environment because it is the only way that a solution provider can ensure a high service level agreement. The way this is administered is via policy management -- prioritizing traffic, prioritizing tasks and operations -- as well as using the management software and the instrumentation built into the system to predict when and where failures might occur.
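Policy management can be as simple as a table that maps traffic classes to priorities. The classes below are hypothetical examples, not any product's actual schema.

    POLICY = {"trade": 0, "quote": 1, "report": 2}  # lower number = served first

    def order_work(tasks):
        """Serve high-priority traffic first so the service level holds."""
        return sorted(tasks, key=lambda t: POLICY.get(t["class"], 99))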

And then last but not least, the industry has adopted the practice of making service level agreements with its customers to ensure high availability by applying all of the techniques listed above.

Effectively, all of the ingredients that we've talked about over the past four IDFs are coming together to allow very high availability solutions to be deployed using these standard off-the-shelf building blocks at a very effective cost.

So let's switch gears. I told you the other key ingredient the customers are looking for is an instant response. How do we design in performance and scalability into this application environment?

Well, this is an Intel event, and the way we do this is we begin with the Pentium III Xeon processor. At the heart of performance, scalability, and headroom is a microprocessor that can give you the performance that's going to be required in today's and next week's and next month's environment. The Pentium III Xeon processor, which we introduced -- I think it was March of this year, approximately six months ago -- has effectively demonstrated that it can outperform all other architectures in virtually every industry-standard benchmark.

Microprocessor performance has been demonstrated over and over again to be one of the key enablers in system scalability, and Pentium III Xeon leads in this area.

Now, let's talk about scalability in a different sense. Let's talk about it in terms of user response time. In this graph, I'm comparing the Pentium III Xeon processor to the Pentium III processors. A lot of customers are wondering whether or not they should deploy a Pentium III-based system or a Pentium III Xeon-based system when it comes to deploying network applications. This particular benchmark is a Microsoft Exchange messaging platform, and we're measuring user response time in seconds.

If you go back to David Yeger's application example, he was looking for a one-second response time for 250 simultaneous transactions.

The kind of performance users are expecting cannot be delivered by using the Pentium III processor in this particular example. And the reason is it doesn't have the headroom to provide that response time.

There's a whole bunch of ways to solve this problem. But even if you threw multiple servers at the problem, multiple Pentium III servers, you wouldn't be able to get to a one-second response time.
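A back-of-the-envelope way to see why -- a textbook M/M/1 approximation, not the benchmark's actual model: response time is service time divided by (1 minus utilization), so adding boxes lowers utilization but can never push response time below the per-request service time. A processor without single-request headroom misses a one-second target no matter how many servers you add.

    def response_time(service_time_s, arrival_rate_per_s, servers):
        """M/M/1 approximation per server; the floor is service_time_s."""
        utilization = (arrival_rate_per_s / servers) * service_time_s
        assert utilization < 1.0, "system is overloaded"
        return service_time_s / (1.0 - utilization)

    # Example: with a 1.5 s per-request service time, 1,000 servers sharing
    # 100 requests/s still give response_time(1.5, 100, 1000) ~= 1.76 s;
    # no server count ever gets below the 1.5 s service time itself.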

In addition to that, instead of having one system to manage and administer and one set of software licenses to acquire, you would have four sets of software to acquire and four systems to manage and administer. Expand that over a user population of 65,000 employees like Intel has, and what you have is a big collection of servers that are providing less than adequate user response time.

Taking advantage of the Pentium III Xeon processor is the only way to assure your customers that they're going to have that response time and the headroom for future loads and future demands on the systems.

Now, the Pentium III Xeon -- and this is a road map for our 32-bit family of processors -- will continue to have enhancements as far into the future as we can see. We will take advantage of every design and process technology improvement that Intel can deliver to make our 32-bit family of processors faster and faster.

Basically, for the first time today, what we're illustrating here are the speeds and the cache sizes we expect to support in future generations of our 32-bit processors for servers.

You'll see today we're shipping 550 MHz processors with half-meg, one-meg, and two-meg level two caches. Early in the first half of next year, we will increase that clock frequency to over 700 MHz and integrate the cache into the same chip as the microprocessor, giving us a significant improvement in level two cache performance, known as the Advanced Transfer Cache. This generation will have 256 kilobyte, 512 kilobyte, one megabyte, and two megabyte level two caches.

This will allow us to extend the Xeon family of processors into the more affordable, lower-end ranges of the marketplace, and still provide the scalability and headroom that only the Xeon socket can provide.

And following this generation of Pentium III Xeon processors, in the year 2001 we'll introduce the Foster family of processors, which will have similar levels of cache capability and exceed one GHz of clock frequency. So as you can see, the 32-bit family of processors has lots of legs, lots of room for increased performance, and can provide the scalability and headroom customers are going to require in this Internet economy.

Now, let's switch gears a little bit. Those of you that were here yesterday morning saw, for the first time, real live demonstrations of Merced-based systems running two different operating systems, Windows 64 and the Linux operating system.

When the 32-bit family of processors doesn't have the right capabilities to meet your needs, we have IA-64. Whether it's larger scale systems, greater memory addressability, or the performance characteristics and attributes of the EPIC architecture, IA-64 will take Intel-based systems into brand-new realms with regards to performance and scalability.

In addition to that, we'll apply the same volume economies of scale that we've applied to the IA-32 family of systems, which have basically resulted in Intel-based servers outshipping non-Intel-based servers by approximately ten to one.

It is this combination of ingredients, the high volume economies of scale, the choice of operating system and platforms from multiple vendors, the availability of a long-term software road map and the price/performance proposition that will position IA-64 uniquely in high-end Internet and data center applications.

If you stop and think about it, there is not another architecture out there that can provide this choice of software and operating systems. Before you deploy a high-end, back-end data center type of application, or a front-end application that depends on the features of the EPIC architecture, you'll have to ask yourself whether or not some other architecture -- a non-Intel architecture solution -- is a dead-end solution when it comes to choice, agility, and flexibility.

If you go back to the slide I had up there a little while ago and you think about it -- not only do I have a performance advantage, but I have a choice of operating systems, a choice of vendors, and a choice of applications unparalleled by any other architecture -- you'd have to ask why anybody would deploy, beginning in the year 2000, a server based on anything other than IA-32 or IA-64 and Merced.

Now, the demonstration yesterday should have put the heat on the development community to get their products ready for Merced introduction. We're a little bit less than a year away, approximately, from the Merced introduction. Middle of next year. We're on schedule. I think all of the indicators that you've seen throughout the forum would basically reinforce that message that we're on schedule.

If you're a developer, you should have systems ready for power-on with Merced samples. If you're a software developer, you should have done all of your code checks to make sure your code is clean for 64-bit porting. If you're an operating system developer, you should have your software development tools out there, you've been running your operating system on the simulator for quite a bit of time now, and you're beginning to see hardware systems -- native hardware systems -- on which you can begin testing your port.

Effectively, the clock is ticking. There are going to be those of you that are on time with your applications and your peripherals and your systems, and you're the guys that are probably going to gain market share in this transition from non-Intel based platforms in the enterprise and the Internet to IA-64 based platforms in the enterprise and Internet.

So the heat is on. The clock is ticking. And to make it easy for you, Intel has set up a whole bunch of different development efforts.

In May, we announced a $250 million equity investment fund that is co-funded by Intel, Dell, Hewlett-Packard, NEC, and Compaq, along with approximately 11 Fortune 500 rapid adopters of information technology.

This fund will focus on enabling and encouraging the development of e-commerce types of solutions to run on the IA-64 architecture, and be available for solution stacks when we launch the product next year.

We have a developer's architecture guide for the software development community that provides details about the instruction set. We have the developer interface guides, DIG64 and UDIG64, for peripheral developers and operating system developers, providing basically a standard framework in which these devices, tools, and drivers can be developed.

In addition to that, we have 29 application solution centers worldwide that will assist in the porting and testing of code. And last but not least, the operating system vendors all have toolsets to enable development of applications and drivers on their specific operating systems.

The bottom line is this: The product is coming, the infrastructure is in place for you to take advantage of the tools that are there to get your development under way, and there's still enough time left for everybody to be ready for the Merced introduction coming up.

Now, that was the microprocessor side of performance and scalability. The other thing that we've been talking about for two years of IDFs is I/O. About three years ago, Intel recognized that we were going to need to change the I/O architecture fairly significantly in order to provide a balanced system environment to support the capabilities that our microprocessors would have in the long term road map.

We also recognized that with the current I/O we have all kinds of limits with regard to this concept of horizontal scalability and building out the server farm concept or highly scaled clusters.

As a result of that, we began an endeavor to launch a new channel-based I/O architecture. In addition to that, other industry leaders agreed on that same vision -- that a channel-based I/O architecture is what the industry required for future system development and deployment -- and began working on what were effectively competing efforts.

Yesterday, we announced that collectively we're going to unify our efforts to deliver one channel I/O specification and architecture to the industry. We're going to take the best ingredients from the two efforts under way -- one was called NGIO and the other was called FIO -- and combine them into a specification that we're temporarily referring to as system I/O: take the best of those ingredients and provide a single specification for the industry, with a target to have systems in production by the year 2001.

This architecture will be scalable from entry level servers all the way to data center class systems providing compatible scalability across a complete range of performance. It will have the legs to span multiple CPU generations. In fact, the architecture is such that we believe it will be a very long time before we have to completely reinvent the I/O architecture for servers.

As a development community, here is what we need to do. Number one is to work to get the specification completed; the target is to have this new specification completed by the end of this year. Second, there's an I/O session here at the forum today that I would highly recommend all of you attend -- or those of you that are interested -- to get more information. There's probably not room for all of you.

And the third thing is if you're an independent hardware vendor or system vendor or application vendor or operating system vendor, you need to get your I/O road map aligned, because this train is basically heading for departure in the year 2001, which means you have to begin the architecture work and the development work today to be on time.

Now, what I'd like to do is to give a little demonstration of channel I/O, channel I/O prototypes. I'd like to show you how far the industry has come in preparing for this transition to a channel-based I/O for servers.

The key thing that I want everybody to absorb here is that we have come a long way in this development effort. If you were thinking we're starting from ground zero for this new system I/O specification, you're wrong. The specification is going to build on the tremendous amount of work -- approximately two years' worth -- done by the leaders in the industry, and will take advantage of that work to accelerate development of the new specification.

To help me demonstrate this, John Hawes is going to come up on stage, and we're going to take a look at a couple of real live implementations of channel-based I/O. For those of you who were here at the last IDF, we had emulation-based demos then. Today we have the real thing.

Hi, John.

JOHN: Actually, you hit it on the head, this is a real and accurate implementation of channel I/O.

JOHN MINER: Can we put the block diagram up on the screen?

JOHN: In real hardware and software and firmware. So briefly, let me walk through what we're seeing.

What you see on your screen on the left is an LSI prototype storage subsystem, and they basically have three prototypes we're going to demonstrate here today: a SCSI JBOD storage subsystem, a SCSI RAID storage subsystem, and a fiber channel JBOD subsystem. They're all using prototype ASIC silicon that Intel developed to validate the architecture, with firmware to emulate either a host-end function or a target-end function.

JOHN MINER: So this is a very architecturally accurate implementation of a channel based I/O; correct?

JOHN: Exactly.

JOHN MINER: And it's based on ASICs we've been developing. These happen to be prototype ASICs.

JOHN: Correct.

JOHN MINER: What we've got is three different types of storage I/O connected to a single server.

JOHN: Correct.

JOHN MINER: You don't see the wires back there but three wires are connected; is that right?

JOHN: Exactly. So we're running three different channel host adapters inside of the host and each provides a link to the storage subsystem that it supports.

JOHN MINER: Now for this particular implementation, we're starting out at one and a quarter gigabits per second, and the production implementation will be, for a single link, two and a half gigabits per second; correct?

JOHN: Exactly.

JOHN MINER: Give us an example of how it's working here.

JOHN: Let me point out one other thing. In the middle we have a Finisar analyzer, and it will show us the traffic generated when I do a file copy. The demo is that I'm going to copy some video files over to the subsystems, and you'll see the subsystem links come alive, and then I'll run the files. So you'll see traffic go one way on the top box as I'm copying files from the host, and you'll see it go the other way as I run the files themselves. And after this we'll demonstrate Crossroads Communications technology, prototypes of their products.

The other key thing to point out here, John, is that we have NT file transparency. What you see here -- these open windows -- are actually the storage subsystems that are running here. I'm just going to grab the file and copy it over to each one of them. And as I do this, you're seeing the drives come alive.

JOHN MINER: So you can see the drive lights blinking there. What are we looking at on the analyzer?

JOHN: It's showing us the traffic being generated on the links in the direction to the drives.

JOHN MINER: So the top link is the traffic going from the server to the drive; correct?

JOHN: Right. And as soon as these file copies get done, I'll be able to click on them and you'll see the traffic go the other way.

There's the second one. There's the third one.

So basically what you're seeing is native NGIO cells being transmitted or, excuse me, channel I/O cells being transmitted from the host to these analyzers.

JOHN MINER: So you're seeing traffic going in both directions and you've got an advertisement running for channel I/O here.

JOHN: Exactly.

JOHN MINER: That's pretty impressive. So what we have here is a prototyping platform from which developers can begin the development work to develop channel I/O based systems. And, in fact, what you see here is the work of Intel and the industry to make sure we have channel I/O target systems such as the storage systems that LSI has developed.

JOHN: Exactly.

JOHN MINER: Anything else we should cover?

JOHN: Only other thing is this is truly remote I/O. These storage subsystems could be placed anywhere within a data center and be shared amongst multiple hosts.

JOHN MINER: So multiple servers could be sharing those three different storage arrays.

JOHN: Exactly. Before I leave this demo, I want to show one other thing, the Finisar analyzer. So we've seen the traffic generated. Actually, behind it you can actually see the contents of the cells as they're being transmitted. So it's a true functioning development tool for vendors in the industry.

JOHN MINER: Very good. Lots of progress. Good starting point for system I/O.

Why don't you show us what we have here.

JOHN: All right. This is a storage router from Crossroads Communications, and it provides some higher-level functions -- layer three and four switching capabilities -- for applications like host-free backup and things like that.

What we're going to see here is an iometer workload being run off of the host, and you have NGIO coming into the storage router and fiber channel coming out and driving this fiber channel system below.

JOHN MINER: So we're using iometer; is that right?

JOHN: We're using iometer. And the point here is it's not performance centric. It's about functionality. Basically we validated the architecture, the architectural concept that we had, to bring a channel-based I/O to the market prior to locking down all the things in the designs, et cetera.

JOHN MINER: Now, this is a key ingredient to enable system area networks and storage area networks to be interconnected effectively; correct?

JOHN: Exactly.

JOHN MINER: A similar type of routing device would allow direct attachment to a high-speed network or a high-speed backbone in the system area network as well.

JOHN: Exactly. Different flavors of processors for applications.

JOHN MINER: Right. Yeah, very good.

JOHN: So you're seeing real traffic being generated through iometer, and a lot of people really rely on iometer as a workload generator or as a measurement tool. So we're really excited that we have real prototypes of real industry products from vendors that helped us develop the architecture. And we also have the development vehicles, the prototyping vehicles to get the industry started.

JOHN MINER: Fantastic. So we have a great starting point for system I/O. Thanks a lot, John.

(Applause.)

JOHN MINER: Now what I'd like to do is very quickly hit the benefits of system I/O, the specification target that we have for the end of this year. And then we're going to begin to wrap up.

First of all, this channel-based I/O provides the connectivity to build modular systems. We basically can break through the physical limitations of PCI and have out-of-the-box I/O, which is what we've demonstrated over here, as well as support for hot swap at the system level and at the node level. So systems and networks can be serviced without taking the application down.

So when you stop and think about designing for availability at the network level, this is one of the key ingredients that enables it.

Flexible system configurations are a key attribute of channel-based I/O -- off-the-shelf rack systems and chassis. You can effectively think of the rack becoming an I/O backplane, taking advantage of channel-based I/O, into which you plug storage units, compute units, or whatever it is, including switching technology and routing technology, to build out your network infrastructure.

High availability: you have multiple fault domains versus the single point of failure that you have in shared-bus I/O architectures, providing a much more resilient system and a much easier to use and easier to administer system.

And it delivers outstanding performance. The system I/O target link speed is two and a half gigabits per second, and you'll be able to go from a single link up to an aggregation of 12 links, giving you 30 gigabits per second in the performance of the product.

In addition to that, it is architected such that as wire speeds increase -- as each link's speed increases -- the overall system I/O throughput will increase as well. So when ten-gigabit link technology is available, you can multiply all these numbers by four.
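For the record, the arithmetic behind those figures, as a quick sketch:

    link_gbps = 2.5               # target speed of a single system I/O link
    print(12 * link_gbps)         # 30.0  -- Gbit/s over 12 aggregated links
    print(12 * link_gbps * 4)     # 120.0 -- Gbit/s once 10 Gbit links arrive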

I can't emphasize enough that everybody should get rapidly involved and have your product lines in place to take advantage of this.

I want to wrap up with this picture. System I/O is more than a specification. We've been talking about this for a long time, but effectively, it is the key enabling technology behind the vision of the platforms that are being developed by customers like David Yeger. It is a technology that is going to enable these Web farms and server farms -- whether it's an application server farm, an HTML server farm, or a back-end database server farm -- to be built out and delivered to your customers as a 365-day-a-year, 24-by-7, nonstop, highly available environment, at very affordable prices, taking advantage of the standard high-volume building blocks that this industry is capable of building.

So with that, the call to action is very straightforward.

Continue to invest in and design your systems for high availability at the board level, the system level, and the level that interfaces the system into the network.

Design your systems for performance and scalability. Remember that the Xeon socket is the key socket to deliver performance and headroom to your customers and the kind of response time that their Internet users are expecting.

Make sure your Merced programs are on track, and engage in the system I/O industry effort for '01 products. With that I'd like to wrap up and turn the stage over to Mark Christensen.

Thank you very much.

(Applause.)

* Other names and brands may be claimed as the property of others.