Microsoft WinHEC 2002
Paul Otellini
Seattle, Wash., USA
April 18, 2002
PAUL OTELLINI: Today, I want to cover four different topics: the notion of anytime, anywhere and what we're trying to do about it; the notion of really driving convergence, making the combination of computing and communications truly happen in a seamless fashion; talk a little bit about how the world is changing in terms of our markets, the difference between what are now mature markets and emerging markets; and then talk a little bit about Gordon Moore's law and a little bit beyond that.
The goal that we are really working towards in all of our laboratory work and all of our product work at Intel is to bring computing to everyone, anytime, anyplace. This is a very similar vision to the one you heard Bill Gates talk about a few minutes ago. We're aligned in this. We want to make this much more real and more pervasive.
What's really happening around us? Computing is becoming more pervasive. There are scores of microprocessors in every car. There are dozens of them in every house.
Distributed intelligence is getting cheaper and becoming pervasive. As Gordon Moore predicted years and years ago, the microprocessor is moving into every element of our lives.
But it's not just in the PC anymore. You see it in servers and handhelds and other kinds of computing devices, but you also see it in cell phones and in network equipment. Increasingly, the architecture that underlies the computer is spreading across the world.
Sometime this year, the billionth personal computer will be shipped. It will happen around the same time that there will be a billion wireless subscribers active in the world.
On these two sets of devices, people are doing 30 billion instant messages a month today, 20 billion SMS messages per month, 150 billion e-mails a month, probably most of them generated by the people here today. 100 million people today will look at and engage with 1.4 billion Web pages. It's happening. It's pervasive. It's only going to become more and more so.
So what's Intel's role in this? Well, first of all, we are a microprocessor company. We're never going to change that. We will always be fundamentally about microprocessors. But we are more than just a microprocessor company.
Much of our work now and in the last decade has been focused on making the platform better, enabling this intelligent notion of computing anytime and anywhere, in whatever form factor people want.
One thing that you may not realize is that Intel is also the world's leading supplier of Ethernet silicon. We're the third largest communication silicon supplier in the world.
So you think about this core technology and enablement, this core capability in microprocessors, and the fact that we are a very large company in communications, and you can start thinking about the possibilities in terms of where we're going.
Let me put forth a premise. We believe very clearly that all computers will communicate, and all communication devices will have an increasing degree of intelligence or computing inside them. It sounds very simple, it sounds relatively noble, but it's a really hard thing to do.
But when you think about this possibility, you really start seeing the potential.
I wanted to talk today about three trends. The first one is the one that most people would call convergence. Now, this has been the Holy Grail of the industry for 20 years. People have been saying it's around the corner for over two decades.
What's our role in this? You could easily define Intel's mission in four words. We are about "driving convergence through integration" -- moving it, over time, onto the chip, the chip that will allow computing and communications to physically converge and that will allow these models to really happen.
It's not that farfetched. If you go back to the not-too-distant past, there was the dream of convergence. There were computers, which grew up out of the glass houses and mainframes, and there was a communications industry growing up out of the voice business.
In 1979, the Canadian newspaper The Globe and Mail talked about that year, 1979, being the year of convergence. It's the quote on the lower left-hand side. (Quote: "This is the year that converging communication and computer technologies started a cultural revolution.")
In 1992, John Young, who was then running HP, talked about 1992 being the year that convergence would start, and it would be done within a decade. That is this year. We're not there. We're far from there.
We started seeing the capability, the potential as we started connecting computers together, first over phones and then over wired networks. Ethernet started making it happen.
Today we are in a hybrid state. The Internet is here. There is some movement to broadband, though not as much as we'd like. But you start seeing, in this context, some devices emerging that are convergence devices: PDAs, cell phones, SMS, instant messaging, Napster. Napster is, in many ways, a very rebellious manifestation of convergence. Personal video recorders give us the ability to time-shift our entertainment.
Even in the telco industry, you're seeing the emergence of standard appliances that are essentially Intel-based computers running computing stacks and communication stacks on top of them. It's happening at the consumer level and at the infrastructure level.
So where are we going tomorrow? I think we are not that far from being able to collectively deliver the products that will give us this always connected, always aware set of devices, PCs, phones, PDAs. Much of this was consistent with what Bill Gates showed on his slides.
Over the next year to year and a half, some of these devices are going to get factors-of-10 improvements in the compute capabilities being brought to them, through work that Intel is doing and work that others in the industry are doing. The amount of intelligence built into basic communication devices will be fundamentally different over the next year to year and a half.
As these devices become more pervasive and the opportunities are in front of you as developers, Intel wants to present a value proposition to you that we think is unique in the industry. That value proposition is this: we will develop tailored silicon devices that go inside all of these different classes of machines and that are optimized for those machines in terms of intelligence and communication capability, but we want to present to you a unified development environment. That is, to steal a phrase from our friends in the Java community, you can develop once on an Intel device, principally a PC, take that application and seamlessly port it over to other Intel-based devices, not necessarily Intel architecture but XScale and other communication and computing devices that we build, and leverage your development in a very cost-efficient and time-efficient fashion. Write it once, run it on many devices.
This allows a much more natural repurposing of data. No one wants to develop multiple databases and database types for different devices. You want to have that one database and have it accessed by servers, by workstations, by personal computers, by cell phones and by PDAs.
Being able to do this at the application layer conserves effort from a development standpoint, and we are developing tools and compilers, cross-architecture compilers, to help you in this work.
Now, this environment is not all that far off. In fact, one could argue that you'll start seeing the manifestations of this early next year.
In the first part of 2003, Intel will introduce a new product that's codenamed Banias. Banias is the first product we've ever designed from the ground up for notebook computers, and it does not in any way compromise performance.
Banias is architected to do a number of things: to give you best-in-class performance, to give you the absolute lowest power, to give you the longest battery life, to give you, at the platform level, very thin and light notebooks to aid portability, and to deliver a seamlessly connected environment through 802.11.
We have had Banias silicon back from our fabs for about a week, and this morning we'd like to do some risk-taking and maybe show you if this product wiggles at all.
Let me ask Elise to tell us a little bit about Banias.
ELISE: Thank you, Paul. As Paul mentioned, Intel is driving the convergence of computing and communications through integration. The Banias processor is Intel's first microprocessor specifically designed to drive this.
What I'd like to show you today is the world's first Banias demo. This integrated solution consists of the Banias processor, the next-generation chipset Odem and 802.11 wireless connectivity.
To illustrate these capabilities, let me tell you a little bit about my life. Supporting these busy executives requires me to travel extensively, and that means lots of time sitting around in airports. After a while, that really loses its charm. But a lot of airports today have already incorporated wireless networks, like here in Seattle.
So imagine, I'm waiting for my next flight to London, and I'll use my Banias laptop and its 802.11 wireless connectivity to connect to Entertainer. This is a broadband pay-per-view entertainment site. It uses Windows Media Player, so it has integrated digital rights management. I can seamlessly stream the latest content from Saturday Night Live or some classic content.
(Video playing and ends.)
ELISE: Well, as you know, sometimes things can change. As I'm enjoying my content here on my laptop, they've announced the cancellation of my flight. That means I'm going to miss my demo tomorrow with Andy Grove. I'd better call my counterpart, Ricky, and see if he can cover for me.
But you know, they're already storming the pay phones, so I'm going to go ahead and use my Banias-based laptop to do a Voice over IP call. Although it seems like a lot of other people are trying that same thing. Maybe I'll have to try again later.
Well, in summary, what we've just shown you today is the very first Banias demo. When it comes out in 2003, it will emphasize the four vectors of mobility: performance, longer battery life, small form factor, and integrated wireless connectivity. Thank you.
PAUL OTELLINI: Thank you, Elise.
(Applause.)
PAUL OTELLINI: Not bad for first silicon that's one week old and up and running with an 802.11 stack on it. 802.11 will be integrated in the overall platform architecture so that we can deliver the promise of seamless, wireless computing as these notebooks come out in the first part of next year. So we're very excited about this product, and we'll tell you more about it in the coming months.
Where are we going? Well, if Banias is a glimpse of tomorrow today, where are we going the day after tomorrow?
It's not that inconceivable to think about taking every device that Intel builds, every microprocessor and every microcontroller, and putting in a corner of it, using Moore's Law over time, a radio. That radio could handle all the protocols that will be around at that point in time. It would give you the ability to seamlessly move between PAN and LAN and WAN in an uninterrupted fashion, where the machine knows where you are and who you are and can maintain identity and security protocols, and it would give you this seamlessness as you walk from your office to the airport to Starbucks or to your home. You would always be connected in a comparable, high-speed environment.
So we are looking at architectures that allow us to put this kind of radio technology in the corner of every die we build. From a die-size standpoint, it's trivial. From a cost standpoint, it's trivial. That's why we call it Radio Free Intel.
The logic part of this is a very, very small element of the radio. It's all that analog stuff that we have to integrate that's difficult and actually takes a fairly large chunk of silicon.
So the way we're getting around that is by doing some very deep exploratory work in MEMS, micromachines, to use these micromachines to handle the analog functionality, integrate it with the digital functionality that we certainly know how to do, and deliver this promise of a radio in every device.
Convergence will continue between now and the time we get to a single chip, but the single-chip environment really is important to making this pervasive and making it true.
Let me shift to the second trend. This is a much more practical one. This has to do with emerging markets versus mature markets.
If you just look at where PCs are sold today and where they're likely to be sold in terms of econometrics of affordability, desirability of computers over the next few years, we estimate that 50 percent of all incremental processors sold in the next five years will be sold in what we would now call emerging markets. That means that by 2006, the market segment share of emerging markets for all computers will be about 40 percent.
This is pretty profound. This means that on one hand we have to start dealing with different cultures much more aggressively, different needs much more aggressively, different price points much more aggressively than we had before. But at the same time we do this for emerging markets, we have to also be cognizant of the fact that in mature markets our customer base is increasingly sophisticated. As people move to their second and third and fourth generations of computers, they want more and more out of those machines. They want them to do more and make it easier for them. So this, I think, presents a dilemma or a dichotomy for us in the industry.
Let me start describing this by giving you a pop quiz first. What was the first country in the world to move from the Pentium(r) III to the Pentium(r) 4? It was China. Surprised me, too.
The surprising thing isn't that China moved fast. The surprising thing is that China now represents the third largest single country market for computers in the world and will likely pass Japan as the second largest single country market for computers within the next 12 months. It's large and it's sophisticated.
The new learning here is that emerging markets buy the latest technologies. There are mature users in emerging markets that we have to service as we go forward.
So I think that what we need to think about in terms of developing machines and software environments is increasingly segmenting our product lines.
Intel began the process of segmentation over five years ago when we realized that one chip could not meet all needs. We started developing chips with lower power for notebooks, chips for handheld devices, chips for servers, and of course the desktop product line.
We do a lot of these today. We're spending quite a bit of money and time on it.
I think that we need to start as an industry thinking about segmenting development activity towards these emerging markets. In one sense, if we were critical of ourselves, trickle-down computing is not going to work anymore. Mature markets and emerging markets are changing in terms of the definition that we would have applied to them a few short years ago. Emerging markets used to be where you sold your old technology.
Now what you need to start thinking about is mature users and first-time or emerging users. We'll have to think about developing machines to meet the needs of each of these people.
First-time buyers want a different set of attributes in their purchases than mature buyers. They want reliability, they want affordability, and they want ease of use, usability.
Sophisticated buyers, though, want more and more capabilities. They want the machine to do more for them and to bring more of their life into that machine. They want it to be increasingly portable so they can take it with them, and they want it to be ubiquitous so that they're always connected.
These two sets of drivers mean that we have to start thinking differently about the kinds of devices that we will design and deliver to the market.
On the other hand, this is a tremendous opportunity. The growth in the industry is in these markets. At the same time, the mature markets will be demanding more and more sophisticated devices, and that also presents an opportunity. So we have to start bringing our thinking around to come to grips with where the world is going.
The third trend is one that I call "Moore plus more" which is a play, obviously, on Moore's Law. And what I wanted to talk about is how we segment this cube, which I'll euphemistically represent as Intel, into the various component parts.
(first layer of cube: "Moore" Performance) On one hand, we will continue to drive to make Moore's Law a reality. And you could think about that simplistically in terms of gigahertz. We are shipping today, in high volume, 2.4 GHz processors, the industry's fastest. We'll pass the two-and-a-half-gigahertz barrier sometime this quarter, and we're on track to pass the 3 GHz barrier later this year.
So we will continue to give you Moore's Law in terms of the base level of performance unblinkingly going forward.
(second layer of cube: More Performance) But what you're going to increasingly see from Intel is that we'll also deliver more performance, with one "o," in different vectors. We will use architecture to improve the performance and the capabilities and the benefits of a lot of the products that we deliver.
You can think about this if you wind the clock back to the 486. The 486 was really a 386 where we integrated cache and floating-point units for the first time. That gave a new level of performance, a new level of capabilities, a baseline for developers to be able to write to and started changing computers. Another example of that in history was the MMX instruction set.
I think that the next architectural change, one that will probably be even more important than what we did with MMX or the 486 integration, is HyperThreading Technology. To give you a very graphical example of what HyperThreading Technology is, let me ask Kirk to take you through it.
KIRK: Thank you, Paul. What I'd first like to do is actually explain what I've got on stage with me today.
I basically have two identical unreleased 3 GHz air-cooled Pentium 4 desktop platforms with me. However, one of them, right here, is actually in this cool new small form factor design that we refer to as Grand Isle.
In addition, the same machine here actually has Intel's HyperThreading Technology on it. So this is going to be the first public demonstration of HyperThreading Technology on the client.
I'm going to start off with a short example here. We're going to use Microsoft's Movie Maker Version 1.2. I'm going to use it to encode a short movie clip, some home footage that I shot not too long ago. I'm going to run that same application on both machines and start them at the same time. We're going to show the non-hyperthreaded machine on this side and the hyperthreaded machine on this side. So be aware of the differences that you're looking at here. So why don't we take a look and get things going.
Now, what HyperThreading Technology actually does is let the operating system utilize the single physical processor in the system as if it were two processors. You can take a look on the screen: if you look at the performance monitor over there, you can clearly see that it's indicating we have two processors in that client. Now, the benefit here is that it gives us a nice performance boost.
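As a rough illustration, in present-day C++ rather than anything shown in the demo, an application can query how many logical processors the operating system exposes; on a single hyperthreaded Pentium 4 the count it reports would typically be two rather than one:

```cpp
// Minimal sketch (hypothetical, not demo code): query the number of logical
// processors the OS reports. With HyperThreading enabled on one physical
// Pentium 4, this value is typically 2.
#include <iostream>
#include <thread>

int main() {
    unsigned int logical = std::thread::hardware_concurrency();
    std::cout << "Logical processors visible to the OS: " << logical << '\n';
    // A return value of 0 means the count could not be determined.
    return 0;
}
```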
Now, our video clip here, if you're looking at the visual performance differences, you might not think that's all that dramatic, but what's happened here is our client with the HyperThreading Technology actually encoded our video file almost 20 percent faster than our non-hyperthreaded machine on this side.
Now, if you extrapolate that out a bit, on a two-hour video encode process that means I can save almost 30 minutes of my time, which is quite substantial if you really think about it.
Okay. So let's move on to a slightly different scenario. Now, I know in my daily work life, more often than not I'm actually running multiple applications at the same time rather than just one, so why don't we take a look at a scenario that involves that. So let me reset things a little here.
Okay. Now, what we're going to do is we're going to start on our non-hyperthreaded machine, and basically we're going to be running two performance intensive applications, Windows Media Player and Windows Media Encoder.
So we're going to start on our non-hyperthreaded enabled machine. We're going to show it on both screens this time. I'm going to start by playing a little bit of a movie clip, basically because I love music and I'm sure a lot of people in the audience actually love music, too. What I don't love is actually sitting in front of my PC watching a progress status bar as I'm encoding some audio files to add to my digital music library. I can think of a lot of other things I'd rather be doing.
So again, I'm going to try to watch a movie while I'm actually doing some audio encoding. We're going to see what happens on our non-hyperthreaded 3 GHz machine.
So let's get the movie playing, and then I'll start my audio encode process at the same time. There we go.
Now, what you're seeing on both screens is basically how our non-hyperthreaded 3 GHz machine is handling this processor load.
If you look at the performance monitor, we're essentially bringing this 3 GHz machine to its knees, which is actually quite substantial.
Look at the video quality. It's jerky, it's unwatchable.
So let's stop this, stop that, and let's move on to our hyperthreaded machine in our small form factor here, and we're going to show you that it's quite a different situation, as I'm sure you'll see.
So again, we're going to do the same scenario. I'm going to play the same high-quality movie clip, and I'm going to do the same audio encode, the same audio file, at the same time. This time, however, I'm going to let the video file play a little bit longer, because what I want you in the audience to do is focus on the performance monitor and keep an eye on what's going on there, as well as on the playback quality of the video, as we go along. So let's get that going.
(Demonstrating.)
KIRK: So what we saw there on our hyperthreaded machine, which we were projecting on both screens, is that our movie quality and playback was quite watchable; it was quite pleasant to view, smooth and nice. If you were watching the performance monitor like I was indicating, it also showed that we had some extra headroom available to possibly run another application or two.
So that's quite a substantial performance difference between two essentially identical 3 GHz Pentium 4 platforms, aside from the fact that one of them had Intel's HyperThreading Technology.
So what we've shown, basically, are two clear examples of the benefit of HyperThreading Technology on the client.
PAUL OTELLINI: Appreciate it.
(Applause.)
PAUL OTELLINI: I just wanted to recap what you really saw here: two obviously unreleased 3 GHz machines, both running air-cooled Pentium 4 processors at 3 GHz, one with HyperThreading Technology and one without.
What you saw in a relatively simple application, the first video encoding example, was about a 20 percent performance improvement. And that's significant. It's roughly two to three bin splits, so in the current parlance, roughly 500 more megahertz of capability being brought to the machine.
But more important, when you do a complex task like the second task, you couldn't even do that well at 3 GHz. So we had to be able to use the technology, HyperThreading Technology, to be able to make that happen.
In a very simple sense, what HyperThreading Technology is all about is giving you, as customers, two processors in one, using the processor resources and the system resources to get more performance out of a given clock cycle.
We first demonstrated this technology last fall at the Intel Developer Forum. We are now in production with this technology in Xeon(tm) for our server-based product line, and we'll be moving this to desktops and workstations in 2003.
The call to action for the software developers here is that we have a very large software enabling effort going on right now, not just for server applications but, increasingly, for client applications, to take advantage of this technology. We have developer tool kits for HyperThreading Technology, we have compilers that are tuned for a threading environment, and we can make available to you developer hardware kits with HyperThreading-enabled processors so that you can do your debugging and make sure you get the stuff right for when we introduce it.
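To make the threading point concrete, here is a minimal sketch in modern C++, not code from Intel's kits, with an invented stand-in workload: a CPU-bound pass over a buffer is split across two threads so the operating system can schedule them onto the two logical processors a HyperThreading-enabled part exposes.

```cpp
// Hedged sketch: split an encode-like, CPU-bound loop across two threads.
// On a hyperthreaded processor the OS sees two logical CPUs and can run the
// threads concurrently, sharing one physical core's execution resources.
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

// Stand-in for real per-sample work such as audio or video encoding.
static double process_chunk(const std::vector<double>& samples,
                            std::size_t begin, std::size_t end) {
    double acc = 0.0;
    for (std::size_t i = begin; i < end; ++i) {
        acc += samples[i] * samples[i];
    }
    return acc;
}

int main() {
    std::vector<double> samples(10000000, 0.5);
    const std::size_t mid = samples.size() / 2;
    double first = 0.0, second = 0.0;

    // Two worker threads, one per half of the buffer.
    std::thread t1([&] { first = process_chunk(samples, 0, mid); });
    std::thread t2([&] { second = process_chunk(samples, mid, samples.size()); });
    t1.join();
    t2.join();

    std::cout << "Result: " << (first + second) << '\n';
    return 0;
}
```

The same pattern, decomposing one job into independent threads, is what lets a single application see the kind of encode-time gains shown in the demo.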
So we'd like to encourage you to get to our developer resources sites and really start focusing on this next very, very exciting technology.
(third layer: Safer computing) The third thing I'd like to talk about in the cube is one that Bill Gates touched on as well, which is safer computing. I'd like to talk about it in the context of creating a safer computing environment, because that's what we need to deliver to meet the needs of the people in the "Man on the Street" video in terms of what they were looking for in security and for protecting their assets.
This is something that I think is important and can be addressed in part on the software side, but software is necessary, not sufficient. We have to have a platform-level hardware implementation to create that kind of environment: certainly for e-Business or for e-Commerce over the Web, but also to protect your personal assets, the photos or tax information that may be on your hard drive.
So Intel is currently driving a number of hardware capabilities into our future products. We have three generations of microprocessors for PCs under development, and in each of those development teams we have very large efforts putting capabilities for secure computing into the microprocessors and into the chipsets, across multiple generations of chipsets, so that we can build this kind of safety and security at the platform level and deliver it.
Now, we recognize that this issue is one which is absolutely emotionally and, to some extent, politically charged, and we are committed to working with the industry, with the experts in the industry, with our partners like Microsoft, with the government, with the privacy advocates to ensure this is done in a very measured and very responsible manner. And stay tuned. You'll see more from us in the coming months in terms of where this is going.
(fourth and last layer: Platform enhancements) The last layer of the cube here is platform enhancements. This is some of the stuff that you really don't see. You certainly see the megahertz, you see some of the features like HyperThreading Technology, but when we work to enable things like a better graphics engine for integrated graphics chips, that is, to some extent, below the radar screen for most developers.
We're also working on power optimizations and on Web services optimizations to make sure that as Web service environments like dot-net are deployed, dot-net runs best on Intel and Intel processors are optimized for a dot-net environment. It's a two-way engagement to make sure we have the best capabilities delivered. And there's lots of work in terms of I/O and further advancements in USB, things that are invisible but necessary for the platform to advance.
How does it manifest itself? Let me start with handhelds. Today, Intel is a large supplier of flash memory to cell phones and a microprocessor supplier to many of the PDAs shipping in the Windows CE environment.
We do that today with multiple chips. What you'll see from us in 2003 is the first manifestation of the wireless Internet on a chip. We will deliver a single piece of silicon that integrates the cellular functionality for 2.5G, or GPRS, the logic to run environments like Windows CE dot-net, and all of the memory required to embed those features into a single chip.
So we'll be able to reduce the cost of these devices, improve the performance, and again deliver to you a common development environment: develop once for Intel, run on many devices.
In the notebook arena, we talked about Banias. What we're delivering right now is Pentium 4, which was launched last month into the notebook arena. That product will ramp over the next few quarters. We'll take multiple SKUs of it and make it a top-to-bottom offering across all of our notebook products.
But our development work is focused on products like Banias and their successors where we're integrating the things that users have told us are important in terms of longer battery life, lighter and more portable machines, the seamless connectivity and, of course, uncompromised performance.
And rather than tell you more about Banias, I thought I would treat you today to a different kind of video, one that shows what it's like to work on one of these projects right before it's released.
This is a video from our design site that developed Banias, and they'll speak to the product themselves. Can we roll that video, please.
(Video playing and ends.)
PAUL OTELLINI: I thought you might enjoy seeing what it's like to be a design engineer at Intel. That project I think is very interesting. It's not just another microprocessor. There was a tremendous amount of invention in terms of circuit design, new kinds of circuit techniques, new kinds of architectures to be able to deliver this product. And I think you can tell, we're very excited about it and can't wait till we introduce it.
Let me shift to the desktop. I won't spend a lot of time on this in the interest of time, but just to point out a couple of things.
In 2003, we will continue to advance the platform. We are delivering faster versions of Pentium 4, and in the second half of '03 we will introduce the next-generation microprocessor from Intel, which will combine not just a next-generation architecture but also next-generation silicon technology. It will be the first product we introduce, probably the industry's first product, running on 90 nanometer technology in very high volume, bringing a lot of the new capabilities we've described over the course of today, plus some platform-level capabilities as well.
In the enterprise, this is an area where we have focused for the last six or eight years very heavily, putting a great amount of R&D into both the IA-32 product line and the Itanium family, moving it from the desktop environment through segmentation up into servers.
Today, Intel architecture has done very well in servers. About 89 percent of all servers in the world, day in and day out, are running on our chips. And this is a market we like because there really is no compromise in terms of performance.
I wanted to talk a little bit about the sweet spot of the market here in terms of performance. I'm going to show you a number of statistics on four-way OLTP machines.
Today in the marketplace you can buy classes of machines from various vendors. On the left-hand side of this chart I have the existing Intel Itanium(r) processor shipping at 800 MHz, then Sun's top-of-the-line 4P UltraSPARC 3 at 900 MHz, and then our most recent shipping Xeon processor at 1.6 GHz. And you can see that the fastest 4P machine out there today for four-way OLTP is the Xeon processor.
In the second half of this year, though, you'll see new machines from us and from our friends at Sun. Sun will likely introduce a newer version of the UltraSPARC 3 running at 1050 MHz, and we expect that will take their performance up about 15 percent above where they are today. So they're getting the clock speed improvements and some architectural improvements.
But we'll introduce McKinley. McKinley will go into production around mid-year, and you can see that McKinley, while it's on the same silicon process as the first Itanium, has approximately a 70 percent performance jump over Itanium, which puts it above our Xeon product line and well above Sun's.
So we're very excited about McKinley.
In terms of utilizing silicon technology with this architecture, though, we're very excited about a new product that is codenamed Madison. Madison is the next generation of Itanium beyond McKinley. With Madison we're using not only microarchitectural advancements but also the silicon budget we get from moving from 0.18-micron to 0.13-micron silicon technology.
Madison will have a six-megabyte on-die cache, and it will have about half a billion transistors, up from approximately 220 million transistors in McKinley today.
To show you this is not a figment of our imagination, this is one of the first Madison wafers off the line. We've had this silicon about a week. If any of you care to come up afterwards and count all half a billion transistors, I challenge you to do so. But this is real, and this will take that performance curve up another very, very large notch. And we're very excited about this.
To round out the cube, I really wanted to talk a little bit about the depth of Intel. You see us in terms of our products, in terms of our architectures, in terms of our advertising campaigns.
There's an unseen part of Intel that represents most of the people at Intel and most of the money we spend year in and year out. And there are really three aspects to that. One would be validation, making sure that the products we build work the way they're supposed to work.
The notion of validating for compatibility in an increasingly diverse compute environment is a really, really hard one. We have over 3,000 engineers and spend hundreds of millions of dollars devoted solely to validation of our processors and our chipsets.
We also have a large number of people working on reliability and safety, making sure that these products work over their expected lifetime, so that there are no electromigration or degradation problems in the silicon, and that these products are reliable over time.
Then the thing that you do sort of see is our manufacturing scale. Between last year and this year, Intel invested $13 billion in factory capacity, by far the largest investment in the industry and by far the largest investment we've ever made. And it's focused on bringing these factories on line to deliver these leading-edge products in very high volume.
The combination of all these things in terms of this cube really is the engine behind this notion of convergence that I talked about at the beginning of the speech.
I'd like to just summarize today by taking us back a little bit and reflecting on the last couple of years. These have been, by far, the most difficult two years in the industry's history. I've worked in this industry for 28 years, and I don't remember any time as tough as the last two.
But throughout this, we saw companies come and go and we saw the dot-com era morph in front of us, and through all of it, Intel continued to invest: $4 billion a year in R&D, $13 billion over two years in capital.
Our commitment to you, our development partners, is unwavering. We will deliver to you the best-in-class performance in all of our products across the board, bar none. We are going to spend hundreds of millions of dollars per year solely on improving the platform at the initiative level and the product level.
We are focused on expanding the market for all of our products by working in emerging markets to create market demand, to create an environment where people want to buy computers, where people are cognizant about what computers can do for them and to grow the overall market.
Lastly, we are committed to accelerating this notion of convergence by continuing to work on integrating products at the platform level and ultimately at the chip level.
So thank you very much. Enjoy the rest of the conference.
* Other names and brands may be claimed as the property of others.