Keynote Transcript


INTEL DEVELOPER FORUM, FALL 2003

Paul Otellini

President, Chief Operating Officer, Intel Corporation
San Jose, Calif.
Sept. 16, 2003

PAUL OTELLINI: Good morning, and thank you. Let me add my welcome to IDF. As Pat said, we have an exciting week planned for you. But it's a week full of hard work.

I think that video was great testimony to the work that we've all been doing the last few years. What it showed and what I'm going to talk about, at least for the first part of my talk this morning, is convergence going mainstream.

I want to develop that theme in three parts this morning. The first is the collection of tipping-point events that have happened that are driving this notion of convergence out of the laboratories and into people's lives in ways none of us really imagined.

The second part is to talk about a collection of technologies that will be augmented to the traditional Intel focus of performance of more gigahertz to bring even more exciting capabilities to the world of convergence.

Then the third thing, and perhaps most important to all of you who come to this forum, is what are the opportunities around it? What kinds of products, what kind of business opportunities can we collectively embrace to take advantage of this fundamental tipping point in technology? Why is it happening and why is it happening now?

Seven years ago, Andy Grove first postulated the notion of a billion connected PCs and what we could do in a pervasive, global Internet environment. Four years ago, Craig Barrett took that vision and added to it another notion, that not only would we have a billion connected PCs, but we'd also have a billion connected handsets.

How are we doing on that? At Intel, we measure everything. We're doing quite well. The billion-unit mark is the blue area under the curve. So phones are essentially there today. PCs are just about there.

The vision of a billion connected PCs and a billion connected handsets is happening.

So that allows us to look forward a little bit and say, "Well, where is it going next?" By 2010, those same trends will give us a billion and a half connected PCs, except this time, it's broadband.

Those same trends will give us two and a half billion connected handsets, except in this time frame, each of those handsets will have the performance capability of the fastest personal computer built today on Intel(r) Pentium(r) 4 technology.

This is revolutionary. It's not just the products that are changing; it is the nature of the business changing physically in front of us.

Today, China is the number one market in the world for telephone lines, land lines. It's the number one market in the world for mobile phones. It's the number one market in the world for cable television. It's the number four market in terms of installed base for computers.

By 2010, there's no question in my mind that China will be the number-one market segment for computers as well. This changes the notion of how all of us design our products, in terms of which market segments we have to serve and what our go-to-market capabilities are over time.

It doesn't mean that the mature market segments that we all serve go away. But it means they become less important in terms of the overall market segment.

The North American market segment will go from about half of the installed base of all computers today to about a quarter. The growth will be in Asia and the rest of the world; the Americas and Europe will shrink as a percentage, although the overall numbers will continue to grow.

We need to think about this in terms of where our products are being designed and the income levels that people have in these emerging market segments for affordability of our new products.

A couple of years ago, we started talking about this notion of convergence, that all computers would communicate, all communication devices would compute.

I believe it was and it is still the right strategy for Intel and for many of us in our industry.

In so many ways, you can't fight Moore's Law. That fundamental building block of technology that allows us to integrate and innovate generation after generation after generation in this nice cycle brings the convergence of communication and computing together in single pieces of silicon. I believe Intel is a strong contender to take advantage of that.

This year, 2003, is a very important one for Intel, because it's the year where we ship our first manifestations, our first products that reflect convergence in the entirety of their design.

The first of those was Manitoba, which we announced earlier this year. It will go into production later this year, in the fourth quarter. It is our first single-chip Wireless-Internet-on-a-Chip. This device will power many of the high-end smart phones going into next year, and it builds upon Intel's base as already the number-one supplier of applications-processor silicon for the handset and PDA market segments. The chart here shows how wireless has really taken off in terms of access devices and hot spots.

It's very interesting in terms of the scale here. This is external data, and it shows public hot spots are expected to grow from 50,000 to 80,000, while public access points are growing from 10 million to 20 million. There's a two-order-of-magnitude discrepancy between these two data points.

What's really happening? What's happening here is what you've seen on the bottom of the screen for the last few minutes, which are two real clocks that are counting, in real time, our estimate of the deployment of these kinds of devices.

There are 12.7 million wireless access points in the installed base, and they're deploying at one every four seconds.

There are over 43 million network interface cards or components out there in various devices, mostly PCs, but increasingly in PDAs and handsets. And they're deploying at four times that rate, one every second. Over 40 million today.
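As a quick sanity check on those deployment clocks, the quoted rates can be converted into annual figures with ordinary arithmetic. This is just an illustrative back-of-the-envelope calculation, not data from the talk:

```python
# Convert the quoted deployment rates ("one every four seconds",
# "one every second") into units per year.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

aps_per_year = SECONDS_PER_YEAR / 4   # wireless access points
nics_per_year = SECONDS_PER_YEAR / 1  # network interface cards

print(round(aps_per_year / 1e6, 1))   # → 7.9  (million access points per year)
print(round(nics_per_year / 1e6, 1))  # → 31.5 (million NICs per year)
```

At those rates, the installed base of access points roughly doubles every two years, which is consistent with the "viral growth" described.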

This is nothing short of viral growth and this is why Intel gets so excited about wireless technology in terms of what it can do to really change the world of computing and provide growth opportunities for us all.

So what is our wireless direction? It's really in three areas. In the wireless LAN arena, we're shipping 802.11b silicon today. We'll ship a/b this month. We'll ship g before year end, and we'll ship tri-band in the first half of next year.

In addition to that we are supporting and working on addressing the network capabilities to improve security in this environment. We also support Cisco extensions for security, and we will support 802.11i as that comes out.

In addition, though, we are designing and developing low power versions of 802.11 to be able to reduce the overall power in things like notebooks, but also move this capability increasingly into handheld devices like that prototype Universal Communicator I showed a few minutes ago.

The second area, and this is another terribly exciting area, is the wireless MAN, the wireless metro-area network. This is an area embraced by a technology called 802.16, or WiMax. We will be in production of WiMax silicon in 2004. It solves the last-mile problem, particularly for rural areas. With one WiMax antenna and a 30-kilometer range, you can very easily, very cheaply get broadband deployed to homes in rural areas. We're very excited about this, not just in emerging market segments, but also in the rural areas of mature market segments like the United States or Europe.

The third area that we are working is the wireless WAN, the wide-area network. Here our focus is to expand our technologies to support the wireless-Internet-on-a-chip construct that I've described with Manitoba, to GSM, to EDGE next year, and ultimately to the WCDMA standard that we believe will be very, very pervasive on a worldwide basis.

There is a lot of debate in the industry about standards battles. Specifically, will the existence of Wi-Fi collide with GSM and its successor technologies? I don't think so. In fact, it's not a standards battle; it's a co-existence requirement.

End users require us to develop devices that make them independent of the network around them; to sniff out what network is there, what bandwidth is there, what structure is there, and to choose that network based upon the user's requirements.

This is the thinking that went behind Pat Gelsinger's articulation of Radio Free Intel a year ago in terms of our long-term direction to put wireless software-driven radios, multi-protocol radios onto every chip we build.

So if I look back at the notion of convergence and the tipping point, I think we're sort of there. We've built wonderful products together the last few years. End users are adopting them in ways that none of us really thought about. It's an entirely new market, particularly for software developers, as you approach this notion of connected mobility. And for us, of course, it's an opportunity to extend our architectures and products into new growth areas.

So let me shift to that, the heart of the talk.

You've seen Intel show new products on this stage every year. The new products, for many generations, have typically been a faster and faster clock speed and a new version of a microprocessor. We'll show you some of that over the next few days.

But what I will show you are some exciting new technologies that we call the T's inside, standing for technologies: a collection of technologies that we are embedding into our microprocessor and platform silicon to change the way those platforms are used, or to significantly increase the capabilities of those platforms. Today I want to talk about four industry-shaping technologies, some of which you've heard about before, some of which will be new to you.

If you think about convergence and the interconnection of convergence and these technologies, you start getting some idea of what we're thinking.

Hyper-Threading Technology is a technology that brings parallelism in terms of performance inside of a given microprocessor. We first talked about this two years ago at IDF.

Convergence by nature is a multi-task, multi-threaded environment. The answer to that is Hyper-Threading technology.

Convergence also needs seamless communications. Our first attempt to address that is Intel Centrino(tm) Mobile Technology where we integrate a number of new technologies, including seamless wireless connectivity into the platform.

The third one, which is new, has to do with trustworthy computing. I believe for convergence to really accelerate, these platforms need to be incredibly secure. Our answer to that is LaGrande Technology.

Lastly, as we look at the applications environment, the multitude of uses of these platforms over time, we believe convergence needs an additional capability associated with robustness. The technology we've developed for that is code-named VT, or Vanderpool Technology. We believe it will enable this, and we'll show it to you in a couple of minutes.

If I take you back two years to the IDF speech I gave in fall of 2001, the theme of that speech was "Moving Beyond Gigahertz." I said that we collectively had to change our investment pattern to not just focus on adding the incremental megahertz to our products but increasingly add end-user features that end customers wanted.

I talked about a number of key enabling technologies that we thought, two years ago, would start to change the world. Specifically, they were Banias, the microprocessor code name that became Intel Centrino Mobile Technology. Hyper-Threading technology, which is now embedded in our Pentium(r) 4 processor product line. And Itanium.

I want to give you a bit of an update on progress of all three of those in the course of this talk this morning.

The first one is Hyper-Threading technology. We have now moved Hyper-Threading technology into essentially all of our IA-32, our 32 bit architecture for servers in the Intel(r) Xeon processor product line. You can see the ramp on the top of that chart.

What's more interesting, though, is how fast we've been able to move Hyper-Threading technology across the performance desktop product line. From announcing this technology and demo'ing it the first time two years ago, we began shipping it in November 2002. We will exit this year with over 50 percent of our performance desktop products incorporating Hyper-Threading technology, and we expect it to be pervasive by the end of next year. A very, very fast diffusion rate for this new technology.

Essentially in both our server market segments and in our core PC markets, we are driving performance increases through parallelism over time, and that is a big message to the software community.

It will be increasingly important as you look at our road maps over the next few years. In the enterprise product line, as I said, we've had threading and multi-threading orientations in our products and our chipsets for many, many years.

We are adding dual core and multi-core capabilities, and I'll talk a little bit more about these when I do a road map later on, into our server product lines.

But the exciting news today is we are adding the same approach, things that we have derived from server technology, that we've understood in server space, and driving it down to PCs, both notebooks and desktops, over time.

We will go from putting Hyper-Threading technology in our products to bringing dual core capability in our mainstream client microprocessors over time.

For the software developers out there, you need to assume that threading is pervasive. We'll talk about the applications environment around that in just a few minutes.

Threading has been critical in server space for some time. Most server applications and most server operating systems are threaded already.

But you have to ask is it relevant in clients? Well, there are over 100 client applications that are already threaded. In fact, the mainstream client operating system, Windows*, is a threaded operating system for clients.

These hundred client apps include things like Adobe Premiere, Ulead PhotoImpact, and the game Unreal 2.

But if you look at a broader class of applications, it's very interesting. We took a thousand of the most popular applications and tested them for their propensity to be enhanced either through multi-tasking or through multi-threading and that's what those two pie charts show behind me.

Over 90 percent of the applications, as you would imagine, will benefit from multi-tasking, doing more things at once. How could you argue with that?

What we found interesting when we analyzed those applications is that over 90 percent of the applications will have a medium to high benefit from being threaded on the client. Huge swing in perception, and I think as we deploy our products, this becomes a very significant opportunity for those of you doing software development.
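To make the threading opportunity concrete, here is a minimal sketch, not from any Intel toolkit, of the pattern a client application would use: independent units of work (the per-frame filter here is invented purely for illustration) are handed to a thread pool, so a Hyper-Threading or multi-core processor can overlap them.

```python
# Illustrative sketch: splitting independent per-frame work across a
# thread pool. The "filter" is a stand-in for real media processing.
from concurrent.futures import ThreadPoolExecutor

def apply_filter(frame):
    # Stand-in per-frame work: brighten each pixel, clamped at 255.
    return [min(255, pixel + 10) for pixel in frame]

def process_frames(frames, workers=2):
    # Each frame is independent, so frames can be filtered in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(apply_filter, frames))

frames = [[0, 100, 250], [5, 200, 255]]
print(process_frames(frames))  # → [[10, 110, 255], [15, 210, 255]]
```

The design point is the one in the talk: as long as the work items don't share mutable state, the same code scales from one logical processor to many without change.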

The second "T" that I mentioned is Intel Centrino Mobile Technology, and I won't spend a lot of time on this except to remind you that this was the first time that we did a bottom-up design at the entire platform level for a notebook. And we re-thought the notebook in terms of what end users wanted. How revolutionary.

We focused on the things we thought mattered to them, and that tested as important to them: performance, low power for battery life, sleekness of form factor, and seamless connectivity. We built all of those into the product.

The next "T" that's coming down the line is LaGrande Technology. LaGrande is focused on bringing enhanced security to the platform, hardware level security to the computer platforms that we all use.

This is a series of enhancements in processors, chipsets and platforms to work with software to protect against software-based attacks. This technology will be broadly deployable. It will be a protected computing environment. It will be compatible with all of the existing applications and operating systems. It doesn't require a rethinking or rearchitecture of all of those.

We believe what it will do is enhance existing security measures while enabling a number of new protections.

[Demo begins and ends.]

PAUL OTELLINI: So what did we just show you with LaGrande Technology? We showed you that it's general-purpose silicon and it can accommodate an entire range of environments and operating systems. It is engineered to protect you from the main known causes of intrusion and incursion that tend to result in either theft or disturbance in systems.

Second of all, as we go into this area, we are very cognizant as a company in terms of the issues surrounding privacy and user choice. And it's our intention as we bring this technology to market to have a number of products, some of which have the technology enabled, and some of which have the technology disabled. So end users, consumers can choose up-front what kind of product, what kind of technology they want.

Even if they purchase a computer that has the technology enabled, we intend to support user choice by having the ability for users to opt in or opt out of a LaGrande environment in terms of the security of their systems. We believe this addresses both their need for security and their need for privacy going forward.

Intel is working with Microsoft and a number of other industry leaders to bring this technology to market in computers in the next two to three years.

The second new technology is VT, or Vanderpool Technology. What VT does is focus on enhancing the end-user experience through partitioning inside the computer, inside a given chip, if you will, beyond software alone.

Now, today, you can get partitioning, virtualization in the software environment in the high-end servers. You can get it on PCs and workstations in a software-only mode.

What we intend to do is bring this technology to hardware-based virtualization through VT technology.

In one sense, this improves the performance of virtualization, because we have hardware assist doing much of the overhead work that the software is doing today. But I believe, more importantly, it improves the robustness and the configuration flexibility of computers. It allows users to use computers a lot differently and a lot more reliably than they're used today.

And we'll bring this technology to platforms within the next five years.

But I think we can probably show it better by demonstrating it to you today. So what I have here is a PC with a single microprocessor inside it that is running an early version of VT or Vanderpool Technology.

[Demo begins and ends.]

PAUL OTELLINI: This technology has a number of very exciting applications. You can think about it being used in a consumer space, where you run a protected environment inside that machine for, say, gaming applications or appliance applications like a PVR, personal video recorder. But you can also think about it as being a place where you can use to migrate systems or an easy way to migrate operating systems. So if you have a legacy environment on a version of Windows* and applications that you want to keep in that environment, you can run that in one partition while running a new version in a different partition with different applications.

Or you can construct a scenario where you keep your personal finances and so forth hidden away from your children by parking that set of applications in a different partition.

From a business user perspective, I think it's even more exciting. IT managers can put a separate stack into one of the partitions while they have the end user playground and applications in another. It gives them a wonderful tool for operating system migration and secure environments for migrating applications over time. So it's a very exciting technology, and we are working feverishly to bring it to market.
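Vanderpool itself is hardware, so it can't be shown in a few lines of code, but the isolation idea behind these scenarios can be modeled with a conceptual toy. Everything here, the `Partition` class and its names, is invented for illustration: a fault in one partition is contained and cannot touch another partition's state.

```python
# Conceptual toy model of partition isolation (not Vanderpool itself):
# each partition owns its state, and a fault in one leaves others running.
class Partition:
    def __init__(self, name):
        self.name = name
        self.state = {}
        self.alive = True

    def run(self, task):
        # A fault marks only this partition dead; other partitions'
        # state is unreachable from here and stays intact.
        if not self.alive:
            return None
        try:
            return task(self.state)
        except Exception:
            self.alive = False
            return None

legacy = Partition("legacy-os")
gaming = Partition("gaming")
legacy.run(lambda s: s.update(docs=3))
gaming.run(lambda s: 1 / 0)          # fault inside the gaming partition
print(legacy.alive, legacy.state)    # → True {'docs': 3}
print(gaming.alive)                  # → False
```

The hardware-assisted version does this at the level of whole operating systems rather than Python objects, which is what makes the migration and sandboxing scenarios above practical.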

But what might be beyond that?

As we look out at the transistor budget that Moore's Law allows us, we start thinking about what else we can do with those incremental transistors on microprocessors over time. And one of the more exciting areas we see is media and graphics.

If you just look around you, there's an explosion of graphics and media coming off of a number of different feeds that, in terms of computing, really started with the CD-ROM. It's not just movies and rich content; it's also things like this presentation. This morning's presentation, by the way, was north of 400 megabytes, probably the largest PowerPoint presentation we've ever done. So you can get rich media into a number of different environments nowadays, at work and at home.

If you think about what Intel has done in media, we really have been focused on using that processing power to enhance the media capabilities over generations of products, beginning with MMX, then SSE and SSE2. This has become an integral part of our design and of your software applications work, and it perpetuates our history of innovation and then integration over time.

I wanted to show you a new product in this area that we announced this past week. This product is called the MXP5800. This is a developer board with the first silicon of that product on it.

And what this product is, is a first piece of silicon in a family of media processors that we'll be developing over time. Embedded in this chip are a number of programmable microengines. The first implementation for this is being done with Xerox for their digital imaging applications. But you can imagine that you can use this to do any kind of media processing in a programmable fashion.

What we have asked the engineering team that built this product to do today was to stretch it a bit. And to stretch it, they put this card into a standard PC. They've got three A-to-D converters and three separate high-capacity digital video feeds coming in here, one of which will be live, which you'll see from the audience, and two of which are off of media.

We'll see what this thing can do here. It happens to be streaming Pat Gelsinger right now. It runs for about 50 seconds, and it will show you a number of applications. Flipping video. Alpha blending, here with "Intel Inside." You can see chroma key here, with the IDF logo being put on top of the video. And this is something called edge detection.

And then good old picture-in-picture. In this case we'll show you one, two, and three pictures in picture. We'll move them to the corners, and then we'll fade out to a "T" for technology.
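Two of the effects in that demo, alpha blending and chroma key, are simple per-pixel math. The MXP5800 runs this on programmable hardware microengines; the sketch below is just the underlying arithmetic in plain Python over (R, G, B) tuples, with illustrative values.

```python
# Per-pixel math behind two of the demonstrated effects.

def alpha_blend(fg, bg, alpha):
    # out = alpha * foreground + (1 - alpha) * background, per channel
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))

def chroma_key(pixel, overlay, key=(0, 255, 0)):
    # Replace pixels matching the key color (e.g. green screen) with overlay.
    return overlay if pixel == key else pixel

print(alpha_blend((255, 0, 0), (0, 0, 255), 0.5))  # → (128, 0, 128)
print(chroma_key((0, 255, 0), (10, 20, 30)))       # → (10, 20, 30)
```

A media processor's advantage is doing exactly this arithmetic on millions of pixels per frame, in parallel, without burdening the host CPU.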

This is very interesting technology, and you can see what we can do with this programmable media processor engine. But you can also imagine what we can do as we think about using these kinds of microengines inside of our chipsets and inside of our microprocessors over time. They really can be used to enhance the graphics and media capability of computers.

I wanted to talk a minute about our ability to diffuse or to bring new technology into the marketplace, because it's relevant to the technologies I just described.

The chart behind me talks about a number of technologies that we first introduced in our chipsets, chipset-based platforms, and then ramped into the market segments. What you see in virtually all of these curves for PCI, USB, AC'97, LAN, and USB2, is a hockey stick curve. Very fast absorption into the marketplace and moving to a pervasive position across all of our platforms.

We've done this now for a number of technologies in chipsets, getting these technologies to 80 percent plus of the marketplace in a not-so-long amount of time.

So what's next? The technologies I described to you, like Hyper-Threading technology, like integrating Wi-Fi into all of our clients, like LaGrande Technology, Vanderpool Technology, and ultimately other things we may do in media and graphics, will have that same diffusion curve, because we'll use the power and capacity of our microprocessor machine to make these technologies pervasive rapidly, essentially bringing new technologies to market in a square-wave fashion.

Now, to do that, we need one more technology. And this technology is the core of Intel. This is what we are all about at the end of the day, and it's silicon technology.

This is where we are, I believe, the leader in the world in silicon technology. It's what we do, in hundreds of millions of units per year, and it allows us to build microprocessors today, the largest of which has half a billion transistors on it.

But building high performance silicon has its challenges. What I've just picked here are a couple of metrics to show you what that entails.

For the non-chemistry majors out there, this is the periodic table and what we've done is colored in the elements of the periodic table that we use to build silicon in the '80s, in the '90s, and now in this decade.

For every generation of silicon, for every type of processing requirement we have, we end up having to do different technologies, embrace different technologies. And the combination of these elements becomes increasingly complex.

But what is the benefit? The benefit is that it has allowed us to accelerate feature-size scaling. This chart is a 50-year trend, 1970 projected out through 2020, of silicon scaling from Intel.

What you see is that for the first 25 to 30 years, we were able to drive feature size, and with it silicon density, down by 0.7x every three years.

In the last decade, we've accelerated that. We've been able to produce a new generation of silicon on a pace that is much more like every two years, and that's allowed us to drive that same level of density improvement, more transistors, lower power, higher capability into every chip we do, 0.7X, every two years.
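The arithmetic behind that 0.7x figure is worth spelling out: a 0.7x linear shrink cuts area by roughly half, which is where the density doubling per generation comes from. This small sketch (the function name is ours, not Intel's) checks the trend against the actual process nodes mentioned in the talk.

```python
# Back-of-the-envelope check of the scaling trend: linear feature size
# shrinks ~0.7x per generation, so transistor density improves ~2x.

def project_feature_size(start_nm, generations, shrink=0.7):
    return start_nm * shrink ** generations

# 130 nm -> 90 nm -> 65 nm roughly matches 0.7x per step:
print(round(project_feature_size(130, 1)))  # → 91
print(round(project_feature_size(130, 2)))  # → 64

density_gain = 1 / 0.7 ** 2  # area scales with the square of feature size
print(round(density_gain, 2))               # → 2.04
```

So moving from a three-year cadence to a two-year cadence, as described, compounds into substantially more transistors per chip per decade.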

Where are we on this path? We're about to go into production on 90 nanometer, 300 millimeter-based silicon technology. We were the first to market and led the industry at 130 nanometers. We will lead the industry, and are leading the industry, at 90 nanometers. This is technology that's going into production, as I said, as we speak, and it's the first technology in production to incorporate strained silicon, which we think is a major advantage in terms of transistor performance for us over time.

To give you some idea of the dimensions we're dealing with, though, I put up a picture here of an influenza virus. The 90 nanometer silicon process we're ramping now has dimensions smaller than the influenza virus.

But we're not stopping there. The next technology after 90 nanometers is 65 nanometers. We're making very good progress on that. We're on track for introducing our next 65 nanometer products in 2005, and I want to show you a wafer off of our first 65 nanometer line. This happens to be an SRAM we're using for debug, and I think we're very well along the path of moving towards production and we're very excited about this.

If any of you happen to be thinking about joining us in the semiconductor industry, the picture on the upper right is a caveat emptor kind of slide.

That's a picture of the Ronler Acres campus, where we built that 65 nanometer line. The cumulative investment for that particular campus, one of about five or six we have of that scale, is about US$10 billion. So playing this game is not for the faint of heart.

We aren't stopping with 65. At 45 nanometers, we have actually produced transistor structures for that process. We're experimenting with a number of different structures, one of which we are very excited about is the tri-gate transistor. This essentially allows us to use the three dimensional structure of the transistor to get more surface area by using the sides and top for current, to be able to scale the process to even tighter dimensions and give us higher performance.

We're very, very excited about this opportunity as well.

But as they say on Home Shopping Network, but wait, that's not all. We have done silicon prototyping in the two generations beyond 45 nanometers, the 32 and 22 nanometer technologies, which will deploy in 2009 and 2010, and these are actual pictures of actual transistors we're working with in these dimensions today. And just like I correlated the 90 nanometer process to the influenza virus, the 22 nanometer process will have structures on it which are smaller than DNA molecules are wide, to give you some idea of the scale on which we're operating today.

So Gordon Moore gave a talk earlier this year about his law, and he said an exponential can't go on forever, but you can delay it. What I'm showing you this morning is that we're well on our way to delaying the inevitability of that exponential curve well into the next decade.

So, we talked about the mainstreaming of convergence, we've talked about new technologies and driving gigahertz and more over time. These are all essential to bringing convergence to the market segment.

In the last section I want to talk about what are the market segment opportunities around this. What kind of businesses can we build?

As we look at the market segments for our products, we tend to look in four major areas: in the enterprise, and all of the computing around the enterprise; in mobility, mobile Internet clients of all kinds, handsets, PDAs, and notebooks; and in the digital home.

The fourth area that I'm not talking about today is network infrastructure, and you'll hear that from Eric Mentzer later this week. But I'll cover these three.

Let me start first with the enterprise, beginning with a postulate. I think that just as the Internet drove demand for servers at the edge, we're going to see a similar explosion in servers associated with the deployment of Wi-Fi.

Databases and mobile applications will, as Wi-Fi gets deployed, be driven up in numbers just to handle the capacity of the new application types.

A couple of examples. DoCoMo. You're all familiar with DoCoMo and their SMS-like system in Japan. DoCoMo today has 75,000 data transactions per second. They have 800 transmissions per day. They have over 400 servers installed in their iMode data center in Tokyo supporting 400 terabytes of data and 450 million users. All of that is an infrastructure supporting those messages and pictures being sent on the handsets.

The Intel CIO recently told me that as we are moving to deploy Wi-Fi throughout all of our campuses at Intel, the cost associated with the wireless access points is small. But the real cost is buying 500 new VPN servers to handle the security and capacity around them. We're still going to do it, but there is an association between Wi-Fi and the server deployment that's required for the infrastructure.

Wireless is just one example of where I think server growth will continue. Another one, obviously, is in the market segment for high performance clustered computers.

In the server space, there's been a lot of news since last IDF. The best way to show this is what's happened on the TPC-C Web site for the top 10 nonclustered machines. What I'm plotting here are the number-one machines in the RISC column, the green bar, over the last three years. You can see from '01 to '03, there was only one machine on top, a Fujitsu SPARC-based machine, and its performance didn't increase for three years.

Then very rapidly there were a number of green dots that appeared on the curve this year, principally driven by IBM.

What is even more interesting is the blue curve, the Intel-based machines; these happen to be Itanium-based systems. They come from roughly half the performance of the world's leader in TPC nonclustered machines in 2001, to about parity in '02, and now you can see, in very rapid fashion over the summer, starting first with NEC and then Hewlett-Packard leapfrogging back and forth with IBM, that Itanium-based systems are still the fastest nonclustered machines out there.

This is competition at its best. As we enter markets, and as our customers take our technologies and deploy them, we change paradigms. The leadership in high-end computing has changed five times in six months already.

And it's not just one customer that's doing this. This is the top 10 list from the TPC Web site. We had no one on that list in '00. Today, half the machines in the top 10 list are Intel-based, from a variety of hardware manufacturers.

Which leads me to the idea that seeing is believing when it comes to application deployment. Let me bring out Mike Graf, who is the Itanium technology product line manager for Intel. Hi, Mike.

[Demo begins and ends.]

So what's next in the enterprise for us? Let me start with our 32-bit family on the bottom of the chart. We began our server-based products with the Pentium Pro some six or seven years ago. And today, the top-of-the-line 32-bit product we ship is the Intel Xeon processor with Hyper-Threading Technology, which, as I showed you, is now available essentially across all of our Intel Xeon processor product lines.

Today we're announcing that there are two products beyond the existing Intel Xeon processors. The product that we have talked about before is called Potomac. The one after that is Tulsa. Tulsa is the next-generation 32-bit server product. It is our first 32-bit product to be dual-core, two cores on one die. It will incorporate Hyper-Threading technology, so, essentially, software sees four processors. And it's the follow-on, as I said, to Potomac.
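The "four processors for software" arithmetic can be sketched in a few lines. This is only a toy model of the enumeration he describes, assuming the figures from the talk (two cores per die, Hyper-Threading exposing two hardware threads per core); it is not Intel code.

```python
# Toy model of how a dual-core, Hyper-Threaded part presents itself to
# software: each physical core exposes two hardware threads, so the
# operating system enumerates four logical processors.

def logical_processors(cores_per_die: int, threads_per_core: int) -> int:
    """Logical CPUs the operating system will enumerate for one die."""
    return cores_per_die * threads_per_core

# Two cores on one die, Hyper-Threading gives two threads per core.
print(logical_processors(cores_per_die=2, threads_per_core=2))  # 4
```

The same model explains why well-threaded software scales onto these parts with no changes: it simply sees more schedulable processors.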

On the Itanium processor line, we are shipping the Itanium 2 processor today, which is based on Madison. That's the half-a-billion-transistor chip I talked about. Following that chip is a product we have talked about a little bit before, called Montecito. This is our first dual-core Itanium processor. And it's also our first one-billion-transistor microprocessor.

But today, for the first time, at this conference, we'll be talking about a product code-named Tanglewood. Tanglewood is our first multi-core design; that is, we'll have a number of Itanium cores on that die. It follows on from Montecito.

It is the first microprocessor at Intel architected and designed with the Alpha microprocessor design team that we acquired from Compaq some three or four years ago. And our design objective with this product is at least seven times the performance of the Madison chips that are inside that SGI system today. So we are stretching performance in 32 bits, and we are stretching performance in the Itanium processor family going forward.

I'm not going to give you a product update on mobile today because Ron Smith and Anand Chandrasekher are going to talk about that in lots of detail tomorrow. But I do want to talk about a key element of mobile technology and our mobile go-to-market plans that I first talked about a year ago at IDF.

That was the whole notion of cross-architecture software tools and tool sets. The vision, as you may remember if you were here a year ago, was write once: have your applications written once to Intel architecture on a PC, and be able to port them seamlessly and quickly up to servers and down to a multitude of handsets, all using Intel architectures.

In the last year, we've delivered quite a bit on that promise. We are now delivering compilers and tuning tools, notably the VTune tool, that allow you to develop, recompile, and move seamlessly between Intel XScale technology, IA-32, and the Itanium architecture. And the tool set will only get more robust over time.

We are delivering the MSI guide. MSI stands for "Mobile Software Initiative." This is the guide for how companies can move their applications quickly and seamlessly to take advantage of mobility and of different kinds of client types in a mobile environment.

Companies that have already committed to moving in the MSI direction are Siebel, Microsoft Office, Ariba, Sybase, Oracle, Macromedia, Adobe, SAP, and the IBM WebSphere environment.

Beyond that, we're moving it into the real world. We have 75 pilots under way with major companies to actually port this software and deliver it to end users, taking advantage of the kind of application-development savings associated with cross-architecture software development.

And at this IDF, we have nine MSI-specific classes and a number of labs for you to dig into.

Last section, digital home. Many have been talking about the digital home for quite some time. There's a quote here from Business Week talking about it coming into being and out of the era of the unicorn. We've made a great deal of progress on the digital home since last IDF.

I think that one of the most significant things was the formation and announcement of something we affectionately call DHWG, or D-H-W-G, which stands for Digital Home Working Group. This was announced, I believe, in January or February, and encompasses 17 companies, the who's who of leaders in computing, in content, in software, and in consumer electronics.

The aim of this group is to have all of those companies agree on interoperability standards across a variety of devices, in a very open fashion, so that the digital home vision can happen and we, as consumers, don't have to worry about whether this person's DVD player can talk to that person's computer. We intend to make this stuff seamless over time.

The 1.0 guidelines will be out in the first half of next year. I expect products compliant with these guidelines to come to market in the second half of next year.

Now, the digital home needs more than interconnection and interoperability. At the end of the day, if we're going to make this vision happen, we need premium content to move into the home.

There is a body called 5C, a collection of companies -- Intel, Hitachi, Toshiba, Sony, and Matsushita -- that has been working with the content industry to develop a new standard with the catchy name of DTCP-IP, which stands for Digital Transmission Content Protection over IP. It's self-explanatory. What we're trying to do is come up with a very seamless way for high-value content owners to move that content into the home and around the home, and still have the protection they demand in order to release their content in the first place.

To do this requires some software and hardware changes. Today, I'd like to show you the first public demonstration of DTCP over IP.

And to do this, what I have here is not your mother's or grandmother's notebook. This is a Samsung notebook running on the 90-nanometer version of Intel Centrino technology, the product we code-named Dothan. And you can see we have Movielink up, with a number of movie options here to choose from.

But rather than just watch this once on my computer, I might like to watch it in the living room, as I did before with "The Simpsons." So today we have a demonstration of a new technology that allows that: the DTCP-IP protocol embedded in a Linksys media adapter, able to run the same kind of thing again, end-to-end protected.

I have the same kind of capability to go through the videos here. I'm going to run a clip from The Animatrix.

(Video begins and ends.)

PAUL OTELLINI: If you want to see more, you have to pay for it.

(LAUGHTER.)

PAUL OTELLINI: We have done this in collaboration with Warner Bros., who are also very excited about this technology. And to give you some of their perspective, I'd like to roll a video clip from Warner's chairman and CEO, Barry Meyer.

(Video concludes.)

PAUL OTELLINI: Exciting. So, that's the Warner view of this.

(Phone ringing. Demo begins and ends.)

In summary, what I hope I showed you this morning was that convergence has gone mainstream, from smart phones to, unfortunately, flash mobs, which may or may not be good for society. But the technology is bringing us all kinds of change.

For convergence, performance matters, and it matters more than ever, but it's not just about driving ever upward in gigahertz. We have to deliver the new capabilities that people want in their devices. The collection of technologies I described today, and a bunch that we haven't yet described to you (so come back next year), are the technologies that will enable much of this convergence.

From my perspective, opportunities abound. There's a whole variety of product types that have not yet been invented.

Now, what do I mean by that? This is converged thinking: music plus TV combined became MTV. Bookstores and the Internet combined to become Amazon. Cars and satellite infrastructure combined to invent OnStar.

And the VCR and the hard drive came together to give us TiVo and a variety of personal video recorders. These are all examples of converged thinking that creates new opportunities. And this is where I think we really want you to share our energy and our enthusiasm as you go through this IDF, for the kinds of converged thinking we can make happen in terms of new devices.

Let me take you back to the IDF two years ago that I referenced earlier.

As I concluded that IDF, I put up two slides, and they're on either side of me here. On the left was a quote from Kettering, who ran Delco and much of the design at General Motors for 30 years. During the depths of the Depression in the United States, he talked about believing that customers would come back when his industry built new products that they wanted to buy.

I talked about what we collectively had to do as an industry to create excitement in our product lines again and get people buying. We've survived the depths of the recession in our industry. There are signs of life showing up around us, in all of our collective numbers.

I think it's very likely this year will be the first year in the last few that we'll see double-digit growth in PCs.

Things are starting to move again, and I believe it's because of the strategy of all of us investing through the recession to bring out new products.

Specifically, what we asked you to do at that IDF was to think about taking advantage of the MIPS we were throwing at computing over the last few years; to increasingly focus on utilizing parallelism to bring more performance and richer feature sets; to design for low power, which was Banias and what became Intel Centrino technology; and to drive the Itanium architecture.

And I think we showed today that we're collectively doing a pretty good job of delivering on all three. So where are we asking you to place your bets with us this year?

If you aggregate what we're asking at this IDF, we're really asking you to do four things. First, design for the T's. Make threading pervasive, and embrace multi-core, because it is the way we are going and it will be pervasive in every computer that's built. Embrace hardware-based security, and take advantage of it in terms of new applications and new opportunities. Embrace virtualization, and embrace where we're collectively driving media and graphics over time.
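"Make threading pervasive" is the kind of change that shows up directly in application code. As a minimal, hypothetical sketch (the workload and names here are illustrative, not from the talk), this is what fanning work out across the logical processors exposed by Hyper-Threading and multi-core can look like:

```python
# Minimal sketch: split a workload across threads so the operating
# system can schedule it onto multiple logical processors.
from concurrent.futures import ThreadPoolExecutor

def checksum(chunk):
    # Stand-in for real per-chunk work (encoding, compression, rendering).
    return sum(chunk) % 257

def parallel_checksums(chunks, workers=4):
    # Four workers to match the four-way logical-processor parts discussed.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(checksum, chunks))

chunks = [list(range(i, i + 100)) for i in range(0, 400, 100)]
print(parallel_checksums(chunks))
```

The point of the sketch is that the same code runs unchanged whether the machine exposes one logical processor or four; the threading just gives the scheduler something to spread out.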

Secondly, deliver a collection of hardware and software products to the enterprise, at the solution level, that increasingly recognize the fact that our workforces are more and more mobile every day.

Thirdly, there are a number of new standards, from DHWG and DTCP-IP to others still coming, that are making the digital home a reality. We ask you to embrace those along with us, help drive that vision to fruition, and drive yet another market segment.

And lastly, I think we all need to think about how we will design products and market them to an increasingly global customer base that is not quite the same as the customer base we marketed to for the last 30 years.

So with that, thank you very much. I appreciate your coming this morning, and have a great IDF.

(Applause)

About Intel
Intel (NASDAQ: INTC) is a world leader in computing innovation. The company designs and builds the essential technologies that serve as the foundation for the world’s computing devices. Additional information about Intel is available at www.intel.com/pressroom and blogs.intel.com.

Intel, Pentium, Itanium and Centrino are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

* Other names and brands may be claimed as the property of others.
