Charles King of Pund-IT writes about how converged data center infrastructure—where all assets are based on Intel’s industry-standard x86 architecture—simplifies administration, improves data center efficiency, and better supports established as well as emerging systems such as cloud computing. The Intel x86 architecture already powers the vast majority of servers today, and its adoption is growing rapidly in storage arrays and networking fabrics.
In the early 1990s, x86-based systems became firmly established as cost-effective options for edge-of-network web servers, file/print, and similar applications once dominated by proprietary reduced instruction set computer (RISC) servers. Meanwhile, developments in clustering, grid computing, and other areas led to x86 being leveraged in high-performance computing and supercomputing scenarios. Then in 1999, VMware introduced virtualization for x86, dramatically improving system utilization and giving organizations the means to consolidate multiple workloads and applications and reduce the number of servers they managed. Intel’s increasing focus on developing innovative networking solutions, including its July 2011 acquisition of Fulcrum Microsystems, a leading player in semiconductors for high-performance, low-latency 10 GbE and 40 GbE fabrics, is likely to hasten the transition to fully converged data centers.