
Microsoft® Windows Server™ 2003 White Paper

These are not vague theoretical numbers; they are reliable benchmarks based on stringent testing. For example, the Sales and Distribution (SD) benchmark standard defines a suite of sales and distribution transactions that correlates closely with the number of simultaneous users a configuration can support. Microsoft’s SAP R/3 implementation supports 25,000 simultaneous users, one of the highest throughputs among all solutions tested. IT professionals can confidently architect their own configurations based on these benchmarks. Because a benchmark represents a minimum assured transaction throughput rate, if the benchmark numbers tell you that a configuration addresses the problem, then real-world enterprise scalability can be achieved with the benchmark hardware and software, or possibly even with less.

For more information, see the following topics:

Ideas International

The Top Ten TPC-C by Performance Version 5 Results

Choosing Scalable Hardware

You must choose hardware that matches your scalability requirements, although hardware has become largely a commodity for all but the most demanding situations. There are limits, however, to hardware’s ability to scale an application.

Although you can add more memory and a faster CPU up to the limits offered by hardware manufacturers, many applications benefit, up to a point, from a multiprocessor architecture. Even multiple processors in the same box can experience bottlenecks, however. Because the CPUs share the same pool of RAM, and both the CPUs and the memory are connected via the same bus, the bus eventually becomes the bottleneck. Empirical testing revealed that four CPUs was the optimum; beyond that, adding CPUs offered noticeably diminishing returns.
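The diminishing returns described above can be illustrated with Amdahl’s law, which bounds the speedup when part of a workload must run serially. The four-CPU optimum is the paper’s empirical finding; the 90% parallel fraction below is purely an illustrative assumption, and real shared-bus machines fare worse than this idealized model:

```python
def amdahl_speedup(n_cpus, parallel_fraction):
    """Upper bound on speedup with n_cpus processors when only
    parallel_fraction of the work can run concurrently (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cpus)

# Illustrative assumption: 90% of the workload parallelizes. Bus
# contention on a shared-memory machine makes the real curve worse.
for n in (1, 2, 4, 8, 16):
    s = amdahl_speedup(n, 0.90)
    print(f"{n:2d} CPUs: speedup {s:5.2f}, per-CPU efficiency {s / n:4.2f}")
```

Per-CPU efficiency falls steadily as processors are added (from 1.00 at one CPU to 0.40 at sixteen under this assumption), which is the diminishing-returns effect the empirical testing observed.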

Memory performance increases have come mostly from a wider bus, not from improvements in memory latency. Memory response time (the interval from a RAM request until the data is available) remains in the 70 ns to 150 ns range, whereas a CPU clock cycle is only 1 ns for a 1 GHz processor.
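The cost of that gap is easy to quantify: at 1 GHz each clock cycle takes 1 ns, so a memory access costs latency divided by cycle time in stalled cycles. A quick sanity check using the figures from the text:

```python
def stall_cycles(latency_ns, clock_ghz):
    """CPU cycles spent waiting on one memory access: the cycle time
    at clock_ghz GHz is 1/clock_ghz nanoseconds."""
    return latency_ns * clock_ghz

print(stall_cycles(70, 1.0))   # fast end of the 70-150 ns range
print(stall_cycles(150, 1.0))  # slow end of the range
```

So a single uncached memory access can stall a 1 GHz processor for 70 to 150 cycles, which is why bus and memory characteristics, not raw CPU speed, often bound scalability.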

Windows 2000 offers support for processor affinity, which lets you limit the hardware resources given to groups of processes. You can group processes into job objects, and then limit the resources assigned to those job objects. In Windows Server 2003, the feature is enhanced to handle additional processors, and job objects are known as application pools. You can limit a grouped process’s access to CPU and memory. This guarantees hardware availability for mission-critical, high-priority applications in two ways. First, you can specify processor affinity for the high-priority applications directly. Alternatively, by tying less reliable or less critical processes to a subset of the processors, you leave the remaining CPUs available for high-priority applications. Either way, mission-critical applications won’t be starved of resources by badly behaved low-priority applications.
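On Windows, this kind of restriction is configured through job objects via the Win32 API. As an illustrative sketch of the underlying idea only, the Linux analogue of pinning a process to a CPU subset can be shown in a few lines (os.sched_setaffinity is Linux-specific; it is not the Windows mechanism the text describes):

```python
import os

# Pin the current process (pid 0 = self) to CPU 0, leaving any other
# CPUs free for higher-priority work -- the same idea as restricting
# a low-priority Windows job object to a processor subset.
os.sched_setaffinity(0, {0})

# Verify: the scheduler may now place this process only on CPU 0.
print(os.sched_getaffinity(0))  # {0}
```

The second strategy from the text maps directly onto this: pin the untrusted or low-priority work to a small CPU set, and the scheduler keeps the remaining processors free for the applications that matter.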

Windows Server 2003 also offers greater support for Storage Area Networks (SANs), which significantly increase the amount of storage available. Virtual Disk Service (VDS) lets you manage large amounts of storage the same way you manage a local disk. This affects scalability because it lets you manage large SANs connected to a server via Fibre Channel. Furthermore, multiple servers can use the same SAN at the same time. Multipath I/O lets you reach the same disk through more than one physical path, so storage remains available if a path fails.

Implementing a Scalable Architecture
