SAP with Microsoft SQL Server 2005: Best Practices for High Availability, Maximum Performance, and Scalability

Parallelism for initial creation of aggregates

Aggregates in SAP BW are essential for good query performance. However, creating aggregates is typically a time- and resource-consuming process, especially when a very large fact table must be scanned and joined. A large aggregate might be built from a fact table with millions of rows. Conversely, an aggregate might contain fewer than 100 rows even though SQL Server had to scan and join 500 million rows to produce it. Note that creating an aggregate always involves a GROUP BY at the Transact-SQL statement level.
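Conceptually, the aggregate build is a join of the fact table to its dimension tables followed by a GROUP BY, which collapses many fact rows into a few aggregate rows. A minimal sketch in Python, using invented table data and column names (not actual SAP BW schema names):

```python
from collections import defaultdict

# Hypothetical miniature fact and dimension tables; the keys, values,
# and the "region" attribute are illustrative only.
fact_rows = [
    # (dim_key, amount)
    (1, 10.0), (2, 5.0), (1, 7.5), (3, 2.0), (2, 1.0),
]
dim_rows = {1: "North", 2: "South", 3: "North"}  # dim_key -> region

# Equivalent in spirit to:
#   SELECT region, SUM(amount) FROM fact JOIN dim ... GROUP BY region
aggregate = defaultdict(float)
for dim_key, amount in fact_rows:
    aggregate[dim_rows[dim_key]] += amount

print(dict(aggregate))  # five fact rows collapse into two aggregate rows
```

The same pattern scales to the case described above: the result set can be tiny even when the scan-and-join input is hundreds of millions of rows.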

Parallelism is particularly useful when creating an aggregate requires a large query that scans the fact table of the cube and joins it with a number of (usually small) dimension tables. When parallelism is enabled, SQL Server processes this query with multiple threads across the available CPUs, up to the configured limit. Using parallelism can achieve better performance than using only one CPU. However, as discussed earlier, the max degree of parallelism option is used only for creating aggregates and indexes; it is not used for online reporting.
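The parallel plan can be pictured as splitting the fact-table scan across workers, each producing a partial aggregate that is merged at the end. The following is a sketch only, with invented data and a thread pool standing in for SQL Server's parallel scan workers (it does not model the actual execution engine):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Invented fact data: (group_key, value) pairs.
fact_rows = [("A", 1), ("B", 2), ("A", 3), ("C", 4)] * 1000

def partial_aggregate(chunk):
    """Aggregate one portion of the fact table (one parallel worker)."""
    partial = Counter()
    for key, value in chunk:
        partial[key] += value
    return partial

# Analogous to the server-wide "max degree of parallelism" setting.
max_degree_of_parallelism = 4
chunk_size = len(fact_rows) // max_degree_of_parallelism
chunks = [fact_rows[i:i + chunk_size]
          for i in range(0, len(fact_rows), chunk_size)]

with ThreadPoolExecutor(max_workers=max_degree_of_parallelism) as pool:
    total = Counter()
    for partial in pool.map(partial_aggregate, chunks):
        total.update(partial)  # merge step: combine partial aggregates
```

Each worker touches only its own chunk, so the final merge is cheap; this is why the scan-heavy aggregate build benefits from extra CPUs while small reporting queries would not.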

Block size in SAP BW

Because creating an aggregate can be very resource intensive, SAP BW provides the option to split the process into <n> steps. For example, suppose the SAP BW F-Fact table has 10 million rows.

When the block size is set to a value larger than 10 million, SAP BW uses one step to create the aggregate.

When the block size is set to one million, SAP BW uses 10 steps to create the aggregate. In this case, SAP BW adds an additional "artificial" filter to the WHERE clause of the query so that each step scans only one million rows.

The challenge is to define the artificial filter. SAP BW calculates the filter ranges automatically; depending on the data distribution, it might not be possible to specify some ranges manually.
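The step count and step boundaries follow directly from the row count and block size. A simplified model in Python, assuming an evenly distributed surrogate key (real SAP BW derives the artificial filter ranges from the actual data distribution):

```python
import math

def aggregate_steps(total_rows, block_size):
    """Number of build steps for a given block size (simplified model)."""
    return math.ceil(total_rows / block_size)

def filter_ranges(total_rows, block_size):
    """Illustrative 'artificial filter' boundaries per step, assuming an
    evenly distributed key from 1..total_rows."""
    return [(start + 1, min(start + block_size, total_rows))
            for start in range(0, total_rows, block_size)]

# The 10-million-row example from the text:
rows = 10_000_000
print(aggregate_steps(rows, 100_000_000))  # block size > row count: 1 step
print(aggregate_steps(rows, 1_000_000))    # one-million block size: 10 steps
```

With skewed data, equal key ranges would no longer scan equal numbers of rows, which is why SAP BW computes the ranges itself rather than relying on manual input.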

Use the SAP BW SPRO transaction to set the block size for an aggregate. To run SPRO, select SAP Reference IMG, then Business Information Warehouse, then General SAP BW Settings, and then Parameters for Aggregates. In SAP BW 3.5, the default block size for the fact table is 100 million rows.

Benefits of using a smaller block size. With a smaller block size, the steps run serially and each step uses far fewer resources than creating the aggregate in one large block. The drawback is that the large fact table must be scanned multiple times. With suitable table partitioning, SQL Server can eliminate partitions from each scan, which reduces this cost.

Setting the block size. There is no single optimal setting for the block size. It depends on the type of aggregate, the size of the fact table, and the available physical resources, such as the number of CPUs and the amount of physical memory. One approach is to create the aggregate using one large block size and check for performance problems or increased tempdb usage. If these occur, reduce the block size and create the aggregate in multiple steps.

Microsoft Corporation © 2005
