based on a realistic assessment of how much functionality can be built within a given time frame and resource profile.
It’s all very well to say that project managers should measure size, but unless they have a method that is simple to use, repeatable, and above all practical, size estimation is unlikely to gain widespread acceptance within an organization. With the text-based programming languages of the past, measuring system size was a fairly straightforward process: Source Lines of Code (SLOC) were easily measured at the end of a project via text export and automated code counters. The downside of using SLOC as an estimation measure is that code counts have little meaning to nontechnical personnel and customers. Without an empirical baseline, it can be difficult to draw connections between business requirements (often the only convenient size measure at the time of estimation) and final code counts.
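To make the idea of an automated code counter concrete, here is a minimal sketch of one. It assumes a very simple convention (a line counts toward SLOC if it is non-blank and does not begin with a single-line comment marker); real counters handle block comments and language-specific rules, and the sample source and comment prefix are purely illustrative:

```python
def count_sloc(source_text, comment_prefix="#"):
    """Count non-blank, non-comment lines in a body of source text.

    This is a deliberately simplified rule set: blank lines and lines
    whose first non-whitespace characters are the comment prefix are
    skipped; everything else counts as one source line of code.
    """
    sloc = 0
    for line in source_text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            sloc += 1
    return sloc


# Illustrative sample file: 7 physical lines, of which 2 are
# comments and 1 is blank, leaving 4 source lines of code.
sample = """# configuration loader
import json

def load(path):
    # read and parse
    with open(path) as f:
        return json.load(f)
"""

print(count_sloc(sample))  # prints 4
```

Because the counting rule is mechanical, the same count can be reproduced at any point in the project, which is what makes SLOC attractive as an after-the-fact measure even though it is hard to predict up front.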
This translation problem has only been exacerbated by the move from text- and procedure-based programming languages to today’s object-oriented and GUI design environments. Nth-generation development tools don’t always lend themselves readily to SLOC-based sizing methods. These days, developers may never write a single line of code; they create software by configuring objects and fields or by diagramming relationships with sophisticated graphical tools. Bridging the gap from these more abstract software components to finished application size is best accomplished by breaking the work into a series of steps and then relating each set of steps back to a known quantity.
Depending on the technology chosen (or on how the project team solves the technical issues associated with the project), the “steps” needed to implement a given set of business or technical requirements can be represented by a variety of size measures: objects, function points, web pages, dialogs, reports, configurable database fields, scripts, diagrams, or SQL queries. Some steps involve writing actual code, while others require development staff to drag and drop elements or set properties via a graphical interface.
A single project might be sized by decomposing it into a set of scripts that migrate data from an existing application to the new platform and perform needed transformations; a GUI front end designed by dragging, dropping, and configuring screen elements (screens); and a set of business rules, reports, and queries. Another estimator might size the same system with a single abstract size measure that maps to the entire system; function points or objects are often used for this purpose. The estimator is free to choose the method that best suits the information he or she has on hand at the time the estimate is compiled.
Regardless of the method chosen, comparing or combining different sizing units would be meaningless without first identifying some sort of common denominator (or gearing factor) that tells the estimator how big each unit is relative to the others. Decomposing system size into smaller, abstract size chunks and using a single conversion unit to “gear” these differing size units to a common point of reference gives estimators and project teams the flexibility to describe the project in terms of the work they will perform rather than dictating a rigid, one-size-fits-all approach. Once the project is completed, the conversion (or “gearing”) factor facilitates meaningful comparisons between projects measured in different functional size units.
© Copyright 2011 Quantitative Software Management, Inc. All Rights Reserved.