Part 1: Evaluating Accreditation’s Performance
The Four Eras of Accreditation
While we will not provide a comprehensive history, the structure of accreditation “is more historical than logical,”1 resulting in a series of “accidental transformation[s]”2 as more and more responsibilities have been placed upon it. As Robert C. Dickeson notes, accreditation today is like an overloaded “pack animal” that “has been burdened with expectations and duties far beyond either its design or its capabilities.”3 A brief recap of the highlights of accreditation’s history is therefore in order.
We’ve identified four main eras of accreditation: pre-1936, 1936 to 1952, 1952 to 1985, and post-1985. Although there is some ambiguity concerning the exact dates of the four eras, the dates generally correspond with the time in which accreditation took on a major new role.
Pre-1936: A Voluntary System to Inform the Public. Accreditation developed from a need in the late 19th century to define what a college-level education was and to distinguish institutions that possessed adequate capabilities for undertaking such studies. Prior to its development, there were no generally accepted criteria for what should be considered a college. Furthermore, there was widespread unfamiliarity with educational institutions beyond one’s own small geographic area. This lack of information made it difficult both for the better institutions to distinguish themselves and for students to decide which institution to attend.4
The better colleges thus formed regional, voluntary membership associations and established common definitions and admissions processes.5 In the early 20th century, these regional associations began to establish institutional standards, such as faculty size, length of educational programs, library size, and size of endowments, which aspiring colleges were required to meet in order to gain accreditation.6 Accreditation decisions were based on information provided by the institutions themselves, a process that for the most part continues today.7 Accreditation soon became a marketable asset as a means of distinguishing colleges from the competition and provided a signal to the public that an institution was of high quality. This provided an incentive for colleges to voluntarily seek accreditation and for accreditors to maintain high standards.8
This system would remain intact for the post-secondary education market throughout much of the 1930s as accreditation expanded its reach while the government continued to remain largely uninvolved in the sector.9 However, criticism of the accreditation process would begin to surface around this time as college officials began to complain that the quantitative, uniform standards used in accreditation were too rigid and superficial. Some critics said that the data gathered for accreditation decisions, while measurable, did not account for the diversity of institutions and their missions, placing too much emphasis on resource inputs and not enough on outputs. Many institutions believed that despite providing a high-quality education to their students they were denied voluntary accreditation because, on paper, they did not measure up to these quantitative standards.10