Modeling Intrusion Detection Systems Using Linear Genetic Programming Approach

experiments. Tables 1, 2 and 3 summarize the overall classification accuracy of LGPs, RBP and SVMs.

4.1 Linear Genetic Programming Training

LGP manipulates and evolves programs at the machine code level [10]. The settings of the various LGP parameters are of utmost importance for successful performance of the system. This section discusses the parameter settings used for the experiments, the justification for these choices, and the significance of each parameter. The population space has been subdivided into multiple subpopulations, or demes. Migration of individuals among the subpopulations drives the evolution of the entire population. Because migration between demes is restricted, this structure helps maintain diversity in the population. Moreover, the tendency towards a bad local minimum in one deme can be countered by other demes with better search directions. The main LGP search parameters are the mutation frequency, the crossover frequency and the reproduction frequency. The crossover operator acts by exchanging sequences of instructions between two tournament winners; a constant crossover rate of 90% has been used for all the simulations.

After a trial-and-error approach, the parameter settings in Table 1 were used to develop the IDS. Five LGP models are employed, one for each of the five classes (normal, probe, denial of service, user to root and remote to local). We partition the data into the two classes of "Normal" and "Rest" (Probe, DoS, U2Su, R2L) patterns, where "Rest" is the collection of the four classes of attack instances in the data set. The objective is to separate normal and attack patterns. We repeat this process for each class. Table 1 summarizes the results of the experiments using LGPs.
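The deme-based evolution described above can be sketched as a small loop: tournament selection inside each deme, crossover that exchanges instruction sequences between the two winners, mutation, and occasional restricted migration between demes. The sketch below is illustrative only, under assumed settings; the toy integer "programs" and the matching-based fitness function are placeholders, not the paper's machine-code representation or its detection-accuracy fitness.

```python
import random

# Toy "program": a list of integer opcodes. The fitness below is a placeholder
# (the paper evolves machine-code programs scored on classification accuracy).
PROGRAM_LEN = 8
TARGET = [3, 1, 4, 1, 5, 9, 2, 6]  # hypothetical target for the toy fitness

def fitness(prog):
    # Higher is better: number of opcodes matching the toy target.
    return sum(1 for a, b in zip(prog, TARGET) if a == b)

def tournament(deme, k=4):
    # Return the best of k randomly chosen individuals.
    return max(random.sample(deme, k), key=fitness)

def crossover(p1, p2):
    # Exchange a contiguous instruction sequence between two winners.
    i, j = sorted(random.sample(range(PROGRAM_LEN + 1), 2))
    return p1[:i] + p2[i:j] + p1[j:], p2[:i] + p1[i:j] + p2[j:]

def mutate(prog, rate=0.1):
    return [random.randrange(10) if random.random() < rate else op for op in prog]

def evolve(n_demes=4, deme_size=64, generations=100, migration_interval=10,
           crossover_rate=0.9):
    demes = [[[random.randrange(10) for _ in range(PROGRAM_LEN)]
              for _ in range(deme_size)] for _ in range(n_demes)]
    for gen in range(generations):
        for deme in demes:
            parents = tournament(deme), tournament(deme)
            if random.random() < crossover_rate:
                children = crossover(*parents)
            else:
                children = parents  # reproduction without crossover
            for child in children:
                child = mutate(child)
                # Offspring replaces the loser of a small tournament.
                loser = min(random.sample(range(deme_size), 4),
                            key=lambda i: fitness(deme[i]))
                deme[loser] = child
        # Restricted migration: occasionally copy one good individual into
        # the next deme, preserving diversity between subpopulations.
        if gen % migration_interval == 0:
            for d in range(n_demes):
                demes[(d + 1) % n_demes].append(tournament(demes[d]))
                demes[(d + 1) % n_demes].pop(random.randrange(deme_size))
    return max((ind for deme in demes for ind in deme), key=fitness)

best = evolve()
```

The run is stochastic, but restricting migration to the interval boundary is what lets each deme explore its own region of the search space, matching the diversity argument above.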

Table 1. Performance of LGPs

Class    Population Size    Program Size    Crossover Rate    Mutation Rate    Testing Accuracy (%)
Normal   1024               256             72.24             90.51            99.89
Probe    2048               512             51.96             83.30            99.85
DoS      2048               512             71.15             78.14            99.91
U2Su     2048               256             72.24             90.51            99.80
R2L      2048               512             71.15             78.14            99.84

4.2 RBP Experiments

In our study we used two hidden layers with 20 and 30 neurons respectively, and the network was trained using resilient backpropagation. As multi-layer feed-forward networks are capable of multi-class classification, we partition the data into 5 classes (Normal, Probe, Denial of Service, User to Root and Remote to Local). We used the testing data set (6890 records) and the network architecture [41 20 30 1] to measure the performance of the RBP. The top-left entry of Table 2 shows that 1394 of the actual "normal" test set were detected to be normal; the last column indicates that 99.6% of the actual "normal" data points were detected correctly. In the same way, for "Probe" 649 of the actual "attack" test set were correctly detected; the last column indicates that 92.7% of the actual "Probe" data points were detected correctly. The bottom row shows that 96.4% of the test set said to be "normal" indeed were "normal" and 85.7% of the test
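The [41 20 30 1] architecture (41 input features, hidden layers of 20 and 30 neurons, one output scored against a threshold for the per-class decision) can be sketched as a plain forward pass. This is a minimal illustration only: the weights are random placeholders rather than Rprop-trained values, and the tanh/sigmoid activations are assumptions, since the paper does not list its activation functions.

```python
import math
import random

random.seed(0)

def layer(n_in, n_out):
    # Random placeholder weights (one bias per neuron). In the paper these
    # would be fitted by resilient backpropagation (Rprop), which adapts a
    # per-weight step size from the sign of the gradient, not its magnitude.
    return [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
            for _ in range(n_out)]

def forward(layers, x):
    for i, w in enumerate(layers):
        x = [sum(wj * xj for wj, xj in zip(row[:-1], x)) + row[-1]
             for row in w]
        if i < len(layers) - 1:
            x = [math.tanh(v) for v in x]          # assumed hidden activation
        else:
            x = [1.0 / (1.0 + math.exp(-v)) for v in x]  # assumed output activation
    return x

# The [41 20 30 1] network: 41 inputs, hidden layers of 20 and 30 neurons,
# one output thresholded at 0.5 for the binary class-vs-rest decision.
net = [layer(41, 20), layer(20, 30), layer(30, 1)]
features = [random.random() for _ in range(41)]
score = forward(net, features)[0]
label = "attack" if score > 0.5 else "normal"
```

With random weights the label is of course meaningless; the sketch only makes the layer sizes and the single-output decision rule concrete.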
