IP Covert Timing Channels: Design and Detection - page 8 / 10

ε-Similarity Score

Method       t    ε=0.005  ε=0.008  ε=0.01  ε=0.02  ε=0.03  ε=0.1  ε>0.1
Sequential   250    34.17    45.17   51.23   67.38   75.29   90.75   9.25
             100    34.12    45.77   52.78   67.53   75.54   90.50   9.50
              50    34.22    46.87   53.68   67.68   75.09   89.89  10.11
              10    34.87    46.37   51.83   67.58   76.19   90.65   9.35
Random       250    36.51    48.02   53.47   68.30   76.20   90.49   9.51
              10    35.21    46.88   52.55   68.29   75.67   90.28   9.72
Original       -    39.92    52.83   58.58   72.79   79.74   91.85   8.15

Table 2: ε-Similarity scores for Covert Channel II. For each window of t packets, the interval is selected from the set (0.04, 0.06, 0.08). Results are shown for both selection methods (Sequential and Random) and for the original covert channel, which employs a single interval (0.04).

FTP-data, UDP and the covert channel. The reported values are averaged over ten runs. The results show a striking difference between the covert channel and non-covert flows for the NZIX-II data. For example, 40% of the covert traffic has a difference of less than ε = 0.005, whereas for the non-covert channel less than 15% is this similar. What is interesting is that although the trend is similar for the DARPA dataset, there is far more regularity in the DARPA data than in the NZIX-II data. Indeed, studies have shown that because the normal traffic in the DARPA dataset was synthetically generated, it is not entirely representative of real traffic [24, 22]. Although previous studies have not examined the specific inter-arrival times, they have illustrated that 1) many attributes of DARPA network traffic are more predictable than the real traffic, and 2) the synthetic dataset shows different statistical characteristics than real data. Hence we conjecture that the regularity shown in Figure 6 for the DARPA dataset is a direct consequence of the nature of the synthetic data.
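The regularity measure (Measure I) behind these observations is defined earlier in the paper; as a reminder, a minimal sketch under our reconstruction of it (partition the stream into windows of w inter-arrival times, take each window's standard deviation, and report the spread of the pairwise relative differences between those standard deviations; the function and parameter names are our own):

```python
import itertools
import statistics

def regularity(ia_times, w=100):
    """Our reconstruction of Measure I: sigma_i is the standard deviation
    of inter-arrival times in window i; the score is the spread of the
    pairwise relative differences |sigma_i - sigma_j| / sigma_i.
    A small value means the windows look alike, i.e. the timing is regular."""
    sigmas = [statistics.stdev(ia_times[k:k + w])
              for k in range(0, len(ia_times) - w + 1, w)]
    diffs = [abs(si - sj) / si
             for si, sj in itertools.combinations(sigmas, 2) if si > 0]
    return statistics.stdev(diffs)
```

A stream whose windows all share the same timing pattern scores exactly 0, while windows with differing variability push the score up.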

4.2.3 Covert Channel II: Varying the timing interval:

To understand how our metrics work when the sender tries to hide the covert channel, we first experimented with covert channels where the sender alternates between different intervals. The motivation from the sender's viewpoint is to obfuscate the regularity. In our experiment, we chose three different interval values: 0.04, 0.06, and 0.08. After t packets, we switch to a new interval. We experimented with two different methods of specifying the new interval: cycling through them sequentially or selecting one at random.

Varying the interval impacts Measure I (regularity) because the variances of the windows are no longer comparable unless t is much smaller than w. In that case, all three intervals would be observed several times in each window of w packets, and therefore the variance for each window would be similar. However, for cases where t approaches or exceeds w, this metric cannot detect covert timing channels; due to space constraints we do not show the actual numbers.

On the other hand, our second metric (ε-Similarity) still shows differences in values for the covert versus the non-covert traffic. In Table 2 we show the results for the original single-interval covert channel, and for several choices of t for both methods of selecting a new interval period after t packets. Note that the results are averaged over ten runs for each parameter setting. Looking at each of the seven values of ε, we see little difference for either the sequential or random method. These results show that the ε-Similarity metric is robust to this method of hiding covert traffic.

4.2.4 Covert Channel III: Injecting noise:

Our third experiment examines how our measures fare when we explicitly introduce irregularity into the covert channel. We inject noise into the channel as follows. For a covert channel operating on a port typically associated with a particular application X, we insert portions of inter-arrival times from a non-covert traffic stream for application X. For example, if the covert channel runs on Port 80, we use WWW traffic. We then break the covert channel into blocks of 100 packets, and randomly replace blocks of the covert traffic with the non-covert traffic of application X until we achieve the desired noise level (e.g., for 10% noise, the IA times for two randomly selected blocks of 100 packets would be replaced in our 2000-packet stream).

This scheme again impacts our first measure because of the random nature of the noise injection. Because a window may include components from the noisy traffic, the windows are no longer comparable and our regularity measure fails to discriminate covert from non-covert traffic.
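For concreteness, the ε-Similarity score used throughout this section can be sketched in a few lines. This is a minimal sketch under our reading of the metric as defined earlier in the paper (sort the inter-arrival times, take the relative differences of consecutive sorted values, and report the percentage below ε); the function name and the handling of zero inter-arrival times are our own choices.

```python
def epsilon_similarity(ia_times, eps):
    """Percentage of consecutive relative differences, computed over the
    sorted inter-arrival times, that fall below the cutoff eps."""
    t = sorted(ia_times)
    # Relative difference between each pair of neighboring sorted values.
    rel = [abs(t[i] - t[i + 1]) / t[i] for i in range(len(t) - 1) if t[i] > 0]
    return 100.0 * sum(1 for r in rel if r < eps) / len(rel)
```

A single-interval covert channel (all inter-arrival times near 0.04) scores near 100 even for tiny ε, while ordinary traffic spreads its relative differences across the ε range, which is the contrast the tables quantify.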

Our second measure, however, fares better. In Table 3 we show the ε-Similarity values for the original covert channel (Covert Channel I, shown in the 0% noise row) and for noise levels of 10, 25 and 50%. In addition, we include the values for the non-covert traffic in the bottom three rows of the table. Note that as the noise level increases, the covert traffic begins to have ε-Similarity values close to the non-covert traffic. However, a drawback from the sender/receiver's viewpoint is that the covert bandwidth decreases linearly with the noise level.
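The block-replacement scheme described above can be sketched as follows. The function and parameter names are our own, and we assume the non-covert stream supplies at least as many inter-arrival (IA) times as the covert stream uses.

```python
import random

def inject_noise(covert_ia, noncovert_ia, noise_frac, block=100):
    """Randomly replace whole blocks of `block` covert IA times with the
    corresponding blocks from a non-covert stream of the same application."""
    out = list(covert_ia)
    n_blocks = len(out) // block
    n_noisy = round(noise_frac * n_blocks)  # e.g. 10% of 20 blocks -> 2 blocks
    for b in random.sample(range(n_blocks), n_noisy):
        start = b * block
        out[start:start + block] = noncovert_ia[start:start + block]
    return out
```

For a 2000-packet stream and noise_frac=0.1, this swaps out exactly two randomly chosen 100-packet blocks, matching the example in the text.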

4.3 Automatic Detection of IP Covert Timing Channels

In this section, we present the results of an experiment designed to evaluate whether our metrics can be used to automatically detect covert timing channels. Both of our methods require that we set a threshold. For ε-Similarity, we need to choose a threshold for each value of ε.¹ For our regularity metric, values below the threshold are considered to have been generated by covert traffic. To set the parameters, we first ran experiments with ten flows from each protocol type. Note that we experimented only with WWW and FTPd traffic, as in the NZIX-II dataset there is insufficient data for the other protocols to find ten flows of 2000 packets. After we collected the data from the ten train-

¹ Note that for values of ε < 0.1, observations above our threshold are considered covert traffic, and for ε > 0.1, values below our threshold are considered covert, because the majority of covert traffic has a similarity < 0.1.
