
Abstract-The increase in interconnectivity has given rise to a huge number of IoT-enabled devices in botnets, which are being used for DDoS attacks. Honeypots have proven to be a vital tool for tracking malicious activities. A honeypot is a computer system set up to act as bait for cyber attackers, in order to detect, deflect, and study attempts by attackers to gain unauthorized access to our computer systems.

Malicious traffic is blocked on the Internet via filtering, which is done through access control lists (ACLs). ACL filters are available on routers today but are a scarce resource, since they are stored in ternary content-addressable memory (TCAM). Filtering source prefixes instead of individual IP addresses, also known as aggregation, helps reduce the number of filters, but also blocks legitimate traffic originating from the filtered prefixes. For different realistic attack scenarios and operators' policies, we show how to optimally select which source prefixes to filter, and for each scenario we design optimal and efficient algorithms.

Keywords-DDoS, filtering, aggregation, ACL, TCAM.



Protecting our network infrastructure from malicious traffic, including malicious code propagation, spam, scanning, and DDoS (distributed denial-of-service) attacks, is very important. Such activities cause problems on networks ranging from simple annoyance to severe operational, financial, and political damage to organizations, companies, and critical infrastructure. These attacks have increased in volume, automation, and sophistication, and are largely enabled by botnets, which serve as the platform for launching them. Protecting a host or network (the victim) from malicious traffic is a hard problem that requires the coordination of several complementary components, including technical solutions (at the application and/or network level) and nontechnical ones (e.g., business and legal).

The most fundamental building block in blocking malicious traffic is the filtering support provided by the network. For example, to counter an ongoing DDoS attack, the DDoS traffic is blocked by the Internet service provider (ISP) before it reaches its clients by means of filtering. ISPs may also proactively identify and block traffic carrying malicious code before it reaches and compromises vulnerable hosts. In both cases, filtering is an essential operation that must be carried out within the network. Routers today provide filtering capabilities via access control lists (ACLs), which match a packet header against preset rules and apply predefined actions to the matching packets. This technique is used for enforcing policies of different varieties, such as infrastructure protection. For the purpose of blocking malicious traffic, a filter is a simple ACL rule that denies or allows access to a source IP address or prefix.
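The source-address matching that such a deny rule performs can be sketched in Python using the standard `ipaddress` module. The prefixes below are hypothetical examples drawn from documentation address space, not from the paper; the function name `acl_permits` is likewise illustrative.

```python
import ipaddress

# Hypothetical deny list: each entry plays the role of one ACL "deny"
# rule that matches on the packet's source address or prefix.
DENY_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),    # an example attack prefix
    ipaddress.ip_network("198.51.100.17/32"),  # a single malicious host
]

def acl_permits(src_ip: str) -> bool:
    """Return False if the source address matches any deny rule."""
    addr = ipaddress.ip_address(src_ip)
    return not any(addr in prefix for prefix in DENY_PREFIXES)

print(acl_permits("203.0.113.45"))  # matches the /24 deny rule -> False
print(acl_permits("192.0.2.10"))    # matches no rule -> True
```

A hardware ACL performs this lookup in parallel over all rules in TCAM, rather than iterating as this sketch does.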

Filtering is implemented in hardware, since modern routers have high forwarding rates. ACLs are stored in ternary content-addressable memory (TCAM), which allows parallel access and reduces the number of lookups per forwarded packet. TCAM is expensive and consumes more space than conventional memory; hence, TCAM puts a limit on the number of filters, and this limit is unlikely to change in the near future. With many thousands of filters per path, an ISP alone cannot block the currently witnessed attacks, let alone the attacks from multimillion-node botnets expected in the near future.

A framework is developed for studying source-prefix filtering as a resource allocation problem. To the best of our knowledge, optimal filter selection has not been explored so far, as most related work on filtering has focused on protocol and architectural aspects. Within this framework, we formulate and solve five practical source-address filtering problems, depending on the attack scenario and the operator's policy and constraints. The framework exploits the special structure of each problem to design optimal and computationally efficient algorithms.

Packet filtering helps enhance the security of a network by examining network packets as they pass through a firewall or router. Filtering packets based on IP address prefixes and suffixes helps determine which IP addresses are malicious and which are not, and motivates the development of an efficient algorithm.
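The trade-off at the heart of prefix aggregation can be illustrated in Python with the standard `ipaddress` module. The malicious addresses below are purely hypothetical; the point is that merging individual addresses into prefixes saves filters (TCAM entries) but a covering prefix may also block legitimate addresses inside it.

```python
import ipaddress

# Hypothetical malicious sources reported for one attack.
bad = ["10.1.0.1", "10.1.0.2", "10.1.0.3", "10.1.0.7"]
nets = [ipaddress.ip_network(a + "/32") for a in bad]

# Lossless aggregation: merge only contiguous addresses. No legitimate
# address is covered, but several filters may still be needed.
lossless = list(ipaddress.collapse_addresses(nets))
print(lossless)  # .1/32, .2/31 and .7/32: three filters instead of four

# Lossy aggregation: a single covering prefix costs one TCAM entry but
# also blocks legitimate addresses inside it (collateral damage).
cover = ipaddress.ip_network("10.1.0.0/29")
collateral = cover.num_addresses - len(bad)
print(cover, "covers", collateral, "legitimate addresses")  # 8 - 4 = 4
```

Choosing between such options, per prefix and under a filter budget, is exactly the selection problem the framework optimizes.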

The proposed system can be used to protect the network infrastructure from malicious traffic, such as scanning, malicious code propagation, spam, and distributed denial-of-service (DDoS) attacks.




Numerous experiments have been conducted in the field of preventing DDoS attacks by implementing honeypots. Some of the experiments relevant to the proposed work are discussed as follows.
Cisco Systems [1] has documented that ACEs in the same ACL that do not require logging are still processed in hardware, and that the Supervisor 2 with PFC2 and the Supervisor 720 with PFC3 support rate-limiting of packets redirected to the MSFC for ACL logging. Cisco Systems [2] also researched risk assessment, considering two key areas of risk when deploying infrastructure-protection ACLs: for the ACL to be effective, the appropriate permit/deny statements must be in place, all required protocols must be permitted, and the correct address space must be protected by the deny statements. M. Collins et al. [3] presented an approach for effectively predicting future bot locations by testing unclean networks for dangerous properties and collating data from multiple indicators, providing evidence of cross-relationships between the various datasets and showing that botnet activity predicts spamming and scanning, while phishing activity appears to be unrelated to the other indicators. Z. Chen et al. [4] presented a study of the spatial and temporal features of malicious source addresses by tracing over 7 billion Internet intrusion attempts provided by DShield.org, covering 160 million unique source addresses, and focusing on the spatial distributions and temporal characteristics of malicious sources. Z. Mao et al. [5] presented a measurement study analysing DDoS attacks from multiple data sources, relying both on direct measurements of flow-level information and on more traditional indirect measurements using backscatter analysis. Their results suggest little use of address spoofing by attackers, which implies that such attacks will be invisible to indirect backscatter measurement techniques. The authors suggest that network providers can reduce a substantial volume of malicious traffic with targeted deployment of DDoS defences.

A. Ramachandran et al. [6] studied the network-level behaviour of spammers, including the IP address ranges that send the most spam, common spamming modes (e.g., BGP route hijacking, bots), how persistent each spamming host is across time, and the characteristics of spamming botnets. Their findings suggest that developing algorithms to identify botnet membership, filtering email messages based on network-level properties, and improving the security of the Internet routing infrastructure may prove extremely effective for combating spam. S. Venkataraman et al. [7] performed an extensive analysis of IP addresses and IP aggregates given by network-aware clusters in order to investigate properties that can distinguish the bulk of legitimate mail from spam. Their analysis indicates that the bulk of legitimate mail comes from long-lived IP addresses, and that the bulk of spam comes from network clusters that are relatively long-lived, suggesting that network-aware clusters may provide a good aggregation scheme for exploiting the history and structure of IP addresses. Y. Xie et al. [8] presented a paper that introduces a novel algorithm, UDmap, to identify dynamically assigned IP addresses and analyse their dynamics patterns. They establish that 95.6% of the mail servers set up on dynamic IP addresses in their trace sent out solely spam emails. Moreover, these mail servers sent out a large amount of spam, amounting to 42.2% of all spam emails received by Hotmail. These results highlight the importance of being able to accurately identify dynamic IP addresses for spam filtering.

J. Zhang et al. [9] implemented a new system to generate blacklists for contributors to a large-scale security-log sharing infrastructure. They experimented on a large corpus of real DShield data and demonstrated that their blacklists have higher attacker hit rates, better prediction quality for new attackers, and long-term performance stability. DShield.org [10] assembled lists based on tracking and malware lists from different sources, collecting and categorizing various lists associated with a certain level of sensitivity. Ankur et al. [11] completed research work concentrating on two kinds of DDoS attacks, namely the UDP flood attack and the ping-of-death attack; both flood the victim nodes with unnecessary packets, resulting in channel congestion and denial of service. G. Varghese [12] examines the bottlenecks most often encountered at four disparate levels of implementation: protocol, OS, hardware, and architecture. He then derives 15 solid principles, ranging from the commonly recognized to the ground-breaking, that are key to breaking these bottlenecks. Ashok et al. [13] employed statistical analysis, supervised learning, and ensemble-based dynamic reputation of domains, IP addresses, and name servers to distinguish benign from abnormal domains with very low false positives.

Kanungo et al. [14] present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which they call the filtering algorithm. They establish the practical efficiency of the filtering algorithm in two ways. First, they present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, they present a number of empirical studies on both synthetically generated data and real data sets from applications in colour quantization, data compression, and image segmentation. G. Pack et al. [15] explore a technique that can be used as part of DDoS defences: using ACL rules that distinguish attack packets from legitimate traffic based on the source addresses in packets. This technique can reduce the attack size by a factor of 3 while also dropping between 2% and 10% of the legitimate traffic. E. Kohler [16] investigates the structure of addresses contained in IP traffic, specifically analysing the structural characteristics of destination IP addresses seen on Internet links, considered as a subset of the address space. The resulting model may be useful for simulations where realistic IP addresses are required.

Z. Chen et al. [17] investigate three aspects: (a) a network vulnerability, namely the non-uniform vulnerable-host distribution; (b) threats, i.e., intelligent worms that exploit such a vulnerability; and (c) defence, i.e., the challenges of fighting such threats. They first study five data sets and observe consistently clustered vulnerable-host distributions. They then analytically and empirically measure the infection rate and propagation speed of network-aware worms. They show that a representative network-aware worm can increase its spreading speed by exactly or nearly a non-uniformity factor, compared to a random-scanning worm, at the early stage of worm propagation. This implies that when a worm exploits an uneven vulnerable-host distribution as a network-wide vulnerability, the Internet can be infected much more rapidly. Furthermore, they analyse the effectiveness of defence strategies against the spread of network-aware worms. Their results demonstrate that counteracting network-aware worms is a significant challenge for strategies that include host-based defence and IPv6. R. Xu et al. [18] presented a paper that introduces concepts and algorithms related to clustering, along with a concise survey of existing clustering algorithms and a comparison among them. The effectiveness of the candidate clustering algorithms is measured through a number of internal and external validity metrics, as well as stability, runtime, and scalability tests. R. Mahajan et al. [19] propose mechanisms for detecting and controlling high-bandwidth aggregates, a well-defined subset of the traffic in DoS attacks and flash crowds. The design involves both a local mechanism for detecting and controlling an aggregate at a single router, and a cooperative pushback mechanism in which a router can ask upstream routers to control an aggregate. These mechanisms provide some needed relief from flash crowds and flooding-style DoS attacks. K. Argyraki et al. [20] presented a paper on Active Internet Traffic Filtering (AITF), a network-layer defence mechanism against bandwidth-flooding attacks. AITF enables a receiver to contact misbehaving sources and ask them to stop sending it traffic; each source that has been asked to stop is policed by its own Internet service provider (ISP), which ensures its compliance. They also show that the network layer of the Internet can provide an effective, scalable, and incrementally deployable solution against bandwidth-flooding attacks.

In the present literature, optimal filter selection has not been explored so far, as most related work on filtering has focused on protocol and architectural aspects. Within the framework we develop, we formulate and solve five practical source-address filtering problems, depending on the attack scenario and the operator's policy and constraints. Filter selection optimization leads to novel variations of the multidimensional knapsack problem.
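The flavour of such a knapsack-style formulation can be sketched in Python. Each candidate filter carries a benefit (malicious traffic blocked) and a cost (legitimate traffic collaterally blocked), and the operator has a TCAM budget and a collateral cap. The prefixes, traffic counts, and the brute-force search below are illustrative assumptions only; the paper's algorithms instead exploit the structure of each problem variant.

```python
from itertools import combinations

# Hypothetical candidate filters:
# (prefix, malicious packets blocked, legitimate packets blocked).
candidates = [
    ("203.0.113.0/24",    900, 50),
    ("198.51.100.0/25",   400, 10),
    ("192.0.2.0/28",      300, 80),
    ("198.51.100.128/25", 350,  5),
]

F_MAX = 2            # available TCAM entries (filter budget)
COLLATERAL_CAP = 60  # legitimate packets we tolerate blocking

def best_selection(cands, f_max, cap):
    """Pick the subset of at most f_max filters that blocks the most
    malicious traffic while keeping collateral damage within cap.
    Exhaustive search, fine only for this tiny illustration."""
    best, best_val = (), 0
    for k in range(1, f_max + 1):
        for subset in combinations(cands, k):
            collateral = sum(c[2] for c in subset)
            value = sum(c[1] for c in subset)
            if collateral <= cap and value > best_val:
                best, best_val = subset, value
    return best, best_val

chosen, blocked = best_selection(candidates, F_MAX, COLLATERAL_CAP)
print([c[0] for c in chosen], blocked)
```

With two size-limited resources (filter count and collateral budget), the selection is a small instance of a multidimensional knapsack.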



