Module-3-The Network Layer

Uploaded by Safa Hamza

The Network Layer

Module III
The network layer is concerned with
getting packets from the source all the
way to the destination.
Getting to the destination may require
making many hops at intermediate
routers along the way.
The network layer is the lowest layer
that deals with end-to-end
transmission.
To achieve its goals, the network layer
must know about the topology of the
communication subnet (i.e., the set of
all routers) and choose appropriate
paths through it.
It must also take care to choose routes
to avoid overloading some of the
communication lines and routers while
leaving others idle.
When the source and destination are
in different networks, new problems
occur. It is up to the network layer to
deal with them.
Design Issues
The design issues include the service provided
to the transport layer and the internal design of
the subnet.
The network layer is responsible for the source
to destination delivery of a packet across
multiple networks.
The logical addressing of the packets is done by
the network layer with the help of the Internet
Protocol.
The routing of packets, error reporting and
congestion control are the duties of the
network layer.
Hence the network layer is designed to
accomplish the following goals.
Services Provided to the Transport Layer

1. Services independent of router technology.
2. Transport layer shielded from the number,
type, and topology of routers.
3. Network addresses made available to the
transport layer should use a uniform
numbering plan, even across LANs and WANs.
4. Provide flow control.
Service to the Transport Layer
Given the above goals, the designers of the network
layer must decide whether the network layer should
provide connectionless or connection-oriented
services.
To have a reliable data transfer, the network layer
provides a connection-oriented service to the
transport layer, and in some situations a
connectionless service.
The network layer service is defined by a set of
primitives; these primitives are similar to
programming-language procedures. Because the
network layer must provide two types of service,
namely connection-oriented and connectionless,
separate sets of primitives are defined for each.
For the connection-oriented service the
primitives used are:
• Making the connection - CONNECT
• Closing the connection - DISCONNECT
• Sending information (i.e., using the
connection) - DATA, DATA-ACKNOWLEDGE,
EXPEDITED-DATA
• Resetting the connection - RESET
The connectionless service primitives are
divided into two groups:
• Send a packet of data - UNITDATA
• Enquire into the performance of the
network - FACILITY, REPORT
Packets are sent using UNITDATA.
FACILITY allows a host to query the
network about things like average delivery
statistics.
REPORT is used by the network to tell the
host if there is a problem with the network,
for example, if a machine has gone down.
Network Layer Design Issues
• Store-and-forward packet switching
• Services provided to transport layer
• Implementation of connectionless service
• Implementation of connection-oriented
service
• Comparison of virtual-circuit and datagram
networks
Store And Forward Packet Switching

The environment of the network layer protocols


Packets are frequently called
DATAGRAMS.
No advance setup is needed; no virtual
circuit is established.
The transport layer code runs in the
operating system of host H1.
Every router has an internal table
telling it where to send packets.
Each table entry is a pair consisting of a
destination and the outgoing line to use
for that destination.
Implementation of Connectionless Service

Figure: Routing within a datagram network, showing A's table (initially and later), C's table, and E's table.
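The table-driven forwarding described above can be sketched as follows; the class, router, and line names are illustrative, not from the slides.

```python
# Sketch of per-router forwarding in a datagram network. Each router
# keeps a table mapping a destination to the outgoing line to use.

class DatagramRouter:
    def __init__(self, name):
        self.name = name
        self.table = {}  # destination -> outgoing line

    def set_route(self, destination, outgoing_line):
        # The routing algorithm updates entries over time; e.g. A may
        # first send packets for E via line C, and later via line B.
        self.table[destination] = outgoing_line

    def forward(self, destination):
        # Forwarding: look up the destination, return the line to use.
        return self.table[destination]

a = DatagramRouter("A")
a.set_route("E", "C")      # A's table (initially)
print(a.forward("E"))      # -> C
a.set_route("E", "B")      # A's table (later), after the algorithm adapts
print(a.forward("E"))      # -> B
```

Because each packet is forwarded independently, successive packets for the same destination may take different routes when the table changes.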
CONNECTION ORIENTED
A virtual circuit is needed to avoid
having to choose a new route for
every packet.
The route from source to destination
is chosen as part of the connection setup
and stored in tables inside the
routers.
Connection-Oriented Service

Figure: Routing within a virtual-circuit network, showing A's table, C's table, and E's table.
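A virtual-circuit table like those in the figure can be sketched as follows; the router names, line names, and circuit identifiers are made up for illustration. During setup, each router on the chosen path records a mapping from (incoming line, incoming VC id) to (outgoing line, outgoing VC id); data packets then carry only the VC id and follow the stored route.

```python
class VCRouter:
    def __init__(self):
        self.table = {}  # (in_line, in_vc) -> (out_line, out_vc)

    def setup(self, in_line, in_vc, out_line, out_vc):
        # Entry created once, when the virtual circuit is established.
        self.table[(in_line, in_vc)] = (out_line, out_vc)

    def forward(self, in_line, in_vc):
        # Every data packet on this circuit takes the same route.
        return self.table[(in_line, in_vc)]

r = VCRouter()
r.setup("H1", 1, "C", 1)   # circuit 1 from host H1 goes out on line C
r.setup("H3", 1, "C", 2)   # H3 also chose VC id 1, so it is relabeled to 2
print(r.forward("H1", 1))  # -> ('C', 1)
print(r.forward("H3", 1))  # -> ('C', 2)
```

The relabeling in the second entry shows why VC ids are swapped hop by hop: two hosts may independently pick the same circuit number.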
Routing Algorithms
The main function of the network layer is
routing packets from the source machine to
the destination machine.
In most subnets, packets will require multiple
hops to make the journey.
The routing algorithm is the part of the
network layer software responsible for
deciding which output line an incoming
packet should be transmitted on.
If the subnet uses virtual circuits internally,
routing decisions are made only when a new
virtual circuit is set up.
Thereafter, data packets just follow the
previously established route.
This is also known as session routing.
It is sometimes useful to make a
distinction between routing, which is
making the decision about which routes to
use, and forwarding, which is what happens
when a packet arrives.
One processor of the router looks up the
outgoing line to use for the packet in the
routing tables (this is forwarding); a second
processor updates the routing tables (this is
routing).
Desired Properties

Correctness
Simplicity
Robustness: ability to handle
failures
Stability: converge to a fixed set of paths
Fairness
Optimality
Non-Adaptive & Adaptive
Non-adaptive algorithms do not base their
routing decisions on measurements
or estimates of the current traffic
and topology.
Instead, the choice of the route to
use to get from I to J is computed in
advance, offline, and downloaded to
the routers when the network is
activated.
Adaptive
Adaptive algorithms change their routing
decisions to reflect changes in the topology,
and usually the traffic as well.
These algorithms differ in where they
get their information (e.g., locally or from
nearby routers).

Routing Algorithms
• Shortest path algorithm
• Flooding algorithm
• Hierarchical routing algorithm
The Optimality Principle

(a) A network. (b)Sink tree of best paths to router B


A general statement can be made about
optimal routes without regard to network
topology or traffic; it is known as the
optimality principle.
It states that if router J is on the
optimal path from router I to router K, then
the optimal path from J to K also falls
along the same route.
To see this, call the route from I to J r1 and
the rest of the route r2. If a route better
than r2 existed from J to K, it could be
concatenated with r1 to improve the route
from I to K, contradicting the optimality of
r1r2.
Sink tree
The optimal routes from all sources to a
given destination form a tree rooted
at the destination.
Such a tree is called a sink tree,
where the distance metric is the
number of hops.
A sink tree is not necessarily unique.
The goal of all routing algorithms is
to discover and use the sink trees for
all routers.
Shortest Path Algorithm

The first five steps used in computing the shortest path from A to D. The
arrows indicate the working node.
Shortest Path Algorithm
1. Assign to every node a tentative distance value:
set it to zero for the initial node and to infinity for
all other nodes.
2. Mark all nodes unvisited. Set the initial node as
current. Create a set of the unvisited nodes called
the unvisited set consisting of all the nodes except
the initial node.
3. For the current node, consider all of its
unvisited neighbors and calculate their tentative
distances. For example, if the current node A is
marked with a tentative distance of 6, and the
edge connecting it with a neighbor B has length 2,
then the distance to B (through A) will be 6+2=8.
Shortest Path Algorithm
If this distance is less than the previously
recorded tentative distance of B, then overwrite
that distance. Even though a neighbor has
been examined, it is not marked as "visited" at
this time, and it remains in the unvisited set.
4. When we are done considering all of the
neighbors of the current node, mark the
current node as visited and remove it from the
unvisited set. A visited node will never be
checked again; its distance recorded now is
final and minimal.
Shortest Path Algorithm
5. If the destination node has been
marked visited (when planning a route
between two specific nodes), or if the
smallest tentative distance among the
nodes in the unvisited set is infinity
(when planning a complete traversal),
then stop. The algorithm has finished.
6. Otherwise, set the unvisited node marked
with the smallest tentative distance as the
next "current node" and go back to step 3.
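The steps above can be sketched in Python using a priority queue. The example graph is loosely modeled on the A-to-D figure, but its edge weights are assumed for illustration.

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path following the steps above: tentative distances,
    an unvisited set, and repeated selection of the closest node.
    `graph` maps node -> {neighbor: edge_length}."""
    dist = {node: float("inf") for node in graph}    # step 1
    dist[source] = 0
    prev = {}
    heap = [(0, source)]            # candidate nodes ordered by distance
    visited = set()                 # complement of the unvisited set (step 2)
    while heap:
        d, node = heapq.heappop(heap)       # step 6: smallest tentative
        if node in visited:
            continue                        # stale heap entry, skip
        visited.add(node)                   # step 4: mark visited
        if node == target:                  # step 5: stop at the target
            break
        for neighbor, length in graph[node].items():   # step 3
            alt = d + length
            if alt < dist[neighbor]:        # overwrite if shorter
                dist[neighbor] = alt
                prev[neighbor] = node
                heapq.heappush(heap, (alt, neighbor))
    # Reconstruct the path by walking the predecessor chain backwards.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]

g = {
    "A": {"B": 2, "G": 6},
    "B": {"A": 2, "E": 2, "C": 7},
    "C": {"B": 7, "D": 3},
    "D": {"C": 3, "F": 2},
    "E": {"B": 2, "F": 2, "G": 1},
    "F": {"E": 2, "D": 2, "H": 2},
    "G": {"A": 6, "E": 1, "H": 4},
    "H": {"G": 4, "F": 2},
}
print(dijkstra(g, "A", "D"))  # -> (['A', 'B', 'E', 'F', 'D'], 8)
```

Because a visited node is never reopened, its recorded distance is final and minimal, exactly as step 4 states.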

Dijkstra’s algorithm to compute the shortest path through a graph


Flow based Routing
Flow based routing considers the flow
in the network.
In some networks, the mean data flow
between each pair of nodes is
relatively stable and predictable.
Under conditions in which the average
traffic from i to j is known in advance
and, to a reasonable approximation,
constant in time, it is possible to
analyze the flows mathematically to
optimize the routing.
Flow based Routing
The idea behind the analysis is that for a given
line, if the capacity and average flow are
known, it is possible to compute the mean
packet delay on that line from queuing theory.
From the mean delays on all the lines, it is
straightforward to calculate a flow-weighted
average to get the mean packet delay for the
whole subnet.
The routing problem then reduces to finding
the routing algorithm that produces the
minimum average delay for the subnet
This technique requires certain information
in advance: first, the subnet topology; second,
the traffic matrix; third, the capacity matrix;
and finally, a routing algorithm.
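The delay computation described above can be illustrated with a toy example. Under the common M/M/1 queueing model, a line with capacity c and average flow f (both in packets/sec) has mean packet delay 1/(c - f); the capacities and flows below are invented numbers, not from the slides.

```python
lines = [
    # (capacity, average flow) in packets/sec -- assumed values
    (100.0, 60.0),
    (100.0, 80.0),
    (50.0, 20.0),
]

def mean_line_delay(capacity, flow):
    # M/M/1 mean delay in seconds; requires flow < capacity.
    return 1.0 / (capacity - flow)

# Flow-weighted average: weight each line's delay by the fraction
# of the subnet's traffic that uses it.
total_flow = sum(flow for _, flow in lines)
mean_delay = sum(f * mean_line_delay(c, f) for c, f in lines) / total_flow
print(f"mean packet delay: {mean_delay * 1000:.1f} ms")  # -> 38.5 ms
```

The routing problem then amounts to choosing routes (and hence the per-line flows) that minimize this weighted average.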
Hierarchical Routing
Congestion Control Algorithms
When too many packets are present in
the subnet, performance degrades. This
situation is called congestion.
As traffic increases too far, the routers
are no longer able to cope and they begin
losing packets.
Contributing factors include insufficient
memory, slow CPUs, and low-bandwidth
lines.
Flow control vs congestion control
Congestion control concerns controlling
traffic entry into a telecommunications
network, so as to avoid congestive
collapse, by attempting to avoid over-
subscription of any of the processing or
link capabilities of the intermediate
nodes and networks and by taking resource-
reducing steps, such as reducing the rate
of sending packets.
It should not be confused with flow
control, which prevents a fast sender from
overwhelming a slow receiver.
Congestion in a network may occur if the
load on the network is greater than the
capacity of the network.
It is due to the fact that routers and
switches have queues (buffers) that hold
the packets before and after processing.
If the rate of packet arrival is higher
than the packet processing rate, the
input queues become longer and longer.
If the packet departure rate is less than
the packet processing rate, the output
queues become longer and longer.
Congestion control mechanisms
can be divided into two broad
categories: open-loop control
(prevention) and closed-loop
control (removal).
Open- Loop Congestion Control
In open-loop congestion control, policies
are applied to prevent congestion before
it happens. Here, the congestion is
handled by either the source or the
destination.
Retransmission Policy
Retransmission is unavoidable in some
situations. If the sender feels that a sent
packet is lost or corrupted, the packet
needs to be retransmitted. Retransmission
may in general increase congestion, but a
good retransmission policy can prevent it.
The retransmission policy and the
retransmission timers need to be
designed to optimize efficiency and
at the same time prevent congestion.
Window Policy
a. To implement the window policy, the selective
reject window method is used for
congestion control.
b. Selective reject is preferred over the
Go-back-N window because in Go-back-N,
when the timer for a packet times out,
several packets are resent, although
some may have arrived safely at the
receiver. This duplication may make
congestion worse.
c. The selective reject method resends only the
specific packets that were lost or damaged.
Acknowledgement policy
a. The acknowledgement policy imposed by the receiver
may also affect congestion.
b. If the receiver does not acknowledge every packet
it receives it may slow down the sender and help
prevent congestion.
c. Acknowledgments also add to the traffic load on the
network. Thus, by sending fewer acknowledgements
we can reduce load on the network.
d. To implement it, several approaches can be used:
A receiver may send an acknowledgement only if it
has a packet to be sent.
A receiver may send an acknowledgement when a
timer expires.
A receiver may also decide to acknowledge only N
packets at a time.
Discarding policy

A good discarding policy by the routers may
prevent congestion and at the same time may not
harm the integrity of the transmission.
In audio transmission, for example, the less
sensitive packets are discarded to control
congestion.
Admission policy
An admission policy is a quality of service
mechanism which can prevent congestion in
virtual circuit networks.
Switches in a network first check the
resource requirement of a flow before
admitting it to the network.
A router can deny establishing a virtual
circuit connection if there is congestion in the
network or there is a chance of future
congestion.
Closed- Loop Congestion Control
Closed-loop congestion control
mechanisms try to remove the
congestion after it happens.
Different protocols use
different mechanisms.
Back pressure
a. Backpressure is a node-to-node congestion control
that starts with a node and propagates in the opposite
direction of the data flow.
b. The backpressure technique can be applied only to
virtual-circuit networks, in which each node
knows the upstream node from which a data flow is
coming.
c. In this method of congestion control, the congested
node stops receiving data from the immediate upstream
node or nodes.
d. This may cause the upstream node or nodes to
become congested, and they, in turn, reject data from
their upstream node or nodes.
e. As shown in the figure, node 3 is congested;
it stops receiving packets and informs its
upstream node 2 to slow down. Node 2 in
turn may become congested and informs node 1
to slow down. Node 1 may then become
congested and informs the source node to
slow down. In this way the congestion is
alleviated: the pressure on node 3 is
moved backward to the source to remove the
congestion.
Back pressure
The back pressure refers to a congestion
control mechanism in which a congested
node stops receiving data from the
immediate upstream node or nodes.
This may cause the upstream node or
nodes to become congested and reject
data from their upstream nodes.
Hence, backpressure is a node-to-node
congestion control that starts with a node
and propagates in the opposite direction of
the data flow to the source.
Choke packet
a. In this method of congestion control, the congested
router or node sends a special type of packet, called a
choke packet, to the source to inform it about the
congestion.
b. Here, the congested node does not inform its upstream
node about the congestion, as in the backpressure method.
c. In the choke packet method, the congested node sends a
warning directly to the source station, i.e. the
intermediate nodes through which the packet has
traveled are not warned (e.g., using the Internet Control
Message Protocol).
Implicit signaling
a. In implicit signaling, there is no
communication between the congested node
or nodes and the source.
b. The source guesses that there is
congestion somewhere in the network when it
does not receive an acknowledgment.
Therefore, the delay in receiving an
acknowledgment is interpreted as congestion
in the network.
c. On sensing this congestion, the source
slows down.
d. This type of congestion control policy is
used by TCP.
Explicit Signaling
a. In this method, the congested nodes
explicitly send a signal to the source or
destination to inform it about the congestion.
b. Explicit signaling is different from the
choke packet method. In the choke packet
method, a separate packet is used for this
purpose, whereas in the explicit signaling
method, the signal is included in the packets
that carry data.
c. Explicit signaling can occur in either the
forward or the backward direction.
d. In backward signaling, a bit is set in a
packet moving in the direction opposite to the
congestion. This bit warns the source about
the congestion and informs the source to slow
down.
e. In forward signaling, a bit is set in a packet
moving in the direction of the congestion. This bit
warns the destination about the congestion.
The receiver in this case uses policies such as
slowing down the acknowledgements to
remove the congestion.
The leaky bucket algorithm
Consider a bucket with a small hole
at the bottom: whatever the rate at
which water is poured into the bucket,
the rate at which water comes out
through that small hole is constant. This
scenario is depicted in figure 1(a).
Once the bucket is full, any additional
water entering it spills over the sides
and is lost (i.e. it doesn’t appear in the
output stream through the hole
underneath).
The same idea of leaky bucket can be applied to
packets, as shown in Fig. 1(b). Conceptually each
network interface contains a leaky bucket. And the
following steps are performed:
When the host has to send a packet, the packet is
thrown into the bucket.
The bucket leaks at a constant rate, meaning the
network interface transmits packets at a constant
rate.
Bursty traffic is converted to a uniform traffic by
the leaky bucket.
In practice the bucket is a finite queue that
outputs at a finite rate.
This arrangement can be simulated
in the operating system or can be
built into the hardware.
Implementation of this algorithm is
easy and consists of a finite queue.
Whenever a packet arrives, if there
is room in the queue it is queued up,
and if there is no room the packet is
discarded.
Figure 1. (a) Leaky bucket. (b) Leaky bucket implementation.
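The finite-queue implementation described above can be simulated as follows; the queue capacity and leak rate are arbitrary illustration values, not from the slides.

```python
from collections import deque

class LeakyBucket:
    def __init__(self, capacity, leak_rate):
        self.queue = deque()
        self.capacity = capacity    # max packets the bucket can hold
        self.leak_rate = leak_rate  # packets transmitted per tick
        self.discarded = 0

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)   # room in the queue: enqueue
        else:
            self.discarded += 1         # bucket full: packet is lost

    def tick(self):
        # The bucket leaks at a constant rate, regardless of how
        # bursty the arrivals were.
        sent = []
        for _ in range(min(self.leak_rate, len(self.queue))):
            sent.append(self.queue.popleft())
        return sent

bucket = LeakyBucket(capacity=4, leak_rate=2)
for p in range(6):          # a burst of 6 packets arrives at once
    bucket.arrive(p)
print(len(bucket.queue), bucket.discarded)  # -> 4 2
print(bucket.tick())                        # -> [0, 1]
```

The burst of six packets is smoothed into constant-rate output of two packets per tick, with the overflow discarded, matching the water-bucket analogy.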
The token bucket algorithm

The leaky bucket algorithm described
above enforces a rigid pattern on the output
stream, irrespective of the pattern of the
input. For many applications it is better to
allow the output to speed up somewhat
when a large burst arrives than to lose the
data. The token bucket algorithm provides
such a solution. In this algorithm the bucket
holds tokens, generated at regular intervals.
The main steps of this algorithm are:
At regular intervals tokens are
thrown into the bucket.
The bucket has a maximum capacity.
If there is a ready packet, a token is
removed from the bucket, and the
packet is sent.
If there is no token in the bucket,
the packet cannot be sent.
Figure 2.1 (a) Token bucket holding two tokens, before packets are sent out. (b) Token bucket after two packets are sent; one packet still remains as no token is left.
Figure 2.1 shows the two scenarios before
and after the tokens present in the bucket
have been consumed. In Fig. 2.1(a) the
bucket holds two tokens, and three packets
are waiting to be sent out of the interface;
in Fig. 2.1(b) two packets have been sent out
by consuming two tokens, and one packet is
still left.
The token bucket algorithm is less restrictive
than the leaky bucket algorithm, in the sense
that it allows bursty traffic. However, the
size of a burst is limited by the number of
tokens available in the bucket at a particular
instant of time.
The implementation of the basic token
bucket algorithm is simple: a
variable is used just to count the
tokens. This counter is incremented
every t seconds and is decremented
whenever a packet is sent. Whenever
this counter reaches zero, no further
packets are sent out, as shown in Fig.
2.2.

Figure 2.2 Implementation of the Token bucket algorithm
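The counter-based implementation described above can be sketched as follows; the capacity and token rate are assumed values. Note how saved-up tokens permit a burst, unlike the leaky bucket.

```python
class TokenBucket:
    def __init__(self, capacity, tokens_per_tick):
        self.capacity = capacity
        self.tokens = 0                     # the counter from Fig. 2.2
        self.tokens_per_tick = tokens_per_tick

    def tick(self):
        # Tokens are thrown into the bucket at regular intervals, but
        # the bucket cannot hold more than its maximum capacity.
        self.tokens = min(self.capacity, self.tokens + self.tokens_per_tick)

    def try_send(self):
        # A ready packet consumes one token; when the counter is zero,
        # the packet cannot be sent and must wait.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

tb = TokenBucket(capacity=2, tokens_per_tick=1)
tb.tick(); tb.tick(); tb.tick()       # three ticks, but capacity caps at 2
sent = [tb.try_send() for _ in range(3)]
print(sent)  # -> [True, True, False]
```

This reproduces the Fig. 2.1 scenario: two saved tokens let two of the three waiting packets go out, and the third must wait for the next token.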


Thank you very much
