cdxQosIfRateLimitAlgm
1.3.6.1.4.1.9.9.116.1.1.2.1.1
To ensure fairness, the CMTS throttles the rate at which
it issues grants for bandwidth requests (upstream) or
allows packets to be sent (downstream), so that a flow
never gets more than its provisioned peak rate in bps.
Traffic for every Service Id (Sid) has two directions:
downstream and upstream. Each direction is referred to
here as a service flow and is assigned one token bucket
with the chosen algorithm.
The enumerations for the rate limiting algorithm are:
noRateLimit(1): Rate limiting is disabled.
oneSecBurst(2): Bursty 1 second token bucket algorithm.
carLike(3): Average token usage (CAR-like) algorithm.
wtExPacketDiscard(4): Weighted excess packet discard
algorithm.
shaping(5): Token bucket algorithm with shaping.
Upstream supports the following:
No rate limiting (1),
Bursty 1 second token bucket algorithm (2),
Average token usage (CAR-like) algorithm (3), and
Token bucket algorithm with shaping (5).
Downstream supports the following:
No rate limiting (1),
Bursty 1 second token bucket algorithm (2),
Average token usage (CAR-like) algorithm (3),
Weighted excess packet discard algorithm (4), and
Token bucket algorithm with shaping (5).
Token bucket algorithm with shaping is the
default algorithm for upstream if the CMTS is in DOCSIS
1.0 mode or DOCSIS 1.1 mode.
Bursty 1 second token bucket algorithm is the
default algorithm for downstream if the CMTS is in
DOCSIS 1.0 mode. If it is in DOCSIS 1.1 mode, the
default algorithm for downstream is the token bucket
algorithm with shaping.
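As an aside, the enumeration values and mode-dependent
defaults above can be summarized in a short Python sketch;
the names RATE_LIMIT_ALGORITHMS and default_algorithm are
illustrative only and not part of the MIB:

    # Illustrative mapping of cdxQosIfRateLimitAlgm values; not part of the MIB.
    RATE_LIMIT_ALGORITHMS = {
        1: "noRateLimit",        # rate limiting disabled
        2: "oneSecBurst",        # bursty 1 second token bucket
        3: "carLike",            # average token usage (CAR-like)
        4: "wtExPacketDiscard",  # weighted excess packet discard (downstream only)
        5: "shaping",            # token bucket with shaping
    }

    def default_algorithm(direction, docsis_mode):
        """Default value for a direction/DOCSIS mode pair, per the text above."""
        if direction == "upstream":
            return 5                                # shaping in 1.0 and 1.1 mode
        return 2 if docsis_mode == "1.0" else 5     # downstream default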
Each algorithm is described below:
No rate limiting:
The rate limiting process is disabled and no check is
made against the maximum allowed bandwidth.
Bursty 1 second token bucket rate limiting algorithm:
In this algorithm, at the start of every 1 second
interval, a service flow's token usage is reset to 0,
and every time the modem for that service flow sends a
request (upstream) / packet (downstream) the
upstream/downstream bandwidth token usage is incremented
by the size of the request/packet sent. As long as the
service flow's bandwidth token usage is less than the
maximum bandwidth in bits per second (peak rate limit)
its QoS service class allows, requests/packets will
not be restricted.
Once the service flow has sent more than its peak rate
in the one second interval, it is prevented from sending
more data by rejecting requests (upstream) or dropping
packets (downstream). This is expected to slow down
the higher layer sources. The token usage counter gets
reset to 0 after the 1 second interval has elapsed. The
modem for that service flow is free to send more data
up to the peak rate limit in the new 1 second interval
that follows.
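A minimal sketch of this behaviour, assuming sizes are
counted in bits against a peak_rate_bps limit; the class
and method names are illustrative, not an actual CMTS
implementation:

    import time

    class OneSecBurstLimiter:
        """Sketch of the bursty 1 second token bucket algorithm."""

        def __init__(self, peak_rate_bps):
            self.peak_rate_bits = peak_rate_bps    # bits allowed per 1 second interval
            self.usage_bits = 0                    # token usage in current interval
            self.interval_start = time.monotonic()

        def allow(self, size_bits):
            now = time.monotonic()
            if now - self.interval_start >= 1.0:
                # New 1 second interval: token usage counter is reset to 0.
                self.usage_bits = 0
                self.interval_start = now
            if self.usage_bits + size_bits <= self.peak_rate_bits:
                self.usage_bits += size_bits
                return True    # forward request (upstream) / packet (downstream)
            return False       # reject request / drop packet until next interval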
Average token usage (Cisco CAR-like) algorithm:
This algorithm maintains a continuous average of the
burst token usage of a service flow. There is no sudden
refilling of tokens at every 1 second interval. Every
time a request/packet is to be handled, the scheduler
determines how much time has elapsed since the last
transmission and computes the number of tokens
accumulated by this service flow at its QoS class peak
rate. If the service flow's burst usage is less than
the tokens accumulated, the burst usage is reset to 0
and the request/packet is forwarded. If the service
flow has accumulated fewer tokens than its burst usage,
the burst usage is decremented by the tokens
accumulated and shows an outstanding balance. In such
cases, the request/packet is still forwarded, provided
the service flow's outstanding usage does not exceed
the peak rate limit of its QoS class. If the
outstanding burst usage exceeds the peak rate of the
class, the service flow is given some token credit up
to a certain maximum credit limit and the
request/packet is forwarded. The request/packet is
dropped when the outstanding usage exceeds the peak
rate and the maximum credit has been used up by this
service flow.
This algorithm tracks long term average bandwidth usage
of the service flow and controls this average usage at
the peak rate limit.
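A rough sketch of the same bookkeeping, assuming
bit-based accounting; how the token credit is granted and
replenished is not specified here, so the max_credit_bits
handling below is only one plausible reading:

    import time

    class CarLikeLimiter:
        """Sketch of the average token usage (CAR-like) algorithm."""

        def __init__(self, peak_rate_bps, max_credit_bits):
            self.peak_rate_bits = peak_rate_bps     # QoS class peak rate
            self.max_credit_bits = max_credit_bits  # assumed maximum credit limit
            self.burst_usage_bits = 0               # outstanding burst usage
            self.credit_used_bits = 0               # credit consumed so far
            self.last_tx = time.monotonic()

        def allow(self, size_bits):
            now = time.monotonic()
            accumulated = (now - self.last_tx) * self.peak_rate_bits
            self.last_tx = now
            if self.burst_usage_bits <= accumulated:
                # Under the average: reset usage to 0, charge this request/packet.
                self.burst_usage_bits = size_bits
                return True
            # Outstanding balance after subtracting the accumulated tokens.
            self.burst_usage_bits -= accumulated
            self.burst_usage_bits += size_bits
            if self.burst_usage_bits <= self.peak_rate_bits:
                return True                         # within peak rate: forward
            excess = self.burst_usage_bits - self.peak_rate_bits
            if self.credit_used_bits + excess <= self.max_credit_bits:
                self.credit_used_bits += excess     # extend credit and forward
                return True
            self.burst_usage_bits -= size_bits      # over peak, credit exhausted
            return False                            # drop request/packet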
Weighted excess packet discard algorithm:
This rate limiting algorithm is only available as an
option for downstream rate limiting. The algorithm
maintains a weighted exponential moving average of
the loss rate of a service flow over time. The loss
rate, expressed in packets, represents the number of
packets that can be sent from this service flow in a
one second interval before a packet will be dropped.
At every one second interval, the loss rate gets
updated using the ratio between the flow peak rate (in
bps) in its QoS profile and the service flow's actual
usage (in bps). If the service flow begins to send more
than its peak rate continuously, the number of packets
it can send in a one second interval before
experiencing a drop will slowly keep decreasing until
the cable modem for that service flow slows down, as
indicated by an actual usage less than or equal to the
peak rate.
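The update rule below is only a sketch; the exact
moving-average formula is not given here, so the smoothing
weight and the way the peak-rate/usage ratio is applied
are assumptions:

    class WeightedExcessDiscard:
        """Sketch of the weighted excess packet discard algorithm (downstream)."""

        def __init__(self, peak_rate_bps, initial_loss_rate_pkts, weight=0.25):
            self.peak_rate_bps = peak_rate_bps
            self.loss_rate_pkts = initial_loss_rate_pkts  # packets allowed per second
            self.weight = weight                          # assumed EWMA weight
            self.pkts_this_second = 0
            self.bits_this_second = 0

        def on_packet(self, size_bits):
            """Called per downstream packet; True means forward, False means drop."""
            self.pkts_this_second += 1
            self.bits_this_second += size_bits
            return self.pkts_this_second <= self.loss_rate_pkts

        def on_second_elapsed(self):
            """Called every one second interval to update the loss rate."""
            actual_bps = max(self.bits_this_second, 1)
            ratio = self.peak_rate_bps / actual_bps   # < 1 when flow exceeds peak
            self.loss_rate_pkts = ((1 - self.weight) * self.loss_rate_pkts
                                   + self.weight * self.loss_rate_pkts * ratio)
            self.pkts_this_second = 0
            self.bits_this_second = 0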
Token bucket algorithm with shaping:
If there is no QoS class peak rate limit, forward the
request/packet without delay. If there is a QoS peak
rate limit, every time a request/packet is to be
handled, the scheduler determines the number of
bandwidth tokens that this service flow has
accumulated over the elapsed time at its QoS class peak
rate and increments the token counter of the service
flow accordingly. The scheduler limits the token
count to the maximum transmit burst (token bucket
depth). If the token count is greater than the number
of tokens required to handle the current request/packet,
the scheduler decrements the token count by the size of
the request/packet and forwards the request/packet
without delay. If the token count is less than the size
of the request/packet, the scheduler computes the
shaping delay time after which the deficit number of
tokens would be available. If the shaping delay time is
less than the maximum shaping delay, the scheduler
decrements the token count by the size of the
request/packet and places the request/packet in the
shaping delay queue with that shaping delay. When the
delay time expires, the request/packet is forwarded. If
the shaping delay time is greater than the maximum
shaping delay that the subsequent shaper can handle, the
request/packet is dropped. Users can use
cdxQosIfRateLimitShpMaxDelay to configure the maximum
shaping delay and
cdxQosIfRateLimitShpGranularity to configure the
shaping granularity.
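A compact sketch of the shaping path, assuming bit-based
accounting; the return value stands in for the shaping
delay queue, and the parameter names (including
max_delay_s, which plays the role configured through
cdxQosIfRateLimitShpMaxDelay) are illustrative:

    import time

    class ShapingTokenBucket:
        """Sketch of the token bucket algorithm with shaping. handle() returns
        the shaping delay in seconds (0.0 for no delay) or None for a drop."""

        def __init__(self, peak_rate_bps, bucket_depth_bits, max_delay_s):
            self.peak_rate_bits = peak_rate_bps
            self.bucket_depth_bits = bucket_depth_bits  # maximum transmit burst
            self.max_delay_s = max_delay_s              # maximum shaping delay
            self.tokens = bucket_depth_bits
            self.last_update = time.monotonic()

        def handle(self, size_bits):
            if self.peak_rate_bits == 0:
                return 0.0                              # no peak rate limit
            now = time.monotonic()
            # Accumulate tokens at the peak rate, capped at the bucket depth.
            elapsed = now - self.last_update
            self.tokens = min(self.bucket_depth_bits,
                              self.tokens + elapsed * self.peak_rate_bits)
            self.last_update = now
            if self.tokens >= size_bits:
                self.tokens -= size_bits
                return 0.0                              # forward without delay
            # Delay after which the deficit number of tokens is available.
            delay = (size_bits - self.tokens) / self.peak_rate_bits
            if delay <= self.max_delay_s:
                self.tokens -= size_bits                # borrow future tokens
                return delay                            # queue with shaping delay
            return None                                 # exceeds max delay: drop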