<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.30 (Ruby 3.4.8) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-ietf-ccwg-bbr-05" category="exp" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="BBR">BBR Congestion Control</title>
    <seriesInfo name="Internet-Draft" value="draft-ietf-ccwg-bbr-05"/>
    <author initials="N." surname="Cardwell" fullname="Neal Cardwell" role="editor">
      <organization>Google</organization>
      <address>
        <email>ncardwell@google.com</email>
      </address>
    </author>
    <author initials="I." surname="Swett" fullname="Ian Swett" role="editor">
      <organization>Google</organization>
      <address>
        <email>ianswett@google.com</email>
      </address>
    </author>
    <author initials="J." surname="Beshay" fullname="Joseph Beshay" role="editor">
      <organization>Meta</organization>
      <address>
        <email>jbeshay@meta.com</email>
      </address>
    </author>
    <date year="2026" month="March" day="02"/>
    <area>IETF</area>
    <workgroup>CCWG</workgroup>
    <keyword>Congestion Control</keyword>
    <abstract>
      <?line 201?>

<t>This document specifies the BBR congestion control algorithm. BBR ("Bottleneck
Bandwidth and Round-trip propagation time") uses recent measurements of a
transport connection's delivery rate, round-trip time, and packet loss rate
to build an explicit model of the network path. BBR then uses this model to
control both how fast it sends data and the maximum volume of data it allows
in flight in the network at any time. Relative to loss-based congestion control
algorithms such as Reno <xref target="RFC5681"/> or CUBIC <xref target="RFC9438"/>, BBR offers
substantially higher throughput for bottlenecks
with shallow buffers or random losses, and substantially lower queueing delays
for bottlenecks with deep buffers (avoiding "bufferbloat"). BBR can be
implemented in any transport protocol that supports packet-delivery
acknowledgment. Thus far, open source implementations are available
for TCP <xref target="RFC9293"/> and QUIC <xref target="RFC9000"/>. This document
specifies version 3 of the BBR algorithm, BBRv3.</t>
    </abstract>
    <note removeInRFC="true">
      <name>Discussion Venues</name>
      <t>Discussion of this document takes place on the
    Congestion Control Working Group mailing list (ccwg@ietf.org),
    which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/ccwg/"/>.</t>
      <t>Source for this draft and an issue tracker can be found at
    <eref target="https://github.com/ietf-wg-ccwg/draft-cardwell-ccwg-bbr"/>.</t>
    </note>
  </front>
  <middle>
    <?line 219?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>The Internet has traditionally used loss-based congestion control algorithms
like Reno (<xref target="Jac88"/>, <xref target="Jac90"/>, <xref target="WS95"/>  <xref target="RFC5681"/>) and CUBIC (<xref target="HRX08"/>,
<xref target="RFC9438"/>). These algorithms worked well for many years because
they were sufficiently well-matched to the prevalent range of bandwidth-delay
products and degrees of buffering in Internet paths. As the Internet has
evolved, loss-based congestion control is increasingly problematic in several
important scenarios:</t>
      <ol spacing="normal" type="1"><li>
          <t>Shallow buffers: In shallow buffers, packet loss can happen even when a link
  has low utilization. With high-speed, long-haul links employing commodity
  switches with shallow buffers, loss-based congestion control can cause abysmal
  throughput because it overreacts, making large multiplicative decreases in
  sending rate upon packet loss (by 50% in Reno <xref target="RFC5681"/> or 30%
  in CUBIC <xref target="RFC9438"/>), and only slowly growing its sending rate
  thereafter. This can happen even if the packet loss arises from transient
  traffic bursts when the link is mostly idle.</t>
        </li>
        <li>
          <t>Deep buffers: At the edge of today's Internet, loss-based congestion control
  can cause the problem of "bufferbloat", by repeatedly filling deep buffers
  in last-mile links and causing high queuing delays.</t>
        </li>
        <li>
          <t>Dynamic traffic workloads: With buffers of any depth, dynamic mixes of
  newly-entering flows or flights of data from recently idle flows can cause
  frequent packet loss. In such scenarios loss-based congestion control can
  fail to maintain its fair share of bandwidth, leading to poor application
  performance.</t>
        </li>
      </ol>
      <t>In both the shallow-buffer (1) and dynamic-traffic (3) scenarios mentioned
above, it is difficult to achieve full throughput with loss-based congestion
control in practice: for CUBIC, sustaining 10 Gbps over a 100 ms RTT requires a
packet loss rate below 0.000003% (i.e., more than 40 seconds between packet losses),
and over a 100 ms RTT path a more feasible loss rate such as 1% can sustain
at most 3 Mbps <xref target="RFC9438"/>. These limitations apply no matter what
the bottleneck link is capable of or what the connection's fair share
is. Furthermore, failure to reach the fair share can cause poor throughput
and poor tail latency for latency-sensitive applications.</t>
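<t>The arithmetic behind the figures above can be sanity-checked directly; the
sketch below assumes 1500-byte packets (an illustrative assumption made here,
not a value taken from <xref target="RFC9438"/>):</t>

```python
# Sanity-check the loss-rate figures cited above, assuming
# 1500-byte packets (an illustrative assumption only).
RATE_BPS = 10e9             # 10 Gbps bottleneck
MSS = 1500                  # assumed bytes per packet
LOSS_RATE = 0.000003 / 100  # 0.000003% expressed as a fraction

packets_per_sec = RATE_BPS / 8 / MSS      # ~833,333 packets/sec
packets_per_loss = 1 / LOSS_RATE          # ~33.3 million packets
secs_between_losses = packets_per_loss / packets_per_sec  # ~40 sec
```

<t>With these assumptions, one packet loss per roughly 33 million packets at
roughly 833,333 packets per second works out to about 40 seconds between
losses, matching the figure cited above.</t>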
      <t>The BBR ("Bottleneck Bandwidth and Round-trip propagation time") congestion
control algorithm is a model-based algorithm that takes an approach different
from loss-based congestion control: BBR uses recent measurements of a transport
connection's delivery rate,  round-trip time, and packet loss rate to build
an explicit model of the network path, including its estimated available
bandwidth, bandwidth-delay product, and the maximum volume of data that the
connection can place in flight in the network without causing excessive queue
pressure. It then uses this model in order to guide its control behavior
in seeking high throughput and low queue pressure.</t>
      <t>This document describes the current version of the BBR algorithm, BBRv3.
The original version of the algorithm, BBRv1, was described previously at a
high level <xref target="CCGHJ16"/><xref target="CCGHJ17"/>. The implications of BBR
in allowing high utilization of high-speed networks with shallow buffers
have been discussed in other work <xref target="MM19"/>. Active work on the BBR
algorithm is continuing.</t>
      <t>This document is organized as follows. Section 2 provides various definitions
that will be used throughout this document. Section 3 provides an overview
of the design of the BBR algorithm, and Section 4 describes the BBR algorithm
in detail, including BBR's network path model, control parameters, and state
machine. Section 5 describes the implementation status, Section 6 describes
security considerations, Section 7 notes that there are no IANA considerations,
and Section 8 closes with Acknowledgments.</t>
    </section>
    <section anchor="terminology">
      <name>Terminology</name>
      <t>This document defines state variables and constants used by the BBR algorithm.</t>
      <t>Constant values have CamelCase names and are used by BBR throughout
its operation for a given connection. Variables have snake_case names.
All names are prefixed with the context they
belong to: (C) for connection state, (P) for per-packet state, (RS) for
per-ack rate sample, or (BBR) for the algorithm's internal state.
Variables that are not defined below are defined in
<xref target="delivery-rate-samples"/>, "Delivery Rate Samples".</t>
      <t>In the pseudocode in this document, all functions have implicit access to the
(C) connection state and (BBR) congestion control algorithm state for that
connection. All functions involved in ACK processing additionally have implicit
access to the (RS) rate sample populated while processing that ACK.</t>
      <t>In this document, the unit of all volumes of data is bytes, the unit of
all times is seconds, and the unit of all data rates is bytes per second.
Implementations MAY use other units, such as bits and bits per second,
or packets and packets per second, as long as the implementation applies
conversions as appropriate. However, since packet sizes can vary
due to changes in MTU or application message sizes, data rates
computed in packets per second can be inaccurate, and thus it is
RECOMMENDED that BBR implementations use bytes and bytes per second.</t>
      <t>In this document, "acknowledged" or "delivered" data means any transmitted
data that the remote transport endpoint has confirmed that it has received,
e.g., via a QUIC ACK Range <xref target="RFC9000"/>, TCP cumulative acknowledgment
<xref target="RFC9293"/>, or TCP SACK ("Selective Acknowledgment") block <xref target="RFC2018"/>.</t>
      <section anchor="transport-connection-state">
        <name>Transport Connection State</name>
        <t>C.SMSS: The Sender Maximum Send Size in bytes. The maximum
size of a single transmission, including the portion
of the packet that the transport protocol implementation tracks for
congestion control purposes. C.SMSS MUST include transport protocol
payload data. C.SMSS MAY include only the transport protocol payload
data; for example, for TCP BBR implementations the C.SMSS SHOULD be
the Eff.snd.MSS defined in <xref section="3.7.1" sectionFormat="comma" target="RFC9293"/>, which includes
only the TCP transport protocol payload data, but not TCP or IP headers.
C.SMSS MAY include the transport protocol payload data plus the
transport protocol headers; for example, for QUIC BBR implementations
the C.SMSS SHOULD be the QUIC "maximum datagram size"
<xref section="14" sectionFormat="comma" target="RFC9000"/>, which includes the QUIC payload data plus
the QUIC headers, but not UDP or IP headers. In addition to including
transport protocol payload and headers, implementations MAY include
in C.SMSS the size of other headers, such as network-layer or
link-layer headers.</t>
        <t>C.has_selective_acks: True if the connection has the capability to receive
selective acknowledgments and thus is able to detect more than one
packet loss per round trip in fast recovery. For example, this is true
for all QUIC connections by virtue of the QUIC ACK Range <xref target="RFC9000"/>
mechanism, and is true for TCP connections that have negotiated support
for the TCP SACK ("Selective Acknowledgment") <xref target="RFC2018"/> mechanism.</t>
        <t>C.InitialCwnd: The initial congestion window set by the transport protocol
implementation for the connection at initialization time.</t>
        <t>C.delivered: The total amount of data
delivered so far over the lifetime of the transport connection C.
This MUST NOT include pure ACK packets. It SHOULD include spurious
retransmissions that have been acknowledged as delivered.</t>
        <t>C.inflight: The connection's best estimate of the number of bytes
outstanding in the network. This includes the number of bytes that
have been sent and have not been acknowledged or
marked as lost since their last transmission
(e.g. "pipe" from <xref target="RFC6675"/> or "bytes_in_flight" from <xref target="RFC9002"/>).
This MUST NOT include pure ACK packets.</t>
        <t>C.is_cwnd_limited: True if the connection has fully utilized C.cwnd at any
point in the last packet-timed round trip.</t>
        <t>C.next_send_time: The earliest departure time at which the pacing
mechanism allows the next packet to be sent.</t>
      </section>
      <section anchor="per-ack-rate-sample-state">
        <name>Per-ACK Rate Sample State</name>
        <t>RS.delivered: The volume of data delivered between the transmission of the
packet that has just been ACKed and the current time.</t>
        <t>RS.delivery_rate: The delivery rate (aka bandwidth) sample obtained from
the packet that has just been ACKed.</t>
        <t>RS.rtt: The RTT sample calculated based on the most recently-sent packet
of the packets that have just been ACKed.</t>
        <t>RS.newly_acked: The volume of data cumulatively or selectively acknowledged
upon the ACK that was just received. (This quantity is referred to as
"DeliveredData" in <xref target="RFC6937"/>.)</t>
        <t>RS.newly_lost: The volume of data newly marked lost upon the ACK that was
just received.</t>
        <t>RS.tx_in_flight: C.inflight at
the time of the transmission of the packet that has just been ACKed (the
most recently sent packet among packets ACKed by the ACK that was just
received).</t>
        <t>RS.lost: The volume of data that was declared lost between the transmission
and acknowledgment of the packet that has just been ACKed (the most recently
sent packet among packets ACKed by the ACK that was just received).</t>
      </section>
      <section anchor="output-control-parameters">
        <name>Output Control Parameters</name>
        <t>C.cwnd: The transport sender's congestion window. When transmitting data,
the sending connection ensures that C.inflight does not exceed C.cwnd.</t>
        <t>C.pacing_rate: The current pacing rate for a BBR flow, which controls
inter-packet spacing.</t>
        <t>C.send_quantum: The maximum size of a data aggregate scheduled and transmitted
together as a unit, e.g., to amortize per-packet transmission overheads.</t>
      </section>
      <section anchor="pacing-state-and-parameters">
        <name>Pacing State and Parameters</name>
        <t>BBR.pacing_gain: The dynamic gain factor used to scale BBR.bw to produce
C.pacing_rate.</t>
        <t>BBR.StartupPacingGain: A constant specifying the minimum gain value for
calculating the pacing rate that will allow the sending rate to double each
round (4 * ln(2) ~= 2.77) <xref target="BBRStartupPacingGain"/>; used in
Startup mode for BBR.pacing_gain.</t>
        <t>BBR.DrainPacingGain: A constant specifying the pacing gain value used in
Drain mode, to attempt to drain the estimated queue at the bottleneck link
in one round-trip or less. As noted in <xref target="BBRDrainPacingGain"/>, any
value at or below 1 / BBRStartupCwndGain = 1 / 2 = 0.5 will theoretically
achieve this. BBR uses the value 0.5, which has been shown to offer good
performance when compared with other alternatives.</t>
        <t>BBR.PacingMarginPercent: The static discount factor of 1% used to scale BBR.bw
to produce C.pacing_rate.</t>
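<t>The following sketch illustrates how these pacing parameters might combine
to produce C.pacing_rate; the composition shown (gain times BBR.bw, discounted
by the pacing margin) is an assumption for illustration, not normative
pseudocode from this document:</t>

```python
import math

# Illustrative sketch of how the pacing constants combine (assumed
# composition for illustration; not normative pseudocode).
StartupPacingGain = 4 * math.log(2)  # ~2.77, doubles sending rate per round
DrainPacingGain = 0.5                # drains the Startup queue in <= 1 RTT
PacingMarginPercent = 1              # 1% discount applied to BBR.bw

def pacing_rate(bw, pacing_gain):
    # C.pacing_rate = gain * BBR.bw, discounted by the pacing margin.
    return pacing_gain * bw * (100 - PacingMarginPercent) / 100
```

<t>For example, with an estimated bandwidth of 1.25e6 bytes/sec (10 Mbps), a
flow in Startup would pace at roughly 2.77 * 0.99 ~= 2.74 times that rate.</t>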
      </section>
      <section anchor="cwnd-state-and-parameters">
        <name>cwnd State and Parameters</name>
        <t>BBR.cwnd_gain: The dynamic gain factor used to scale the estimated BDP to
produce a congestion window (C.cwnd).</t>
        <t>BBR.DefaultCwndGain: A constant specifying the minimum gain value that allows
the sending rate to double each round (2) <xref target="BBRStartupCwndGain"/>.
Used by default in most phases for BBR.cwnd_gain.</t>
      </section>
      <section anchor="general-algorithm-state">
        <name>General Algorithm State</name>
        <t>BBR.state: The current state of a BBR flow in the BBR state machine.</t>
        <t>BBR.round_count: Count of packet-timed round trips elapsed so far.</t>
        <t>BBR.round_start: A boolean that BBR sets to true once per packet-timed round
trip, on ACKs that advance BBR.round_count.</t>
        <t>BBR.next_round_delivered: P.delivered value denoting the end of a
packet-timed round trip.</t>
        <t>BBR.idle_restart: A boolean that is true if and only if a connection is
restarting after being idle.</t>
        <t>BBR.drain_start_round: The value of round_count when Drain state started.</t>
      </section>
      <section anchor="core-algorithm-design-parameters">
        <name>Core Algorithm Design Parameters</name>
        <t>BBR.LossThresh: A constant specifying the maximum tolerated per-round-trip
packet loss rate when probing for bandwidth (the default is 2%).</t>
        <t>BBR.Beta: A constant specifying the default multiplicative decrease to
make upon each round trip during which the connection detects packet
loss (the value is 0.7).</t>
        <t>BBR.Headroom: A constant specifying the multiplicative factor to
apply to BBR.inflight_longterm when calculating a volume of free headroom
to try to leave unused in the path
(e.g. free space in the bottleneck buffer or free time slots in the bottleneck
link) that can be used by cross traffic (the value is 0.15).</t>
        <t>BBR.MinPipeCwnd: The minimal C.cwnd value BBR targets, to allow pipelining with
endpoints that follow an "ACK every other packet" delayed-ACK policy:
4 * C.SMSS.</t>
      </section>
      <section anchor="network-path-model-parameters">
        <name>Network Path Model Parameters</name>
        <section anchor="data-rate-network-path-model-parameters">
          <name>Data Rate Network Path Model Parameters</name>
          <t>The data rate model parameters together estimate both the sending rate required
to reach the full bandwidth available to the flow (BBR.max_bw), and the maximum
pacing rate control parameter that is consistent with the queue pressure
objective (BBR.bw).</t>
          <t>BBR.max_bw: The windowed maximum recent bandwidth sample, obtained using
the BBR delivery rate sampling algorithm in <xref target="delivery-rate-samples"/>,
measured during the current or previous bandwidth probing cycle (or during
Startup, if the flow is still in that state). (Part of the long-term
model.)</t>
          <t>BBR.bw_shortterm: The short-term maximum sending bandwidth that the algorithm
estimates is safe for matching the current network path delivery rate, based
on any loss signals in the current bandwidth probing cycle. This is generally
lower than max_bw. (Part of the short-term model.)</t>
          <t>BBR.bw: The maximum sending bandwidth that the algorithm estimates is
appropriate for matching the current network path delivery rate, given all
available signals in the model, at any time scale. It is the min() of max_bw
and bw_shortterm.</t>
        </section>
        <section anchor="data-volume-network-path-model-parameters">
          <name>Data Volume Network Path Model Parameters</name>
          <t>The data volume model parameters together estimate both the inflight
required to reach the full bandwidth available to the flow
(BBR.max_inflight), and the maximum inflight that is consistent with the
queue pressure objective (C.cwnd).</t>
          <t>BBR.min_rtt: The windowed minimum round-trip time sample measured over the
last BBR.MinRTTFilterLen = 10 seconds. This attempts to estimate the two-way
propagation delay of the network path when all connections sharing a bottleneck
are using BBR, but also allows BBR to estimate the value required for a BBR.bdp
estimate that allows full throughput if there are legacy loss-based Reno
or CUBIC flows sharing the bottleneck.</t>
          <t>BBR.bdp: The estimate of the network path's BDP (Bandwidth-Delay Product),
computed as: BBR.bdp = BBR.bw * BBR.min_rtt.</t>
          <t>BBR.extra_acked: A volume of data that is the estimate of the recent degree
of aggregation in the network path.</t>
          <t>BBR.offload_budget: The estimate of the minimum volume of data necessary
to achieve full throughput when using sender (i.e., TSO/GSO) and
receiver (i.e., LRO, GRO) host offload mechanisms.</t>
          <t>BBR.max_inflight: The estimate of C.inflight required to
fully utilize the bottleneck bandwidth available to the flow, based on the
BDP estimate (BBR.bdp), the aggregation estimate (BBR.extra_acked), the offload
budget (BBR.offload_budget), and BBR.MinPipeCwnd.</t>
          <t>BBR.inflight_longterm: The long-term maximum inflight that the
algorithm estimates will produce acceptable queue pressure, based on signals
in the current or previous bandwidth probing cycle, as measured by loss. That
is, if a flow is probing for bandwidth, and observes that sending a particular
inflight causes a loss rate higher than the loss rate
threshold, it sets inflight_longterm to that volume of data. (Part of the long-term
model.)</t>
          <t>BBR.inflight_shortterm: Analogous to BBR.bw_shortterm, the short-term maximum
inflight that the algorithm estimates is safe for matching the
current network path delivery process, based on any loss signals in the current
bandwidth probing cycle. This is generally lower than max_inflight or
inflight_longterm. (Part of the short-term model.)</t>
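<t>The relationships among the model parameters defined above can be sketched
as follows; the min() composition for BBR.bw is stated above, while treating
the example numbers as purely illustrative:</t>

```python
# Illustrative sketch of how the model parameters above relate
# (example values are assumptions; the normative calculations
# appear later in this document).
def bbr_bw(max_bw, bw_shortterm):
    # BBR.bw is the min() of the long-term and short-term estimates.
    return min(max_bw, bw_shortterm)

def bbr_bdp(bw, min_rtt):
    # BBR.bdp = BBR.bw * BBR.min_rtt (bytes/sec * sec = bytes).
    return bw * min_rtt

# Example: 1.25e7 bytes/sec (100 Mbps) with a 40 ms min_rtt
# yields a 500,000-byte BDP estimate.
bdp = bbr_bdp(bbr_bw(1.25e7, 2e7), 0.040)
```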
        </section>
      </section>
      <section anchor="state-for-responding-to-congestion">
        <name>State for Responding to Congestion</name>
        <t>RS: The rate sample calculated from the most recent acknowledgment.</t>
        <t>BBR.bw_latest: A 1-round-trip max of delivered bandwidth (RS.delivery_rate).</t>
        <t>BBR.inflight_latest: A 1-round-trip max of delivered volume of data
(RS.delivered).</t>
      </section>
      <section anchor="estimating-bbrmaxbw">
        <name>Estimating BBR.max_bw</name>
        <t>BBR.max_bw_filter: A windowed max filter for RS.delivery_rate
samples, for estimating BBR.max_bw.</t>
        <t>BBR.MaxBwFilterLen: A constant specifying the filter window length for
BBR.max_bw_filter: 2 (representing
up to 2 ProbeBW cycles, the current cycle and the previous full cycle).</t>
        <t>BBR.cycle_count: The virtual time used by the BBR.max_bw filter window. Note
that BBR.cycle_count only needs to be tracked with a single bit, since the
BBR.max_bw_filter only needs to track samples from two time slots: the previous
ProbeBW cycle and the current ProbeBW cycle.</t>
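<t>The observation above, that the filter only distinguishes the previous
ProbeBW cycle from the current one, can be made concrete with a minimal
two-slot max filter sketch (illustrative only; not the normative filter
pseudocode):</t>

```python
# Minimal sketch of a two-slot windowed max filter, illustrating why
# BBR.cycle_count needs only a single bit: the filter tracks samples
# from just two time slots, the previous and current ProbeBW cycles.
class TwoSlotMaxFilter:
    def __init__(self):
        self.slots = [0, 0]  # [previous cycle max, current cycle max]

    def update(self, sample):
        # Fold a new delivery rate sample into the current cycle's max.
        self.slots[1] = max(self.slots[1], sample)

    def advance_cycle(self):
        # A new ProbeBW cycle starts: current becomes previous.
        self.slots = [self.slots[1], 0]

    def get(self):
        # BBR.max_bw is the max across the two tracked cycles.
        return max(self.slots)
```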
      </section>
      <section anchor="estimating-bbrextraacked">
        <name>Estimating BBR.extra_acked</name>
        <t>BBR.extra_acked_interval_start: The start of the time interval for estimating
the excess amount of data acknowledged due to aggregation effects.</t>
        <t>BBR.extra_acked_delivered: The volume of data marked as delivered since
BBR.extra_acked_interval_start.</t>
        <t>BBR.extra_acked_filter: A windowed max filter for tracking the degree of
aggregation in the path.</t>
        <t>BBR.ExtraAckedFilterLen: A constant specifying the window length of
the BBR.extra_acked_filter max
filter in steady-state: 10 packet-timed round trips.</t>
      </section>
      <section anchor="startup-parameters-and-state">
        <name>Startup Parameters and State</name>
        <t>BBR.full_bw_reached: A boolean that records whether BBR estimates that it
has ever fully utilized its available bandwidth over the lifetime of the
connection.</t>
        <t>BBR.full_bw_now: A boolean that records whether BBR estimates that it has
fully utilized its available bandwidth since it most recently started looking.</t>
        <t>BBR.full_bw: A recent baseline BBR.max_bw to estimate if BBR has "filled
the pipe" in Startup.</t>
        <t>BBR.full_bw_count: The number of non-app-limited round trips without large
increases in BBR.full_bw.</t>
      </section>
      <section anchor="probertt-and-minrtt-parameters-and-state">
        <name>ProbeRTT and min_rtt Parameters and State</name>
        <section anchor="parameters-for-estimating-bbrminrtt">
          <name>Parameters for Estimating BBR.min_rtt</name>
          <t>BBR.min_rtt_stamp: The wall clock time at which the current BBR.min_rtt sample
was obtained.</t>
          <t>BBR.MinRTTFilterLen: A constant specifying the length of the BBR.min_rtt min
filter window: 10 secs.</t>
        </section>
        <section anchor="parameters-for-scheduling-probertt">
          <name>Parameters for Scheduling ProbeRTT</name>
          <t>BBR.ProbeRTTCwndGain: A constant specifying the gain value for calculating
C.cwnd during ProbeRTT: 0.5 (meaning that ProbeRTT attempts to reduce in-flight
data to 50% of the estimated BDP).</t>
          <t>BBR.ProbeRTTDuration: A constant specifying the minimum duration for which ProbeRTT
state holds C.inflight to BBR.MinPipeCwnd or fewer packets: 200 ms.</t>
          <t>BBR.ProbeRTTInterval: A constant specifying the minimum time interval between
ProbeRTT states: 5 secs.</t>
          <t>BBR.probe_rtt_min_delay: The minimum RTT sample recorded in the last
ProbeRTTInterval.</t>
          <t>BBR.probe_rtt_min_stamp: The wall clock time at which the current
BBR.probe_rtt_min_delay sample was obtained.</t>
          <t>BBR.probe_rtt_expired: A boolean recording whether the BBR.probe_rtt_min_delay
has expired and is due for a refresh with an application idle period or a
transition into ProbeRTT state.</t>
          <t>The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to
be interpreted as described in <xref target="RFC2119"/>.</t>
        </section>
      </section>
    </section>
    <section anchor="design-overview">
      <name>Design Overview</name>
      <section anchor="high-level-design-goals">
        <name>High-Level Design Goals</name>
        <t>The high-level goal of BBR is to achieve both:</t>
        <ol spacing="normal" type="1"><li>
            <t>The full throughput (or approximate fair share thereof) available to a flow  </t>
            <ul spacing="normal">
              <li>
                <t>Achieved in a fast and scalable manner
(using bandwidth in O(log(BDP)) time).</t>
              </li>
              <li>
                <t>Achieved with average packet loss rates of up to 1%.</t>
              </li>
            </ul>
          </li>
          <li>
            <t>Low queue pressure (low queuing delay and low packet loss).</t>
          </li>
        </ol>
        <t>These goals are in tension: sending faster improves the odds of achieving
(1) but reduces the odds of achieving (2), while sending slower improves
the odds of achieving (2) but reduces the odds of achieving (1). Thus the
algorithm cannot maximize throughput or minimize queue pressure independently,
and must jointly optimize both.</t>
        <t>To try to achieve these goals, and seek an operating point with high throughput
and low delay <xref target="K79"/> <xref target="GK81"/>, BBR aims to adapt its sending process to
match the network delivery process, in two dimensions:</t>
        <ol spacing="normal" type="1"><li>
            <t>data rate: the rate at which the flow sends data should ideally match the
  rate at which the network delivers the flow's data (the available bottleneck
  bandwidth)</t>
          </li>
          <li>
            <t>data volume: the amount of data in flight in the network
  should ideally match the bandwidth-delay product (BDP) of the path</t>
          </li>
        </ol>
        <t>Both the control of the data rate (via the pacing rate) and data volume
(directly via the congestion window; and indirectly via the pacing
rate) are important. A mismatch in either dimension can cause the sender to
fail to meet its high-level design goals:</t>
        <ol spacing="normal" type="1"><li>
            <t>volume mismatch: If a sender perfectly matches its sending rate to the
  available bandwidth, but its C.inflight exceeds the BDP, then
  the sender can maintain a large standing queue, increasing network latency
  and risking packet loss.</t>
          </li>
          <li>
            <t>rate mismatch: If a sender's C.inflight matches the BDP
  perfectly but its sending rate exceeds the available bottleneck bandwidth
  (e.g. the sender transmits a BDP of data in an unpaced fashion, at the
  sender's link rate), then up to a full BDP of data can burst into the
  bottleneck queue, causing high delay and/or high loss.</t>
          </li>
        </ol>
      </section>
      <section anchor="algorithm-overview">
        <name>Algorithm Overview</name>
        <t>Based on the rationale above, BBR tries to spend most of its time matching
its sending process (data rate and data volume) to the network path's delivery
process. To do this, it explores the 2-dimensional control parameter space
of (1) data rate ("bandwidth" or "throughput") and (2) data volume ("in-flight
data"), with a goal of finding the maximum values of each control parameter
that are consistent with its objective for queue pressure.</t>
        <t>Depending on what signals a given network path manifests at a given time,
the objective for queue pressure is measured in terms of the most strict
among:</t>
        <ul spacing="normal">
          <li>
            <t>the amount of data that is estimated to be queued in the bottleneck buffer
(data_in_flight - estimated_BDP): the objective is to maintain this amount
at or below 1.5 * estimated_BDP</t>
          </li>
          <li>
            <t>the packet loss rate: the objective is a maximum per-round-trip packet loss
rate of BBR.LossThresh=2% (and an average packet loss rate considerably lower)</t>
          </li>
        </ul>
      </section>
      <section anchor="state-machine-overview">
        <name>State Machine Overview</name>
        <t>BBR varies its control parameters with a state machine that aims for high
throughput, low latency, low loss, and an approximately fair sharing of
bandwidth, while maintaining an up-to-date model of the network path.</t>
        <t>A BBR flow starts in the Startup state, and ramps up its sending rate quickly,
to rapidly estimate the maximum available bandwidth (BBR.max_bw). When it
estimates the bottleneck bandwidth has been fully utilized, it enters the
Drain state to drain the estimated queue. In steady state a BBR flow mostly
uses the ProbeBW states, to periodically briefly send faster to probe for
higher capacity and then briefly send slower to try to drain any resulting
queue. If needed, it briefly enters the ProbeRTT state, to lower the sending
rate to probe for lower BBR.min_rtt samples. The detailed behavior for each
state is described below.</t>
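<t>The state progression described above can be sketched as a simple
transition table (simplified for illustration; the conditions that trigger
each transition are normative and appear in the detailed state machine
sections of this document):</t>

```python
# Illustrative sketch of the BBR state progression described above.
# Transitions are simplified; e.g., ProbeBW internally cycles through
# probing phases, and ProbeRTT is entered only when needed.
STATES = ("Startup", "Drain", "ProbeBW", "ProbeRTT")

TRANSITIONS = {
    "Startup":  "Drain",     # bottleneck estimated as fully utilized
    "Drain":    "ProbeBW",   # estimated queue has been drained
    "ProbeBW":  "ProbeRTT",  # min_rtt estimate due for a refresh
    "ProbeRTT": "ProbeBW",   # min_rtt probe complete
}
```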
      </section>
      <section anchor="network-path-model-overview">
        <name>Network Path Model Overview</name>
        <section anchor="high-level-design-goals-for-the-network-path-model">
          <name>High-Level Design Goals for the Network Path Model</name>
          <t>At a high level, the BBR model is trying to reflect two aspects of the network
path:</t>
          <ul spacing="normal">
            <li>
              <t>Model what's required for achieving full throughput: Estimate the data rate
(BBR.max_bw) and data volume (BBR.max_inflight) required to fully utilize the
fair share of the bottleneck bandwidth available to the flow. This
incorporates estimates of the maximum available bandwidth, the BDP of the
path, and the requirements of any offload features on the end hosts or
mechanisms on the network path that produce aggregation effects.</t>
            </li>
            <li>
              <t>Model what's permitted for achieving low queue pressure: Estimate the maximum
data rate (BBR.bw) and data volume (C.cwnd) consistent with the queue pressure
objective, as measured by the estimated degree of queuing and packet loss.</t>
            </li>
          </ul>
          <t>Note that those two aspects are in tension: the highest throughput is available
to the flow when it sends as fast as possible and occupies as many bottleneck
buffer slots as possible; the lowest queue pressure is achieved by the flow
when it sends as slow as possible and occupies as few bottleneck buffer slots
as possible. To resolve the tension, the algorithm aims to achieve the maximum
throughput achievable while still meeting the queue pressure objective.</t>
        </section>
        <section anchor="time-scales-for-the-network-model">
          <name>Time Scales for the Network Model</name>
          <t>At a high level, the BBR model is trying to reflect the properties of the
network path on two different time scales:</t>
          <section anchor="long-term-model">
            <name>Long-term model</name>
            <t>One goal is for BBR to maintain high average utilization of the fair share
of the available bandwidth, over long time intervals. This requires estimates
of the path's data rate and volume capacities that are robust over long time
intervals. This means being robust to congestion signals that may be noisy
or may reflect short-term congestion that has already abated by the time
an ACK arrives. This also means providing a robust history of the best
recently-achievable performance on the path so that the flow can quickly and
robustly aim to re-probe that level of performance whenever it decides to probe
the capacity of the path.</t>
          </section>
          <section anchor="short-term-model">
            <name>Short-term model</name>
            <t>A second goal of BBR is to react to every congestion signal, including loss,
as if it may indicate a persistent/long-term increase in congestion and/or
decrease in the bandwidth available to the flow, because that may indeed
be the case.</t>
          </section>
          <section anchor="time-scale-strategy">
            <name>Time Scale Strategy</name>
<t>BBR spends most of its time using its short-term models to conservatively
respect all congestion signals, in case they represent persistent congestion,
but periodically uses its long-term model to robustly probe the limits of the
available path capacity, in case the congestion has abated and more capacity
is available.</t>
          </section>
        </section>
      </section>
      <section anchor="control-parameter-overview">
        <name>Control Parameter Overview</name>
<t>BBR uses its model to control the connection's sending behavior. Rather than
using a single control parameter, like the C.cwnd parameter that limits
C.inflight in the Reno and CUBIC congestion control algorithms,
BBR uses three distinct control parameters: C.pacing_rate, C.send_quantum,
and C.cwnd, defined in <xref target="output-control-parameters"/>.</t>
      </section>
      <section anchor="environment-and-usage">
        <name>Environment and Usage</name>
        <t>BBR is a congestion control algorithm that is agnostic to transport-layer
and link-layer technologies, requires only sender-side changes, and does
not require changes in the network. Open source implementations of BBR are
available for the TCP <xref target="RFC9293"/> and QUIC <xref target="RFC9000"/> transport
protocols, and these implementations have been used in production
for a large volume of Internet traffic. An open source implementation of
BBR is also available for DCCP <xref target="RFC4340"/>  <xref target="draft-romo-iccrg-ccid5"/>.</t>
      </section>
      <section anchor="ecn">
        <name>ECN</name>
<t>This experimental version of BBR does not specify a particular response to
Classic <xref target="RFC3168"/>, Alternative Backoff with ECN (ABE)
<xref target="RFC8511"/>, or L4S <xref target="RFC9330"/> style ECN.
However, if the connection claims ECN support
by marking packets using either the ECT(0) or ECT(1) code point,
the congestion controller response MUST treat any CE marks as congestion.</t>
<t><xref section="4.1" sectionFormat="comma" target="RFC8311"/> relaxes the requirement from <xref target="RFC3168"/> that the
congestion response to CE marks be identical to the response to packet loss.
The congestion response requirements of L4S are detailed in
<xref section="4.3" sectionFormat="comma" target="RFC9330"/>.</t>
      </section>
      <section anchor="experimental-status">
        <name>Experimental Status</name>
        <t>This draft is experimental because there are some known aspects of BBR
for which the community is encouraged to conduct experiments and develop
algorithm improvements, as described below.</t>
        <t>As noted above in <xref target="ecn"/>, BBR as described in this draft does not
specify a specific response to ECN, and instead leaves it as an area for
future work.</t>
        <t>The design of ProbeRTT in <xref target="probertt-design-rationale"/> specifies a ProbeRTT
interval that sacrifices no more than roughly 2% of a flow's available
bandwidth. The impact of using a different interval or making adjustments
for triggering ProbeRTT on specific link types is a subject of
further experimentation.</t>
        <t>The delivery rate sampling algorithm in <xref target="delivery-rate-samples"/>
has an ability to over-estimate delivery rate, as described in
<xref target="compression-and-aggregation"/>. When combined with BBR's windowed
maximum bandwidth filter, this can cause BBR to send too quickly.
BBR mitigates this by limiting any bandwidth sample by the sending rate,
but that still might be higher than the available bandwidth,
particularly in STARTUP.</t>
<t>BBR does not deal well with persistently application-limited traffic
<xref target="detecting-application-limited-phases"/>, such as low-latency audio or
video flows. When unable to fill the pipe for a full round trip,
BBR will not be able to measure the full link bandwidth, and will mark
each bandwidth sample as app-limited. In cases where an application enters
a phase where all bandwidth samples are app-limited, BBR will not
discard old max bandwidth samples that were not app-limited.</t>
      </section>
    </section>
    <section anchor="input-signals">
      <name>Input Signals</name>
      <t>BBR uses estimated delivery rate and RTT as two critical inputs.</t>
      <section anchor="delivery-rate-samples">
        <name>Delivery Rate Samples</name>
        <t>This section describes a generic algorithm for a transport protocol sender to
estimate the current delivery rate of its data on the fly. This technique is
used by BBR to get fresh, reliable, and inexpensive delivery rate information.</t>
        <t>At a high level, the algorithm estimates the rate at which the network
delivered the most recent flight of outbound data packets for a single flow. In
addition, it tracks whether the rate sample was application-limited, meaning
the transmission rate was limited by the sending application rather than the
congestion control algorithm.</t>
        <t>Each acknowledgment that cumulatively or selectively acknowledges that the
network has delivered new data produces a rate sample which records the amount
of data delivered over the time interval between the transmission of a data
packet and the acknowledgment of that packet. The samples reflect the recent
goodput through some bottleneck, which may reside either in the network or
on the end hosts (sender or receiver).</t>
        <section anchor="delivery-rate-sampling-algorithm-overview">
          <name>Delivery Rate Sampling Algorithm Overview</name>
          <section anchor="requirements">
            <name>Requirements</name>
            <t>This algorithm can be implemented in any transport protocol that supports
packet-delivery acknowledgment (so far, implementations are available for TCP
<xref target="RFC9293"/> and QUIC <xref target="RFC9000"/>). This algorithm requires a small amount of
added logic on the sender, and requires that the sender maintain a small amount
of additional per-packet state for packets sent but not yet delivered. In the
most general case it requires high-precision (microsecond-granularity or
better) timestamps on the sender (though millisecond-granularity may suffice
for lower bandwidths).  It does not require any receiver or network
changes. While selective acknowledgments for out-of-order data (e.g.,
<xref target="RFC2018"/>) are not required, such a mechanism is highly recommended for
accurate estimation during reordering and loss recovery phases.</t>
          </section>
          <section anchor="estimating-delivery-rate">
            <name>Estimating Delivery Rate</name>
            <t>A delivery rate sample records the estimated rate at which the network delivered
packets for a single flow, calculated over the time interval between the
transmission of a data packet and the acknowledgment of that packet. Since
the rate samples only include packets actually cumulatively and/or selectively
acknowledged, the sender knows the amount of data that was delivered to the
receiver (not lost), and the sender can compute an estimate of a bottleneck
delivery rate over that time interval.</t>
            <section anchor="ack-rate">
              <name>ACK Rate</name>
<t>First, consider the rate at which data is acknowledged by the receiver. In
this algorithm, the computation of the ACK rate models the average slope
of a hypothetical "delivered" curve that tracks the cumulative quantity of
data delivered so far on the Y axis, and time elapsed on the X axis. Since
ACKs arrive in discrete events, this "delivered" curve forms a step function,
where each ACK causes a discrete increase in the "delivered" count and thus
a vertical step up in the curve. This "ack_rate" computation is the
average slope of the "delivered" step function, as measured from the "knee"
of the step (ACK) preceding the transmit of packet P to the "knee" of the
step (ACK) for packet P.</t>
              <t>Given this model, the ack rate sample "slope" is computed as the ratio between
the amount of data marked as delivered over this time interval, and the time
over which it is marked as delivered:</t>
              <artwork><![CDATA[
  ack_rate = data_acked / ack_elapsed
]]></artwork>
              <t>To calculate the amount of data ACKed over the interval, the sender records in
per-packet state "P.delivered", the amount of data that had been marked
delivered before transmitting packet P, and then records how much data had been
marked delivered by the time the ACK for the packet arrives (in "C.delivered"),
and computes the difference:</t>
              <artwork><![CDATA[
  data_acked = C.delivered - P.delivered
]]></artwork>
              <t>To compute the time interval, "ack_elapsed", one might imagine that it would
be feasible to use the round-trip time (RTT) of the packet. But it is not
safe to simply calculate a bandwidth estimate by using the time between the
transmit of a packet and the acknowledgment of that packet. Transmits and
ACKs can happen out of phase with each other, clocked in separate processes.
In general, transmissions often happen at some point later than the most
recent ACK, due to processing or pacing delays. Because of this effect, drastic
over-estimates can happen if a sender were to attempt to estimate bandwidth
by using the round-trip time.</t>
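<t>As a hedged illustration of this effect (the times and sizes here are
purely hypothetical): suppose the ACK preceding the transmit of P arrives at
t = 0 ms, P departs at t = 10 ms after a pacing delay, and the ACK for P
arrives at t = 50 ms, with 60,000 bytes marked delivered over the interval:</t>
<artwork><![CDATA[
  rtt         = 50 ms - 10 ms = 40 ms
  ack_elapsed = 50 ms -  0 ms = 50 ms

  /* An RTT-based estimate overstates the rate: */
  60,000 bytes / 40 ms = 1.5 MBytes/sec  /* over-estimate */

  /* ack_elapsed covers the full delivery interval: */
  60,000 bytes / 50 ms = 1.2 MBytes/sec  /* ack_rate */
]]></artwork>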
              <t>The following approach computes "ack_elapsed". The starting time is
"P.delivered_time", the time of the delivery curve "knee" from the ACK
preceding the transmit.  The ending time is "C.delivered_time", the time of the
delivery curve "knee" from the ACK for P. Then we compute "ack_elapsed" as:</t>
              <artwork><![CDATA[
  ack_elapsed = C.delivered_time - P.delivered_time
]]></artwork>
              <t>This yields our equation for computing the ACK rate, as the "slope" from
the "knee" preceding the transmit to the "knee" at ACK:</t>
              <artwork><![CDATA[
  ack_rate = data_acked / ack_elapsed
  ack_rate = (C.delivered - P.delivered) /
             (C.delivered_time - P.delivered_time)
]]></artwork>
            </section>
            <section anchor="compression-and-aggregation">
              <name>Compression and Aggregation</name>
<t>For computing the delivery_rate, the sender prefers ack_rate, the rate at which
packets were acknowledged, since this is usually the most reliable metric.
However, this approach of directly using "ack_rate" faces a challenge when used
with paths featuring aggregation, compression, or ACK decimation, which are
prevalent <xref target="A15"/>.  In such cases, ACK arrivals can temporarily make it appear
as if data packets were delivered much faster than the bottleneck rate. To
filter out such implausible ack_rate samples, we consider the send rate for
each flight of data, as follows.</t>
            </section>
            <section anchor="send-rate">
              <name>Send Rate</name>
<t>The sender calculates the send rate, "send_rate", for a flight of data as
follows. Define "P.first_send_time" as the time of the first send in a flight
of data, and "P.send_time" as the time of the final send in that flight of data
(the send that transmits packet "P"). The elapsed time for sending the flight
is:</t>
              <artwork><![CDATA[
  send_elapsed = (P.send_time - P.first_send_time)
]]></artwork>
              <t>Then we calculate the send_rate as:</t>
              <artwork><![CDATA[
  send_rate = data_acked / send_elapsed
]]></artwork>
              <t>Using our "delivery" curve model above, the send_rate can be viewed as the
average slope of a "send" curve that traces the amount of data sent on the Y
axis, and the time elapsed on the X axis: the average slope of the transmission
of this flight of data.</t>
            </section>
            <section anchor="delivery-rate">
              <name>Delivery Rate</name>
              <t>Since it is physically impossible to have data delivered faster than it is
sent in a sustained fashion, when the estimator notices that the ack_rate
for a flight is faster than the send rate for the flight, it filters out
the implausible ack_rate by capping the delivery rate sample to be no higher
than the send rate.</t>
              <t>More precisely, over the interval between each transmission and corresponding
ACK, the sender calculates a delivery rate sample, "delivery_rate", using
the minimum of the rate at which packets were acknowledged or the rate at
which they were sent:</t>
              <artwork><![CDATA[
  delivery_rate = min(send_rate, ack_rate)
]]></artwork>
              <t>Since ack_rate and send_rate both have data_acked as a numerator, this can
be computed more efficiently with a single division (instead of two), as
follows:</t>
              <artwork><![CDATA[
  delivery_elapsed = max(ack_elapsed, send_elapsed)
  delivery_rate = data_acked / delivery_elapsed
]]></artwork>
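<t>As a hedged numeric sketch (all values hypothetical): suppose a flight of
30,000 bytes is marked delivered, sent over send_elapsed = 20 ms, with the
corresponding delivery-curve "knees" spanning ack_elapsed = 25 ms:</t>
<artwork><![CDATA[
  ack_rate  = 30,000 bytes / 25 ms = 1.2 MBytes/sec
  send_rate = 30,000 bytes / 20 ms = 1.5 MBytes/sec

  delivery_rate    = min(1.5, 1.2) = 1.2 MBytes/sec
  /* or equivalently, with a single division: */
  delivery_elapsed = max(25 ms, 20 ms) = 25 ms
  delivery_rate    = 30,000 bytes / 25 ms = 1.2 MBytes/sec
]]></artwork>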
            </section>
          </section>
          <section anchor="tracking-application-limited-phases">
            <name>Tracking application-limited phases</name>
            <t>In application-limited phases the transmission rate is limited by the
sending application rather than the congestion control algorithm. Modern
transport protocol connections are often application-limited, either due
to request/response workloads (e.g., Web traffic, RPC traffic) or because the
sender transmits data in chunks (e.g., adaptive streaming video).</t>
            <t>Knowing whether a delivery rate sample was application-limited is crucial
for congestion control algorithms and applications to use the estimated delivery
rate samples properly. For example, congestion control algorithms likely
do not want to react to a delivery rate that is lower simply because the
sender is application-limited; for congestion control the key metric is the
rate at which the network path can deliver data, and not simply the rate
at which the application happens to be transmitting data at any moment.</t>
            <t>To track this, the estimator marks a bandwidth sample as application-limited
if there was some moment during the sampled time when congestion control
would have allowed data packets to be sent, and yet there was no data to send.</t>
            <t>More specifically, the algorithm detects that an application-limited phase has
started when the sending application requests to send new data,
or the connection's retransmission mechanisms decide to retransmit data,
and the connection meets the following conditions: the congestion window and
pacing rate would have allowed the connection to send data, and yet the
connection is not currently sending data and has no data to send
(i.e., no unsent data or retransmissions of previously sent data).
The precise determination of this condition depends on how the
connection uses mechanisms to implement pacing, batching, GSO/TSO/offload,
etc.</t>
<t>If these conditions are met, then the sender has run out of data to feed the
network. This effectively creates a gap, a "bubble" of idle time, within the data
pipeline, analogous to an empty segment in a pipe carrying liquid. This idle
time means that any delivery rate sample obtained from this data packet, and
any rate sample from a packet that follows it in the next round trip, is an
application-limited sample that potentially underestimates the true available
bandwidth. Thus, when the algorithm marks a transport flow as
application-limited, it marks all bandwidth samples for the next round trip as
application-limited (at which point the bubble, the idle gap, can be said to
have exited the data pipeline).</t>
            <section anchor="considerations-related-to-receiver-flow-control-limits">
              <name>Considerations Related to Receiver Flow Control Limits</name>
<t>In some cases receiver flow control limits (such as the TCP <xref target="RFC9293"/>
advertised receive window, RCV.WND) are the factor limiting the
delivery rate. This algorithm treats cases where the delivery rate is constrained
by such conditions the same as cases where the delivery rate is
constrained by in-network bottlenecks. That is, it treats receiver bottlenecks
the same as network bottlenecks. This has a conceptual symmetry and has worked
well in practice for congestion control and telemetry purposes.</t>
            </section>
          </section>
        </section>
        <section anchor="detailed-delivery-rate-sampling-algorithm">
          <name>Detailed Delivery Rate Sampling Algorithm</name>
          <section anchor="variables">
            <name>Variables</name>
            <section anchor="per-connection-c-state">
              <name>Per-connection (C) state</name>
              <t>This algorithm requires the following new state variables for each transport
connection:</t>
              <t>C.delivered_time: The wall clock time when C.delivered was last updated.</t>
<t>C.first_send_time: If packets are in flight, then this holds the send time of
the packet that was most recently marked as delivered. Else, if the connection
was recently idle, then this holds the send time of the most recently sent
packet.</t>
<t>C.app_limited: The C.delivered count at which the connection will no longer
be considered application-limited, or 0 if the connection is not currently
application-limited.</t>
<t>We also assume that the transport protocol sender implementation tracks the
following state per connection. If an existing implementation does not already
track the following state variables, they MUST be tracked to implement this
algorithm:</t>
<t>C.pending_transmissions: The number of bytes queued for transmission on the
sending host at layers below the transport layer (i.e., the network layer,
traffic shaping layer, or network device layer).</t>
              <t>C.lost_out: The amount of data in the current outstanding window that
is marked as lost.</t>
<t>C.retrans_out: The amount of data in the current outstanding window that
is being retransmitted.</t>
              <t>C.min_rtt: The minimum observed RTT over the lifetime of the connection.</t>
              <t>C.srtt: The smoothed RTT, an exponentially weighted moving average of the
observed RTT of the connection.</t>
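<t>As an illustrative grouping of the per-connection state above (this sketch
is for exposition only; no particular data layout is required by this
document):</t>
<artwork><![CDATA[
  ConnectionState C:
    delivered_time         /* time of last C.delivered update */
    first_send_time        /* start of current sample interval */
    app_limited            /* app-limited marker, 0 if none */
    pending_transmissions  /* bytes queued below transport layer */
    lost_out               /* data in window marked lost */
    retrans_out            /* data in window being retransmitted */
    min_rtt                /* minimum RTT over connection lifetime */
    srtt                   /* smoothed RTT */
]]></artwork>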
            </section>
            <section anchor="per-packet-p-state">
              <name>Per-packet (P) state</name>
              <t>This algorithm requires the following new state variables for each packet that
has been transmitted but has not been acknowledged. As noted in the
<xref target="offload-mechanisms">Offload Mechanisms</xref> section, if a connection uses an
offload mechanism then it is RECOMMENDED that the packet state be tracked
for each packet "aggregate" rather than each individual packet.  For simplicity this
document refers to such state as "per-packet", whether it is per "aggregate" or
per IP packet.</t>
              <t>P.delivered: C.delivered when the packet was sent from transport connection
C.</t>
              <t>P.delivered_time: C.delivered_time when the packet was sent.</t>
              <t>P.first_send_time: C.first_send_time when the packet was sent.</t>
              <t>P.send_time: The pacing departure time selected when the packet was scheduled
to be sent.</t>
              <t>P.is_app_limited: true if C.app_limited was non-zero when the packet was
sent, else false.</t>
              <t>P.tx_in_flight: C.inflight immediately after the transmission of packet P.</t>
              <t>P.packet_id: A monotonically increasing unique identifier of packet P.
This is protocol specific. E.g., for TCP this is the ending sequence
number of the packet, and for QUIC it is the packet number.</t>
            </section>
            <section anchor="rate-sample-rs-output">
              <name>Rate Sample (rs) Output</name>
              <t>This algorithm provides its output in a RateSample structure rs, containing
the following fields:</t>
              <t>RS.delivery_rate: The delivery rate sample (in most cases RS.delivered /
RS.interval).</t>
              <t>RS.is_app_limited: The P.is_app_limited from the most recently delivered packet;
indicates whether the rate sample is application-limited.</t>
              <t>RS.interval: The length of the sampling interval.</t>
              <t>RS.delivered: The amount of data marked as delivered over the sampling interval.</t>
              <t>RS.prior_delivered: The P.delivered count from the most recently delivered packet.</t>
              <t>RS.prior_time: The P.delivered_time from the most recently delivered packet.</t>
              <t>RS.send_elapsed: Send time interval calculated from the most recent packet
delivered (see the "Send Rate" section above).</t>
              <t>RS.ack_elapsed: ACK time interval calculated from the most recent packet
delivered (see the "ACK Rate" section above).</t>
              <t>RS.last_acked_packet_id: The packet identifier of the most recently delivered packet.</t>
            </section>
          </section>
          <section anchor="transmitting-a-data-packet">
            <name>Transmitting a data packet</name>
            <t>Upon transmitting a data packet, the sender snapshots the current delivery
information in per-packet state. This will allow the sender
to generate a rate sample later, in the UpdateRateSample() step, when the
packet is (S)ACKed.</t>
            <t>If there are packets already in flight, then we need to start delivery rate
samples from the time we received the most recent ACK, to try to ensure that
we include the full time the network needs to deliver all in-flight data.
If there is no data in flight yet, then we can start the delivery rate
interval at the current time, since we know that any ACKs after now indicate
that the network was able to deliver that data completely in the sampling
interval between now and the next ACK.</t>
            <t>After each packet transmission, the sender executes the following steps:</t>
            <artwork><![CDATA[
  OnPacketSent(Packet P):
    if (C.inflight == 0)
      C.first_send_time  = C.delivered_time = P.send_time

    P.first_send_time = C.first_send_time
    P.delivered_time  = C.delivered_time
    P.delivered       = C.delivered
    P.is_app_limited  = (C.app_limited != 0)
    P.tx_in_flight    = C.inflight    /* includes data in P */
]]></artwork>
          </section>
          <section anchor="upon-receiving-an-ack">
            <name>Upon receiving an ACK</name>
            <t>When an ACK arrives, the connection first calls InitRateSample() to initialize
the per-ACK RateSample RS:</t>
            <artwork><![CDATA[
  /* Initialize the rate sample
   * generated using the ACK being processed. */
  InitRateSample():
    RS.rtt           = -1
    RS.has_data      = false
    RS.prior_time    = 0
    RS.interval      = 0
    RS.delivery_rate = 0
]]></artwork>
            <t>Next, for each newly acknowledged packet, the connection calls
UpdateRateSample() to update the per-ACK rate sample based on a snapshot of
connection delivery information from the time at which the packet was
transmitted. The connection invokes UpdateRateSample() multiple times when a
stretched ACK acknowledges multiple data packets. The connection uses the
information from the most recently sent packet to update the rate sample:</t>
            <artwork><![CDATA[
  /* Update RS when a packet is acknowledged. */
  UpdateRateSample(Packet P):
    if (P.delivered_time == 0)
      return /* P already acknowledged */

    C.delivered += P.data_length
    C.delivered_time = Now()

    /* Update info using the newest packet: */
    if (!RS.has_data || IsNewestPacket(P))
      RS.has_data         = true
      RS.prior_delivered  = P.delivered
      RS.prior_time       = P.delivered_time
      RS.is_app_limited   = P.is_app_limited
      RS.send_elapsed     = P.send_time - P.first_send_time
      RS.ack_elapsed      = C.delivered_time - P.delivered_time
      RS.last_acked_packet_id = P.packet_id
      C.first_send_time   = P.send_time

    /* Mark the packet as delivered once it's acknowledged. */
    P.delivered_time = 0

  /* Is the given Packet the most recently sent packet
   * that has been delivered? */
  IsNewestPacket(Packet P):
    return (P.send_time > C.first_send_time ||
            (P.send_time == C.first_send_time &&
             P.packet_id > RS.last_acked_packet_id))
]]></artwork>
            <t>Finally, after the connection has processed all newly acknowledged packets for this
ACK by calling UpdateRateSample() for each packet, the connection invokes
GenerateRateSample() to finish populating the rate sample, RS:</t>
            <artwork><![CDATA[
  /* Upon receiving ACK, fill in delivery rate sample RS. */
  GenerateRateSample():
    /* Clear app-limited field if bubble is ACKed and gone. */
    if (C.app_limited && C.delivered > C.app_limited)
      C.app_limited = 0

    if (RS.prior_time == 0)
      return /* nothing delivered on this ACK */

    /* Use the longer of the send_elapsed and ack_elapsed */
    RS.interval = max(RS.send_elapsed, RS.ack_elapsed)

    RS.delivered = C.delivered - RS.prior_delivered

    /* Normally we expect interval >= MinRTT.
     * Note that rate may still be overestimated when a spuriously
     * retransmitted packet was first (s)acked because "interval"
     * is under-estimated (up to an RTT). However, continuously
     * measuring the delivery rate during loss recovery is crucial
     * for connections that suffer heavy or prolonged losses.
     */
    if (RS.interval <  C.min_rtt)
      return  /* no reliable rate sample */

    if (RS.interval != 0)
      RS.delivery_rate = RS.delivered / RS.interval

    return    /* filled in RS with a rate sample */
]]></artwork>
          </section>
          <section anchor="detecting-application-limited-phases">
            <name>Detecting application-limited phases</name>
            <t>An application-limited phase starts when the connection decides to send more
data, at a point in time when the connection had previously run out of data.
Some decisions to send more data are triggered by the application writing
more data to the connection, and some are triggered by loss detection (during
ACK processing or upon the triggering of a timer) estimating that some sequence
ranges need to be retransmitted. To detect all such cases, the algorithm
calls CheckIfApplicationLimited() to check for application-limited behavior in
the following situations:</t>
            <ul spacing="normal">
              <li>
                <t>The sending application asks the transport layer to send more data; i.e.,
upon each write from the application, before new application data is enqueued
in the transport send buffer or transmitted.</t>
              </li>
              <li>
                <t>At the beginning of ACK processing, before updating the estimated
amount of data in flight, and before congestion control modifies C.cwnd or
C.pacing_rate.</t>
              </li>
              <li>
                <t>At the beginning of connection timer processing, for all timers that might
result in the transmission of one or more data packets. For example: RTO
timers, TLP timers, RACK reordering timers, or Zero Window Probe timers.</t>
              </li>
            </ul>
            <t>When checking for application-limited behavior, the connection checks all the
conditions previously described in the "Tracking application-limited phases"
section, and if all are met then it marks the connection as
application-limited:</t>
            <artwork><![CDATA[
  CheckIfApplicationLimited():
    if (NoUnsentData() &&
        C.pending_transmissions == 0 &&
        C.inflight < C.cwnd &&
        C.lost_out <= C.retrans_out)
      MarkConnectionAppLimited()

  MarkConnectionAppLimited():
    C.app_limited = max(C.delivered + C.inflight, 1)
]]></artwork>
          </section>
        </section>
        <section anchor="delivery-rate-sampling-discussion">
          <name>Delivery Rate Sampling Discussion</name>
          <section anchor="offload-mechanisms">
            <name>Offload Mechanisms</name>
<t>If a transport sender implementation uses an offload mechanism (such as TSO,
GSO, etc.) to combine multiple C.SMSS of data into a single packet "aggregate"
for the purposes of scheduling transmissions, then it is RECOMMENDED that the
per-packet state described in the <xref target="per-packet-p-state">Per-packet (P) state</xref>
section be tracked for each packet "aggregate" rather than each IP packet.</t>
          </section>
          <section anchor="impact-of-ack-losses">
            <name>Impact of ACK losses</name>
<t>Delivery rate samples are generated upon receiving each ACK; ACKs may contain
both cumulative and selective acknowledgment information. Losing an ACK results
in losing the delivery rate sample corresponding to that ACK, and generating a
delivery rate sample at a later time (upon the arrival of the next ACK). This
can underestimate the delivery rate due to the artificially inflated
"RS.interval". The impact of this effect is mitigated using the BBR.max_bw
filter.</t>
          </section>
          <section anchor="impact-of-packet-reordering">
            <name>Impact of packet reordering</name>
            <t>This algorithm is robust to packet reordering; it makes no assumptions about
the order in which packets are delivered or ACKed. In particular, for a
particular packet P, it does not matter which packets are delivered between the
transmission of P and the ACK of packet P, since C.delivered will be
incremented appropriately in any case.</t>
          </section>
          <section anchor="impact-of-packet-loss-and-retransmissions">
            <name>Impact of packet loss and retransmissions</name>
            <t>There are several possible approaches for handling cases where a delivery
rate sample is based on a retransmitted packet.</t>
            <t>If the transport protocol supports unambiguous ACKs for retransmitted data
(as in QUIC <xref target="RFC9000"/>) then the algorithm is perfectly robust to retransmissions,
because the starting packet, P, for the sample can be unambiguously retrieved.</t>
            <t>If the transport protocol, like TCP <xref target="RFC9293"/>, has ambiguous ACKs for
retransmitted sequence ranges, then the following approaches MAY be used:</t>
            <ol spacing="normal" type="1"><li>
                <t>The sender MAY choose to filter out implausible delivery rate samples, as
  described in the GenerateRateSample() step in the "Upon receiving an ACK"
  section, by discarding samples whose RS.interval is lower than the minimum
  RTT seen on the connection.</t>
              </li>
              <li>
                <t>The sender MAY choose to skip the generation of a delivery rate sample for
  a retransmitted sequence range.</t>
              </li>
            </ol>
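            <t>As a non-normative illustration, the first approach can be sketched as a
plausibility check within GenerateRateSample():</t>
            <artwork><![CDATA[
  /* Illustrative sketch (non-normative) of approach 1: a sample
     whose interval is shorter than the minimum RTT ever seen on
     the connection is implausible, and is likely the result of
     retransmission ambiguity, so it is discarded: */
  if (RS.interval < BBR.min_rtt)
    return  /* implausible sample; generate no rate sample */
]]></artwork>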
            <section anchor="connections-without-sack">
              <name>TCP Connections without SACK</name>
              <t>Whenever possible, TCP connections using BBR as a congestion controller SHOULD
use both SACK and timestamps. Failure to do so will cause BBR's RTT and
bandwidth measurements to be much less accurate.</t>
              <t>When using TCP without SACK (i.e., when either or both ends of the connection do
not accept SACK), this algorithm can be extended to estimate approximate
delivery rates using duplicate ACKs (much like Reno <xref target="RFC5681"/>, which estimates
that each duplicate ACK indicates that a data packet has been delivered).</t>
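              <t>For example, one non-normative sketch of such an extension, assuming the
sender tracks C.delivered in bytes, is to credit one SMSS of delivered data per
duplicate ACK:</t>
              <artwork><![CDATA[
  /* Illustrative sketch (non-normative): upon each duplicate ACK,
     estimate that one full-sized data packet has been delivered: */
  OnDuplicateAck():
    C.delivered += SMSS
]]></artwork>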
            </section>
          </section>
        </section>
      </section>
      <section anchor="rtt-samples">
        <name>RTT Samples</name>
        <t>Upon transmitting each packet, BBR or the associated transport protocol
stores in per-packet data the wall-clock scheduled transmission time of the
packet in P.send_time (see "Pacing Rate: C.pacing_rate" in
<xref target="pacing-rate-bbrpacingrate"/> for how this is calculated).</t>
        <t>For every ACK that newly acknowledges data, the sender's BBR implementation
or the associated transport protocol implementation attempts to calculate an
RTT sample. The sender MUST consider any potential retransmission ambiguities
that can arise in some transport protocols. If some of the acknowledged data
was not retransmitted, or some of the data was retransmitted but the sender
can still unambiguously determine the RTT of the data (e.g. QUIC or TCP with
timestamps <xref target="RFC7323"/>), then the sender calculates an RTT sample, RS.rtt,
as follows:</t>
        <artwork><![CDATA[
  RS.rtt = Now() - P.send_time
]]></artwork>
      </section>
    </section>
    <section anchor="detailed-algorithm">
      <name>Detailed Algorithm</name>
      <section anchor="state-machine">
        <name>State Machine</name>
        <t>BBR implements a state machine that uses the network path model to guide
its decisions, and the control parameters to enact its decisions.</t>
        <section anchor="state-transition-diagram">
          <name>State Transition Diagram</name>
          <t>The following state transition diagram summarizes the flow of control and
the relationship between the different states:</t>
          <artwork><![CDATA[
             |
             V
    +---> Startup  ------------+
    |        |                 |
    |        V                 |
    |     Drain  --------------+
    |        |                 |
    |        V                 |
    +---> ProbeBW_DOWN  -------+
    | ^      |                 |
    | |      V                 |
    | |   ProbeBW_CRUISE ------+
    | |      |                 |
    | |      V                 |
    | |   ProbeBW_REFILL  -----+
    | |      |                 |
    | |      V                 |
    | |   ProbeBW_UP  ---------+
    | |      |                 |
    | +------+                 |
    |                          |
    +---- ProbeRTT <-----------+
]]></artwork>
        </section>
        <section anchor="state-machine-operation-overview">
          <name>State Machine Operation Overview</name>
          <t>When starting up, BBR probes to try to quickly build a model of the network
path; to adapt to later changes to the path or its traffic, BBR must continue
to probe to update its model. If the available bottleneck bandwidth increases,
BBR must send faster to discover this. Likewise, if the round-trip propagation
delay changes, this changes the BDP, and thus BBR must send slower to get
C.inflight below the new BDP in order to measure the new BBR.min_rtt. Thus,
BBR's state machine runs periodic, sequential experiments, sending faster
to check for BBR.bw increases or sending slower to yield bandwidth, drain
the queue, and check for BBR.min_rtt decreases. The frequency, magnitude,
duration, and structure of these experiments differ depending on what's already
known (startup or steady-state) and application sending behavior (intermittent
or continuous).</t>
          <t>This state machine has several goals:</t>
          <ul spacing="normal">
            <li>
              <t>Achieve high throughput by efficiently utilizing available bandwidth.</t>
            </li>
            <li>
              <t>Achieve low latency and packet loss rates by keeping queues bounded and small.</t>
            </li>
            <li>
              <t>Share bandwidth with other flows in an approximately fair manner.</t>
            </li>
            <li>
              <t>Feed samples to the model estimators to refresh and update the model.</t>
            </li>
          </ul>
        </section>
        <section anchor="state-machine-tactics">
          <name>State Machine Tactics</name>
          <t>In the BBR framework, at any given time the sender can choose one of the
following tactics:</t>
          <ul spacing="normal">
            <li>
              <t>Acceleration: Send faster than the network is delivering data: to probe the
maximum bandwidth available to the flow</t>
            </li>
            <li>
              <t>Deceleration: Send slower than the network is delivering data: to reduce
the amount of data in flight, with a number of overlapping motivations:  </t>
              <ul spacing="normal">
                <li>
                  <t>Reducing queuing delay: to reduce latency for
request/response cross-traffic (e.g. RPC, web traffic).</t>
                </li>
                <li>
                  <t>Reducing packet loss: to reduce tail latency for
request/response cross-traffic (e.g. RPC, web traffic) and improve
coexistence with Reno/CUBIC.</t>
                </li>
                <li>
                  <t>Probing BBR.min_rtt: to probe the path's BBR.min_rtt</t>
                </li>
                <li>
                  <t>Bandwidth convergence: to aid bandwidth fairness convergence, by leaving
unused capacity in the bottleneck link or bottleneck buffer, to allow other
flows that may have lower sending rates to discover and utilize the unused
capacity</t>
                </li>
                <li>
                  <t>Burst tolerance: to allow bursty arrivals of cross-traffic (e.g. short web
or RPC requests) to be able to share the bottleneck link without causing
excessive queuing delay or packet loss</t>
                </li>
              </ul>
            </li>
            <li>
              <t>Cruising: Send at the same rate the network is delivering data: try to match
the sending rate to the flow's current available bandwidth, to try to achieve
high utilization of the available bandwidth without increasing queue pressure</t>
            </li>
          </ul>
          <t>Throughout the lifetime of a BBR flow, it sequentially cycles through all
three tactics, to measure the network path and try to optimize its operating
point.</t>
          <t>BBR's state machine uses two control mechanisms: the BBR.pacing_gain and the
C.cwnd. Primarily, it uses BBR.pacing_gain (see the "Pacing Rate" section), which
controls how fast packets are sent relative to BBR.bw. A BBR.pacing_gain &gt; 1
decreases inter-packet time and increases C.inflight. A BBR.pacing_gain &lt; 1 has the
opposite effect, increasing inter-packet time while aiming to decrease
C.inflight. C.cwnd is kept sufficiently larger than the BDP to allow the higher
pacing gain to accumulate more packets in flight. Only when the state machine
needs to quickly reduce C.inflight to a particular absolute value does it use
C.cwnd.</t>
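          <t>As a non-normative illustration, and ignoring details such as the pacing
margin, the way BBR.pacing_gain modulates inter-packet spacing can be sketched
as:</t>
          <artwork><![CDATA[
  /* Illustrative sketch (non-normative): a pacing_gain above 1.0
     shrinks the inter-packet interval, and a pacing_gain below
     1.0 stretches it: */
  C.pacing_rate     = BBR.pacing_gain * BBR.bw
  inter_packet_time = packet.size / C.pacing_rate
]]></artwork>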
        </section>
      </section>
      <section anchor="algorithm-organization">
        <name>Algorithm Organization</name>
        <t>The BBR algorithm is an event-driven algorithm that executes steps upon the
following events: connection initialization, the arrival of each ACK, the
transmission of each quantum, and loss detection events. All of the
sub-steps invoked below are described in the sections that follow.</t>
        <section anchor="initialization">
          <name>Initialization</name>
          <t>Upon transport connection initialization, BBR executes its initialization
steps:</t>
          <artwork><![CDATA[
  BBROnInit():
    InitWindowedMaxFilter(filter=BBR.max_bw_filter, value=0, time=0)
    BBR.min_rtt = C.srtt ? C.srtt : Infinity
    BBR.min_rtt_stamp = Now()
    BBR.probe_rtt_done_stamp = 0
    BBR.probe_rtt_round_done = false
    BBR.prior_cwnd = 0
    BBR.idle_restart = false
    BBR.extra_acked_interval_start = Now()
    BBR.extra_acked_delivered = 0
    BBR.full_bw_reached = false
    BBRResetCongestionSignals()
    BBRResetShortTermModel()
    BBRInitRoundCounting()
    BBRResetFullBW()
    BBRInitPacingRate()
    BBREnterStartup()
]]></artwork>
        </section>
        <section anchor="per-transmit-steps">
          <name>Per-Transmit Steps</name>
          <t>Before transmitting, BBR merely needs to check for the case where the flow
is restarting from idle:</t>
          <artwork><![CDATA[
  BBROnTransmit():
    BBRHandleRestartFromIdle()
]]></artwork>
        </section>
        <section anchor="per-ack-steps">
          <name>Per-ACK Steps</name>
          <t>On every ACK, the BBR algorithm executes the following BBRUpdateOnACK() steps
in order to update its network path model, update its state machine, and
adjust its control parameters to adapt to the updated model:</t>
          <artwork><![CDATA[
  BBRUpdateOnACK():
    GenerateRateSample()
    BBRUpdateModelAndState()
    BBRUpdateControlParameters()

  BBRUpdateModelAndState():
    BBRUpdateLatestDeliverySignals()
    BBRUpdateCongestionSignals()
    BBRUpdateACKAggregation()
    BBRCheckFullBWReached()
    BBRCheckStartupDone()
    BBRCheckDrainDone()
    BBRUpdateProbeBWCyclePhase()
    BBRUpdateMinRTT()
    BBRCheckProbeRTT()
    BBRAdvanceLatestDeliverySignals()
    BBRBoundBWForModel()

  BBRUpdateControlParameters():
    BBRSetPacingRate()
    BBRSetSendQuantum()
    BBRSetCwnd()
]]></artwork>
        </section>
        <section anchor="per-loss-steps">
          <name>Per-Loss Steps</name>
          <t>On every packet loss event where the transport protocol marks some packet P as
lost, the BBR algorithm calls BBRHandleLostPacket(P) to update its network path
model (see <xref target="probing-for-bandwidth-in-probebw"/>).</t>
        </section>
        <section anchor="spurious-loss-steps">
          <name>Spurious Loss Recovery Steps</name>
          <t>If the transport protocol detects that a loss recovery episode was spurious,
BBR calls BBRHandleSpuriousLossDetection() to update its network path model
(see <xref target="updating-the-model-upon-spurious-packet-loss"/>).</t>
        </section>
      </section>
      <section anchor="state-machine-operation">
        <name>State Machine Operation</name>
        <section anchor="startup">
          <name>Startup</name>
          <section anchor="startup-dynamics">
            <name>Startup Dynamics</name>
            <t>When a BBR flow starts up, it performs its first (and most rapid) sequential
probe/drain process in the Startup and Drain states. Network link bandwidths
currently span a range of at least 11 orders of magnitude, from a few bps
to hundreds of Gbps. To quickly learn BBR.max_bw, given this huge range to
explore, BBR's Startup state does an exponential search of the rate space,
doubling the sending rate each round. This finds BBR.max_bw in O(log_2(BDP))
round trips.</t>
            <t>To achieve this rapid probing smoothly, in Startup BBR uses the minimum gain
values that will allow the sending rate to double each round: in Startup BBR
sets BBR.pacing_gain to BBR.StartupPacingGain (2.77) <xref target="BBRStartupPacingGain"/>
and BBR.cwnd_gain to BBR.DefaultCwndGain (2) <xref target="BBRStartupCwndGain"/>.</t>
            <t>When initializing a connection, or upon any later entry into Startup mode,
BBR executes the following BBREnterStartup() steps:</t>
            <artwork><![CDATA[
  BBREnterStartup():
    BBR.state = Startup
    BBR.pacing_gain = BBR.StartupPacingGain
    BBR.cwnd_gain = BBR.DefaultCwndGain
]]></artwork>
            <t>As BBR grows its sending rate rapidly, it obtains higher delivery rate
samples, BBR.max_bw increases, and the C.pacing_rate and C.cwnd both adapt by
smoothly growing in proportion. Once the pipe is full, a queue typically
forms, but the BBR.cwnd_gain bounds any queue to (BBR.cwnd_gain - 1) * estimated_BDP,
which is approximately (2 - 1) * estimated_BDP = estimated_BDP.
The immediately following Drain state is designed to quickly drain that queue.</t>
            <t>During Startup, BBR estimates whether the pipe is full using two estimators.
The first looks for a plateau in the BBR.max_bw estimate. The second looks
for packet loss. The following subsections discuss these estimators.</t>
            <artwork><![CDATA[
  BBRCheckStartupDone():
    BBRCheckStartupHighLoss()
    if (BBR.state == Startup && BBR.full_bw_reached)
      BBREnterDrain()
]]></artwork>
          </section>
          <section anchor="exiting-acceleration-based-on-bandwidth-plateau">
            <name>Exiting Acceleration Based on Bandwidth Plateau</name>
            <t>In phases where BBR is accelerating to probe the available bandwidth -
Startup and ProbeBW_UP - BBR runs a state machine to estimate whether an
accelerating sending rate has saturated the available per-flow bandwidth
("filled the pipe") by looking for a plateau in the measured
RS.delivery_rate.</t>
            <t>BBR tracks the status of the current full-pipe estimation process in the
boolean BBR.full_bw_now, and uses BBR.full_bw_now to exit ProbeBW_UP. BBR
records in the boolean BBR.full_bw_reached whether BBR estimates that it
has ever fully utilized its available bandwidth (over the lifetime of the
connection), and uses BBR.full_bw_reached to decide when to exit Startup
and enter Drain.</t>
            <t>The full pipe estimator works as follows: if BBR counts several (three)
non-application-limited rounds where attempts to significantly increase the
delivery rate actually result in little increase (less than 25 percent),
then it estimates that it has fully utilized the per-flow available bandwidth,
and sets both BBR.full_bw_now and BBR.full_bw_reached to true.</t>
            <t>Upon starting a full pipe detection process (either on startup or when probing
for an increase in bandwidth), the following steps are taken:</t>
            <artwork><![CDATA[
  BBRResetFullBW():
    BBR.full_bw = 0
    BBR.full_bw_count = 0
    BBR.full_bw_now = 0
]]></artwork>
            <t>While running the full pipe detection process, upon an ACK that acknowledges
new data, and when the delivery rate sample is not application-limited
(see <xref target="delivery-rate-samples"/>), BBR runs the "full pipe" estimator:</t>
            <artwork><![CDATA[
  BBRCheckFullBWReached():
    if (BBR.full_bw_now || !BBR.round_start || RS.is_app_limited)
      return  /* no need to check for a full pipe now */
    if (RS.delivery_rate >= BBR.full_bw * 1.25)
      BBRResetFullBW()       /* bw is still growing, so reset */
      BBR.full_bw = RS.delivery_rate  /* record new baseline bw */
      return
    BBR.full_bw_count++   /* another round w/o much growth */
    BBR.full_bw_now = (BBR.full_bw_count >= 3)
    if (BBR.full_bw_now)
      BBR.full_bw_reached = true
]]></artwork>
            <t>BBR waits three packet-timed round trips to have reasonable evidence that the
sender is not detecting a delivery-rate plateau that was temporarily imposed by
congestion or receive-window auto-tuning. This three-round threshold was
validated by experimental data to allow the receiver the chance to grow its
receive window.</t>
          </section>
          <section anchor="exiting-startup-based-on-packet-loss">
            <name>Exiting Startup Based on Packet Loss</name>
            <t>A second method BBR uses in Startup to estimate whether the bottleneck is
full is to look at packet losses.</t>
            <t>For connections that can detect more than one packet loss per round trip
(i.e., a connection where C.has_selective_acks is true),
BBRCheckStartupHighLoss() exits Startup based on packet loss if the following
criteria are all met:</t>
            <ul spacing="normal">
              <li>
                <t>The connection has been in fast recovery for at least one full packet-timed
round trip.</t>
              </li>
              <li>
                <t>The loss rate over the time scale of a single full round trip exceeds
BBR.LossThresh (2%).</t>
              </li>
              <li>
                <t>There are at least BBRStartupFullLossCnt=6 discontiguous sequence ranges
lost in that round trip.</t>
              </li>
            </ul>
            <t>For connections for which C.has_selective_acks is false and thus the connection
can only detect one packet loss per round trip, BBRCheckStartupHighLoss() exits
Startup based on packet loss if any packet loss is detected during fast
recovery.</t>
            <t>If BBRCheckStartupHighLoss() exits Startup based on packet loss, it takes the
following steps. First, it sets BBR.full_bw_reached = true. Then it sets
BBR.inflight_longterm to its estimate of a safe level of in-flight data suggested
by these losses, which is max(BBR.bdp, BBR.inflight_latest), where
BBR.inflight_latest is the max delivered volume of data (RS.delivered) over
the last round trip. Finally, it exits Startup and enters Drain.</t>
            <t>The algorithm waits until all three criteria are met to filter out noise
from burst losses, and to try to ensure the bottleneck is fully utilized
on a sustained basis, and the full bottleneck bandwidth has been measured,
before attempting to drain the level of in-flight data to the estimated BDP.</t>
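            <t>The loss-based exit steps described above can be sketched (non-normatively)
as follows, where LossCriteriaMet() is an illustrative name standing for the
criteria enumerated above:</t>
            <artwork><![CDATA[
  /* Illustrative sketch (non-normative) of the loss-based exit: */
  BBRCheckStartupHighLoss():
    if (BBR.state == Startup && LossCriteriaMet())
      BBR.full_bw_reached = true
      BBR.inflight_longterm = max(BBR.bdp, BBR.inflight_latest)
      /* BBRCheckStartupDone() then enters Drain */
]]></artwork>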
          </section>
        </section>
        <section anchor="drain">
          <name>Drain</name>
          <t>Upon exiting Startup, BBR enters its Drain state. In Drain, BBR aims to quickly
drain any queue at the bottleneck link that was created in Startup by switching
to a pacing_gain well below 1.0, until any estimated queue has been drained. It
uses a pacing_gain of BBR.DrainPacingGain = 0.5, chosen via analysis
<xref target="BBRDrainPacingGain"/> and experimentation to try to drain the queue in less
than one round-trip:</t>
          <artwork><![CDATA[
  BBREnterDrain():
    BBR.state = Drain
    BBR.pacing_gain = BBR.DrainPacingGain    /* pace slowly */
    BBR.cwnd_gain = BBR.DefaultCwndGain      /* maintain cwnd */
    BBR.drain_start_round = BBR.round_count
]]></artwork>
          <t>In Drain, when the amount of data in flight is less than or equal to the
estimated BDP, meaning BBR estimates that the queue at the bottleneck link
has been fully drained, then BBR exits Drain and enters ProbeBW. Normally, this
condition should be met within one round-trip of entering the Drain state.
However, it could take longer if the bandwidth was overestimated during Startup
due to interactions with competing flows. In that case, BBR enters ProbeBW
after 3 round-trips, allowing the bandwidth max filter to advance during the
next probing cycle. To implement this, upon every ACK BBR executes:</t>
          <artwork><![CDATA[
  BBRCheckDrainDone():
    if (BBR.state == Drain &&
        (C.inflight <= BBRInflight(1.0) ||
         BBR.round_count > BBR.drain_start_round + 3))
      BBREnterProbeBW()
]]></artwork>
        </section>
        <section anchor="probebw">
          <name>ProbeBW</name>
          <t>Long-lived BBR flows tend to spend the vast majority of their time in the
ProbeBW states. In the ProbeBW states, a BBR flow sequentially accelerates,
decelerates, and cruises, to measure the network path, improve its operating
point (increase throughput and reduce queue pressure), and converge toward a
more fair allocation of bottleneck bandwidth. To do this, the flow sequentially
cycles through all three tactics: trying to send faster than, slower than, and
at the same rate as the network delivery process. To achieve this, a BBR flow
in ProbeBW mode cycles in turn through the four ProbeBW states (DOWN, CRUISE, REFILL,
and UP) described below.</t>
          <section anchor="probebwdown">
            <name>ProbeBW_DOWN</name>
            <t>In the ProbeBW_DOWN phase of the cycle, a BBR flow pursues the deceleration
tactic, to try to send slower than the network is delivering data, to reduce
the amount of data in flight, with all of the standard motivations for the
deceleration tactic (discussed in "State Machine Tactics" in
<xref target="state-machine-tactics"/>). It does this by switching to a
BBR.pacing_gain of 0.90, sending at 90% of BBR.bw. The pacing_gain value
of 0.90 is derived from the ProbeBW_UP pacing gain of 1.25, as the minimum
pacing_gain value that allows bandwidth-based convergence to approximate
fairness, and was validated through experiments.</t>
            <t>Exit conditions: The flow exits the ProbeBW_DOWN phase and enters CRUISE
when the flow estimates that both of the following conditions have been
met:</t>
            <ul spacing="normal">
              <li>
                <t>There is free headroom: If BBR.inflight_longterm is set, then BBR remains in
ProbeBW_DOWN at least until inflight is less than or
equal to a target calculated based on (1 - BBR.Headroom)*BBR.inflight_longterm.
The goal of this constraint is to ensure that in cases where loss signals
suggest an upper limit on C.inflight, then the flow attempts
to leave some free headroom in the path (e.g. free space in the bottleneck
buffer or free time slots in the bottleneck link) that can be used by
cross traffic (both for convergence of bandwidth shares and for burst tolerance).</t>
              </li>
              <li>
                <t>C.inflight is less than or equal to BBR.bdp, i.e. the flow
estimates that it has drained any queue at the bottleneck.</t>
              </li>
            </ul>
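            <t>The two exit conditions above can be sketched (non-normatively) as the
following check, where the function name is illustrative:</t>
            <artwork><![CDATA[
  /* Illustrative sketch (non-normative) of the ProbeBW_DOWN
     exit check: */
  BBRCheckTimeToCruise():
    if (BBR.inflight_longterm is set &&
        C.inflight > (1 - BBR.Headroom) * BBR.inflight_longterm)
      return false  /* not yet enough free headroom */
    if (C.inflight <= BBR.bdp)
      return true   /* bottleneck queue estimated to be drained */
    return false
]]></artwork>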
          </section>
          <section anchor="probebwcruise">
            <name>ProbeBW_CRUISE</name>
            <t>In the ProbeBW_CRUISE phase of the cycle, a BBR flow pursues the "cruising"
tactic (discussed in "State Machine Tactics" in
<xref target="state-machine-tactics"/>), attempting to send at the same rate the
network is delivering data. It tries to match the sending rate to the flow's
current available bandwidth, to try to achieve high utilization of the
available bandwidth without increasing queue pressure. It does this by
switching to a pacing_gain of 1.0, sending at 100% of BBR.bw. Notably, while
in this state it responds to concrete congestion signals (loss) by reducing
BBR.bw_shortterm and BBR.inflight_shortterm, because these signals suggest that
the available bandwidth and deliverable inflight have likely
reduced, and the flow needs to change to adapt, slowing down to match the
latest delivery process.</t>
            <t>Exit conditions: The connection adaptively holds this state until it decides
that it is time to probe for bandwidth (see "Time Scale for Bandwidth Probing",
in <xref target="time-scale-for-bandwidth-probing-"/>), at which time it enters
ProbeBW_REFILL.</t>
          </section>
          <section anchor="probebwrefill">
            <name>ProbeBW_REFILL</name>
            <t>The goal of the ProbeBW_REFILL state is to "refill the pipe", to try to fully
utilize the network bottleneck without creating any significant queue pressure.</t>
            <t>To do this, BBR first resets the short-term model parameters BBR.bw_shortterm and
BBR.inflight_shortterm, setting both to "Infinity". This is the key moment in the BBR
time scale strategy (see "Time Scale Strategy", <xref target="time-scale-strategy"/>)
where the flow pivots, discarding its short-term model that incorporates packet
losses caused by cross-traffic and beginning to robustly probe the bottleneck's
long-term available bandwidth. During this time the estimated bandwidth and
BBR.inflight_longterm, if set, constrain the connection.</t>
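            <t>The reset described above can be sketched (non-normatively) as:</t>
            <artwork><![CDATA[
  /* Illustrative sketch (non-normative) of the short-term model
     reset performed upon entering ProbeBW_REFILL: */
  BBR.bw_shortterm       = Infinity
  BBR.inflight_shortterm = Infinity
]]></artwork>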
            <t>During ProbeBW_REFILL BBR uses a BBR.pacing_gain of 1.0, to send at a rate
that matches the current estimated available bandwidth, for one packet-timed
round trip. The goal is to fully utilize the bottleneck link before
transitioning into ProbeBW_UP and significantly increasing the chances of
causing loss. The motivating insight is that, as soon as a flow starts
acceleration, sending faster than the available bandwidth, it will start
building a queue at the bottleneck. And if the buffer is shallow enough,
then the flow can cause loss signals very shortly after the first accelerating
packets arrive at the bottleneck. If the flow were to neglect to fill the
pipe before it causes this loss signal, then these very quick signals of excess
queue could cause the flow's estimate of the path's capacity (i.e. BBR.inflight_longterm)
to significantly underestimate. In particular, if the flow were to transition
directly from ProbeBW_CRUISE to ProbeBW_UP, C.inflight
(at the time the first accelerating packets were sent) may often be still very
close to the C.inflight maintained in CRUISE, which may be
only (1 - BBR.Headroom)*BBR.inflight_longterm.</t>
            <t>Exit conditions: The flow exits ProbeBW_REFILL after one packet-timed round
trip, and enters ProbeBW_UP. This is because after one full round trip of
sending in ProbeBW_REFILL the flow (if not application-limited) has had an
opportunity to place as many packets in flight as its BBR.bw and BBR.inflight_longterm
permit. Correspondingly, at this point the flow starts to see bandwidth samples
reflecting its ProbeBW_REFILL behavior, which may be putting too much data
in flight.</t>
          </section>
          <section anchor="probebwup">
            <name>ProbeBW_UP</name>
            <t>After ProbeBW_REFILL refills the pipe, ProbeBW_UP probes for possible
increases in available bandwidth by raising the sending rate, using a
BBR.pacing_gain of 1.25, to send faster than the current estimated available
bandwidth. It also raises BBR.cwnd_gain to 2.25, to ensure that the flow
can send faster than it had been, even if C.cwnd was previously limiting the
sending process.</t>
            <t>If the flow has not set BBR.inflight_longterm, it implicitly tries to raise
C.inflight to at least BBR.pacing_gain * BBR.bdp = 1.25 *
BBR.bdp.</t>
            <t>If the flow has set BBR.inflight_longterm and encounters that limit, it then
gradually increases the upper volume bound (BBR.inflight_longterm) using the
following approach:</t>
            <ul spacing="normal">
              <li>
                <t>BBR.inflight_longterm: The flow raises BBR.inflight_longterm in ProbeBW_UP in a manner
that is slow and cautious at first, but increasingly rapid and bold over time.
The initial caution is motivated by the fact that a given BBR flow may be sharing
a shallow buffer with thousands of other flows, so that the buffer space
available to the flow may be quite tight (even just a single packet or
less). The increasingly rapid growth over time is motivated by the fact that
in a high-speed WAN the increase in available bandwidth (and thus the estimated
BDP) may require the flow to grow C.inflight by up to
O(1,000,000) packets; even a high-speed WAN BDP like
10Gbps * 100ms is around 83,000 packets (with a 1500-byte MTU). The additive
increase to BBR.inflight_longterm doubles each round trip:
in each successive round trip, BBR.inflight_longterm grows by 1, 2, 4, 8, 16,
etc., with the increases spread uniformly across the entire round trip.
This helps allow BBR to utilize a larger BDP in O(log(BDP)) round trips,
meeting the design goal for scalable utilization of newly-available bandwidth.</t>
              </li>
            </ul>
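            <t>The growth pattern described above can be sketched (non-normatively) as
follows, where all variable names besides BBR.inflight_longterm and
BBR.round_start are illustrative:</t>
            <artwork><![CDATA[
  /* Illustrative sketch (non-normative): each round trip the
     total increment doubles (1, 2, 4, 8, ... packets), and each
     round's increment is applied incrementally as ACKs arrive,
     spreading it uniformly across the round trip: */
  if (BBR.round_start)
    BBR.probe_up_increment = 2 * BBR.probe_up_increment
  BBR.inflight_longterm += BBR.probe_up_increment *
                           packets_newly_acked / packets_in_round
]]></artwork>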
            <t>Exit conditions: The BBR flow ends ProbeBW_UP bandwidth probing and
transitions to ProbeBW_DOWN to try to drain the bottleneck queue when either
of the following conditions is met:</t>
            <ol spacing="normal" type="1"><li>
                <t>Bandwidth saturation: BBRIsTimeToGoDown() (see below) uses the "full pipe"
  estimator (see <xref target="exiting-acceleration-based-on-bandwidth-plateau"/>) to
  estimate whether the flow has saturated the available per-flow bandwidth
  ("filled the pipe"), by looking for a plateau in the measured
  RS.delivery_rate. If, during this process, C.inflight is constrained
  by BBR.inflight_longterm (the flow becomes cwnd-limited while cwnd is limited by
  BBR.inflight_longterm), then the flow cannot fully explore the available bandwidth,
  and so it resets the "full pipe" estimator by calling BBRResetFullBW().</t>
              </li>
              <li>
                <t>Loss: The current loss rate, over the time scale of the last round trip,
  exceeds BBR.LossThresh (2%).</t>
              </li>
            </ol>
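            <t>Combining the two conditions, the exit check can be sketched
(non-normatively) as follows, where loss_rate_last_round is an illustrative
name for the loss rate measured over the last round trip:</t>
            <artwork><![CDATA[
  /* Illustrative sketch (non-normative) of the ProbeBW_UP
     exit check: */
  BBRIsTimeToGoDown():
    if (flow is cwnd-limited && C.cwnd >= BBR.inflight_longterm)
      BBRResetFullBW()   /* probing was constrained; start over */
    if (BBR.full_bw_now || loss_rate_last_round > BBR.LossThresh)
      return true        /* pipe full or loss too high: go DOWN */
    return false
]]></artwork>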
          </section>
          <section anchor="time-scale-for-bandwidth-probing-">
            <name>Time Scale for Bandwidth Probing</name>
            <t>Choosing the time scale for probing bandwidth is tied to the question of
how to coexist with legacy Reno/CUBIC flows, since probing for bandwidth
runs a significant risk of causing packet loss, and causing packet loss can
significantly limit the throughput of such legacy Reno/CUBIC flows.</t>
            <section anchor="bandwidth-probing-and-coexistence-with-renocubic">
              <name>Bandwidth Probing and Coexistence with Reno/CUBIC</name>
              <t>BBR has an explicit strategy for coexistence with Reno/CUBIC: to try to behave
in a manner such that Reno/CUBIC flows coexisting with BBR can continue to
work well in the primary contexts where they do today:</t>
              <ul spacing="normal">
                <li>
                  <t>Intra-datacenter/LAN traffic: the goal is to allow Reno/CUBIC to be able
to perform well in 100M through 40G enterprise and datacenter Ethernet:  </t>
                  <ul spacing="normal">
                    <li>
                      <t>BDP = 40 Gbps * 20 us / (1514 bytes) ~= 66 packets</t>
                    </li>
                  </ul>
                </li>
                <li>
                  <t>Public Internet last mile traffic: the goal is to allow Reno/CUBIC to be
able to support up to 25Mbps (for 4K Video) at an RTT of 30ms, typical
parameters for common CDNs for large video services:  </t>
                  <ul spacing="normal">
                    <li>
                      <t>BDP = 25Mbps * 30 ms / (1514 bytes) ~= 62 packets</t>
                    </li>
                  </ul>
                </li>
              </ul>
              <t>The challenge in meeting these goals is that Reno/CUBIC need long periods
of no loss to utilize large BDPs. The good news is that in the environments
where Reno/CUBIC work well today (mentioned above), the BDPs are small, roughly
~100 packets or less.</t>
            </section>
            <section anchor="a-dual-time-scale-approach-for-coexistence">
              <name>A Dual-Time-Scale Approach for Coexistence</name>
              <t>The BBR strategy has several aspects:</t>
              <ol spacing="normal" type="1"><li>
                  <t>The highest priority is to estimate the bandwidth available to the BBR flow
  in question.</t>
                </li>
                <li>
                  <t>Secondarily, a given BBR flow adapts (within bounds) the frequency at which
  it probes bandwidth and knowingly risks packet loss, to allow Reno/CUBIC
  to reach a bandwidth at least as high as that given BBR flow.</t>
                </li>
              </ol>
              <t>To adapt the frequency of bandwidth probing, BBR considers two time scales:
a BBR-native time scale, and a bounded Reno-conscious time scale:</t>
              <ul spacing="normal">
                <li>
                  <t>T_bbr: BBR-native time-scale  </t>
                  <ul spacing="normal">
                    <li>
                      <t>T_bbr = uniformly randomly distributed between 2 and 3 secs</t>
                    </li>
                  </ul>
                </li>
                <li>
                  <t>T_reno: Reno-coexistence time scale  </t>
                  <ul spacing="normal">
                    <li>
                      <t>T_reno_bound = pick_randomly_either({62, 63})</t>
                    </li>
                    <li>
                      <t>reno_bdp = min(BBR.bdp, C.cwnd)</t>
                    </li>
                    <li>
                      <t>T_reno = min(reno_bdp, T_reno_bound) round trips</t>
                    </li>
                  </ul>
                </li>
                <li>
                  <t>T_probe: The time between bandwidth probe UP phases:  </t>
                  <ul spacing="normal">
                    <li>
                      <t>T_probe = min(T_bbr, T_reno)</t>
                    </li>
                  </ul>
                </li>
              </ul>
              <t>This dual-time-scale approach is similar to that used by CUBIC, which has
a CUBIC-native time scale given by a cubic curve, and a "Reno emulation"
module that estimates the C.cwnd that would give the flow Reno-equivalent
throughput. At any given moment, CUBIC uses the C.cwnd implied by the more
aggressive strategy.</t>
              <t>We randomize both the T_bbr and T_reno parameters, for better mixing and
fairness convergence.</t>
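              <t>As an illustrative, non-normative worked example of the T_probe
computation, consider two flows, assuming congestion has not cut C.cwnd (so
reno_bdp = BBR.bdp) and taking T_reno_bound = 62; the second flow's BDP is
hypothetical:</t>
              <artwork><![CDATA[
  /* Last-mile flow: BDP ~= 62 packets, RTT = 30 ms: */
  T_reno  = min(62, 62) round trips = 62 * 30 ms ~= 1.9 secs
  T_probe = min(T_bbr, T_reno)
          = min(2..3 secs, 1.9 secs) ~= 1.9 secs

  /* Hypothetical high-BDP flow: BDP ~= 1000 packets,
   * RTT = 100 ms: */
  T_reno  = min(1000, 62) round trips = 62 * 100 ms = 6.2 secs
  T_probe = min(T_bbr, T_reno)
          = min(2..3 secs, 6.2 secs) = 2..3 secs
]]></artwork>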
            </section>
            <section anchor="design-considerations-for-choosing-constant-parameters">
              <name>Design Considerations for Choosing Constant Parameters</name>
              <t>We design the wall-clock bounds of the BBR-native inter-bandwidth-probe
time, T_bbr, to be:</t>
              <ul spacing="normal">
                <li>
                  <t>Higher than 2 sec, to avoid causing loss for long enough to allow a
Reno flow with RTT=30ms to achieve 25Mbps (4K video) throughput. For this
workload, given the Reno sawtooth that raises C.cwnd from roughly BDP to 2*BDP
at one C.SMSS per round trip, the inter-bandwidth-probe time must be at least:
BDP * RTT = 25Mbps * .030 sec / (1514 bytes) * 0.030 sec ~= 1.9 secs</t>
                </li>
                <li>
                  <t>Lower than 3 sec, to ensure flows can start probing within a reasonable
amount of time to discover unutilized bandwidth on human-scale interactive
time scales (e.g., perhaps traffic from a competing web page download is now
complete).</t>
                </li>
              </ul>
              <t>The maximum round-trip bounds of the Reno-coexistence time scale, T_reno,
are chosen to be 62-63 with the following considerations in mind:</t>
              <ul spacing="normal">
                <li>
                  <t>Choosing a value smaller than roughly 60 would imply that when BBR flows
coexisted with Reno/CUBIC flows on public Internet broadband links, the
Reno/CUBIC flows would not be able to achieve enough bandwidth to sustain 4K
video.</t>
                </li>
                <li>
                  <t>Choosing a value that is too large would prevent BBR from reaching its goal
of tolerating 1% loss per round trip.
Given that the steady-state (non-bandwidth-probing) BBR response to
a non-application-limited round trip with X% packet loss is to
reduce the sending rate by X% (see "Updating the Model Upon Packet
Loss" in <xref target="updating-the-model-upon-packet-loss"/>), this means that the
BBR sending rate after N rounds of packet loss at a rate loss_rate
is reduced to (1 - loss_rate)^N.
A simplified model predicts that for a flow that encounters 1% loss
in N=137 round trips of ProbeBW_CRUISE, and then doubles its C.cwnd
(back to BBR.inflight_longterm) in ProbeBW_REFILL and ProbeBW_UP, we
expect that it will be able to restore and reprobe its original
sending rate, since: (1 - loss_rate)^N * 2^2 = (1 - .01)^137 * 2^2
~= 1.01.
That is, we expect the flow will be able to fully respond to packet
loss signals in ProbeBW_CRUISE while also fully re-measuring its
maximum achievable throughput in ProbeBW_UP.
However, with a larger number of round trips of ProbeBW_CRUISE, the
flow would not be able to sustain its achievable throughput.</t>
                </li>
              </ul>
              <t>The resulting behavior is that for BBR flows with small BDPs, the bandwidth
probing will be on roughly the same time scale as Reno/CUBIC; flows with
large BDPs will intentionally probe more rapidly/frequently than Reno/CUBIC
would (roughly every 62 round trips for low-RTT flows, or 2-3 secs for
high-RTT flows).</t>
              <t>The considerations above for timing bandwidth probing can be implemented
as follows:</t>
              <artwork><![CDATA[
  /* Is it time to transition from DOWN or CRUISE to REFILL? */
  BBRIsTimeToProbeBW():
    if (BBRHasElapsedInPhase(BBR.bw_probe_wait) ||
        BBRIsRenoCoexistenceProbeTime())
      BBRStartProbeBW_REFILL()
      return true
    return false

  /* Randomized decision about how long to wait until
   * probing for bandwidth, using round count and wall clock.
   */
  BBRPickProbeWait():
    /* Decide random round-trip bound for wait: */
    BBR.rounds_since_bw_probe =
      random_int_between(0, 1); /* 0 or 1 */
    /* Decide the random wall clock bound for wait: */
    BBR.bw_probe_wait =
      2 + random_float_between(0.0, 1.0) /* 0..1 sec */

  BBRIsRenoCoexistenceProbeTime():
    reno_rounds = BBRTargetInflight()
    rounds = min(reno_rounds, 63)
    return BBR.rounds_since_bw_probe >= rounds

  /* How much data do we want in flight?
   * Our estimated BDP, unless congestion cut C.cwnd. */
  BBRTargetInflight():
    return min(BBR.bdp, C.cwnd)
]]></artwork>
            </section>
          </section>
          <section anchor="probebw-algorithm-details">
            <name>ProbeBW Algorithm Details</name>
            <t>BBR's ProbeBW algorithm operates as follows.</t>
            <t>Upon entering ProbeBW, BBR executes:</t>
            <artwork><![CDATA[
  BBREnterProbeBW():
    BBR.cwnd_gain = BBR.DefaultCwndGain
    BBRStartProbeBW_DOWN()
]]></artwork>
            <t>The core logic for entering each state:</t>
            <artwork><![CDATA[
  BBRStartProbeBW_DOWN():
    BBRResetCongestionSignals()
    BBR.probe_up_cnt = Infinity /* not growing BBR.inflight_longterm */
    BBRPickProbeWait()
    BBR.cycle_stamp = Now()  /* start wall clock */
    BBR.ack_phase  = ACKS_PROBE_STOPPING
    BBRStartRound()
    BBR.state = ProbeBW_DOWN

  BBRStartProbeBW_CRUISE():
    BBR.state = ProbeBW_CRUISE

  BBRStartProbeBW_REFILL():
    BBRResetShortTermModel()
    BBR.bw_probe_up_rounds = 0
    BBR.bw_probe_up_acks = 0
    BBR.ack_phase = ACKS_REFILLING
    BBRStartRound()
    BBR.state = ProbeBW_REFILL

  BBRStartProbeBW_UP():
    BBR.ack_phase = ACKS_PROBE_STARTING
    BBRStartRound()
    BBRResetFullBW()
    BBR.full_bw = RS.delivery_rate
    BBR.state = ProbeBW_UP
    BBRRaiseInflightLongtermSlope()
]]></artwork>
            <t>BBR executes the following BBRUpdateProbeBWCyclePhase() logic on each ACK
that acknowledges new data, to advance the ProbeBW state machine:</t>
            <artwork><![CDATA[
  /* The core state machine logic for ProbeBW: */
  BBRUpdateProbeBWCyclePhase():
    if (!BBR.full_bw_reached)
      return  /* only handling steady-state behavior here */
    BBRAdaptLongTermModel()
    if (!IsInAProbeBWState())
      return /* only handling ProbeBW states here: */

    switch (state)

    ProbeBW_DOWN:
      if (BBRIsTimeToProbeBW())
        return /* already decided state transition */
      if (BBRIsTimeToCruise())
        BBRStartProbeBW_CRUISE()

    ProbeBW_CRUISE:
      if (BBRIsTimeToProbeBW())
        return /* already decided state transition */

    ProbeBW_REFILL:
      /* After one round of REFILL, start UP */
      if (BBR.round_start)
        BBR.bw_probe_samples = 1
        BBRStartProbeBW_UP()

    ProbeBW_UP:
      if (BBRIsTimeToGoDown())
        BBRStartProbeBW_DOWN()
]]></artwork>
            <t>The ancillary logic to implement the ProbeBW state machine:</t>
            <artwork><![CDATA[
  IsInAProbeBWState():
    state = BBR.state
    return (state == ProbeBW_DOWN or
            state == ProbeBW_CRUISE or
            state == ProbeBW_REFILL or
            state == ProbeBW_UP)

  /* Time to transition from DOWN to CRUISE? */
  BBRIsTimeToCruise():
    if (C.inflight > BBRInflightWithHeadroom())
      return false /* not enough headroom */
    if (C.inflight > BBRInflight(BBR.max_bw, 1.0))
      return false /* C.inflight > estimated BDP */
    return true

  /* Time to transition from UP to DOWN? */
  BBRIsTimeToGoDown():
    if (C.is_cwnd_limited && C.cwnd >= BBR.inflight_longterm)
      BBRResetFullBW()   /* bw is limited by BBR.inflight_longterm */
      BBR.full_bw = RS.delivery_rate
    else if (BBR.full_bw_now)
      return true  /* we estimate we've fully used path bw */
    return false

  BBRIsProbingBW():
    return (BBR.state == Startup or
            BBR.state == ProbeBW_REFILL or
            BBR.state == ProbeBW_UP)

  BBRHasElapsedInPhase(interval):
    return Now() > BBR.cycle_stamp + interval

  /* Return a volume of data that tries to leave free
   * headroom in the bottleneck buffer or link for
   * other flows, for fairness convergence and lower
   * RTTs and loss */
  BBRInflightWithHeadroom():
    if (BBR.inflight_longterm == Infinity)
      return Infinity
    headroom = max(1*SMSS, BBR.Headroom * BBR.inflight_longterm)
    return max(BBR.inflight_longterm - headroom,
               BBR.MinPipeCwnd)

  /* Raise BBR.inflight_longterm slope if appropriate. */
  BBRRaiseInflightLongtermSlope():
    growth_this_round = 1*SMSS << BBR.bw_probe_up_rounds
    BBR.bw_probe_up_rounds = min(BBR.bw_probe_up_rounds + 1, 30)
    BBR.probe_up_cnt = max(C.cwnd / growth_this_round, 1)

  /* Increase BBR.inflight_longterm if appropriate. */
  BBRProbeInflightLongtermUpward():
    if (!C.is_cwnd_limited || C.cwnd < BBR.inflight_longterm)
      return  /* not fully using BBR.inflight_longterm, so don't grow it */
    BBR.bw_probe_up_acks += RS.newly_acked
    if (BBR.bw_probe_up_acks >= BBR.probe_up_cnt)
      delta = BBR.bw_probe_up_acks / BBR.probe_up_cnt
      BBR.bw_probe_up_acks -= delta * BBR.probe_up_cnt
      BBR.inflight_longterm += delta
    if (BBR.round_start)
      BBRRaiseInflightLongtermSlope()

  /* Track ACK state and update BBR.max_bw window and
   * BBR.inflight_longterm. */
  BBRAdaptLongTermModel():
    if (BBR.ack_phase == ACKS_PROBE_STARTING && BBR.round_start)
      /* starting to get bw probing samples */
      BBR.ack_phase = ACKS_PROBE_FEEDBACK
    if (BBR.ack_phase == ACKS_PROBE_STOPPING && BBR.round_start)
      /* end of samples from bw probing phase */
      if (IsInAProbeBWState() && !RS.is_app_limited)
        BBRAdvanceMaxBwFilter()

    if (!IsInflightTooHigh())
      /* Loss rate is safe. Adjust upper bounds upward. */
      if (BBR.inflight_longterm == Infinity)
        return /* no upper bounds to raise */
      if (RS.tx_in_flight > BBR.inflight_longterm)
        BBR.inflight_longterm = RS.tx_in_flight
      if (BBR.state == ProbeBW_UP)
        BBRProbeInflightLongtermUpward()
]]></artwork>
          </section>
        </section>
        <section anchor="probertt">
          <name>ProbeRTT</name>
          <section anchor="probertt-overview">
            <name>ProbeRTT Overview</name>
            <t>To help probe for BBR.min_rtt, on an as-needed basis BBR flows enter the
ProbeRTT state to try to cooperate to periodically drain the bottleneck queue,
and thus improve their BBR.min_rtt estimate of the unloaded two-way propagation
delay.</t>
            <t>A critical point is that before BBR raises its BBR.min_rtt
estimate (which would in turn raise its maximum permissible C.cwnd), it first
enters ProbeRTT to try to make a concerted and coordinated effort to drain
the bottleneck queue and make a robust BBR.min_rtt measurement. This allows the
BBR.min_rtt estimates of ensembles of BBR flows to converge, avoiding feedback
loops of ever-increasing queues and RTT samples.</t>
            <t>The ProbeRTT state works in concert with BBR.min_rtt estimation. Up to once
every ProbeRTTInterval = 5 seconds, the flow enters ProbeRTT, decelerating
by setting its cwnd_gain to BBR.ProbeRTTCwndGain = 0.5 to reduce
C.inflight to half of its estimated BDP, to try to measure the unloaded
two-way propagation delay.</t>
            <t>There are two main motivations for making the MinRTTFilterLen roughly twice
the ProbeRTTInterval. First, this ensures that during a ProbeRTT episode
the flow will "remember" the BBR.min_rtt value it measured during the previous
ProbeRTT episode, providing a robust BDP estimate for the C.cwnd = 0.5*BDP
calculation, increasing the likelihood of fully draining the bottleneck
queue. Second, this allows the flow's BBR.min_rtt filter window to generally
include RTT samples from two ProbeRTT episodes, providing a more robust
estimate.</t>
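            <t>As an illustrative, non-normative timeline, assuming
ProbeRTTInterval = 5 secs and a MinRTTFilterLen of 10 secs, with consecutive
ProbeRTT episodes A and B:</t>
            <artwork><![CDATA[
  t=0s : ProbeRTT episode A yields min_rtt sample A
  t=5s : ProbeRTT episode B begins; the filter window
         [t-10s, t] still contains sample A, so the
         C.cwnd = 0.5*BDP calculation rests on a robust
         BBR.min_rtt while sample B is being measured
  t=10s: the filter window covers samples from both
         episodes A and B
]]></artwork>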
            <t>The algorithm for ProbeRTT is as follows:</t>
            <t>Entry conditions: In any state other than ProbeRTT itself, if the
BBR.probe_rtt_min_delay estimate has not been updated (i.e., by getting a
lower RTT measurement) for more than ProbeRTTInterval = 5 seconds, then BBR
enters ProbeRTT and reduces the BBR.cwnd_gain to BBR.ProbeRTTCwndGain = 0.5.</t>
            <t>Exit conditions: After maintaining C.inflight at
BBR.ProbeRTTCwndGain*BBR.bdp for at least BBR.ProbeRTTDuration (200 ms) and at
least one packet-timed round trip, BBR leaves ProbeRTT and transitions to
ProbeBW if it estimates the pipe was filled already, or Startup otherwise.</t>
          </section>
          <section anchor="probertt-design-rationale">
            <name>ProbeRTT Design Rationale</name>
            <t>BBR is designed to have ProbeRTT sacrifice no more than roughly 2% of a flow's
available bandwidth. It is also designed to spend the vast majority of its
time (at least roughly 96 percent) in ProbeBW and the rest in ProbeRTT, based
on a set of tradeoffs. ProbeRTT lasts long enough (at least BBR.ProbeRTTDuration
= 200 ms) to allow diverse flows (e.g., flows with different RTTs or lower
rates and thus longer inter-packet gaps) to have overlapping ProbeRTT states,
while still being short enough to bound the throughput penalty of ProbeRTT's
cwnd capping to roughly 2%, with the average throughput targeted at:</t>
            <artwork><![CDATA[
  throughput = (200ms*0.5*BBR.bw + (5s - 200ms)*BBR.bw) / 5s
             = (.1s + 4.8s)/5s * BBR.bw = 0.98 * BBR.bw
]]></artwork>
            <t>As discussed above, BBR's BBR.min_rtt filter window, MinRTTFilterLen, and the
time interval between ProbeRTT states, ProbeRTTInterval, work in concert.
BBR uses a MinRTTFilterLen equal to or longer than ProbeRTTInterval to allow
the filter window to include at least one ProbeRTT.</t>
            <t>To allow coordination with other BBR flows, each BBR flow MUST use the
standard ProbeRTTInterval of 5 secs.</t>
            <t>A ProbeRTTInterval of 5 secs is short enough to allow quick convergence if
traffic levels or routes change, but long enough so that interactive
applications (e.g., Web, remote procedure calls, video chunks) often have
natural silences or low-rate periods within the window where the flow's rate
is low enough for long enough to drain its queue in the bottleneck. Then the
BBR.probe_rtt_min_delay filter opportunistically picks up these measurements,
and the BBR.probe_rtt_min_delay estimate refreshes without requiring
ProbeRTT. This way, flows typically need only pay the 2 percent throughput
penalty if there are multiple bulk flows busy sending over the entire
ProbeRTTInterval window.</t>
            <t>As an optimization, when restarting from idle and finding that the
BBR.probe_rtt_min_delay has expired, BBR does not enter ProbeRTT; the idleness
is deemed a sufficient attempt to coordinate to drain the queue.</t>
            <t>The frequency of triggering ProbeRTT involves a tradeoff between the speed of
convergence and the throughput penalty of applying a cwnd cap during ProbeRTT.
The interval between ProbeRTTs is a subject of further experimentation.
A longer interval between ProbeRTTs would reduce the throughput penalty for
bulk flows or flows on lower-BDP links that are less likely to have silences
or low-rate periods, at the cost of slower convergence. Furthermore, some
types of links can switch between paths of significantly different base
RTT (e.g., LEO satellite or cellular handoff). If these path changes can be
predicted or detected, initiating a ProbeRTT immediately could conceivably
speed up convergence to an accurate BBR.min_rtt, especially when the base
RTT has increased.</t>
          </section>
          <section anchor="probertt-logic">
            <name>ProbeRTT Logic</name>
            <t>On every ACK BBR executes BBRUpdateMinRTT() to update its ProbeRTT scheduling
state (BBR.probe_rtt_min_delay and BBR.probe_rtt_min_stamp) and its BBR.min_rtt
estimate:</t>
            <artwork><![CDATA[
  BBRUpdateMinRTT():
    BBR.probe_rtt_expired =
      Now() > BBR.probe_rtt_min_stamp + ProbeRTTInterval
    if (RS.rtt >= 0 &&
        (RS.rtt < BBR.probe_rtt_min_delay ||
         BBR.probe_rtt_expired))
       BBR.probe_rtt_min_delay = RS.rtt
       BBR.probe_rtt_min_stamp = Now()

    min_rtt_expired =
      Now() > BBR.min_rtt_stamp + MinRTTFilterLen
    if (BBR.probe_rtt_min_delay < BBR.min_rtt ||
        min_rtt_expired)
      BBR.min_rtt       = BBR.probe_rtt_min_delay
      BBR.min_rtt_stamp = BBR.probe_rtt_min_stamp
]]></artwork>
            <t>Here BBR.probe_rtt_expired is a boolean recording whether the
BBR.probe_rtt_min_delay has expired and is due for a refresh, via either
an application idle period or a transition into ProbeRTT state.</t>
            <t>On every ACK BBR executes BBRCheckProbeRTT() to handle the steps related
to the ProbeRTT state as follows:</t>
            <artwork><![CDATA[
  BBRCheckProbeRTT():
    if (BBR.state != ProbeRTT &&
        BBR.probe_rtt_expired &&
        !BBR.idle_restart)
      BBREnterProbeRTT()
      BBRSaveCwnd()
      BBR.probe_rtt_done_stamp = 0
      BBR.ack_phase = ACKS_PROBE_STOPPING
      BBRStartRound()
    if (BBR.state == ProbeRTT)
      BBRHandleProbeRTT()
    if (RS.delivered > 0)
      BBR.idle_restart = false

  BBREnterProbeRTT():
    BBR.state = ProbeRTT
    BBR.pacing_gain = 1
    BBR.cwnd_gain = BBRProbeRTTCwndGain  /* 0.5 */

  BBRHandleProbeRTT():
    /* Ignore low rate samples during ProbeRTT: */
    MarkConnectionAppLimited()
    if (BBR.probe_rtt_done_stamp == 0 &&
        C.inflight <= BBRProbeRTTCwnd())
      /* Wait for at least ProbeRTTDuration to elapse: */
      BBR.probe_rtt_done_stamp =
        Now() + ProbeRTTDuration
      /* Wait for at least one round to elapse: */
      BBR.probe_rtt_round_done = false
      BBRStartRound()
    else if (BBR.probe_rtt_done_stamp != 0)
      if (BBR.round_start)
        BBR.probe_rtt_round_done = true
      if (BBR.probe_rtt_round_done)
        BBRCheckProbeRTTDone()

  BBRCheckProbeRTTDone():
    if (BBR.probe_rtt_done_stamp != 0 &&
        Now() > BBR.probe_rtt_done_stamp)
      /* schedule next ProbeRTT: */
      BBR.probe_rtt_min_stamp = Now()
      BBRRestoreCwnd()
      BBRExitProbeRTT()
]]></artwork>
          </section>
          <section anchor="exiting-probertt">
            <name>Exiting ProbeRTT</name>
            <t>When exiting ProbeRTT, BBR transitions to ProbeBW if it estimates the pipe
was filled already, or Startup otherwise.</t>
            <t>When transitioning out of ProbeRTT, BBR calls BBRResetShortTermModel() to reset
the short-term model, since any congestion encountered in ProbeRTT may have pulled
it far below the capacity of the path.</t>
            <t>The algorithm is cautious in timing the next bandwidth probe: raising
C.inflight after ProbeRTT may cause loss, so it resets the bandwidth-probing
clock by starting the cycle at ProbeBW_DOWN. Then, as an optimization, since
the connection is exiting ProbeRTT and thus C.inflight is already below the
estimated BDP, the connection can proceed immediately to ProbeBW_CRUISE.</t>
            <t>To summarize, the logic for exiting ProbeRTT is as follows:</t>
            <artwork><![CDATA[
  BBRExitProbeRTT():
    BBRResetShortTermModel()
    if (BBR.full_bw_reached)
      BBRStartProbeBW_DOWN()
      BBRStartProbeBW_CRUISE()
    else
      BBREnterStartup()
]]></artwork>
          </section>
        </section>
      </section>
      <section anchor="restarting-from-idle">
        <name>Restarting From Idle</name>
        <section anchor="actions-when-restarting-from-idle">
          <name>Actions when Restarting from Idle</name>
          <t>When restarting from idle in ProbeBW states, BBR leaves C.cwnd as-is and
paces packets at exactly BBR.bw, aiming to return as quickly as possible
to its target operating point of rate balance and a full pipe. Specifically, if
the flow's BBR.state is ProbeBW, and the flow is application-limited, and there
are no packets in flight currently, then before the flow sends one or more
packets BBR sets C.pacing_rate to exactly BBR.bw.</t>
          <t>Also, when restarting from idle BBR checks to see if the connection is in
ProbeRTT and has met the exit conditions for ProbeRTT. If a connection goes
idle during ProbeRTT then often it will have met those exit conditions by
the time it restarts, so that the connection can restore C.cwnd to its full
value before it starts transmitting a new flight of data.</t>
          <t>More precisely, the BBR algorithm takes the following steps in
BBRHandleRestartFromIdle() before sending a packet for a flow:</t>
          <artwork><![CDATA[
  BBRHandleRestartFromIdle():
    if (C.inflight == 0 && C.app_limited)
      BBR.idle_restart = true
      BBR.extra_acked_interval_start = Now()
      if (IsInAProbeBWState())
        BBRSetPacingRateWithGain(1)
      else if (BBR.state == ProbeRTT)
        BBRCheckProbeRTTDone()
]]></artwork>
        </section>
        <section anchor="previous-idle-restart">
          <name>Comparison with Previous Approaches</name>
          <t>The "Restarting Idle Connections" section of <xref target="RFC5681"/> suggests restarting
from idle by slow-starting from the initial window. However, this approach was
assuming a congestion control algorithm that had no estimate of the bottleneck
bandwidth and no pacing, and thus resorted to relying on slow-starting driven
by an ACK clock. The long (log_2(BDP)*RTT) delays required to reach full
utilization with that "slow start after idle" approach caused many large
deployments to disable this mechanism, resulting in a "BDP-scale line-rate
burst" approach instead. Instead of these two approaches, BBR restarts by
pacing at BBR.bw, typically achieving approximate rate balance and a full pipe
after only one BBR.min_rtt has elapsed.</t>
        </section>
      </section>
      <section anchor="updating-network-path-model-parameters">
        <name>Updating Network Path Model Parameters</name>
        <t>BBR is a model-based congestion control algorithm: it is based on an explicit
model of the network path over which a transport flow travels. The following
is a summary of each parameter, including its meaning and how the algorithm
calculates and uses its value. The parameters fall into three groups:</t>
        <ul spacing="normal">
          <li>
            <t>core state machine parameters</t>
          </li>
          <li>
            <t>parameters to model the appropriate data rate</t>
          </li>
          <li>
            <t>parameters to model the appropriate inflight</t>
          </li>
        </ul>
        <section anchor="bbrroundcount-tracking-packet-timed-round-trips">
          <name>BBR.round_count: Tracking Packet-Timed Round Trips</name>
          <t>Several aspects of BBR depend on counting the progress of "packet-timed"
round trips, which start at the transmission of some packet, and then end
at the acknowledgment of that packet. BBR.round_count is a count of the number
of these "packet-timed" round trips elapsed so far. BBR uses this virtual
BBR.round_count because it is more robust than using wall clock time. In
particular, arbitrary intervals of wall clock time can elapse due to
application idleness, variations in RTTs, or timer delays for retransmission
timeouts, causing wall-clock-timed model parameter estimates to "time out"
or to be "forgotten" too quickly to provide robustness.</t>
          <t>BBR counts packet-timed round trips by recording state about a sentinel packet,
and waiting for an ACK of any data packet that was sent after that sentinel
packet, using the following pseudocode:</t>
          <t>Upon connection initialization:</t>
          <artwork><![CDATA[
  BBRInitRoundCounting():
    BBR.next_round_delivered = 0
    BBR.round_start = false
    BBR.round_count = 0
]]></artwork>
          <t>Upon sending each packet, the rate estimation algorithm in
<xref target="delivery-rate-samples"/> records the amount of data thus far
acknowledged as delivered:</t>
          <artwork><![CDATA[
  P.delivered = C.delivered
]]></artwork>
          <t>Upon receiving an ACK for a given data packet, the rate estimation algorithm
in <xref target="delivery-rate-samples"/> updates the amount of data thus far
acknowledged as delivered:</t>
          <artwork><![CDATA[
  C.delivered += P.size
]]></artwork>
          <t>Upon receiving an ACK for a given data packet, the BBR algorithm first executes
the following logic to see if a round trip has elapsed, and if so, increment
the count of such round trips elapsed:</t>
          <artwork><![CDATA[
  BBRUpdateRound():
    if (packet.delivered >= BBR.next_round_delivered)
      BBRStartRound()
      BBR.round_count++
      BBR.rounds_since_bw_probe++
      BBR.round_start = true
    else
      BBR.round_start = false

  BBRStartRound():
    BBR.next_round_delivered = C.delivered
]]></artwork>
        </section>
        <section anchor="bbrmaxbw-estimated-maximum-bandwidth">
          <name>BBR.max_bw: Estimated Maximum Bandwidth</name>
          <t>BBR.max_bw is BBR's estimate of the maximum bottleneck bandwidth available to
data transmissions for the transport flow. At any time, a transport
connection's data transmissions experience some slowest link or bottleneck. The
bottleneck's delivery rate determines the connection's maximum data-delivery
rate. BBR tries to closely match its sending rate to this bottleneck delivery
rate to help seek "rate balance", where the flow's packet arrival rate at the
bottleneck equals the departure rate. The bottleneck rate varies over the life
of a connection, so BBR continually estimates BBR.max_bw using recent signals.</t>
        </section>
        <section anchor="bbrmaxbw-max-filter">
          <name>BBR.max_bw Max Filter</name>
          <t>Delivery rate samples are often below the typical bottleneck bandwidth
available to the flow, due to "noise" introduced by random variation in
physical transmission processes (e.g. radio link layer noise) or queues
along the network path. To filter these effects BBR uses a max filter: BBR
estimates BBR.max_bw using the windowed maximum of the delivery rate samples
seen by the connection over recent history.</t>
          <t>The BBR.max_bw max filter window covers a time period extending over the
past two ProbeBW cycles. The BBR.max_bw max filter window length is driven
by trade-offs among several considerations:</t>
          <ul spacing="normal">
            <li>
              <t>It is long enough to cover at least one entire ProbeBW cycle (see the
"ProbeBW" section). This ensures that the window contains at least some
delivery rate samples that are the result of data transmitted with a
super-unity pacing_gain (a pacing_gain larger than 1.0). Such super-unity
delivery rate samples are instrumental in revealing the path's underlying
available bandwidth even when there is noise from delivery rate shortfalls
due to aggregation delays, queuing delays from variable cross-traffic, lossy
link layers with uncorrected losses, or short-term buffer exhaustion (e.g.,
brief coincident bursts in a shallow buffer).</t>
            </li>
            <li>
              <t>It aims to be long enough to cover short-term fluctuations in the network's
delivery rate due to the aforementioned sources of noise. In particular, the
delivery rate for radio link layers (e.g., wifi and cellular technologies)
can be highly variable, and the filter window needs to be long enough to
remember "good" delivery rate samples in order to be robust to such
variations.</t>
            </li>
            <li>
              <t>It aims to be short enough to respond in a timely manner to sustained
reductions in the bandwidth available to a flow, whether this is because
other flows are using a larger share of the bottleneck, or the bottleneck
link service rate has reduced due to layer 1 or layer 2 changes, policy
changes, or routing changes. In any of these cases, existing BBR flows
traversing the bottleneck should, in a timely manner, reduce their BBR.max_bw
estimates and thus pacing rate and in-flight data, in order to match the
sending behavior to the new available bandwidth.</t>
            </li>
          </ul>
        </section>
        <section anchor="bbrmaxbw-and-application-limited-delivery-rate-samples">
          <name>BBR.max_bw and Application-limited Delivery Rate Samples</name>
          <t>Transmissions can be application-limited, meaning the transmission rate is
limited by the application rather than the congestion control algorithm.  This
is quite common because of request/response traffic. When there is a
transmission opportunity but no data to send, the delivery rate sampler marks
the corresponding bandwidth sample(s) as application-limited
<xref target="delivery-rate-samples"/>.  The BBR.max_bw estimator carefully decides which
samples to include in the bandwidth model to ensure that BBR.max_bw reflects
network limits, not application limits. By default, the estimator discards
application-limited samples, since by definition they reflect application
limits. However, the estimator does use application-limited samples if the
measured delivery rate happens to be larger than the current BBR.max_bw
estimate, since this indicates the current BBR.max_bw estimate is too low.</t>
        </section>
        <section anchor="updating-the-bbrmaxbw-max-filter">
          <name>Updating the BBR.max_bw Max Filter</name>
          <t>For every ACK that acknowledges some data packets as delivered, BBR invokes
BBRUpdateMaxBw() to update the BBR.max_bw estimator as follows:</t>
          <artwork><![CDATA[
  BBRUpdateMaxBw():
    BBRUpdateRound()
    if (RS.delivery_rate > 0 &&
        (RS.delivery_rate >= BBR.max_bw || !RS.is_app_limited))
        BBR.max_bw = UpdateWindowedMaxFilter(
                      filter=BBR.max_bw_filter,
                      value=RS.delivery_rate,
                      time=BBR.cycle_count,
                      window_length=MaxBwFilterLen)
]]></artwork>
          <t>UpdateWindowedMaxFilter() can be implemented using Kathleen Nichols' algorithm
for tracking the minimum/maximum value of a data stream over some measurement
window. The description of the algorithm and a sample implementation are
available in Linux <xref target="KN_FILTER"/>.</t>
        </section>
        <section anchor="tracking-time-for-the-bbrmaxbw-max-filter">
          <name>Tracking Time for the BBR.max_bw Max Filter</name>
          <t>BBR tracks time for the BBR.max_bw filter window using a virtual
(non-wall-clock) time tracked by counting the cyclical progression through
ProbeBW cycles. Each time through the ProbeBW cycle, one round trip after
exiting ProbeBW_UP (the point at which the flow has its best chance to measure
the highest throughput of the cycle), BBR increments BBR.cycle_count, the
virtual time used by the BBR.max_bw filter window. Note that BBR.cycle_count
only needs to be tracked with a single bit, since the BBR.max_bw filter only
needs to track samples from two time slots: the previous ProbeBW cycle and the
current ProbeBW cycle:</t>
          <artwork><![CDATA[
  BBRAdvanceMaxBwFilter():
    BBR.cycle_count++
]]></artwork>
        </section>
        <section anchor="bbrminrtt-estimated-minimum-round-trip-time">
          <name>BBR.min_rtt: Estimated Minimum Round-Trip Time</name>
          <t>BBR.min_rtt is BBR's estimate of the round-trip propagation delay of the path
over which a transport connection is sending. The path's round-trip propagation
delay determines the minimum amount of time over which the connection must be
willing to sustain transmissions at the BBR.bw rate, and thus the minimum
amount of data needed in flight, for the connection to reach full utilization
(a "Full Pipe"). The round-trip propagation delay can vary over the life of a
connection, so BBR continually estimates BBR.min_rtt using recent round-trip
delay samples.</t>
          <section anchor="round-trip-time-samples-for-estimating-bbrminrtt">
            <name>Round-Trip Time Samples for Estimating BBR.min_rtt</name>
            <t>For every data packet a connection sends, BBR calculates an RTT sample that
measures the time interval from sending a data packet until that packet is
acknowledged.</t>
            <t>The only divergence from RTT estimation for retransmission timeouts is in the
case where a given acknowledgment ACKs more than one data packet. In order to
be conservative and schedule long timeouts to avoid spurious retransmissions,
the maximum among such potential RTT samples is typically used for computing
retransmission timeouts; i.e., C.srtt is typically calculated using the data
packet with the earliest transmission time. By contrast, in order for BBR to
try to reach the minimum amount of data in flight to fill the pipe, BBR uses
the minimum among such potential RTT samples; i.e., BBR calculates the RTT
using the data packet with the latest transmission time.</t>
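            <t>As a sketch, upon an ACK that newly acknowledges several data
packets, the BBR RTT sample can be derived as follows (the names here are
illustrative):</t>
            <artwork><![CDATA[
  /* Sketch: among packets newly ACKed by this ACK, use the one
   * with the latest transmission time to compute the RTT sample. */
  BBRRTTSampleFromACK(newly_acked_packets):
    P = packet in newly_acked_packets with the latest P.sent_time
    rtt_sample = Now() - P.sent_time
    return rtt_sample
]]></artwork>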
          </section>
          <section anchor="bbrminrtt-min-filter">
            <name>BBR.min_rtt Min Filter</name>
            <t>RTT samples tend to be above the round-trip propagation delay of the path,
due to "noise" introduced by random variation in physical transmission processes
(e.g. radio link layer noise), queues along the network path, the receiver's
delayed ACK strategy, ACK aggregation, etc. Thus to filter out these effects
BBR uses a min filter: BBR estimates BBR.min_rtt using the minimum recent
RTT sample seen by the connection over the past BBR.MinRTTFilterLen seconds.
(Many of the same network effects that can decrease delivery rate measurements
can increase RTT samples, which is why BBR's min-filtering approach for RTTs
is the complement of its max-filtering approach for delivery rates.)</t>
            <t>The length of the BBR.min_rtt min filter window is BBR.MinRTTFilterLen = 10 secs.
This is driven by trade-offs among several considerations:</t>
            <ul spacing="normal">
              <li>
                <t>The BBR.MinRTTFilterLen is longer than BBR.ProbeRTTInterval, so that it covers an
entire ProbeRTT cycle (see the "ProbeRTT" section below). This helps ensure
that the window can contain RTT samples that are the result of data
transmitted with C.inflight below the estimated BDP of the flow. Such RTT
samples are important for helping to reveal the path's underlying two-way
propagation delay even when the aforementioned "noise" effects can often
obscure it.</t>
              </li>
              <li>
                <t>The BBR.MinRTTFilterLen aims to be long enough to avoid needing to reduce in-flight
data and throughput often. Measuring two-way propagation delay requires in-flight
data to be at or below the BDP, which risks some amount of underutilization, so BBR
uses a filter window long enough that such underutilization events can be
rare.</t>
              </li>
              <li>
                <t>The BBR.MinRTTFilterLen aims to be long enough that many applications have a
"natural" moment of silence or low utilization that can reduce in-flight data below
the BDP and naturally serve to refresh the BBR.min_rtt, without requiring BBR to
force an artificial reduction in in-flight data. This applies to many popular
applications, including Web, RPC, or chunked audio/video traffic.</t>
              </li>
              <li>
                <t>The BBR.MinRTTFilterLen aims to be short enough to respond in a timely manner to
real increases in the two-way propagation delay of the path, e.g. due to
route changes, which are expected to typically happen on longer time scales.</t>
              </li>
            </ul>
            <t>A BBR implementation MAY use a generic windowed min filter to track BBR.min_rtt.
However, a significant savings in space and improvement in freshness can
be achieved by integrating the BBR.min_rtt estimation into the ProbeRTT state
machine, so this document discusses that approach in the ProbeRTT section.</t>
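            <t>For example, a generic approach might maintain a timestamp of
the current BBR.min_rtt estimate and refresh the estimate when a lower
sample arrives or the filter window expires (a sketch; the
BBR.min_rtt_stamp field name is illustrative):</t>
            <artwork><![CDATA[
  /* Sketch: generic min filter over BBR.MinRTTFilterLen. */
  BBRUpdateMinRTT(rtt_sample):
    if (rtt_sample >= 0 &&
        (rtt_sample <= BBR.min_rtt ||
         Now() > BBR.min_rtt_stamp + BBR.MinRTTFilterLen))
      BBR.min_rtt       = rtt_sample
      BBR.min_rtt_stamp = Now()
]]></artwork>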
          </section>
        </section>
        <section anchor="bbr-offload-budget">
          <name>BBR.offload_budget</name>
          <t>BBR.offload_budget is the estimate of the minimum volume of data necessary
to achieve full throughput when send and/or receive offload is in use.
This varies based on the transport protocol and operating environment.</t>
          <section anchor="tcp-offload-budget">
            <name>TCP Offload Budget</name>
            <t>For TCP, senders commonly use TSO or GSO and receivers use LRO or GRO.</t>
            <t>For TCP, offload_budget can be computed as follows:</t>
            <artwork><![CDATA[
    BBRUpdateOffloadBudget():
      BBR.offload_budget = 3 * C.send_quantum
]]></artwork>
            <t>The factor of 3 is chosen to allow maintaining at least:</t>
            <ul spacing="normal">
              <li>
                <t>1 quantum in the sending host's queuing discipline layer</t>
              </li>
              <li>
                <t>1 quantum being segmented in the sending host TSO/GSO engine</t>
              </li>
              <li>
                <t>1 quantum being reassembled or otherwise remaining unacknowledged due to
the receiver host's LRO/GRO/delayed-ACK engine</t>
              </li>
            </ul>
          </section>
          <section anchor="quic-offload-budget">
            <name>QUIC Offload Budget</name>
            <t>For QUIC, in the simplest case, offload_budget is equal to the send quantum:</t>
            <artwork><![CDATA[
    BBRUpdateOffloadBudget():
      BBR.offload_budget = C.send_quantum
]]></artwork>
            <t>In addition, QUIC senders might have pacing offload available, allowing them to
schedule packets for transmission in the future. In this case, the offload
budget SHOULD be increased to include the amount of data that can be scheduled
for future transmissions by the pacing offload mechanism.</t>
            <t>Furthermore, QUIC receivers might acknowledge packets less often than
<xref section="13.2" sectionFormat="comma" target="RFC9000"/>, such as when using the ACK-FREQUENCY
(<xref target="I-D.draft-ietf-quic-ack-frequency"/>) extension. The offload budget can be
increased by min(Ack-Eliciting Threshold, Requested Max Ack Delay * BBR.max_bw)
to account for delayed acknowledgements.</t>
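            <t>Combining these considerations, a QUIC sender's offload budget
calculation might be sketched as follows (pacing_horizon,
ack_eliciting_threshold, and requested_max_ack_delay are illustrative
names for values obtained from the pacing offload mechanism and the
ACK-FREQUENCY extension):</t>
            <artwork><![CDATA[
  /* Sketch: QUIC offload budget with pacing offload and
   * reduced ACK frequency. */
  BBRUpdateOffloadBudget():
    BBR.offload_budget = C.send_quantum
    /* data schedulable for future sends by pacing offload: */
    BBR.offload_budget += pacing_horizon * C.pacing_rate
    /* data potentially unacknowledged due to delayed ACKs: */
    BBR.offload_budget += min(ack_eliciting_threshold,
                              requested_max_ack_delay * BBR.max_bw)
]]></artwork>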
          </section>
        </section>
        <section anchor="bbrextraacked">
          <name>BBR.extra_acked</name>
          <t>BBR.extra_acked is a volume of data: BBR's estimate of the recent degree
of aggregation in the network path. For each ACK, the algorithm computes
a sample of the estimated extra ACKed data beyond the amount of data that
the sender expected to be ACKed over the timescale of a round-trip, given
the BBR.bw. Then it computes BBR.extra_acked as the windowed maximum sample
over the last BBRExtraAckedFilterLen=10 packet-timed round-trips. If the
ACK rate falls below the expected bandwidth, then the algorithm estimates
an aggregation episode has terminated, and resets the sampling interval to
start from the current time.</t>
          <t>The BBR.extra_acked thus reflects the recently-measured magnitude of data
and ACK aggregation effects such as batching and slotting at shared-medium
L2 hops (wifi, cellular, DOCSIS), as well as end-host offload mechanisms
(TSO, GSO, LRO, GRO), and end host or middlebox ACK decimation/thinning.</t>
          <t>BBR augments C.cwnd by BBR.extra_acked to allow the connection to keep
sending during inter-ACK silences, to an extent that matches the recently
measured degree of aggregation.</t>
          <t>More precisely, this is computed as:</t>
          <artwork><![CDATA[
  BBRUpdateACKAggregation():
    /* Find excess ACKed beyond expected amount over this interval */
    interval = (Now() - BBR.extra_acked_interval_start)
    expected_delivered = BBR.bw * interval
    /* Reset interval if ACK rate is below expected rate: */
    if (BBR.extra_acked_delivered <= expected_delivered)
        BBR.extra_acked_delivered = 0
        BBR.extra_acked_interval_start = Now()
        expected_delivered = 0
    BBR.extra_acked_delivered += RS.newly_acked
    extra = BBR.extra_acked_delivered - expected_delivered
    extra = min(extra, C.cwnd)
    if (BBR.full_bw_reached)
      filter_len = BBRExtraAckedFilterLen
    else
      filter_len = 1  /* in Startup, just remember 1 round */
    BBR.extra_acked =
      UpdateWindowedMaxFilter(
        filter=BBR.extra_acked_filter,
        value=extra,
        time=BBR.round_count,
        window_length=filter_len)
]]></artwork>
        </section>
        <section anchor="updating-the-model-upon-packet-loss">
          <name>Updating the Model Upon Packet Loss</name>
          <t>In every state, BBR responds to (filtered) congestion signals, including
loss. The response to those congestion signals depends on the flow's current
state, since the information that the flow can infer depends on what the
flow was doing when the flow experienced the signal.</t>
          <section anchor="probing-for-bandwidth-in-startup">
            <name>Probing for Bandwidth In Startup</name>
            <t>In Startup, if the congestion signals meet the Startup exit criteria, the flow
exits Startup and enters Drain (see <xref target="exiting-startup-based-on-packet-loss"/>).</t>
          </section>
          <section anchor="probing-for-bandwidth-in-probebw">
            <name>Probing for Bandwidth In ProbeBW</name>
            <t>BBR searches for the maximum volume of data that can be sensibly placed
in flight in the network. A key precondition is that the flow is actually
trying robustly to find that operating point. To implement this, when a flow is
in ProbeBW, and an ACK covers data sent in one of the accelerating phases
(REFILL or UP), and the ACK indicates that the loss rate over the past round
trip exceeds the queue pressure objective, and the flow is not application
limited, and has not yet responded to congestion signals from the most recent
REFILL or UP phase, then the flow estimates that the volume of data it allowed
in flight exceeded what matches the current delivery process on the path, and
reduces BBR.inflight_longterm:</t>
            <artwork><![CDATA[
  /* Do loss signals suggest C.inflight was too high? */
  IsInflightTooHigh():
    return ((RS.lost > RS.tx_in_flight * BBR.LossThresh) ||
            (RS.lost > 0 && !C.has_selective_acks))

  BBRHandleInflightTooHigh():
    BBR.bw_probe_samples = 0  /* only react once per bw probe */
    if (!RS.is_app_limited)
      BBR.inflight_longterm = max(RS.tx_in_flight,
                            BBRTargetInflight() * BBR.Beta)
    if (BBR.state == ProbeBW_UP)
      BBRStartProbeBW_DOWN()
]]></artwork>
            <t>Here RS.tx_in_flight is the C.inflight value
when the most recently ACKed packet was sent. And the BBR.Beta (0.7x) bound
is to try to ensure that BBR does not react more dramatically than CUBIC's
0.7x multiplicative decrease factor.</t>
            <t>Some loss detection algorithms, including RACK <xref target="RFC8985"/> or QUIC loss
detection <xref target="RFC9002"/>, delay loss marking to wait for potential
reordering, so packets can be declared lost long after the loss itself
happened. In such cases, the tx_in_flight for the delivered sequence range
that allowed the loss to be detected may be considerably smaller than the
tx_in_flight of the lost packet itself, and using the former rather than
the latter can cause BBR.inflight_longterm to be
significantly underestimated. To avoid such issues, BBR processes each loss
detection event to more precisely estimate C.inflight at
which loss rates cross BBR.LossThresh, noting that this may have happened
mid-way through some TSO/GSO offload burst (represented as a "packet" in
the pseudocode in this document). To estimate this threshold volume of data,
we can solve for "lost_prefix" in the following way, where inflight_prev
represents C.inflight preceding this packet, and lost_prev
represents the data lost among that previous in-flight data.</t>
            <t>First we start with:</t>
            <artwork><![CDATA[
  lost / C.inflight >= BBR.LossThresh
]]></artwork>
            <t>Expanding this, we get:</t>
            <artwork><![CDATA[
     lost_prev + lost_prefix
  -----------------------------  >=  BBR.LossThresh
   inflight_prev + lost_prefix
]]></artwork>
            <t>Solving for lost_prefix, we arrive at:</t>
            <artwork><![CDATA[
  lost_prefix >= (BBR.LossThresh * inflight_prev - lost_prev) /
                    (1 - BBR.LossThresh)
]]></artwork>
            <t>In pseudocode:</t>
            <artwork><![CDATA[
  BBRNoteLoss():
    if (!BBR.loss_in_round)   /* first loss in this round trip? */
      BBR.loss_round_delivered = C.delivered
      BBRSaveStateUponLoss()
    BBR.loss_in_round = 1

  BBRHandleLostPacket(packet):
    BBRNoteLoss()
    if (!BBR.bw_probe_samples)
      return /* not a packet sent while probing bandwidth */
    RS.tx_in_flight = P.tx_in_flight /* C.inflight at transmit */
    RS.lost = C.lost - P.lost /* data lost since transmit */
    RS.is_app_limited = P.is_app_limited
    if (IsInflightTooHigh())
      RS.tx_in_flight = BBRInflightAtLoss(rs, packet)
      BBRHandleInflightTooHigh()

  /* At what prefix of packet did losses exceed BBR.LossThresh? */
  BBRInflightAtLoss(RS, packet):
    size = packet.size
    /* What was in flight before this packet? */
    inflight_prev = RS.tx_in_flight - size
    /* What was lost before this packet? */
    lost_prev = RS.lost - size
    lost_prefix = (BBR.LossThresh * inflight_prev - lost_prev) /
                  (1 - BBR.LossThresh)
    /* At what C.inflight value did losses cross BBR.LossThresh? */
    inflight_at_loss = inflight_prev + lost_prefix
    return inflight_at_loss
]]></artwork>
          </section>
          <section anchor="when-not-probing-for-bandwidth">
            <name>When not Probing for Bandwidth</name>
            <t>When not explicitly accelerating to probe for bandwidth (Drain, ProbeRTT,
ProbeBW_DOWN, ProbeBW_CRUISE), BBR responds to loss by slowing down to some
extent. This is because loss suggests that the available bandwidth and safe
C.inflight may have decreased recently, and the flow needs
to adapt, slowing down toward the latest delivery process. BBR flows implement
this response by reducing the short-term model parameters, BBR.bw_shortterm and
BBR.inflight_shortterm.</t>
            <t>When encountering packet loss when the flow is not probing for bandwidth,
the strategy is to gradually adapt to the current measured delivery process
(the rate and volume of data that is delivered through the network path over
the last round trip). This applies generally: whether in fast recovery, RTO
recovery, TLP recovery; whether application-limited or not.</t>
            <t>There are two key parameters the algorithm tracks, to measure the current
delivery process:</t>
            <t>BBR.bw_latest: a 1-round-trip max of delivered bandwidth (RS.delivery_rate).</t>
            <t>BBR.inflight_latest: a 1-round-trip max of delivered volume of data
(RS.delivered).</t>
            <t>Upon the ACK at the end of each round that encountered a newly-marked loss,
the flow updates its model (BBR.bw_shortterm and BBR.inflight_shortterm) as follows:</t>
            <artwork><![CDATA[
      bw_shortterm = max(       bw_latest, BBR.Beta *       BBR.bw_shortterm )
inflight_shortterm = max( inflight_latest, BBR.Beta * BBR.inflight_shortterm )
]]></artwork>
            <t>This logic can be represented as follows:</t>
            <artwork><![CDATA[
  /* Near start of ACK processing: */
  BBRUpdateLatestDeliverySignals():
    BBR.loss_round_start = 0
    BBR.bw_latest       = max(BBR.bw_latest,       RS.delivery_rate)
    BBR.inflight_latest = max(BBR.inflight_latest, RS.delivered)
    if (RS.prior_delivered >= BBR.loss_round_delivered)
      BBR.loss_round_delivered = C.delivered
      BBR.loss_round_start = 1

  /* Near end of ACK processing: */
  BBRAdvanceLatestDeliverySignals():
    if (BBR.loss_round_start)
      BBR.bw_latest       = RS.delivery_rate
      BBR.inflight_latest = RS.delivered

  BBRResetCongestionSignals():
    BBR.loss_in_round = 0
    BBR.bw_latest = 0
    BBR.inflight_latest = 0

  /* Update congestion state on every ACK */
  BBRUpdateCongestionSignals():
    BBRUpdateMaxBw()
    if (!BBR.loss_round_start)
      return  /* wait until end of round trip */
    BBRAdaptLowerBoundsFromCongestion()  /* once per round, adapt */
    BBR.loss_in_round = 0

  /* Once per round-trip respond to congestion */
  BBRAdaptLowerBoundsFromCongestion():
    if (BBRIsProbingBW())
      return
    if (BBR.loss_in_round)
      BBRInitLowerBounds()
      BBRLossLowerBounds()

  /* Handle the first congestion episode in this cycle */
  BBRInitLowerBounds():
    if (BBR.bw_shortterm == Infinity)
      BBR.bw_shortterm = BBR.max_bw
    if (BBR.inflight_shortterm == Infinity)
      BBR.inflight_shortterm = C.cwnd

  /* Adjust model once per round based on loss */
  BBRLossLowerBounds():
    BBR.bw_shortterm       = max(BBR.bw_latest,
                          BBR.Beta * BBR.bw_shortterm)
    BBR.inflight_shortterm = max(BBR.inflight_latest,
                          BBR.Beta * BBR.inflight_shortterm)

  BBRResetShortTermModel():
    BBR.bw_shortterm       = Infinity
    BBR.inflight_shortterm = Infinity

  BBRBoundBWForModel():
    BBR.bw = min(BBR.max_bw, BBR.bw_shortterm)

]]></artwork>
          </section>
        </section>
        <section anchor="updating-the-model-upon-spurious-packet-loss">
          <name>Updating the Model Upon Detecting a Spurious Loss Recovery</name>
          <t>In some cases a transport protocol detects that a loss recovery episode was
spurious, i.e., the connection previously concluded that one or more packets
were lost (using fast recovery or RTO recovery) but later concludes that
no packets marked lost in that loss recovery episode were actually lost.</t>
          <t>In order to handle such cases, when a loss recovery episode starts, BBR saves
information about its current state. If the transport protocol later declares
the loss recovery episode to be spurious, then BBR restores aspects of its
state to their previously saved values. This greatly reduces the performance
impact of spurious loss recovery episodes.</t>
          <section anchor="saving-state-on-loss-recovery">
            <name>Saving State Upon Loss Recovery</name>
            <t>If a connection's transport protocol starts a loss recovery episode that may
later be declared spurious (including possibly fast recovery or RTO recovery,
depending on the transport protocol), BBR saves information about its current
state as follows:</t>
            <artwork><![CDATA[
  /* Save state in case a loss episode is later declared spurious */
  BBRSaveStateUponLoss():
    BBR.undo_state              = BBR.state
    BBR.undo_bw_shortterm       = BBR.bw_shortterm
    BBR.undo_inflight_shortterm = BBR.inflight_shortterm
    BBR.undo_inflight_longterm  = BBR.inflight_longterm
]]></artwork>
          </section>
          <section anchor="handling-spurious-loss-recovery">
            <name>Handling a Spurious Loss Recovery</name>
            <t>If a loss recovery episode is declared spurious, BBR restores aspects of its
state to their previously saved values as follows:</t>
            <artwork><![CDATA[
  /* Handle a declaration of a spurious loss episode */
  BBRHandleSpuriousLossDetection():
    BBR.loss_in_round = 0
    BBRResetFullBW()
    BBR.bw_shortterm       = max(BBR.bw_shortterm,       BBR.undo_bw_shortterm)
    BBR.inflight_shortterm = max(BBR.inflight_shortterm, BBR.undo_inflight_shortterm)
    BBR.inflight_longterm  = max(BBR.inflight_longterm,  BBR.undo_inflight_longterm)
    /* If flow was probing bandwidth, return to that state: */
    if (BBR.state != ProbeRTT && BBR.state != BBR.undo_state)
      if (BBR.undo_state == Startup)
        BBREnterStartup()
      else if (BBR.undo_state == ProbeBW_UP)
        BBRStartProbeBW_UP()
]]></artwork>
          </section>
        </section>
      </section>
      <section anchor="updating-control-parameters">
        <name>Updating Control Parameters</name>
        <t>BBR uses three distinct but interrelated control parameters: pacing rate,
send quantum, and congestion window.</t>
        <section anchor="summary-of-control-behavior-in-the-state-machine">
          <name>Summary of Control Behavior in the State Machine</name>
          <t>The following table summarizes how BBR modulates the control parameters in
each state. In the table below, the semantics of the columns are as follows:</t>
          <ul spacing="normal">
            <li>
              <t>State: the state in the BBR state machine, as depicted in the "State
Transition Diagram" section above.</t>
            </li>
            <li>
              <t>Tactic: The tactic chosen from the "State Machine Tactics" in
<xref target="state-machine-tactics"/>: "accel" refers to acceleration, "decel" to
deceleration, and "cruise" to cruising.</t>
            </li>
            <li>
              <t>Pacing Gain: the value used for BBR.pacing_gain in the given state.</t>
            </li>
            <li>
              <t>Cwnd Gain: the value used for BBR.cwnd_gain in the given state.</t>
            </li>
            <li>
              <t>Rate Cap: the rate values applied as bounds on the BBR.max_bw value applied
to compute BBR.bw.</t>
            </li>
            <li>
              <t>Volume Cap: the volume values applied as bounds on the BBR.max_inflight value
to compute C.cwnd.</t>
            </li>
          </ul>
          <t>The control behavior can be summarized as follows. Upon processing each ACK,
BBR uses the values in the table below to compute BBR.bw in
BBRBoundBWForModel(), and C.cwnd in BBRBoundCwndForModel():</t>
          <artwork><![CDATA[
---------------+--------+--------+------+--------------+-----------------
State          | Tactic | Pacing | Cwnd | Rate         | Volume
               |        | Gain   | Gain | Cap          | Cap
---------------+--------+--------+------+--------------+-----------------
Startup        | accel  | 2.77   | 2    | N/A          | N/A
               |        |        |      |              |
---------------+--------+--------+------+--------------+-----------------
Drain          | decel  | 0.5    | 2    | bw_shortterm | inflight_longterm,
               |        |        |      |              | inflight_shortterm
---------------+--------+--------+------+--------------+-----------------
ProbeBW_DOWN   | decel  | 0.90   | 2    | bw_shortterm | inflight_longterm,
               |        |        |      |              | inflight_shortterm
---------------+--------+--------+------+--------------+-----------------
ProbeBW_CRUISE | cruise | 1.0    | 2    | bw_shortterm | 0.85*inflight_longterm,
               |        |        |      |              | inflight_shortterm
---------------+--------+--------+------+--------------+-----------------
ProbeBW_REFILL | accel  | 1.0    | 2    |              | inflight_longterm
               |        |        |      |              |
---------------+--------+--------+------+--------------+-----------------
ProbeBW_UP     | accel  | 1.25   | 2.25 |              | inflight_longterm
               |        |        |      |              |
---------------+--------+--------+------+--------------+-----------------
ProbeRTT       | decel  | 1.0    | 0.5  | bw_shortterm | 0.85*inflight_longterm,
               |        |        |      |              | inflight_shortterm
---------------+--------+--------+------+--------------+-----------------
]]></artwork>
        </section>
        <section anchor="pacing-rate-bbrpacingrate">
          <name>Pacing Rate: C.pacing_rate</name>
          <t>To help match the packet-arrival rate to the bottleneck bandwidth available
to the flow, BBR paces data packets. Pacing enforces a maximum rate at which
BBR schedules quanta of packets for transmission.</t>
          <t>The sending host implements pacing by maintaining inter-quantum spacing at
the time each packet is scheduled for departure, calculating the next departure
time for a packet for a given flow (C.next_send_time) as a function
of the most recent packet size and the current pacing rate, as follows:</t>
          <artwork><![CDATA[
  C.next_send_time = max(Now(), C.next_send_time)
  P.send_time = C.next_send_time
  pacing_delay = packet.size / C.pacing_rate
  C.next_send_time = C.next_send_time + pacing_delay
]]></artwork>
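          <t>As a worked example (illustrative numbers): with
C.pacing_rate = 12.5 MBytes/sec and packet.size = 1500 bytes, the
inter-quantum spacing is:</t>
          <artwork><![CDATA[
  pacing_delay = 1500 bytes / 12,500,000 bytes/sec = 120 usec
]]></artwork>
          <t>so successive 1500-byte packets are scheduled to depart at
least 120 usec apart.</t>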
          <t>To adapt to the bottleneck, in general BBR sets the pacing rate to be
proportional to bw, with a dynamic gain, or scaling factor of proportionality,
called pacing_gain.</t>
          <t>When a BBR flow starts it has no bw estimate (bw is 0). So in this case it
sets an initial pacing rate based on the transport sender implementation's
initial congestion window ("C.InitialCwnd", e.g. from <xref target="RFC6928"/>), the
initial C.srtt after the first non-zero RTT sample, and the initial pacing_gain:</t>
          <artwork><![CDATA[
  BBRInitPacingRate():
    nominal_bandwidth = C.InitialCwnd / (C.srtt ? C.srtt : 1ms)
    C.pacing_rate =  BBR.StartupPacingGain * nominal_bandwidth
]]></artwork>
          <t>After initialization, on each data ACK BBR updates its pacing rate to be
proportional to bw, as long as it estimates that it has filled the pipe
(BBR.full_bw_reached is true; see the "Startup" section for details), or
doing so increases the pacing rate. Limiting the pacing rate updates in this way
helps the connection probe robustly for bandwidth until it estimates it has
reached its full available bandwidth ("filled the pipe"). In particular,
this prevents the pacing rate from being reduced when the connection has only
seen application-limited bandwidth samples. BBR updates the pacing rate on each
ACK by executing the BBRSetPacingRate() step as follows:</t>
          <artwork><![CDATA[
  BBRSetPacingRateWithGain(pacing_gain):
    rate = pacing_gain * BBR.bw * (100 - BBR.PacingMarginPercent) / 100
    if (BBR.full_bw_reached || rate > C.pacing_rate)
      C.pacing_rate = rate

  BBRSetPacingRate():
    BBRSetPacingRateWithGain(BBR.pacing_gain)
]]></artwork>
          <t>To help drive the network toward lower queues and low latency while maintaining
high utilization, the BBR.PacingMarginPercent constant of 1 aims to cause
BBR to pace at 1% below the bw, on average.</t>
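          <t>For example (illustrative numbers), with pacing_gain = 1.0 and
BBR.bw = 100 Mbps:</t>
          <artwork><![CDATA[
  rate = 1.0 * 100 Mbps * (100 - 1) / 100 = 99 Mbps
]]></artwork>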
        </section>
        <section anchor="send-quantum-bbrsendquantum">
          <name>Send Quantum: C.send_quantum</name>
          <t>In order to amortize per-packet overheads involved in the sending process (host
CPU, NIC processing, and interrupt processing delays), high-performance
transport sender implementations (e.g., Linux TCP) often schedule an aggregate
containing multiple packets (multiple C.SMSS) worth of data as a single quantum
(using TSO, GSO, or other offload mechanisms). The BBR congestion control
algorithm makes this control decision explicitly, dynamically calculating a
quantum control parameter that specifies the maximum size of these transmission
aggregates. This decision is based on a trade-off:</t>
          <ul spacing="normal">
            <li>
              <t>A smaller quantum is preferred at lower data rates because it results in
shorter packet bursts, shorter queues, lower queueing delays, and lower rates
of packet loss.</t>
            </li>
            <li>
              <t>A bigger quantum can be required at higher data rates because it results
in lower CPU overheads at the sending and receiving hosts, which can ship larger
amounts of data with a single trip through the networking stack.</t>
            </li>
          </ul>
          <t>On each ACK, BBR runs BBRSetSendQuantum() to update C.send_quantum as
follows:</t>
          <artwork><![CDATA[
  BBRSetSendQuantum():
    C.send_quantum = C.pacing_rate * 1ms
    C.send_quantum = min(C.send_quantum, 64 KBytes)
    C.send_quantum = max(C.send_quantum, 2 * C.SMSS)
]]></artwork>
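          <t>As worked examples (illustrative numbers, assuming
C.SMSS = 1460 bytes):</t>
          <artwork><![CDATA[
  At C.pacing_rate = 2.5 MBytes/sec:
    C.send_quantum = 2.5e6 bytes/sec * 1ms = 2500 bytes
    C.send_quantum = min(2500, 64 KBytes)  = 2500 bytes
    C.send_quantum = max(2500, 2 * 1460)   = 2920 bytes

  At C.pacing_rate = 125 MBytes/sec:
    C.send_quantum = 125e6 bytes/sec * 1ms  = 125000 bytes
    C.send_quantum = min(125000, 64 KBytes) = 65536 bytes
]]></artwork>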
          <t>A BBR implementation MAY use alternate approaches to select a C.send_quantum,
as appropriate for the CPU overheads anticipated for senders and receivers,
and buffering considerations anticipated in the network path. However, for
the sake of the network and other users, a BBR implementation SHOULD attempt
to use the smallest feasible quanta.</t>
        </section>
        <section anchor="congestion-window">
          <name>Congestion Window</name>
          <t>The congestion window (C.cwnd) controls the maximum C.inflight:
the maximum volume of in-flight data
that the algorithm estimates is appropriate for matching the current
network path delivery process, given all available signals in the model,
at any time scale. BBR adapts C.cwnd based on its model of the network
path and the state machine's decisions about how to probe that path.</t>
          <t>By default, BBR grows C.cwnd to meet its BBR.max_inflight, which models
what's required for achieving full throughput, and as such is scaled to adapt
to the estimated BDP computed from its path model. But BBR's selection of C.cwnd
is designed to explicitly trade off among competing considerations that
dynamically adapt to various conditions. So in loss recovery BBR more
conservatively adjusts its sending behavior based on more recent delivery
samples, and if BBR needs to re-probe the current BBR.min_rtt of the path then
it cuts C.cwnd accordingly. The following sections describe the various
considerations that impact C.cwnd.</t>
          <section anchor="initial-cwnd">
            <name>Initial cwnd</name>
            <t>BBR generally uses measurements to build a model of the network path and
then adapts control decisions to the path based on that model. As such, the
selection of the initial cwnd is considered to be outside the scope of the
BBR algorithm, since at initialization there are no measurements yet upon
which BBR can operate. Thus, at initialization, BBR uses the transport sender
implementation's initial congestion window (e.g. from <xref target="RFC6928"/> for TCP).</t>
          </section>
          <section anchor="computing-bbrmaxinflight">
            <name>Computing BBR.max_inflight</name>
            <t>BBR.max_inflight is the upper bound on the volume of data BBR allows in
flight. This bound is always in place, and dominates when all other
considerations have been satisfied: the flow is not in loss recovery, does not
need to probe BBR.min_rtt, and has accumulated confidence in its model
parameters by receiving enough ACKs to gradually grow the current C.cwnd to meet
the BBR.max_inflight.</t>
            <t>On each ACK, BBR calculates the BBR.max_inflight in BBRUpdateMaxInflight()
as follows:</t>
            <artwork><![CDATA[
  BBRBDPMultiple(gain):
    if (BBR.min_rtt == Infinity)
      return C.InitialCwnd /* no valid RTT samples yet */
    BBR.bdp = BBR.bw * BBR.min_rtt
    return gain * BBR.bdp

  BBRQuantizationBudget(inflight_cap):
    BBRUpdateOffloadBudget()
    inflight_cap = max(inflight_cap, BBR.offload_budget)
    inflight_cap = max(inflight_cap, BBR.MinPipeCwnd)
    if (BBR.state == ProbeBW_UP)
      inflight_cap += 2*C.SMSS
    return inflight_cap

  BBRInflight(gain):
    inflight_cap = BBRBDPMultiple(gain)
    return BBRQuantizationBudget(inflight_cap)

  BBRUpdateMaxInflight():
    inflight_cap = BBRBDPMultiple(BBR.cwnd_gain)
    inflight_cap += BBR.extra_acked
    BBR.max_inflight = BBRQuantizationBudget(inflight_cap)
]]></artwork>
            <t>The "estimated_bdp" term (gain * BBR.bdp, computed in BBRBDPMultiple())
tries to allow enough packets in flight to fully utilize the estimated BDP of
the path, by allowing the flow to send at BBR.bw for a duration of BBR.min_rtt.
Scaling up the BDP by BBR.cwnd_gain bounds in-flight data to a small multiple
of the BDP, to handle common network and receiver behavior, such as delayed,
stretched, or aggregated ACKs <xref target="A15"/>. The "quanta" term (applied
in BBRQuantizationBudget()) allows enough quanta in flight on the sending and
receiving hosts to reach high throughput even in environments using offload
mechanisms.</t>
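            <t>For reference, the offload budget consulted by BBRQuantizationBudget()
is derived from the pacing quantum; a sketch consistent with this document's
pacing discussion (where BBR.send_quantum is the per-burst pacing quantum):</t>
            <artwork><![CDATA[
  BBRUpdateOffloadBudget():
    /* Allow several quanta in flight to keep offload
       pipelines (e.g. TSO/GRO) busy: */
    BBR.offload_budget = 3 * BBR.send_quantum
]]></artwork>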
          </section>
          <section anchor="minimum-cwnd-for-pipelining">
            <name>Minimum cwnd for Pipelining</name>
            <t>For BBR.max_inflight, BBR imposes a floor of BBR.MinPipeCwnd (4 packets, i.e.
4 * C.SMSS). This floor helps ensure that even at very low BDPs, and with
a transport like TCP where a receiver may ACK only every alternate C.SMSS of
data, there are enough packets in flight to maintain full pipelining. In
particular BBR tries to allow at least 2 data packets in flight and ACKs
for at least 2 data packets on the path from receiver to sender.</t>
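            <t>As an illustrative worked example (the numbers are hypothetical): at
BBR.bw = 1 Mbps and BBR.min_rtt = 1 ms, the raw estimated BDP is only 125
bytes, well below a single C.SMSS:</t>
            <artwork><![CDATA[
  BBR.bdp = BBR.bw * BBR.min_rtt  /* 1 Mbps * 1 ms = 125 bytes */
  /* Without a floor, the cwnd would be under 1 packet; the floor
     keeps the ACK clock running with 4 packets in flight: */
  inflight = max(BBR.bdp, BBR.MinPipeCwnd)  /* = 4 * C.SMSS */
]]></artwork>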
          </section>
          <section anchor="modulating-cwnd-in-loss-recovery">
            <name>Modulating cwnd in Loss Recovery</name>
            <t>BBR interprets loss as a hint that there may be recent changes in path behavior
that are not yet fully reflected in its model of the path, and thus it needs
to be more conservative.</t>
            <t>Upon a retransmission timeout (RTO), BBR conservatively reduces C.cwnd to a
value that will allow 1 C.SMSS to be transmitted. Then BBR gradually increases
C.cwnd using the normal approach outlined below in "cwnd Adjustment Mechanism"
in <xref target="cwnd-adjustment-mechanism"/>.</t>
            <t>When a BBR sender is in Fast Recovery it uses the response described in
"Updating the Model Upon Packet Loss" in
<xref target="updating-the-model-upon-packet-loss"/>.</t>
            <t>When BBR exits loss recovery it restores C.cwnd to the "last known good"
value that C.cwnd held before entering recovery. This applies equally whether
the flow exits loss recovery because it finishes repairing all losses or
because it executes an "undo" event after inferring that a loss recovery
event was spurious.</t>
            <t>The high-level design for updating C.cwnd in loss recovery is as follows:</t>
            <t>Upon retransmission timeout (RTO):</t>
            <artwork><![CDATA[
  BBROnEnterRTO():
    BBRSaveCwnd()
    BBRSaveStateUponLoss()
    C.cwnd = C.inflight + 1
]]></artwork>
            <t>Upon entering Fast Recovery:</t>
            <artwork><![CDATA[
  BBROnEnterFastRecovery():
    BBRSaveCwnd()
    BBRSaveStateUponLoss()
]]></artwork>
            <t>Upon exiting loss recovery (RTO recovery or Fast Recovery), either by repairing
all losses or undoing recovery, BBR restores the best-known cwnd value the
connection had upon entering loss recovery:</t>
            <artwork><![CDATA[
  BBRRestoreCwnd()
]]></artwork>
            <t>Note that exiting loss recovery happens during ACK processing, and at the
end of ACK processing BBRBoundCwndForModel() will bound the cwnd based on
the current model parameters. Thus the cwnd and pacing rate after loss recovery
will generally be smaller than the values entering loss recovery.</t>
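            <t>This ordering can be sketched as follows (BBROnACKComplete() is an
illustrative name for the per-ACK processing described here, not a helper
defined by this document):</t>
            <artwork><![CDATA[
  BBROnACKComplete():
    if (connection is exiting loss recovery on this ACK)
      BBRRestoreCwnd()
    ...
    /* Later in ACK processing, BBRSetCwnd() runs and ends by
       calling BBRBoundCwndForModel(), re-bounding the restored
       cwnd to the current model parameters: */
    BBRSetCwnd()
]]></artwork>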
            <t>The BBRSaveCwnd() and BBRRestoreCwnd() helpers remember and restore
the last-known good C.cwnd (the latest C.cwnd unmodulated by loss recovery or
ProbeRTT), and are defined as follows:</t>
            <artwork><![CDATA[
  BBRSaveCwnd():
    if (!InLossRecovery() && BBR.state != ProbeRTT)
      BBR.prior_cwnd = C.cwnd
    else
      BBR.prior_cwnd = max(BBR.prior_cwnd, C.cwnd)

  BBRRestoreCwnd():
    C.cwnd = max(C.cwnd, BBR.prior_cwnd)
]]></artwork>
          </section>
          <section anchor="modulating-cwnd-in-probertt">
            <name>Modulating cwnd in ProbeRTT</name>
            <t>If BBR decides it needs to enter the ProbeRTT state (see the "ProbeRTT" section
below), its goal is to quickly reduce C.inflight and drain
the bottleneck queue, thereby allowing measurement of BBR.min_rtt. To implement
this mode, BBR bounds C.cwnd to BBR.MinPipeCwnd, the minimal value that
allows pipelining (see the "Minimum cwnd for Pipelining" section, above):</t>
            <artwork><![CDATA[
  BBRProbeRTTCwnd():
    probe_rtt_cwnd = BBRBDPMultiple(BBR.ProbeRTTCwndGain)
    probe_rtt_cwnd = max(probe_rtt_cwnd, BBR.MinPipeCwnd)
    return probe_rtt_cwnd

  BBRBoundCwndForProbeRTT():
    if (BBR.state == ProbeRTT)
      C.cwnd = min(C.cwnd, BBRProbeRTTCwnd())
]]></artwork>
          </section>
          <section anchor="cwnd-adjustment-mechanism">
            <name>cwnd Adjustment Mechanism</name>
            <t>The network path, and the traffic traveling over it, can change suddenly
and dramatically. To adapt to these changes smoothly and robustly, and to
reduce packet losses in such cases, BBR uses a conservative strategy. When
C.cwnd is above the BBR.max_inflight derived from BBR's path model, BBR cuts
C.cwnd immediately to the BBR.max_inflight. When C.cwnd is below
BBR.max_inflight, BBR raises C.cwnd gradually and cautiously, increasing it by
no more than the amount of data acknowledged (cumulatively or selectively)
upon each ACK.</t>
            <t>Specifically, on each ACK that newly acknowledges "RS.newly_acked" of
data, BBR runs the following BBRSetCwnd() steps to update C.cwnd:</t>
            <artwork><![CDATA[
  BBRSetCwnd():
    BBRUpdateMaxInflight()
    if (BBR.full_bw_reached)
      C.cwnd = min(C.cwnd + RS.newly_acked, BBR.max_inflight)
    else if (C.cwnd < BBR.max_inflight || C.delivered < C.InitialCwnd)
      C.cwnd = C.cwnd + RS.newly_acked
    C.cwnd = max(C.cwnd, BBR.MinPipeCwnd)
    BBRBoundCwndForProbeRTT()
    BBRBoundCwndForModel()
]]></artwork>
            <t>There are several considerations embodied in the logic above. If BBR has
measured enough samples to achieve confidence that it has filled the pipe
(see the description of BBR.full_bw_reached in the "Startup" section below), then
it increases C.cwnd based on the data delivered, while bounding
C.cwnd to be no larger than the BBR.max_inflight adapted to the estimated
BDP. Otherwise, if C.cwnd is below the BBR.max_inflight, or the sender
has cumulatively delivered so little data (less than C.InitialCwnd) that it
does not yet judge its BBR.max_bw estimate and BBR.max_inflight to be reliable,
then it increases C.cwnd without bounding it below BBR.max_inflight. Finally, BBR imposes
a floor of BBR.MinPipeCwnd in order to allow pipelining even with small BDPs
(see the "Minimum cwnd for Pipelining" section, above).</t>
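            <t>As an illustrative worked example (the numbers are hypothetical):
suppose BBR.full_bw_reached is true, BBR.max_inflight is 100 packets, C.cwnd
is 98 packets, and an ACK newly acknowledges 5 packets:</t>
            <artwork><![CDATA[
  C.cwnd = min(C.cwnd + RS.newly_acked, BBR.max_inflight)
         = min(98 + 5, 100)
         = 100  /* growth halts at BBR.max_inflight */
]]></artwork>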
          </section>
          <section anchor="bounding-cwnd-based-on-recent-congestion">
            <name>Bounding cwnd Based on Recent Congestion</name>
            <t>Finally, BBR bounds C.cwnd based on recent congestion, as outlined in the
"Volume Cap" column of the table in the "Summary of Control Behavior in the
State Machine" section:</t>
            <artwork><![CDATA[
  BBRBoundCwndForModel():
    cap = Infinity
    if (IsInAProbeBWState() &&
        BBR.state != ProbeBW_CRUISE)
      cap = BBR.inflight_longterm
    else if (BBR.state == ProbeRTT ||
             BBR.state == ProbeBW_CRUISE)
      cap = BBRInflightWithHeadroom()

    /* apply BBR.inflight_shortterm (possibly infinite): */
    cap = min(cap, BBR.inflight_shortterm)
    cap = max(cap, BBR.MinPipeCwnd)
    C.cwnd = min(C.cwnd, cap)
]]></artwork>
          </section>
        </section>
      </section>
    </section>
    <section anchor="implementation-status">
      <name>Implementation Status</name>
      <t>This section records the status of known implementations of the algorithm
defined by this specification at the time of posting of this Internet-Draft,
and is based on a proposal described in <xref target="RFC7942"/>.
The description of implementations in this section is intended to assist
the IETF in its decision processes in progressing drafts to RFCs. Please
note that the listing of any individual implementation here does not imply
endorsement by the IETF. Furthermore, no effort has been spent to verify
the information presented here that was supplied by IETF contributors. This
is not intended as, and must not be construed to be, a catalog of available
implementations or their features.  Readers are advised to note that other
implementations may exist.</t>
      <t>According to <xref target="RFC7942"/>, "this will allow reviewers and working groups to
assign due consideration to documents that have the benefit of running code,
which may serve as evidence of valuable experimentation and feedback that have
made the implemented protocols more mature.  It is up to the individual working
groups to use this information as they see fit".</t>
      <t>As of the time of writing, the following implementations of BBRv3 have been
publicly released:</t>
      <ul spacing="normal">
        <li>
          <t>Linux TCP
          </t>
          <ul spacing="normal">
            <li>
              <t>Source code URL:
              </t>
              <ul spacing="normal">
                <li>
                  <t>https://github.com/google/bbr/blob/v3/README.md</t>
                </li>
                <li>
                  <t>https://github.com/google/bbr/blob/v3/net/ipv4/tcp_bbr.c</t>
                </li>
              </ul>
            </li>
            <li>
              <t>Source: Google</t>
            </li>
            <li>
              <t>Maturity: production</t>
            </li>
            <li>
              <t>License: dual-licensed: GPLv2 / BSD</t>
            </li>
            <li>
              <t>Contact: https://groups.google.com/d/forum/bbr-dev</t>
            </li>
            <li>
              <t>Last updated: November 22, 2023</t>
            </li>
          </ul>
        </li>
        <li>
          <t>QUIC
          </t>
          <ul spacing="normal">
            <li>
              <t>Source code URLs:
              </t>
              <ul spacing="normal">
                <li>
                  <t>https://cs.chromium.org/chromium/src/net/third_party/quiche/src/quic/core/congestion_control/bbr2_sender.cc</t>
                </li>
                <li>
                  <t>https://cs.chromium.org/chromium/src/net/third_party/quiche/src/quic/core/congestion_control/bbr2_sender.h</t>
                </li>
              </ul>
            </li>
            <li>
              <t>Source: Google</t>
            </li>
            <li>
              <t>Maturity: production</t>
            </li>
            <li>
              <t>License: BSD-style</t>
            </li>
            <li>
              <t>Contact: https://groups.google.com/d/forum/bbr-dev</t>
            </li>
            <li>
              <t>Last updated: October 21, 2021</t>
            </li>
          </ul>
        </li>
      </ul>
      <t>As of the time of writing, the following implementations of the delivery
rate sampling algorithm have been publicly released:</t>
      <ul spacing="normal">
        <li>
          <t>Linux TCP
          </t>
          <ul spacing="normal">
            <li>
              <t>Source code URL:
              </t>
              <ul spacing="normal">
                <li>
                  <t>GPLv2 license: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/ipv4/tcp_rate.c</t>
                </li>
                <li>
                  <t>BSD-style license: https://groups.google.com/d/msg/bbr-dev/X0LbDptlOzo/EVgkRjVHBQAJ</t>
                </li>
              </ul>
            </li>
            <li>
              <t>Source: Google</t>
            </li>
            <li>
              <t>Maturity: production</t>
            </li>
            <li>
              <t>License: dual-licensed: GPLv2 / BSD-style</t>
            </li>
            <li>
              <t>Contact: https://groups.google.com/d/forum/bbr-dev</t>
            </li>
            <li>
              <t>Last updated: September 24, 2021</t>
            </li>
          </ul>
        </li>
        <li>
          <t>QUIC
          </t>
          <ul spacing="normal">
            <li>
              <t>Source code URLs:
              </t>
              <ul spacing="normal">
                <li>
                  <t>https://github.com/google/quiche/blob/main/quiche/quic/core/congestion_control/bandwidth_sampler.cc</t>
                </li>
                <li>
                  <t>https://github.com/google/quiche/blob/main/quiche/quic/core/congestion_control/bandwidth_sampler.h</t>
                </li>
              </ul>
            </li>
            <li>
              <t>Source: Google</t>
            </li>
            <li>
              <t>Maturity: production</t>
            </li>
            <li>
              <t>License: BSD-style</t>
            </li>
            <li>
              <t>Contact: https://groups.google.com/d/forum/bbr-dev</t>
            </li>
            <li>
              <t>Last updated: October 5, 2021</t>
            </li>
          </ul>
        </li>
      </ul>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>This proposal makes no changes to the underlying security of transport protocols
or congestion control algorithms. BBR shares the same security considerations
as the existing standard congestion control algorithm <xref target="RFC5681"/>.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions. Here we are using that phrase, suggested
by <xref target="RFC8126"/>, because BBR does not modify or extend the wire format of
any network protocol, nor does it add new dependencies on assigned numbers.
BBR involves only a change to the congestion control algorithm of a transport
sender, and does not involve changes in the network, the receiver, or any network
protocol.</t>
      <t>Note to RFC Editor: this section may be removed on publication as an RFC.</t>
    </section>
    <section anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>The authors are grateful to Len Kleinrock for his work on the theory underlying
congestion control. We are indebted to Larry Brakmo for pioneering work on
the Vegas <xref target="BP95"/> and New Vegas <xref target="B15"/> congestion control algorithms,
which presaged many elements of BBR, and for Larry's advice and guidance during
BBR's early development. The authors would also like to thank Kevin Yang,
Priyaranjan Jha, Yousuk Seung, and Luke Hsiao for their work on TCP BBR; Jana Iyengar,
Victor Vasiliev, and Bin Wu for their work on QUIC BBR; and Matt Mathis for his
research work on the BBR algorithm and its implications <xref target="MM19"/>. We would also
like to thank C. Stephen Gunn, Eric Dumazet, Nandita Dukkipati, Pawel Jurczyk,
Biren Roy, David Wetherall, Amin Vahdat, Leonidas Kontothanassis,
and the YouTube, google.com, Bandwidth Enforcer, and Google SRE teams for
their invaluable help and support. We would like to thank Randall R. Stewart,
Jim Warner, Loganaden Velvindron, Hiren Panchasara, Adrian Zapletal, Christian
Huitema, Bao Zheng, Jonathan Morton, Matt Olson, Junho Choi, Carsten Bormann,
Pouria Mousavizadeh Tehrani, Amanda Baber, Frédéric Lécaille,
and Tatsuhiro Tsujikawa
for feedback, suggestions, and edits on earlier versions of this document.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC9293">
          <front>
            <title>Transmission Control Protocol (TCP)</title>
            <author fullname="W. Eddy" initials="W." role="editor" surname="Eddy"/>
            <date month="August" year="2022"/>
            <abstract>
              <t>This document specifies the Transmission Control Protocol (TCP). TCP is an important transport-layer protocol in the Internet protocol stack, and it has continuously evolved over decades of use and growth of the Internet. Over this time, a number of changes have been made to TCP as it was specified in RFC 793, though these have only been documented in a piecemeal fashion. This document collects and brings those changes together with the protocol specification from RFC 793. This document obsoletes RFC 793, as well as RFCs 879, 2873, 6093, 6429, 6528, and 6691 that updated parts of RFC 793. It updates RFCs 1011 and 1122, and it should be considered as a replacement for the portions of those documents dealing with TCP requirements. It also updates RFC 5961 by adding a small clarification in reset handling while in the SYN-RECEIVED state. The TCP header control bits from RFC 793 have also been updated based on RFC 3168.</t>
            </abstract>
          </front>
          <seriesInfo name="STD" value="7"/>
          <seriesInfo name="RFC" value="9293"/>
          <seriesInfo name="DOI" value="10.17487/RFC9293"/>
        </reference>
        <reference anchor="RFC2018">
          <front>
            <title>TCP Selective Acknowledgment Options</title>
            <author fullname="M. Mathis" initials="M." surname="Mathis"/>
            <author fullname="J. Mahdavi" initials="J." surname="Mahdavi"/>
            <author fullname="S. Floyd" initials="S." surname="Floyd"/>
            <author fullname="A. Romanow" initials="A." surname="Romanow"/>
            <date month="October" year="1996"/>
            <abstract>
              <t>This memo proposes an implementation of SACK and discusses its performance and related issues. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="2018"/>
          <seriesInfo name="DOI" value="10.17487/RFC2018"/>
        </reference>
        <reference anchor="RFC7323">
          <front>
            <title>TCP Extensions for High Performance</title>
            <author fullname="D. Borman" initials="D." surname="Borman"/>
            <author fullname="B. Braden" initials="B." surname="Braden"/>
            <author fullname="V. Jacobson" initials="V." surname="Jacobson"/>
            <author fullname="R. Scheffenegger" initials="R." role="editor" surname="Scheffenegger"/>
            <date month="September" year="2014"/>
            <abstract>
              <t>This document specifies a set of TCP extensions to improve performance over paths with a large bandwidth * delay product and to provide reliable operation over very high-speed paths. It defines the TCP Window Scale (WS) option and the TCP Timestamps (TS) option and their semantics. The Window Scale option is used to support larger receive windows, while the Timestamps option can be used for at least two distinct mechanisms, Protection Against Wrapped Sequences (PAWS) and Round-Trip Time Measurement (RTTM), that are also described herein.</t>
              <t>This document obsoletes RFC 1323 and describes changes from it.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7323"/>
          <seriesInfo name="DOI" value="10.17487/RFC7323"/>
        </reference>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8126">
          <front>
            <title>Guidelines for Writing an IANA Considerations Section in RFCs</title>
            <author fullname="M. Cotton" initials="M." surname="Cotton"/>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <author fullname="T. Narten" initials="T." surname="Narten"/>
            <date month="June" year="2017"/>
            <abstract>
              <t>Many protocols make use of points of extensibility that use constants to identify various protocol parameters. To ensure that the values in these fields do not have conflicting uses and to promote interoperability, their allocations are often coordinated by a central record keeper. For IETF protocols, that role is filled by the Internet Assigned Numbers Authority (IANA).</t>
              <t>To make assignments in a given registry prudently, guidance describing the conditions under which new values should be assigned, as well as when and how modifications to existing values can be made, is needed. This document defines a framework for the documentation of these guidelines by specification authors, in order to assure that the provided guidance for the IANA Considerations is clear and addresses the various issues that are likely in the operation of a registry.</t>
              <t>This is the third edition of this document; it obsoletes RFC 5226.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="26"/>
          <seriesInfo name="RFC" value="8126"/>
          <seriesInfo name="DOI" value="10.17487/RFC8126"/>
        </reference>
        <reference anchor="RFC6298">
          <front>
            <title>Computing TCP's Retransmission Timer</title>
            <author fullname="V. Paxson" initials="V." surname="Paxson"/>
            <author fullname="M. Allman" initials="M." surname="Allman"/>
            <author fullname="J. Chu" initials="J." surname="Chu"/>
            <author fullname="M. Sargent" initials="M." surname="Sargent"/>
            <date month="June" year="2011"/>
            <abstract>
              <t>This document defines the standard algorithm that Transmission Control Protocol (TCP) senders are required to use to compute and manage their retransmission timer. It expands on the discussion in Section 4.2.3.1 of RFC 1122 and upgrades the requirement of supporting the algorithm from a SHOULD to a MUST. This document obsoletes RFC 2988. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6298"/>
          <seriesInfo name="DOI" value="10.17487/RFC6298"/>
        </reference>
        <reference anchor="RFC5681">
          <front>
            <title>TCP Congestion Control</title>
            <author fullname="M. Allman" initials="M." surname="Allman"/>
            <author fullname="V. Paxson" initials="V." surname="Paxson"/>
            <author fullname="E. Blanton" initials="E." surname="Blanton"/>
            <date month="September" year="2009"/>
            <abstract>
              <t>This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. In addition, the document specifies how TCP should begin transmission after a relatively long idle period, as well as discussing various acknowledgment generation methods. This document obsoletes RFC 2581. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5681"/>
          <seriesInfo name="DOI" value="10.17487/RFC5681"/>
        </reference>
        <reference anchor="RFC7942">
          <front>
            <title>Improving Awareness of Running Code: The Implementation Status Section</title>
            <author fullname="Y. Sheffer" initials="Y." surname="Sheffer"/>
            <author fullname="A. Farrel" initials="A." surname="Farrel"/>
            <date month="July" year="2016"/>
            <abstract>
              <t>This document describes a simple process that allows authors of Internet-Drafts to record the status of known implementations by including an Implementation Status section. This will allow reviewers and working groups to assign due consideration to documents that have the benefit of running code, which may serve as evidence of valuable experimentation and feedback that have made the implemented protocols more mature.</t>
              <t>This process is not mandatory. Authors of Internet-Drafts are encouraged to consider using the process for their documents, and working groups are invited to think about applying the process to all of their protocol specifications. This document obsoletes RFC 6982, advancing it to a Best Current Practice.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="205"/>
          <seriesInfo name="RFC" value="7942"/>
          <seriesInfo name="DOI" value="10.17487/RFC7942"/>
        </reference>
        <reference anchor="RFC9438">
          <front>
            <title>CUBIC for Fast and Long-Distance Networks</title>
            <author fullname="L. Xu" initials="L." surname="Xu"/>
            <author fullname="S. Ha" initials="S." surname="Ha"/>
            <author fullname="I. Rhee" initials="I." surname="Rhee"/>
            <author fullname="V. Goel" initials="V." surname="Goel"/>
            <author fullname="L. Eggert" initials="L." role="editor" surname="Eggert"/>
            <date month="August" year="2023"/>
            <abstract>
              <t>CUBIC is a standard TCP congestion control algorithm that uses a cubic function instead of a linear congestion window increase function to improve scalability and stability over fast and long-distance networks. CUBIC has been adopted as the default TCP congestion control algorithm by the Linux, Windows, and Apple stacks.</t>
              <t>This document updates the specification of CUBIC to include algorithmic improvements based on these implementations and recent academic work. Based on the extensive deployment experience with CUBIC, this document also moves the specification to the Standards Track and obsoletes RFC 8312. This document also updates RFC 5681, to allow for CUBIC's occasionally more aggressive sending behavior.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9438"/>
          <seriesInfo name="DOI" value="10.17487/RFC9438"/>
        </reference>
        <reference anchor="RFC8985">
          <front>
            <title>The RACK-TLP Loss Detection Algorithm for TCP</title>
            <author fullname="Y. Cheng" initials="Y." surname="Cheng"/>
            <author fullname="N. Cardwell" initials="N." surname="Cardwell"/>
            <author fullname="N. Dukkipati" initials="N." surname="Dukkipati"/>
            <author fullname="P. Jha" initials="P." surname="Jha"/>
            <date month="February" year="2021"/>
            <abstract>
              <t>This document presents the RACK-TLP loss detection algorithm for TCP. RACK-TLP uses per-segment transmit timestamps and selective acknowledgments (SACKs) and has two parts. Recent Acknowledgment (RACK) starts fast recovery quickly using time-based inferences derived from acknowledgment (ACK) feedback, and Tail Loss Probe (TLP) leverages RACK and sends a probe packet to trigger ACK feedback to avoid retransmission timeout (RTO) events. Compared to the widely used duplicate acknowledgment (DupAck) threshold approach, RACK-TLP detects losses more efficiently when there are application-limited flights of data, lost retransmissions, or data packet reordering events. It is intended to be an alternative to the DupAck threshold approach.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8985"/>
          <seriesInfo name="DOI" value="10.17487/RFC8985"/>
        </reference>
        <reference anchor="RFC9000">
          <front>
            <title>QUIC: A UDP-Based Multiplexed and Secure Transport</title>
            <author fullname="J. Iyengar" initials="J." role="editor" surname="Iyengar"/>
            <author fullname="M. Thomson" initials="M." role="editor" surname="Thomson"/>
            <date month="May" year="2021"/>
            <abstract>
              <t>This document defines the core of the QUIC transport protocol. QUIC provides applications with flow-controlled streams for structured communication, low-latency connection establishment, and network path migration. QUIC includes security measures that ensure confidentiality, integrity, and availability in a range of deployment circumstances. Accompanying documents describe the integration of TLS for key negotiation, loss detection, and an exemplary congestion control algorithm.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9000"/>
          <seriesInfo name="DOI" value="10.17487/RFC9000"/>
        </reference>
        <reference anchor="RFC4340">
          <front>
            <title>Datagram Congestion Control Protocol (DCCP)</title>
            <author fullname="E. Kohler" initials="E." surname="Kohler"/>
            <author fullname="M. Handley" initials="M." surname="Handley"/>
            <author fullname="S. Floyd" initials="S." surname="Floyd"/>
            <date month="March" year="2006"/>
            <abstract>
              <t>The Datagram Congestion Control Protocol (DCCP) is a transport protocol that provides bidirectional unicast connections of congestion-controlled unreliable datagrams. DCCP is suitable for applications that transfer fairly large amounts of data and that can benefit from control over the tradeoff between timeliness and reliability. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="4340"/>
          <seriesInfo name="DOI" value="10.17487/RFC4340"/>
        </reference>
        <reference anchor="RFC6928">
          <front>
            <title>Increasing TCP's Initial Window</title>
            <author fullname="J. Chu" initials="J." surname="Chu"/>
            <author fullname="N. Dukkipati" initials="N." surname="Dukkipati"/>
            <author fullname="Y. Cheng" initials="Y." surname="Cheng"/>
            <author fullname="M. Mathis" initials="M." surname="Mathis"/>
            <date month="April" year="2013"/>
            <abstract>
              <t>This document proposes an experiment to increase the permitted TCP initial window (IW) from between 2 and 4 segments, as specified in RFC 3390, to 10 segments with a fallback to the existing recommendation when performance issues are detected. It discusses the motivation behind the increase, the advantages and disadvantages of the higher initial window, and presents results from several large-scale experiments showing that the higher initial window improves the overall performance of many web services without resulting in a congestion collapse. The document closes with a discussion of usage and deployment for further experimental purposes recommended by the IETF TCP Maintenance and Minor Extensions (TCPM) working group.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6928"/>
          <seriesInfo name="DOI" value="10.17487/RFC6928"/>
        </reference>
        <reference anchor="RFC6675">
          <front>
            <title>A Conservative Loss Recovery Algorithm Based on Selective Acknowledgment (SACK) for TCP</title>
            <author fullname="E. Blanton" initials="E." surname="Blanton"/>
            <author fullname="M. Allman" initials="M." surname="Allman"/>
            <author fullname="L. Wang" initials="L." surname="Wang"/>
            <author fullname="I. Jarvinen" initials="I." surname="Jarvinen"/>
            <author fullname="M. Kojo" initials="M." surname="Kojo"/>
            <author fullname="Y. Nishida" initials="Y." surname="Nishida"/>
            <date month="August" year="2012"/>
            <abstract>
              <t>This document presents a conservative loss recovery algorithm for TCP that is based on the use of the selective acknowledgment (SACK) TCP option. The algorithm presented in this document conforms to the spirit of the current congestion control specification (RFC 5681), but allows TCP senders to recover more effectively when multiple segments are lost from a single flight of data. This document obsoletes RFC 3517 and describes changes from it. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6675"/>
          <seriesInfo name="DOI" value="10.17487/RFC6675"/>
        </reference>
        <reference anchor="RFC6937">
          <front>
            <title>Proportional Rate Reduction for TCP</title>
            <author fullname="M. Mathis" initials="M." surname="Mathis"/>
            <author fullname="N. Dukkipati" initials="N." surname="Dukkipati"/>
            <author fullname="Y. Cheng" initials="Y." surname="Cheng"/>
            <date month="May" year="2013"/>
            <abstract>
              <t>This document describes an experimental Proportional Rate Reduction (PRR) algorithm as an alternative to the widely deployed Fast Recovery and Rate-Halving algorithms. These algorithms determine the amount of data sent by TCP during loss recovery. PRR minimizes excess window adjustments, and the actual window size at the end of recovery will be as close as possible to the ssthresh, as determined by the congestion control algorithm.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6937"/>
          <seriesInfo name="DOI" value="10.17487/RFC6937"/>
        </reference>
        <reference anchor="RFC9002">
          <front>
            <title>QUIC Loss Detection and Congestion Control</title>
            <author fullname="J. Iyengar" initials="J." role="editor" surname="Iyengar"/>
            <author fullname="I. Swett" initials="I." role="editor" surname="Swett"/>
            <date month="May" year="2021"/>
            <abstract>
              <t>This document describes loss detection and congestion control mechanisms for QUIC.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9002"/>
          <seriesInfo name="DOI" value="10.17487/RFC9002"/>
        </reference>
        <reference anchor="RFC3168">
          <front>
            <title>The Addition of Explicit Congestion Notification (ECN) to IP</title>
            <author fullname="K. Ramakrishnan" initials="K." surname="Ramakrishnan"/>
            <author fullname="S. Floyd" initials="S." surname="Floyd"/>
            <author fullname="D. Black" initials="D." surname="Black"/>
            <date month="September" year="2001"/>
            <abstract>
              <t>This memo specifies the incorporation of ECN (Explicit Congestion Notification) to TCP and IP, including ECN's use of two bits in the IP header. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="3168"/>
          <seriesInfo name="DOI" value="10.17487/RFC3168"/>
        </reference>
        <reference anchor="RFC9330">
          <front>
            <title>Low Latency, Low Loss, and Scalable Throughput (L4S) Internet Service: Architecture</title>
            <author fullname="B. Briscoe" initials="B." role="editor" surname="Briscoe"/>
            <author fullname="K. De Schepper" initials="K." surname="De Schepper"/>
            <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
            <author fullname="G. White" initials="G." surname="White"/>
            <date month="January" year="2023"/>
            <abstract>
              <t>This document describes the L4S architecture, which enables Internet applications to achieve low queuing latency, low congestion loss, and scalable throughput control. L4S is based on the insight that the root cause of queuing delay is in the capacity-seeking congestion controllers of senders, not in the queue itself. With the L4S architecture, all Internet applications could (but do not have to) transition away from congestion control algorithms that cause substantial queuing delay and instead adopt a new class of congestion controls that can seek capacity with very little queuing. These are aided by a modified form of Explicit Congestion Notification (ECN) from the network. With this new architecture, applications can have both low latency and high throughput.</t>
              <t>The architecture primarily concerns incremental deployment. It defines mechanisms that allow the new class of L4S congestion controls to coexist with 'Classic' congestion controls in a shared network. The aim is for L4S latency and throughput to be usually much better (and rarely worse) while typically not impacting Classic performance.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9330"/>
          <seriesInfo name="DOI" value="10.17487/RFC9330"/>
        </reference>
        <reference anchor="RFC8511">
          <front>
            <title>TCP Alternative Backoff with ECN (ABE)</title>
            <author fullname="N. Khademi" initials="N." surname="Khademi"/>
            <author fullname="M. Welzl" initials="M." surname="Welzl"/>
            <author fullname="G. Armitage" initials="G." surname="Armitage"/>
            <author fullname="G. Fairhurst" initials="G." surname="Fairhurst"/>
            <date month="December" year="2018"/>
            <abstract>
              <t>Active Queue Management (AQM) mechanisms allow for burst tolerance while enforcing short queues to minimise the time that packets spend enqueued at a bottleneck. This can cause noticeable performance degradation for TCP connections traversing such a bottleneck, especially if there are only a few flows or their bandwidth-delay product (BDP) is large. The reception of a Congestion Experienced (CE) Explicit Congestion Notification (ECN) mark indicates that an AQM mechanism is used at the bottleneck, and the bottleneck network queue is therefore likely to be short. Feedback of this signal allows the TCP sender-side ECN reaction in congestion avoidance to reduce the Congestion Window (cwnd) by a smaller amount than the congestion control algorithm's reaction to inferred packet loss. Therefore, this specification defines an experimental change to the TCP reaction specified in RFC 3168, as permitted by RFC 8311.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8511"/>
          <seriesInfo name="DOI" value="10.17487/RFC8511"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="CCGHJ16" target="http://queue.acm.org/detail.cfm?id=3022184">
          <front>
            <title>BBR: Congestion-Based Congestion Control</title>
            <author initials="N." surname="Cardwell" fullname="Neal Cardwell">
              <organization/>
            </author>
            <author initials="Y." surname="Cheng" fullname="Yuchung Cheng">
              <organization/>
            </author>
            <author initials="C." surname="Gunn" fullname="C. Stephen Gunn">
              <organization/>
            </author>
            <author initials="S." surname="Hassas Yeganeh" fullname="Soheil Hassas Yeganeh">
              <organization/>
            </author>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="2016" month="October"/>
          </front>
          <refcontent>ACM Queue</refcontent>
        </reference>
        <reference anchor="CCGHJ17" target="https://cacm.acm.org/magazines/2017/2/212428-bbr-congestion-based-congestion-control/pdf">
          <front>
            <title>BBR: Congestion-Based Congestion Control</title>
            <author initials="N." surname="Cardwell" fullname="Neal Cardwell">
              <organization/>
            </author>
            <author initials="Y." surname="Cheng" fullname="Yuchung Cheng">
              <organization/>
            </author>
            <author initials="C." surname="Gunn" fullname="C. Stephen Gunn">
              <organization/>
            </author>
            <author initials="S." surname="Hassas Yeganeh" fullname="Soheil Hassas Yeganeh">
              <organization/>
            </author>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="2017" month="February"/>
          </front>
          <refcontent>Communications of the ACM</refcontent>
        </reference>
        <reference anchor="MM19">
          <front>
            <title>Deprecating The TCP Macroscopic Model</title>
            <author initials="M." surname="Mathis" fullname="M. Mathis">
              <organization/>
            </author>
            <author initials="J." surname="Mahdavi" fullname="J. Mahdavi">
              <organization/>
            </author>
            <date year="2019" month="October"/>
          </front>
          <refcontent>Computer Communication Review, vol. 49, no. 5, pp. 63-68</refcontent>
        </reference>
        <reference anchor="BBRStartupCwndGain" target="https://github.com/google/bbr/blob/master/Documentation/startup/gain/analysis/bbr_startup_cwnd_gain.pdf">
          <front>
            <title>BBR Startup cwnd Gain: a Derivation</title>
            <author initials="I." surname="Swett" fullname="Ian Swett">
              <organization/>
            </author>
            <author initials="N." surname="Cardwell" fullname="Neal Cardwell">
              <organization/>
            </author>
            <author initials="Y." surname="Cheng" fullname="Yuchung Cheng">
              <organization/>
            </author>
            <author initials="S." surname="Hassas Yeganeh" fullname="Soheil Hassas Yeganeh">
              <organization/>
            </author>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="2018" month="July"/>
          </front>
        </reference>
        <reference anchor="BBRStartupPacingGain" target="https://github.com/google/bbr/blob/master/Documentation/startup/gain/analysis/bbr_startup_gain.pdf">
          <front>
            <title>BBR Startup Pacing Gain: a Derivation</title>
            <author initials="N." surname="Cardwell" fullname="Neal Cardwell">
              <organization/>
            </author>
            <author initials="Y." surname="Cheng" fullname="Yuchung Cheng">
              <organization/>
            </author>
            <author initials="S." surname="Hassas Yeganeh" fullname="Soheil Hassas Yeganeh">
              <organization/>
            </author>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="2018" month="June"/>
          </front>
        </reference>
        <reference anchor="BBRDrainPacingGain" target="https://github.com/google/bbr/blob/master/Documentation/startup/gain/analysis/bbr_drain_gain.pdf">
          <front>
            <title>BBR Drain Pacing Gain: a Derivation</title>
            <author initials="N." surname="Cardwell" fullname="Neal Cardwell">
              <organization/>
            </author>
            <author initials="Y." surname="Cheng" fullname="Yuchung Cheng">
              <organization/>
            </author>
            <author initials="S." surname="Hassas Yeganeh" fullname="Soheil Hassas Yeganeh">
              <organization/>
            </author>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="2021" month="September"/>
          </front>
        </reference>
        <reference anchor="draft-romo-iccrg-ccid5">
          <front>
            <title>Profile for Datagram Congestion Control Protocol (DCCP) Congestion Control ID 5</title>
            <author fullname="Nathalie Romo Moreno" initials="N." surname="Romo Moreno">
              <organization>Deutsche Telekom</organization>
            </author>
            <author fullname="Juhoon Kim" initials="J." surname="Kim">
              <organization>Deutsche Telekom</organization>
            </author>
            <author fullname="Markus Amend" initials="M." surname="Amend">
              <organization>Deutsche Telekom</organization>
            </author>
            <date day="25" month="October" year="2021"/>
            <abstract>
              <t>This document contains the profile for Congestion Control Identifier 5 (CCID 5), BBR-like Congestion Control, in the Datagram Congestion Control Protocol (DCCP). CCID 5 is meant to be used by senders who have a strong demand on low latency and require a steady throughput behavior.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-romo-iccrg-ccid5-00"/>
        </reference>
        <reference anchor="A15" target="https://www.ietf.org/mail-archive/web/aqm/current/msg01480.html">
          <front>
            <title>TCP ACK suppression</title>
            <author initials="M." surname="Abrahamsson" fullname="Mikael Abrahamsson">
              <organization/>
            </author>
            <date year="2015" month="November"/>
          </front>
          <refcontent>IETF AQM mailing list</refcontent>
        </reference>
        <reference anchor="Jac88" target="http://ee.lbl.gov/papers/congavoid.pdf">
          <front>
            <title>Congestion Avoidance and Control</title>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="1988" month="August"/>
          </front>
          <refcontent>SIGCOMM 1988, Computer Communication Review, vol. 18, no. 4, pp. 314-329</refcontent>
        </reference>
        <reference anchor="Jac90" target="ftp://ftp.isi.edu/end2end/end2end-interest-1990.mail">
          <front>
            <title>Modified TCP Congestion Avoidance Algorithm</title>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date year="1990" month="April"/>
          </front>
          <refcontent>end2end-interest mailing list</refcontent>
        </reference>
        <reference anchor="BP95">
          <front>
            <title>TCP Vegas: end-to-end congestion avoidance on a global Internet</title>
            <author initials="L." surname="Brakmo" fullname="Lawrence S. Brakmo">
              <organization/>
            </author>
            <author initials="L." surname="Peterson" fullname="Larry L. Peterson">
              <organization/>
            </author>
            <date year="1995" month="October"/>
          </front>
          <refcontent>IEEE Journal on Selected Areas in Communications 13(8): 1465-1480</refcontent>
        </reference>
        <reference anchor="B15" target="https://docs.google.com/document/d/1o-53jbO_xH-m9g2YCgjaf5bK8vePjWP6Mk0rYiRLK-U/edit">
          <front>
            <title>TCP-NV: An Update to TCP-Vegas</title>
            <author initials="L." surname="Brakmo" fullname="Lawrence S. Brakmo">
              <organization/>
            </author>
            <date year="2015" month="August"/>
          </front>
        </reference>
        <reference anchor="WS95">
          <front>
            <title>TCP/IP Illustrated, Volume 2: The Implementation</title>
            <author initials="G." surname="Wright" fullname="Gary R. Wright">
              <organization/>
            </author>
            <author initials="W." surname="Stevens" fullname="W. Richard Stevens">
              <organization/>
            </author>
            <date year="1995"/>
          </front>
          <refcontent>Addison-Wesley</refcontent>
        </reference>
        <reference anchor="HRX08">
          <front>
            <title>CUBIC: A New TCP-Friendly High-Speed TCP Variant</title>
            <author initials="S." surname="Ha" fullname="Sangtae Ha">
              <organization/>
            </author>
            <author initials="I." surname="Rhee" fullname="Injong Rhee">
              <organization/>
            </author>
            <author initials="L." surname="Xu" fullname="Lisong Xu">
              <organization/>
            </author>
            <date year="2008"/>
          </front>
          <refcontent>ACM SIGOPS Operating System Review</refcontent>
        </reference>
        <reference anchor="GK81" target="http://www.lk.cs.ucla.edu/data/files/Gail/power.pdf">
          <front>
            <title>An Invariant Property of Computer Network Power</title>
            <author initials="R." surname="Gail">
              <organization/>
            </author>
            <author initials="L." surname="Kleinrock">
              <organization/>
            </author>
            <date month="June" year="1981"/>
          </front>
          <refcontent>Proceedings of the International Conference on Communications</refcontent>
        </reference>
        <reference anchor="K79">
          <front>
            <title>Power and deterministic rules of thumb for probabilistic problems in computer communications</title>
            <author initials="L." surname="Kleinrock">
              <organization/>
            </author>
            <date year="1979"/>
          </front>
          <refcontent>Proceedings of the International Conference on Communications</refcontent>
        </reference>
        <reference anchor="KN_FILTER" target="https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/lib/win_minmax.c?id=a4f1f9ac8153e22869b6408832b5a9bb9c762bf6">
          <front>
            <title>Linux implementation of Kathleen Nichols' windowed min/max algorithm</title>
            <author initials="K." surname="Nichols" fullname="Kathleen Nichols">
              <organization/>
            </author>
            <author initials="N." surname="Cardwell" fullname="Neal Cardwell">
              <organization/>
            </author>
            <author initials="V." surname="Jacobson" fullname="Van Jacobson">
              <organization/>
            </author>
            <date/>
          </front>
        </reference>
        <reference anchor="RFC8311">
          <front>
            <title>Relaxing Restrictions on Explicit Congestion Notification (ECN) Experimentation</title>
            <author fullname="D. Black" initials="D." surname="Black"/>
            <date month="January" year="2018"/>
            <abstract>
              <t>This memo updates RFC 3168, which specifies Explicit Congestion Notification (ECN) as an alternative to packet drops for indicating network congestion to endpoints. It relaxes restrictions in RFC 3168 that hinder experimentation towards benefits beyond just removal of loss. This memo summarizes the anticipated areas of experimentation and updates RFC 3168 to enable experimentation in these areas. An Experimental RFC in the IETF document stream is required to take advantage of any of these enabling updates. In addition, this memo makes related updates to the ECN specifications for RTP in RFC 6679 and for the Datagram Congestion Control Protocol (DCCP) in RFCs 4341, 4342, and 5622. This memo also records the conclusion of the ECN nonce experiment in RFC 3540 and provides the rationale for reclassification of RFC 3540 from Experimental to Historic; this reclassification enables new experimental use of the ECT(1) codepoint.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8311"/>
          <seriesInfo name="DOI" value="10.17487/RFC8311"/>
        </reference>
        <reference anchor="I-D.draft-ietf-quic-ack-frequency">
          <front>
            <title>QUIC Acknowledgment Frequency</title>
            <author fullname="Jana Iyengar" initials="J." surname="Iyengar">
              <organization>Fastly</organization>
            </author>
            <author fullname="Ian Swett" initials="I." surname="Swett">
              <organization>Google</organization>
            </author>
            <author fullname="Mirja Kühlewind" initials="M." surname="Kühlewind">
              <organization>Ericsson</organization>
            </author>
            <date day="5" month="February" year="2026"/>
            <abstract>
              <t>This document specifies an extension to QUIC that enables an endpoint to request its peer change its behavior when sending or delaying acknowledgments.</t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-quic-ack-frequency-14"/>
        </reference>
      </references>
    </references>
  </back>
  <!-- ##markdown-source:
H4sIAAAAAAAAA+S96XbbWJYu+B9PgVKsqCAjSWrwELainHVl2eFwhQeVJWdU
9q1bWiAJSUiTBBMAJSudrlWvcP93v9x9kt7fHs4AgLKjKrtvr9WxVlVaBHCG
ffbZ8zAej5Prw/ReMi9nq2yZH6bzKrtoxkXeXIxns5vL8XRajfceJLOsOUzz
j+uk3kyXRV0X5aq5XdP7L5+f/ZSsi8MkTZtydph+11Sb/Dv6q75dVvlFHf5S
Vk30U1M0Cxri6dN36XG5uszrhobFP5uqXCQZTZ1f8+Mkq/JM57q5PEyPj399
kXy4Oez7LMk2zVVZHSbjlP6k4fN50ZQVzS8bfJNni/Q4q+Y3+WJBv5YVDfii
LC8XOf2VL7NicZiuZvrCf7vkJ5NZudw24MtslZ7e5E2zbbAiW9V4/hVj/VNZ
5+ur9GleX2W3Nt7rvMn8aH+a8sP/tqRfeahVWS2zprjOcQbvfjp+fPD4nv7z
YG//kf7zh3sH7tf9/cf6z0f7Bw/1nw8PHtu7Dx4+2rfPHt8/sHHv37MXHj1+
9MB+3dvb03/ev3ff/vnw8YG9+/DhDw/cr/d+8J/ZuPf2H9q7j+/dsxEePdin
NRSri3B3x8cvfv6nfV4xoVtWXeaElVdNsz7c3f3zJt/kk2y2nBDQducEnWIx
mV0s/7GYP7m3d3Cw/+i+fCZI9x2hVYg/46dZnc97EOo7/sqQCv8eb8Ek/+SP
m9nVZnWZHl/lq0t9UqwI8Y8n6YvNasU/2ZnTb6cNnXq+8s9soNPyKi8W6V/T
n7O6zur0j/lltsqvonf+QPj3T9msnNalfDzPGvqZzv7heH+Pf6nzqshrAPNQ
pz46fp3+MyB2mL6dNfyyg+8PXfjWBOAZYGvwXWaX2V+KVV7v0qc/7B7sHuwf
3D94xORi5oE6BVDDH2YC1N31/OL/b6fxw3jvYMtpHJfL5WZVEJGlrdZpeZE2
VznO6DD9KZ/yx/Tm69dybx3UnuXrKsdHtLkz+uDs+CR9nc2qsp6V62KWvi7n
+WIrwF5P6OXmqqijX/8Jv17Ns+siXv3j7bhEq19vmryKt5G+y6+L/GaUXpeL
SXr/8ShdlZP0wShdryfpw3tjuvTpdzhOOvpTwrVmsz6+Wc1fZMXqsI0aqb6Q
zuiNNOV30oz2XxXXPNl3vRh7WTRXmymI5K6Q3l1Cz93popwS/ta04t1n5Wyz
zFcND7Jbyyy7lzT+brbKFrd1UeObc31yjvnP8XhiCNwH2ZAj/GcR9G+BcY/G
/7RZRBA+yWaELHfDWN753wjkL8L3fw8kVwLJZxUt7y448gv/+6A4x/T/H4Th
wf74NF/TTyJhVuWyHBezWXVJcmYxf0C3Zvxs0v6VXj/af9DPj25ubiaQU5Uf
FYtxVs2uSFLYvcmnu9mfl7uzTVURxHaX9eXe/v1He5OrZrkIzwsE8+j4l7Te
rImSslS7nVwWH7J8kR5Nq+wqW9YdFHkw3t/nX6qZSKrp0T+/TrEu4MGiqEEN
CDKPHvWKL3k+WUwXk8vyenedrfOq3gXXzK7LYj5pMcuAKx7hebaa5Wm2mjsR
eNsWtpzN/uNHj8Z7j7ZQ99OXL47fvn7Nb42+itbvPxJaf19o/b39++N7B4+V
2tP8j/diCFwwAOj/T4q6mOTzzW6+mh/Q/9n/josVTUlbHu8/frw3AUxDeBCb
Ky4KEhdwnL3AOVpclhXdsOVvB83jvfHe/S2gaS8vOm3jbiePH8Q0Aqv8A90c
EkDwcVOO6X9SLyOlmVs3/kgviQjQXX2JSVZ50ycBsTTzapI+rbIPyzKSZ15l
N3QHaKzT6LH/5iSncW3X/ququu08dTB5sF0Y+O7l8+fPSY/ZVESUsIHTfJHP
GjqdI9Lhapq2Le3s3xs8GtKw9x/SsHRLvzPIbbv4pKvWE69L4W+mjLvz3f1y
/ODen6Zvzz/+PF4+vjz44/Hln7KLB9NfHl3nJ3/69eTh6w971R+Ld69+Gb/f
he7VPpnxmz8cpker9P0amyWVFkg15uP62wA+oBdbr9x3BoJfT3uQZ/flSfpy
sdjUTUVjzUfpH8oF7T89OGQh8OVyvcgdo9i66BeT9NequLxqokW/yOjY30WP
9P1fWS6+zld19AH9/K6YXREfiR57TNmmf8znBaHV+Ne8XuS3ut2f3/3L3qN4
v8fvn748phMhfnXDR/ETjbSaL27Tn2mF49N1rvf+D1lFevb220Gn8HMW/vCS
Vn6V562D/JdNdEpbjwj6E1HGtyen6Vui1iKDn94Sp14qMdQ9vfhFdOkOwQf7
WnyYECpvZouMyR7Nmu1eFAvSqkhuICWpvMmrNvUn5Hy5upbdpidVSbM3t1AZ
HGl+kzc3ZfUhPcHn2+BBh/zC6Kjf/i+LvFhV5exDAIWLbFHnW8BA88/oBGjz
TmsROsXIBwmjXF3kcg/K9s0nZWOzykfgLOCcv/wQaze8fOZrcxChZbEiqkpa
TbUhAMlsm+U0vSirdF0RiZwWC3kBf9EVYFozU6DoemfRAu640P8vA+K7/cc/
PAa6/PLm/KeXr86ev+snfSQmTj6ADSxY6llvprv1bLlLTGfzcVce4J3dpqyu
s8W8licT/ISt0/8siunuDQmIBM5l9nEyg3kku3+xf/GYRJP9B/fyg4NHDx9P
H97fe/To3sH0QfZ4On08++HhwfTiYXg6rzBwWkTEBjv/hVTKRU7a8xsiC+Wi
/i6l2eZ0lPOUpiQ57WOafZEbtwf5Ssl1CxOXc0uS8XicZlOQzVlDf56R6psa
70jrdT6DFFHz2UGODziyWi38yif8xmDnadkQNFY54clTwtSbYt5cMcq+Kzfg
7VWxBjqus0sBUFMs851huqlpHlLdMfGSuOKmYiAy6mQJLXBVr8uqwbw0NtNx
Wmq+IOGW6DPI/iit/AwYdcTTrrPZh7xJF2Vd82sJMbDppljM6SnMt4tiVtCU
sAsYmq6UWqwJ5rKtBsYPXiKsA/p2UyYGhWlJm7wqbwiwJPPQeDVR5BrAzngR
GJXOuVhulhAIwZtoLn5ML2eLRXlTJ3Q3LxZgMril4ToyemV1y3siCp0v2PgH
RoxNiUGp52gSdzQ1SfMzOoSaPl6V6adPatD8/DklSsH8RH6EPfPz5xFvubyg
y1nDuE2K1aopaJG36RUtjyhQc0WgvrwiMsK0ZuqOvE5uaMK0vuItEZx5EMxC
Bzgvl7zivJaTiYdeMG1jkyUYBwE4u62T1vApDz/P87Ube8CyIT7ZkZ9IQ8ya
naEc3IwOeZon7lISpAi4DE2HUoSNTTmjU2yuCNJQfOjXWhFnbDiW0J+r8maR
zy8x0IQki01N512NUmI4q7Qm+Y4oWXz96zSrSBG5JraSEQHm7YA1C7QPHt+j
IwAo/vm9O4K9vb3PnzF6cBUTfxVpKdDJ0nuGrNilO2k+uut7E7nZy2I+p0mT
b0B4q3K+4XuTfvqmCP78jGufO1k6vSI0IdiQIMhUmk5mA/y6E9X8AupkUXzI
Bc8Gnz6xcgeM4n8+3pN/QoajjYeIOBRtjVGRvmOxh15OArQcAih5nQeTpbge
tCKQPcbEJU72Ns8IL6b5LKOVJwSjW3qBjqEm9KDLTvBc3PIn42XWzK7oe7pK
ACUpvMQiQIEINy75ik6NhI0ZIZO1wK1WJnxZ5cJ4BfWAhcXKwxIEpJ6kR3XA
/QTESU5k4Bqi6t2ALcCyZ1ATaGxatjLyDEydZqpJvqyyBfCbcBbiT00UlCSh
sj5Mkn0SUOObSDLeqn07RxGJxIW5ytbAaMiu6Q0oX0bq2wp8H8iBTzcNSRZ/
YRQnyRh3EpRhXEP2xJZWl+OrbLPgz+o0pztR3gI44LikmjZw5NR0lwn6eqc7
i7obLlgmny+xr9t6mYHrBVRJDx/EtSQIEfzozEaEHh9YF4UIkS43i6YA+Rdy
Os8ZzDkAjtURAce7YBnpZk1Th1AaTG/TB3vf4gj6SOq9vW9pCOh1bdo6FNJX
rugsa9ov/c9lVd4w4hBWhbPyjghvswtCGyUI7dMphAaES6PDxyYuKqK2TOOA
8BiryoD+BOCqppn4XPEtjihlnlbjXhREMYh8HEzSZwGVJRm74beJ+vG9aMp5
dksc2HD6C+dF8/sTk6vGeIyhYro9Sgm0Vb7OocjRekj2Xwg/8KsR2C6I1Y6X
pBoolgGumABvAxuZl3hWQpu6R5u6JbmIoGDQAAGheedQ54CGjmNdMJOY5+vm
apTO9atl8ZGvOy1gldPZjcFR+NZfgIPj6IWD14698zmIXKPQ1XcdPGiwiyqn
xa6a8CAnfFXBt92V/vKdwFjEa0DQlhlReZhdgVf0Y4UrVsVEjU4tzxjh6IN1
Scsn3FqoDE5jkSLFzkYS0Ql8tB6Wc3B+el3HAq90sD8ZYvcKqLGBd3CPfvfr
By+jkfN5kk3pXuJ6gssVeJduI1aRza4KQu30YrNYhDeaiUTv/p0QRntdQ5At
ZpBwTbIZERBrAALb3N97MV3XTBPo33vEQd6dndFZ5iSsZUlbVCQyApK0N9nD
f/e+TQfFJJ8QGSkrYDEd4P09urI0/xz8prmBfB4MktfDUcLXHfNlwYxgDPQD
D3QB4j4FGrt5mYfuf8s4IqRCtpBkDd9T4v6vsY+AsBhzXBSk0pjsQYd5m66A
Cw204BuSb8APA4nKXf9ZtoaEAvQo5U0+50jc9liUFISfP20qEChsYsRot6lY
KgW1FSwJ0M5ff8Yzf7IMIPkNmEvCLWmEt3x++u8xUcW6YCIdoCcu9JnKP6HW
kf4WraMHi5x0AahkIuorzvlHLCk22YccZAerqkrsGZgMjbZJ+NrfeV0lyONO
tcfLqMldas/X6T2p6T3JV+k9I8gdi83cOBMWvwRNDqTZgI605KRU5aTRl7Sf
RhEt2CCjynqRQZjephGBGpREFYze5x9n8FUQhrACkbDrgmBJRLTpV95owLKa
Q5cp08tNMc95l06fy6+y66KsEpax8g+OpwQUCTsDeeAZUzdjW4ue5/WsKqaq
Rav/xYnxdwrxQG/66bKAtaT1Revt/VF6k9VusjnLskW5qYkAQH1MePULoqwL
ohoaNPL5s/3zByUgrL6ETndEGkFhArF3QAikP7zjRT87n36ZLiGYgqbSacyL
erapa9HGSlAR5sS0NHj0sZijGV94/rVcGZCS6HbitIoVeHwH6gWY8WW2Kv4C
jCXSVbKOTQKxItkBUPSazp1UKjCnDYB3QVxCzGCMlzcke9B6RQHSkwfWNeFU
fsh7fkjQbTowGD0TPTD6ubjcduCsDusw91soE72Kw5BAnvCC0itEFsLrK1g+
cgi9zqpsyY4LnayBhLkEt13lfgsPWnO3jFn4akMj2FIf+tcT+m1DS7zFlDUB
oRIk8i//QJyo4XHlykMxpv8j9vTy6M1R+7MkBMmjdEZ0zFSFo0gRBx8gDfeM
zaHlory8JQW38X997l7IC0TpCAj48EHMVH6kqaFG1XLmJIp2ToCmO9a36OPF
hr5kvD4m+C6Oidiz5U2Gw/5sIDEjGQ4loDblWrfL3C5LLwvI9J4STsSEz6vj
OeoVsZzzmZtkkhwRhup8FROhCxJR5wIn5d9N/pHhfZtAnGFZ7zAdHA950oDs
MjxG6eBEntDaxspD7Mm7U36U4BE9EbZSZ8CREaSGAe1RPo4o1HdQqdjwu5Ch
JonfF2OD4IEdzVzlLvxqv5Ds8+mT8b0xJh7LxDVMCjvPjCO+w5JO5cmOyKys
btT5hhCgnOfCSQJ8GIG4kbi5mgnRY0ALGYRpbgbGogaCBFBrQ4wPWnZ+l2FE
XxbgZCFDJ1oXLaBYiWkAK4Unfg07es1sLpsHZplooUm0UP4/Oy53IMFxkcC1
3iyYnQfD81nQlAa3CEoYYbOCQn3BEBNG7hUdenl628C2F7yZ4E3IJDWeq6js
hYJwPB4ES6zdUMBB/WiSvGwZ1l4f/RF3S7kHRgKtUTvntFATDf/DDzNKgNmM
1XUgJUWvpGzjALR7aSCLoETv6F1lyTXeZBlwTVhN6J3+XN7AMEPrKeDdsGtE
zEj0PiI6t8l8wzLZ7Aq2JvbMvD57n8Y6GMmEdZ2Rys3fjgIgJerGYTTp7kIN
n/SQEGMjcqJAnRgdq13Ju+cIYnj+5tnzZ3LyoFBt8yUgLGfB0OycSg+i7HhD
aT7fwYZ29OLiT94BSbqAmllhSWOhjSSRQEhC8bKE1OrMtPlqvi6JkLARima/
KKol8+WMLe74FYJ0Aatakk8uSVG7LjIiq2xcxU16x1a9wMw6YmMsLXyjNvXY
xpsEZlomcHj7FCMNdsSHj29idkRKxXRRzj7INIiTImmGGBRxKLeTY09CTpkq
EMOyh2NPGMZMMoh/HU9OX5+eii/7lMBA8H+t4jT+TE8JOYAGfDwix6m4nQBv
RJdgA2JuAOfAnlCEYCpJ80MVKiOrkjuRHot5627Aj/ShZh7RQwvXm2oNLj5J
ZUPp6/enZ7qEvtFJH7+FbYZxxn9E996+Yd14y9L0Y8aqH5kG5h+VVZkVvg/h
MZrOdPrz2/evnsF9gB+fX1xMakJ5PPFcyVvyR14OnPww2QfG3FwVRI50sXXi
Vou5t6+Yt0saFQma4Il4mdb78iS9yjM6emL6PZC4Gwhy69aLDe8v6XlRx+4B
FN+eHkglfZDidfAXO6bwYepLEj6ZhO0k7vZ5eO3f7wLLD9TZQuIe6aI9rN4/
a8MKRjTjmqC2DuH7gGBTgdS5sdsIEkAdwrhCgG1ieteEI7kBjCepeD4m9Zge
0xWB6UX/ckdLZ0uE7Lw26nKOC0U3vyJmocbeQAC5UhbFxhvSyUj6ZvsLU8HE
DdIia3XACejfsPrQV4gomDWBbatc5ZFFDGSfLQ0pWxqgm8PZSbNB1bmdpD+F
qMNMoYAraSNuL/B4PjW/frB5otFVs8lNL9pOq5NlDlZZ1Kou6djuMofDMsli
+WiVX5ZNwYKOOvYSE4e+jpgHZDx1K+BzegldMVsgQltIcyE/hGKgOPqJWzam
S/SQuRYRtfUF5wwOJ4Ob2s2+YKzCsVZZQ1M2tIJsSefUmGiWuHfSuoTHUuyR
Yvy/yDGUgb/Pz04oLhoUk+s3bz3JXsPgxyKqyB9sbVFaYO/U9BK066TKQ9YT
nhFbBEKhIc2cjSuf8y6LldiBZJORMWyKKEOzTjlL1mY5xSW7EJ6YkMYFjW2u
DrrAlKSOlYjutL4Wgd0vtYYWyTSCMaxsenZA13uZsW+Spcm6UVGQhi8qdl1E
nDgZQGBJd9bFOt8RnwEjHjKFxKW0w0s5L1bnAonwLWQOwT/6tcfEEK0lcp/t
xYw920kMbPG3avVBCsqEcw4kKiERmUyByhtTxznwah5QDJ52RcroObxc53gs
x5lnFSRq/lD8NWtEcoMOATWzlRBUpUXq0cchiFx1QtqoUAyn+jmxSjXVUGF0
UtW70/bladkm/bUx2767Inpsim5JKCkBYH/a1IoUtLB87vQds/7p7fUruD2v
OC7njA1FgXU3HWQfMm9dHZr+Vk7hD6ChgQVJW1rrWYNMVzV6heCE0KFm2WKm
uqDYqtXcxo4G81yNa++eiuXD8Cb3zsmeMvCxLWD24jchWQnVQokxTJfBnUrY
CytZQL/IpDe2TxP8J+mA78CfN4grIXZYQCm4gAeYvfxZnZitIJ8/o9l3nAyH
TDwS1YfBmnFve5fMj1O94Hy7e9eWxGvjkZuP/g4jtcooW6qumQ41jlHtS8ec
DoCP0dGlwdGBM9ANs5OTT5QzdaCa2MqHsvSt4HBfzfPZIqsMJttuDZv2YpHk
t+wuRszkP7u7NNwd6MjbTQOzvmYMpCfOYkp0pORnlqo39tZU1s5mTgTw/LNm
Pe27uisNTNJf2fdumi+TPIj8fP4WARCQ4HwFx4LeswBj5iX9CPYD74ejy0xm
hZIGRMUoj5LYymxRGYv28EibBK57RDhaE5gA5UMenMk337DN8jDUNlOvbUrk
2+VllV+y1QlhNpuFkcJA6W/Ky5zJO2wobMYZpaK847ouoZHSmIExMr4UdJEh
O9d6hppidOoMc9Epyh6EAYzpaXyOBAiD2yWnKDExVs8/fiHRadYQzMQdUNKm
sgUbhyfTG3ags88rj8E/kYG7yWbpkbM2a6TlrenhiOsFOHlStjGLSq2E2qnr
wVl6f4W4XEJUMu/fvNxA1od3NhG2PLif/uv36WI1OBim//4kPZj88AMk3r7k
uM+ff5SNF6vE0uLgY2AsaoFO99xOC7tjx7qVYMM2l6SOYSbBCEKa5ZojBTir
S4JSnG9SfHFqr2h5uaGtkU4TekvhY85ridCCc0I1+m5KGxRUCDuyNhofYYls
pN5Pd3vSNdMn/OCA/ndv8kAOhpZEulVTzGC9TSzOAXrSxLuCsW6ZhL6zKwlK
KJLnVXnDaiwHZ6aXZTlPghgNCeqBYZCJMDsBRHbKFhp4fQ2nAZ+ObO51Vl3S
RvMK1FRQHteDMB4uOtYiFOvpWu9/24v7icf9tI37uJQsLW65knh214V02aVf
ex1jfHj67ARhura6rEc3GwjZHBrS5hfZZtG4vNvfdk3FmyHhvF+4gioZ4+qF
N84mhtXwvbqO5rKmlC8C5OQrjlOzm+czcAXeLwjnK9ICXYqXE4cv5cnYeSWc
MIxx+N8xuxCnBVN04xMm6uNveWxeRBmFt3XOqIO8PNVDt+gEdZovsnXtNNNo
DE7oxAFMy3KRZytvpa5Z7CzFAFCyid0pCOEcCeYYQaAl9m++pvk135XWWnVm
1k/k50A5OPGagh70PCd6YYgAGyyHqW/XfDA2gr/OkRTXty0zZxQXPkAQ/w4F
gQJaNH/OziDEBRJdYJ1WovYwiyS78luyERXZeNm0ymDPQi+EwspR8mcsqgKR
jmEL8lj0TDzY8f2lVwJ0Eid39xq/Kuv67IoWf3XnhVIxoiG4VOKcIr7v6XU3
QIs3gFhCjsEDUXbBPwNxuuvNqdODb+2KP82b7K5l2EdbAkRBUJbZB40KDa4y
s5T5huMBhXC3dGkxsVlkeSKRpJ7i0yL3Jj/YKn8msaYqy+WdAItXqASR1ieR
X3RDGO9UYDyHVwueceUTgTiRBSL9RZXnbJDE7AnfMh6KcPUaLjvlzMq3myu1
XfBnEBNzexjwXw0ThDMSr7GSUy/Kpu6+y2bRoVwKdWKZCx1lHGoXutmG3P4D
A91r4mjFOveWOabUSJERy4V8xA55TiaqRbRgwQkWmIUEC4J/JuZ1UuohcSSI
79iBRpGzoh7aJ3Yk3DSfs01iXdLh3B4mLGeJqVjvlstIQ7AGV6WIL5ZZiwHh
sQShRbfqGxoDCqyYPb40GsRxMYJ8aVzmsuZl1Fgp/0LqxHVncfMhoSGvQ0Br
UbF8H8YDIqjTX1EXSWZua+Yv8KVPiBScT2+GnfixJBR6O8EtjpRyMEndgIW5
UIg4Tispp39Sm+9AxBjDHplaEMcnailp0lA9vwcX/mAGGQ5IS4xDxsYcfpnv
mw9jgsC5LbAh0ZDAuZGV0IjESX4S5BWsx4jh7HZGcB0gJpc/NaF9ZDY+4eUI
g4FwWigbYjYwnKQDQh+nlXM8PwhHwvgAG4mA7JwE0gqhpaoH8p/8plcJFSv8
Cp0/0Qc1GTJJlEB2kWsyRwPBIt51FOTUCoRk81VSSn4PU1dwo2zhqIwNsgVc
ZguuUxWUSE6XtCR2iAhitGATbjkGTks1/go4pCEckiCi4D8HDgknol0k/qK1
AKJRYkGCmYjRbMkvahNzB0NsV/bP1pvw6IWgKTXSxOuvokfCcr6eIimL+i00
yVhfYvQo/c30KHH0yAbrUiU3z10EKIkJUBoQoFgFIYCfO0NtmCoqFCgO9jUr
rqMU5ttJ2Bav7PDd2dlPBXTAVznrpi5sXVFe1WoWqh0U2W53U45vJO/JxU5L
hG9PzLAmCy0WkTMO8d8iYwRMXiLjNHZRHLiElsqEa+HNraUI13YH6axXk+l8
nQRvOg2sk0MglE/jDhf5ZTa7DWO0kcmTuIRIydKwxccyih4UTawejLb7KQDK
dzVroQMXlj5+xvA7kQjp4chH8qD+hY5Lh6SGJRIcAqTQmUlLqTKzqR/12mP1
9raXpgxMUtdgyTcjHSsYq87ydcLy4gKu8fPpZs7Z1327NgztGMsRX4aIp7vS
OyRGG6AWy6nlWpydvt19cfqW8wPNIO0evnr3dpS+eEdPr6AX6yK9n7YOWHrs
QgzXHphUAzKRRF6vjkB7N9EYRa6UBAjgZhzoEQ8lUi6Ef/xOcMj6rm4wkVOQ
t+KTUdrUkoJNAW0rAgILx963kDPsoI9HsVXLmVZms3zdMBxiOheAQplP0uLG
XyHIcFCeI3HTW82QOoNjtqhHoimbRNOrD2re3bTOq2uzpRtPzsBNGqQgZYj4
171ztgqM0l7ddOnPmbo7fVI5K7flYj6S7G/WbNpqFyMIzRtfkK8Tttxogch1
RNAsLwE21fRCtjzqiCcqQXcOd4sA0i+IJXdLHhpNGhz6F2Sx5OtlsbQli7mN
lP7UHLC/QkyDGnbqonLf5TXp85YMF9RM+vSN2CfppXHlXkKRIm9MZHey3KYw
yjZwrUo+Zuy9ajnBJk6sxifwtWXpfmD+wKYZabxf2ls72r7kYefOf+WYMXIm
g9BNbq6y54Imyr5VYyJA5e53VL2kn6c3n0Ol6vyChRCwrFCzSuVnOYbWPhLV
hiQYLe+b2LT+7OPTGyfl3GU20enUAEwU/ZIgCA9LZ6XEhg/SQYWikvAzQo/a
rIEfB+Df0/zpr4KtGvVsd0OULxMSHXFjpsfP7HT4DzOWspkO0VCZxEy3cxAM
ztHyJ+mbkumPmEbDEcWIKMmNSP7KJT7TPAMuJHQKn5sLT+kBQjwOD6IYbnnG
N2Vg0zmMdp1EgOqEQkRP+9Er4IMdHONn/OhzRzI6Zw8miY1mSVbvhqcKvGZ7
q4VgrMJLXlkrniqO9NEw7oiNX1zAzteV1UKbco8f3UcMBRFbOJcv7Kxnoi9f
ND5Gb/KEOMgB+11xMBADn2OKI8zwVRctvmE0umFyd6lYXhJfTLZK59n8VrwU
orcM6FcO9r/Lq2B0ytfvdMpi5nxRTNfxOFA32QkV+kRwZXETWGcUYTsy2yP6
sZpzOj2roFBcPBPV2PQEDjyYCtuRVJyl4ORHT823xeeFWSPx+ggf/3Nr42IU
X7ksoRFF43kYh5eIz4C4c/lBQgSChWFRzmZWw7IakbJQxys425CdnTvI+ofx
ENjHIXHFyg6ztfGAdvqwvVW5Gmfr9ViD2yKHk6WOchGIRItrSP5FMK7FE4A6
IVYKaKMq2DZsgviS03PGIXoX/9yCWd9wqIIbBdexzVJ1LgQtuCEggbRYLE/z
ObIbgCYsVTG9YWWc8xEklq4JPRNKgsP5hKwnCJIxi6Y3qodWhLuuvbvvnnPp
8PS/8SUf9VooSOoTE0VtxqUWtE4lngQTuiPqgKp2L43tbBRS9k3grt++mzgK
I3SbaOiPGWdt1EP2+Q+Q5OKyqjwiBZYWou8bdpeM1Uol2nvJpUUUepEv28QG
G+3ZRvIHv8ZLPd8EuYaCBDZMIvQV6ksdasOqVQSKJHtw8hvn7ai5ImDqFG0b
8aXyp69ZWMyENWgscQDjtdE8DwwfONgETxnZgVlskAo8PTRoEN8oZND7rGAX
S9oL7R33N16kbSuzhfRcKv92/nFdVDGDkYWLN1EouF2nnkmEx8ggFh8/31iQ
V5VfQD9VwW8VpZhxTZJ1XhUlH68WWytUAiAciI9Cyy58yG9vmLvsIN54ZyT/
i7hj/Pvd839+//Ld82f49+nPR69euX/YGxImvjNK9F/+S5eVxoMe/XFHVPed
tydnL9++OXq100nhZHteUyZTxSMSPRuToywf3sI80emBM7KSb8yh/VYztWGX
Fv+15W4zqZaymq84dV6/eFFCkf30DWe+c1K9eb4v8USN1v5pip81nZ4Nc94Q
BlO1lGo6u+qaxQaSDliVH4VHBgU12JJZXgxjA5RYQRKU+vs+PZI5pOSZZGtw
VjWRMH5/mZE0oXUgB2J888yevnk7WJSXA9CdIWM9yE88sOATalBdxlWIJJGT
dizq0v63UlPoVadeQjqwGgquVI+rrBAMyBLdGVc4YRDzkQMPUBoEBNBsORdc
GhxZO1V5rQFV5XwuBTV42SDcg/0h25yFAm95C7E5HIK18M7NWmwQNnyy9cOv
GX5/qFXkYgvbjI6lbMReI6ZHhw8wxoDC4ecWHImd5muYThEVK+nzS4S5/gne
a0RWrxv5DhiHS+wc+z4UzYHXKhLkH7iUgSvnKsH+N1bwK22VccGhyRF++vTL
D3TP6H9R7dVqCmbFUpB/nq2bqOCV2o0ktqJRumompq51CSdPOueckJLPX6ud
Oce1qKBsiokoNZsIg9qMNYmCC7of85wNTG5qQvPux63l1G7E73Q0jkgIBGfv
7kiDEH6+CIFDS9baUjG3VT5BZbIta95WgyXlG+wjrJsr4j3mIDMXuhWpcJ7/
AXJnm6soylSK9AVLTwZz4jczoJe93gmw+1G40arzpgyc6MAVJ1pLEbsJMcFl
UcvOaP95wdzPnXarlpg6DGCyt9pXeS7oFZBgrb/B6C3YYv5Enekwfcl5sjIa
wiplvVIksO7UZ7OiAGmfriQuLXwSyFMSp61VPZ6dsM1oJWXebNoZ2za1clem
lepc6hJf+VFQEdDhpJZKwmII2lVRs2oflhNjtJOgjr4Nfxet1PasK9VSYAIP
21gEi3BrfTfAQ4bGkoCh8Og0IBzWdvhKgluQQd1HVNEcpP2Ks5bVJ5H6pXMV
K8YkAaqynUwYajgkhxShBJ7INjJOsE4FcVRIznGlXaK/UkhHIArpwMfHBXKE
j4gLRImnYXZNpVWQUcCQ3hH62FRcbbeEqAz6LU4thjZLnWaIT/pI58Bf3tY1
HZprquWXdKVNdQjiRghRZeGK/RgoE1VWigcHY3cBJcmxFXvDsV/wKIK5BoRk
xx29FAPwPGNHCAq4ZejgH+zEGtEOuLBYLE2OuihWLmvdVZaSYiz0lP37nQUm
rtRI2znP1iTnjYfE3Knp9Ix5K6YEYWPnkbozrGxLXH2HVL+LHHUWMaW+wbW5
RGK4YzKuxWieLpZwqqUroM0YUROazIjhItmFKNn3ffzDXMBefRQLMM/lVKFO
hB4uJ773OUrp2I9xDkYi7MrvQMRZR7NYMJfFgBiFkfKkFP/r9/Fotvq26Ngz
SeYOOg4KDb81ti2SdhB1+uTg23TAaUerreKqr0A0NV9T5CR6LWHO4TUXx5DG
P0dXnS4zKgsp2+hWYXIm+DCCWoMXICRdKKVJ/G0ZsXilhF7/KGsV1awKnegJ
qJtpmgLj7EVYq00kWjsxdoCCYMKjNfdBf321sJPkyMeBs93PufPM1Kq1gpgL
kdpbgxJ3uMWfN8XsA6RUGEGydYFCn1Gshx11nxUyjA/UXKqiSULD5hZPvUuj
iM2dQulWjUp0SRgPfVeaiRToZAO1VQLywJFqqonL6TA3h1gzON5UVG5JB0mn
hCsXkqs3Nx1G0iqmkvyjrmdk+s+Q2Kh+lFX8paonPmhXVg8fLF0DRAsT87DV
X7BPRwFgw3hAtPT+kRQcv1EzhJ5nYpKQW6m+0zUqamUSqV7GSbVSY0/8LkhK
EigWodrOpGN7yGxwF3vCyCI1frsebwn3PeNvVfHZwEgf9USv0WxHIPq+4t7I
DDdWexBBzLfqaa4I7Ci8AH0mg4msqVt3L8HYTOhlUWA/39WtSCinVbZMCIdm
Wc5jAR+UPrhIbZEh7Ua9hcExaSc4RirPBkVmf1vAjLj7ubDvrKxID+Cr7C+1
8b/tdGFkwqr5SlKtY2keR12+L6+5unURQxd51nCOpUpnuEsIKUIVQRrIBxTZ
CxG3Z7LtAmF6HYGts1ujPl3TdA6vW0+ydX4WxJGGEpbGL3fPUOMKvyYcOvXc
thNsExM/5yt0VptWuVHa7pvSQvEaAmMeIXfbctOoqQwVAMJgvcAHlYTx4TdC
8VWNR3kCNmwRUGluLqTLkT6z2WYN/ou94KgDbVyzESQBIfjwRw3AucFSulJZ
ZlYvBQlb2jqrqTk/4I7VXBC56qZG8GKS4DsWxml21IITZ7UAbNQK2XE2FW/F
cVgSVivlx3xl1KLFcd/QlE2Q3hafaj6YM6ggpwgP7tJMI5eQccccQtwlkf8l
6sgRBdzZp3DkIIluYWk2Ia3BG4QzQ+H/Blt45UPddMEu3Mot7+1KjGBYiKbY
RRIur9yEyFYpVEYLXyXZarX2USv28Uo9xtABYhG5Sq0CGpgEFhyzODltT2+8
igdFWFiRGDPsgPF8SXs+Kckm2WT6BYrTeWuO6Ts87pL04SlqnxT1bcIRYrfu
qIJQq+Bzl9KfLSoWmbKplJy4dZEYKEyMpJmsqjhPVUOTERYsq5PyqhKxp2uk
N5qyctHIKAaTuJoVAc6HWbKlD2tA2qELg2PqAvuAiqcScsrz4I9iKQgpzkT5
TExLiERoZeGyt79AnO2MC8KahMTqnxPighPVS0bKRitSDWqG+8nfIavz1/Us
cLMD9qyzzbRzhGHVN1YgQHWKC/br0zFCs56JNIuei8I2dn2MqHnMpZWU79DH
1pHEJcmZhvnFONncDHmZm5+k0kRriaHWqQONJ0CkbgD1pdarIznjWn9VDayW
qvrSX8ZSoHNfqr02nb5jaNFIZAd3icWs9UoghtQKliAiEFivUe/t2wIgZWKn
5LYGEkeWeMgGH4nNMFILZB1Y1qJFuXDQipqJIaRWf3fSkgc547pDu2BR4ZoR
CaK3kr0HZRWgasiOXXpoq1JGKI53SmW01WNWjbBYtx/TknVZvsSTy51RjWGC
pDcLxE0ERi6araNrj6ScPkZVf30rX0yAFlSZMuTl1h6+L82dfW9GflOIBc6R
P0+cddb0KP+HYZr8jOStuKaGOG9kraOw1uDg06fthUg+Dw8leG51XVTlamlF
qt5z/dBP3+T+d45F2eB3PQs2r9xZvdbsSdnlquSOchIIqDUruYideIB8TbuG
hGYuwVxA4XX8TPoZsOF2DHOLFUAVQR01TRJ4v/T9sDxqIHdP0GJwa88lJYfg
wP4KhGXfvtR+KSi8b1XafNXaujuhrw1mybFr119Jit+pJd8H/LlmQJrOOkEX
w+2NpGDCsZPiPJloX8+ObVP3793fk9ZK/T2FrRjp8+M3wInZyspj5x9BeHi+
qNY8JzFaoRmN4MBlk4ZUs1QioiUr+niRkew6k5Xc23/InZ+OfOmJ9CkpCaRy
iRKCJQyOnj7XGnuPHuxL95zk1f1TPYx797CZurmlfdLrQWHdbqGy2YIlYYxq
lf6mUqPJe0JqJajqUMIIz4/PBnvcuwT/2oeuhJJp8HOKrbZ7LRYohGjb5qiH
hniepNAdP+cpWdD3XxLMP336R2zy3v6+L3x5H2VCaahF9lHNRIGOKpG1Ckgn
pIR1VQPQ+3kRAwEfMBgISx2hXnYW78cN0FaNcQJSfVttNVx+W08kXP89h08h
+pxymXiruw48TNsY5tm+5WPVJV0MxNWuQjMIoV/iA5bkPLhPpLCkfDWj25Jd
ilECAhEcnn4m69NFrLpch80DxH/Pr4ziQBGzOLmSMNqqBvEjuC/mym4FlzR+
r3ZdkruvC1BVS1mu2IwoGfZcnjmTviKEVmz8u9hwOTymfJoQ6boJOCsdr9AF
IaqxyrmZcI9cE7nMh365wCvJiMloR7RO3kBQC5TVSKLbB99KMRD1eve0AnFt
JCCEIv5DubNXy9yMrDfw7czmqMjFpyHlOavi8jKPguo4f8igyO6+5nYteSoZ
OglCVwWRvJCmNCG26Q0UsP0X0rE5xArH4kutQqYZO8t1K/+2hSJ0g5Dpp03V
mQcHtiL0vPhVy/dMmeEzkXzKnR0shjsx+5cXqyWaUuuter+4Kq1sFG7K0lSa
CTMREneKS7WVc611EYHElHPbSW43HS004Y+S6UYziNSQwILTtJsj1af8Jj7f
asHi6OnZ0buz9ycSFef5DUIcpLkfA8NLzZDngxA2i/NVXsqtAlBqA6GfwXsW
DzyWAjp0JXx13sC3kmakHJXgRGjmUUom6CSV49msTIe50KJOHKGscXZsfPXR
xiIUcp6clAl11XbVvCaKED5ilG5lq/GHIOtJ1j0UqTlvW2JnxIwDmW+EpMZB
fmLWp3F46/ZOlP1sGR3csdKPLOTO9pCgLBRaW5cLySjofi+lyHJt6RCuUTpR
whp1qpmAXmgO7YvhHeUeToibrdm6Q7dJWFuBcWphPr3dHziar+8eK1+qXVEW
a3iSSaoZ0RdPD+RYe2pF+6CTyHFlMdXxJlS1ZJONWh8u6DKKgYPl5OLPXMYk
ibqFlCmyPDlyEwL0gltmGMsAgVvVUpgmnAstmKulEb1eW1tfvl9/vJR5IHw+
ivNCa1S/peBdpKSaTBnzpVK3ClwCQNXNxNL/cpVYSW52PWnV+DDINUyiuxFM
b1/iUaoB1lI2M6wKKKWBcKmVLrQIWHgxKq9JtuWrbovjJHmO0IJW7UopVfN1
VUx9+xtnvLyK8n1WpD0LAMWfALSMoMFnY6kd3vOfdKvWuiyS3gjrtAM2V7jR
6iyZ56SvVmdmRTeF4dvlD+21giEJqtStN862L4KeN4BbkTsxIbJGqOJ5Kxud
6HHHOzPQa4gGw5obPnT1KLpUAYffG6/TQymYd/SF8bAp6l0oMX/6JhSgjcBE
AZ0sl//2JsRWUcxd8dZRDKRsWrdKfdR22GqkJ1/Se4fO5mpLd2o73eEl+IWL
M8Ed5qyfSyKYejByGur9ty+dfVXPKghvC4fkWgSuv02n/ZD0JFKiwjVfrej/
bd54pGc+2FgJXE0dFmtX0fg1sVeXRLFZwZg/WBaoLcUW1fElnQkkEzbQVgld
F7o4EgDNCQF1vFtEfTJeL9GptGcMILa0HpZC+OIfd3yzJqCj2ooTeszyIU57
LXhA3xktVosIxEWJTN5W5R+TEVEelxdjaXUnQapcXTUJ6toPXf8l8/CaWOQd
n5CzATS2eEIFw97Ze5lYgxmXOwmmKhkxVc4Tm49QQm20X4BWMXTG3SABKr65
UcpndFHZDt4j0ucRffSSxRdDeknC3sq3RmE295dJa9JPWtPfRlpPOfuzxRHV
hubKu1s7oxkyh+lBxIo0ajFgR0Ejc5x0gMn4OeQpfQWenRwgsZO+IgfwB2Wf
g+o4QUyr1jiBWBqW3ohqwrRkpmuzz0Ywngi6fJO6ku+fvrEK74QPPxVV3Yxc
KFePWGPNqqIkXhUQbDcsozQRJRyZ8YG2Ebn7sA5fK80iYMU7WC/KtRRZSa9u
1ygTJ8Jr2BCJLs+1+cpFEBI50vUkcrXMiea2OLy1cxB69MeUtEOzUwJmVltT
n/8LPzek4oqY4moDN4Jcj7QZ7ivNFfGw/e5CIVwyN2jytWtaNkpEoeDASwDE
1c5ww4ZuIywmGpmRTSv+4UPSU67h6QWsNusb6Bs832YdVI+4thoRaDjFtQJ2
ovMpNK8iPAs7tHD2eCdR7IMr2rDzYZXnO+aB5S8GtM8hvOWz3IWiWhyzubjk
s7TzWeKZWQql94XEh7rupCOjDRFR2+Et7Eh5KVcyyFC8KF3uXM8N7kst1xtW
1PEN8/eXnbL8ljbnYTtez1CHSfLv9B8CPvUo0ic8r2bt7/Lvio7yKvJPHEHt
ozlSvd0RWr+6gLIYmS9WSUde2AmKtu6MtlK1q2wulnvZVqDpTPMLtn+Fddrt
zByMVm4NV4j62xiBsWGtC0gwrHd4O/phvgljDuIA51z3naDFy462k9bTl6M3
w9os96cQgP5JGgyQjsNStsFBKHXucDTp5mYntzPiAtpi6iEafuliVgkzbpCV
Aret62dNl8BSNNpFywak0gcpKcLsnnJiAVCMjaeoPgMTFkTb2wBXQluIr/hm
vlK3hR5eLL0GfyMLPvO5Cau5UE0wtCtSIml0ZJLD/S9WFRipmAZyRdCRJIyK
tF+jqQlWqrH2kHxIVFUJdRRpYrB8N7mbA/oAdCZJvgIMAvMa5FyNeAAqjawe
RdDWUWiNS69DuXG1vvNeYUbnOLUR7Ndw7iWRWTPabxGkyLCNJ67J7s/DpXtE
J9PCA7XKSlFV1cylk7ZD8Qj/Jr6ER+ECZ+okvOrcV0bve9hLwwkXwsaUMjsC
T6BL+mk5SednonMGU0bXcsuUyZen5It/wrtaETjdPYw2jbJwEX01th5dbV5E
fL/5J7vkOObbIkeGd7mpUhL0fTa4TGsbN4FmZMzF+I5rNaM7+SreJx1FfxuD
iN4abKVfw3RXclftv8FXwGOoAFEp8thb5JkgHAXRmwhi2G6vJ1mzA7qoclHE
qdbchaZ2Gxt1JVOnevC9ioV0q85TwIMpUn5giBPDIIktSAyZJM5JKhKsXSmw
PcvBkxsZCE4XmZiaZmjZnaP1mlbko+MQ23tGqqrGyfJN9aAYpQGcuE8mUAiR
T0t9LuID/PGoCpQtQKw+fTrahzs65TB6ME42X498CBjiZ0B6QF3KipRpTsb7
wJo8qFFWaeRSZHBk6HmMYZZsAfVGNIPwS+5TkJ6VVpYCBJ1XA7aDJDAO4TR0
dMWobvJYx2BHi3VVSZgHeMuoNHQMupA79OP2narFYAhTY85C7UkZXx1PRKyZ
g0b4/Ebmf4jmRLsj1/j8GceRQCy6gI7km3Dt2D0PySW/I5NJ8rikY/nNICP/
ZLJlEBmB+z3rCFI8O1pdMnD7Me1HGa0y6J2TnaFQfKN4PPgFq7M+/0uXVgRU
kpflyeQgWCiThBYEho5IKhmO5FIH5YgQ+19bhCycW8d9L2x44zvh3po6JcFP
mgIYz6ZWQ1gdnaDf1WYyQYOOHpn3KvJsODN1MQnURTu5XpXxsKvU9rWoSkyc
iM/ZY3vbrtM25pxafSEUcby6rTUIDnnBtRMpOdampQeH91uaG9fiaWbfcG2t
0ix/lElbYBqCca1sillorLQrn0QXq6g7tCS6+QFCsmtDiApYrnT36iUqqGxP
9KzNRCLdT7L3VqW6V5Pu9ATm16X0gJ8Vdb64HXWVJycUSwXk0EIlekXlCxwm
LE02vYQo613kyKO3USVfDN0Ks1gZ2sgms5XzpWVkwkmc5e5W3sUxB2pPODvd
S1Sudvdp5ABu113QzR2DVDyw28clpB2u6e3mHlWrzRLlEMrA6Q61x6nkHDWR
w95biKc6rrY3L67V6GxhHwDJTQm7mSPXPXvy5GyZfRwE4tIoIjnDHjhE9Kk9
YCgPQdmRonA9DjdrO8PNqvmtu3zr3Bn8jlH6nXZF22eXfIXP7s7IxQmnJlSr
vn6/YZVqyVqC1tXra7QSBJtcOhr8eUMT7rpoHhiRkUhUq3k9/TWfWjTCKH13
cmx/DCUf1gU/JZ0MeEt8n11tVh/ceFwxA6a6GrFmS0CEgxPg7fplJcqTOVD7
b+c2Pyqbk6rNrMgWiSgDd0S5Sq6pH6QOlfyuCz+JrNaSv7Fodwq+e0LE7i5u
k3nJDoob1JEKY9zbe7U4VXGyqO2gB95FLyx+TLdAANv7QERHRGwzLW73J2i8
9cpWF0hNHEcp6zLKlkRjhJguWndQvjNuGWiF+5el1o89s/KcksIf8zgNTNwW
StIGRuLKpQNz2AIh84TNKGQElcy0/VgbegmbhoSYcmX2vBUjILsDLRcQ3XIL
SJuZmJ7VRcPhGZ+zYDCICO2wBmu0Izkwd9Ahjnm3AoZOLOilOXLlaxdVZY76
UdLp18zJmRFxCxIIJSNEkNjpzDKQq43qo1qRn6WFZpyRBB5GdpOqYNbtaQZL
VdgopecAWvPYrjye6iEEJSfVMmcxLhrG7XGROyJ3zivRWu3082bFYpmEwVRp
uyM07GhaNNa6luLVocStqlzDZ4u6h4ETRvotaJN3qYLETtoraYQYboHdEsFp
oCW8uc7VSoba1VJmY5S+OH27iwr0mic6SvKGFOzkJd+MOg+OghkIkQetPxLI
TYBJtXGmQoPNRS6nkLiQ9jNvihPfHTwmIm5dZms6lfR//cf/Od1MSXj8X//x
f3FQEUq3cVkJFjLUNyIhHNLHiMOFgirh8L8t1w3AK+ZOlpE5gm1GSjen3S2K
P2+KuRXgniP9kouPcA6WXqnbfh4TtSWWkwluOiNWwn7t4Bt+1Vllgy5LHAnr
gkA+NmFgHRPwVRLc0P/1H//TrrbJzGzGLRuXArTBecQhT9xzbUsY66YOVAVP
XIyMeoHiQlI++1czksQq/qY33s7UhtYWtw6YDrzczGHqYtJgvJB/M1owyqgO
WWcFtzRgCpB/lFhJxRTreMXV3cwopoUwBLPf5eL0JvR5Z+7en7BlSwB6JVlH
nPQTfDiu5EMUljDP6hiwcukrknkjsiIzGIlgdE5lycjTWTS3aWARm9hAK5kj
yebsN4ScrIO4OqPvjv8w+fXNMwl2YIIq3dJc6GtkslWzUBwMw2H+dRRm2VXY
biTyH6louAmwf4t1y5MKZZzMeDn+7SuGJZ02GBXScbGyzNrAnqVNEVIt4KND
O4AGLybhKrYMhKCPTHOD0N4Blcnr2yWkoFtH7/EhrIS5tJFak/wBXXqbKMVM
LgfFxSDrTUXKvQsDSZ9ZxsEXo7dgPpB346CQnugtC9r6Q1axpRSoem3//mxY
jz7yAacYHA/Vkyht5P2j8cwV8t0WLRUzbIgKMpSb1RW9CFKN/AyHaLMc2637
y5AyeQoN5Bz1mHFDctRykW7QLXMX1/1yUSOSjW82C2VeOHiuButNdGIc7LSa
x4Rxu/Ee//Akfb6o856sHS407L4E3fryEtKt3c15s0Qzz5VSCsyQUPrRddcA
bIL+07YXt+ikV/2jw9rryTlqC0R9H9Oifs01aauuN0tXFyG/I6y4lfzlA0MS
j1WCUWh8GlQFT19etJCvjXga65VYL4DpLQsFHzld8bI184h5VjxgUMQIyU9J
0Fcgkqbi+BlGac23PY/kvnbx7uktmLMWqtJK9UEk1SoyC3DfH+RwIuWwDjuE
xACWlEQWRYOaefTbKLF+kjUpWyz+8M9BaNg1qBn/OmQMQ5TTebnRquPdwo3N
VdDbZtO46n0qnDfStCa4JxiQR1aJ+L86OI5YM/nzANd5iqi5mDPKSVscibTf
Vnw+RDNuA++GqZclHN78+UiwaV2unNh1k4O2sGWMi5yYIVndpPHkPVN58qyX
dXASk2YNkl3/LclyQOMSVzgqJBwIPxV1p5GHod0y7iiOXf73t1pq5rVTPv7H
4BvVK8ZeIxlaYsKo0/5XQqpgZm+1uRKSKXbzoH6yJzRRYIy/r0l7tzvm1ct3
IiMbv4PCANfFHDKAxUawIYeNGQUniePKJ64qs7o8oQdCANIaWXW6489sZ+Ss
Vmr2p3+FqygrRPakL088iQ88uocx7zNZXXfDZguXSelpQcB+juPxlD123Mjb
RuavO9y1w3Dv/j748uwq96EayJHiFCEuY8Lxm9s2KYXupduqWlJ46KI+j9ih
dZiOuKQaWVbjv+RV2Td+IpaZnFg4Cc0LLshwMmk++iKBh2Ht0ILkwzm6ViL6
9KKxQNlWOGwQ/XYykT/OCy5/vizp4pQrc/74WqcbzZLh9NaLQriFH8f6RXle
qhYikj7YhqpR8MKYrD2elnXmShEzUsYdG/IwEHsIvuZ4+cL11tO55RtHqIIk
pHRQ0YV+yzn7yBLwaUjjqh5LLn+XXEmZE62OIC+Jmo6RdWBSBDYzRo+qZhOq
lvJLYhJ3wREfh+hOFbdVOtRabD06/MDazItCEvZ/SncxkHmTwA3xZwvJMHAb
9XrbXy1uAxeewPLHxMqPbM8H6jfd6lpc3wGsIu5D4bI9g1DicHO9HPeOuMmt
I66roqzarX7CFvIS7/qVIAnH9FSiTbR+22ihw+hQghDiQPYvNS7TDuZ+9EGd
i9a640IadlyKHbu3FVsCx9Uhh3r8zSa2gPD+eSH4a8OhgNqc+Vsck5WvgqTz
mXmrfBzlz64y93CcjbnvrTyke/9+Xa5io34Wm8sCK2K9IphdlY2Fh8dphkmQ
9cdKeCsaVpV5zuKUPud+6ITzDBGGyMGV4VXjWMORiZ3vWaX0RGgw5JBmbySz
vDGaaXA65ABeZyzVLH+ndmoVqLbqeZNzUUoWGrhTV0SiEmczM9wQ9uoi97v5
ieLFdvUw85Wm3cKZnLscCqaaXDjRwldM/Hdtz8yTk7GZQ+sTa4CD22Lhzd++
gvttHuwOFjnZWcfG43PxVWyzUxYLr8SA3UiRBG+HlTh+5rMrbpol9DNxwp/t
hP1/GkRhm+GXpC52Ccgy09bjNuqWdKIHVuJi8FZLWgNyTXkRkfQcMP0Im/OP
+cxFMIe6ar4OfN9vVyc8DpGUZnCijH54yEF/JMUMAqHjyZN0b6jRgF35qy9U
8kkaiF7c16IbGMTftX7TN1uj9czQflGDFKMX9Z0Ws5Sox/CXv3PbiwUvG9DB
gf7b/d7Q2juTT9LvdyNHP5MeuTRaABjk89M3G/p97H4fZ6sxgZ1IFee+x/XZ
LCPG6ScSMwaxrU5froomohQwDdBvpBEWf5GMJtAoo9kq1bw79YdP23jpPmgL
AdKGxKjWPIgwxoii+1qgNSlj3yNQtL0kwSNiDShR6/97ko737Qnpd+cMQn3C
0q899CxZHu7ZA3df0taDdlzGnh7JG7pDI696kl4aJw3PI4YQ1rwBrJMeugyv
PP+ahpAOSbvvz+pYC8x7weCONoW8JSa8kd86UBhCowPz2NBmtrouPxBq9qx6
iQLFa3Vp1dpNPEHEA7oRzAX5wlxq90HoTu7MaEWYk96NbLUmtoAYAC9CUtkH
Ha+uN/VMMLYIMBJ2dt1D1jrEJSRuBItNtcLEJ76UYogqNE0iVNBTnt+B2HEU
kIjE7ReM2L0pbwZD+dzvDFALLhhhJ8qjyi4PZVey7r8Lb8xf/5q+rN/wu7LF
wcnQ9tC5WXxNoJz6F1pCdMr0OqabPbcwbb3nCbHczBad5ZfjH/3LURCpjXxn
HKn/NgzT71L9rbH67vs+aZXnd39t53V9nI1O8zVpMuE9jVUaCbz8zuOsR9ke
bgfSpSRaWLg0ODgxm9kdt0oot6sGypYzN/o/KqVuIU58RfQGREG9v+8BxF//
GmUHRO8/6WHs6d//fZxOEECbJrBTcb8NLY7wJ0Q6IwjFWzsC6oNdOlbE0uNW
+m6+4KJOmI1x1hMrmD3EsmW76/AGpbPJC2WSbf5wQey4hgt5rd0M21Ru1GbH
LZmBJWsuglOs+i0JBDI50L41HBpiHi/yrArrxIjVAiRFnNogpZINmHHB01U+
CclOLCn9/d9HlO/3sa3Li4jhN4rOMlxMU/pJ76psrjSfyl0gMSvh3IwCA2Qa
GofqnV6xjCgLR9QF5EJ3FsoREvLZIkmjFp1Ruh2Za9rJf1266lb6BpxRrPVc
O2sWFOr6/ZNUOnROBBLfp76+t6Q+o8QBl4GaSuK2DwVUplivN5UE9tgQkWsi
rT9MWUERGXJQD9U5pZF7O7aWHfscmTBcQ9JPNdCmPyt4EoZBsUCYx4rVJppe
cnz7Y641vi0uVxCESeoQ6uB2IaRauYPLeV/l2TXXgaGrz4cv1Q/g6Javd0OE
c5D+B6CmumhaaCd457N9wotmGNce7u8C7O2RP2PTXohySUhpBUGkJS/uOkQd
CWlurSFUMJ5ZAa67Q4m/qk4XqZZ3xfBpIxJnuY5kWFd+uc61pm2iMW6ojCQp
lcUq8KZ3yPc8jEprBXJNklOErcy1kEg8kYbFcQ4x17Pz2b9hZOFNxfEnif9E
M+j8KrQdH6bqDMdYqmBE2IIgLzOQOAd0sy7NK+qq63ECCfZeDYPW54rJJfsd
1DBeSRlWM85M85ZvkVtH8SqYx4UJXVH4VCL64fFVPvvw8uLIw+GVnKkwpxke
S05Tz7G7ziHFqmXvrotG8hpr7pdxtiWYM6s/BBHogYO4c3w/phy9SNeBwccM
FwcWGFyDgUeWNA7XYjihFZ3IV+LX5kYXrRXwxNoNwHu91XH7fXokgtU0vyxW
Kz27+JDd7Ky1GGlz9BFplVuaDAp+6dc9MTvLci6FI7WAMjfFCGoYc/ZJ/xLD
GFPgWbRePmC1ulVWWp6zuVLtVxOBKXAfIQ8dIc3uzjjlL4gtPyRG8Bb99Xj4
UXr26sT9+x0rxL4+jf1OX/8f8IP9Kv70EymtzQ8nagZh5GTvyhfws6uv48va
YiqSICQsIDGtsqLE/L4iK2MnqUNSAdcxTL0Sk+r8wxKG2FpUf8yLl/zuuKpe
Z31Tvufg3md0FnSFA1F6S9QHS1bxe86I9Q+GZ9FjC7tI/wFiTRAqYRwOGs6x
2xet160T7Gz708OkTySEyBXp0MEKR+l+mEe8LVjtWVHPNoKyW0uMzd07FqXW
DRSgr7uBAp/Zsp61CEg3dkijBtJu1ICLpDw7fTtKXtD/SxHeLARYqpB6C8vx
5PT16WlAOjj5QnOauuEDiStpoQF++NI3ZI/LHYy+FMDQre4RXRKrR/zf+2JE
/segJ0ZkmE5zFwH1m6IgwkgEOa+XrtYtSIpIeXRiUgIXxbfo9bH8/BmNBLuK
kgRlBdbMWNGyqjo/iq0fwrY6fBPOUgtqBUkOW39BsKgyZPqqrAPDrxDbOiFg
Lsq6XypWOS9KEhRRJVM3C6tnsgseO+kdILMaFpnWAnGCiWZ9+/ZX4lrQqnTJ
jHuBBiHcvaK7FpOp4MubFRpDcMEuxWQnEHF32pWKg0oYXOZGi+SGxmXfMEvT
xXuQQJEoYC0hLiga+qdd9z+av7gGLJ3RfhQ6/kEqNHNE4VoTEKaWZSrV3opV
K7syixLjJU1fK+b5erzKlIMKvUHRmyIoU7dE2Y/qzinuKod24jxIwL8gkMMc
XVFkj6iWCQeDaAVFrmtA2mxmPis4w6J2IZ0zYVFZahPGvKjnhPAu13xovSvJ
+VazHPol4qFcrycttqCRZEQ15kzuosq8velxOPbAJh9rx47ivLxoCY0+2EUr
RqJAMZHuS2i7Qi8uwlQbHk6y7zNuqtApASm0OJLZNTBL+/965GyBZpQEiXa+
SovZp05GLtXBSIlkJwQr5vKC6IJ7HbiOe7arPT1a4f8jiVbvbD9p2RpUp0kr
bTzhdtwtREOn9vroj7zKmoWi/YnTKgj78Wx2VZa1VYS2EhJhoncfFeSq85yq
2xL2es11XDnMpMFe190OlyRQCZC0Qi3TzDqR8pgb7sQWmgeKTsCsBoTSYNz3
EZe3bKvE0kl6KxDqDyjuc+X4WeHKH/YxgwtWJdr4Hp+QLyCA8z4OLC4wQwDa
p+K8DIwxY31EgpbzX3JPJrmqBcybGC203wid1+L+ff1Q0Pjh9Oe37189Q51m
SRHnqa3intQHJR0kKxYbKY40L1GljwmYq8z+XS0lrVdzn3BkReekdKdo2Fy7
ZIHezlZj01QQWSo2EIFAU+00WbmsZImSDtcOrK1padxphcbO1zLA0GrGtEvX
EieWgp9hjaeg4WzM6Q2U842oDLncxIHsBxfXddbh2/vg4SM0wvCdzlikYLEn
GiL1wWES/hAF23TdCVoKmIHtq4KjO4KvBd6NwYmM6sAGJVrEa8tZkWmd+RZB
StCGTNrUBLKqlpmTvI2x5G24YM1Ypw0LR5kHcRU5nDjSaedEwkPfcQxfpILv
SI8B+UW0jOm0kj+5wMZn4UkcPCIRkD7aCqBizZkPkeOyAOGOq6LWDFEfy0HI
zI1pIq0j+RqYtTUVrSMmXbZ8ubdVwrSITyymO2i94qrwgP27ZL92/q3wBG6M
l2idSbTXKKQeJVu6uuurOZuCH1pDrdBnw1z0JrOivQH9YitC+B3jgaS6tEPI
gzgsCQ0CpYg5omW8Cl8NouR9PV9h4xrfCpKQBOWK+Y79cO+AOOSwm54alvVY
pR7UIw2N4PZwnbIUGjahLmP2ZXqXo2nGPpurN20rzs9qddluNde2RlWGMXV/
72zXbznuxm4dxggF5iRFNrW32Pr6Nz1NujlgDDJk9IlE+up6OfpPMo+fFRld
tKVbeuMekZbPjz63691pl2k/hr5I4txySfj5FwuRQsieWNMsiU6qAiPDEmu6
IqYbFnb37Vak47Q/ueC/2EGa/oH//N14PP69a+edjoP/fscv/NV9nrb/+2v8
wh/ufEH6bEcT/O2mkF1o1+3zZ29/feNmsin+7UtT/PVLu8ALNsXxu/cvT5+n
8RR//ZtO8e75Ty9fvdJ9/D8zxfuT4EC+eorf6ftbX9j6nz+sse/08w8RPgRm
tphCvF2beBkU8o+Ixri0V8L6/SxAOe1ksxYmz42T6iBS1JqATjfFYo5S6ExE
ejpz/8hOR9Rn4Q7pbNqwrnXqyZH+tJX0l7SqMNyJB5qUuie5soy2cnSRR65B
osvsC/rp9HXXtjrL2pCQJ2g1lYdu4Gr/TtJXJJDdFEF6ZlCkEyp2JlX+Ei4e
6rv1SdkF2+cVt942Yrqp03hy35f+Mm/CTovcbcviirh7N9EEsV+0uuTwc99U
XhP0ExGnY1ZQbVa166M5CtqAhp3BRs4zJJBJIreTtNT20EyDonN+M1xQM+za
MwdNY8rMfh6BRzyqrj+1Tqkar3ZRic5zO6JtXK6KZjPPR8l8U2Xequ+TPkor
QBG2OhOarwUw2P2ysn7jGimWSH+1Qa3UHZtCCapbs4q2yvt0Om8iOaTR3uVo
2VAFvvXhxDrrRIdxxblOYidBs1rxzB1pt2puTRO0qSbVNSydJe2VJWmw00Zq
Eg4U9W+K+5GrTkJDf8hzTvHkw6FfgOgagsGtKHjEU+5g728U+7q5oK90ghJb
U6j90EK56fMyI+Wq4kF+yl0dCkcFhIK4ejzarJfb+/ASgmBDufK9ZO+M0+vr
Dq1r5HcppqDGSsIqEmVAqUZWKkgitVyYe1iiX1R49qyJHuLlFB1cj26WL5Su
avJIuyaeSV+FizGzKjGHqSdyVwh463Y0620UjImf5Z2J65b94gsTk064mWHW
ppvpEzhDNbrB54KBWi60RN+ybIpr52NGHMg7jGpo5cosBxPGD0bBA8NYMYOk
3cJiaEtSjy1FWWT9dyfHKEDqCozh4kXrCHA/XEXwc7gGSON/o4WI71F6K/Iw
s5ITy9mWw1CF4r/L7XR10eD6anbxuckhjli/8+AF+fKpwxiiQXRAl1wDnZlx
MQ875NHVXMGIErzGNjK0W0SoBBa6WXHnrbBFMruyPYvlBm1lFXFddtYzLCWp
hqkEjyeUwjW15sIrWpMs6KJXR+yYaQATPNm4rEngqOvSnW8qNsDiMrg98wKm
eHLry+dCZeg5OG5sjZPjwWlTqFBnFa6Gan2yK8it7HuhYbYnGLYMkPlH9u5f
t5A+9Q0PgH+4zsfVpsBnepWtSRAqklRGB++8zyKnLVGsSa90CNuQehD6WC5N
XzvCQOqTlvFAXuZMchxRv4+eARwggmRVZjEpV0Umhp2ANTKTK1XnD5PrM6HV
3GSmaOK25bPbmTTVk+ZZ9Fsi/aWVJI+6clKg+bI4Jjsr1zQdUIuzStfqpEs4
Akr6LnbkKNGmb3xnbu99PnT+MLVAXXJHJ1GlE3HfT+hyF0uu28z74uHan/j8
vcCy5VL4hlo7OtEVSJsFsJvI5VRLurk6QWmzIr5N0qPOfL9P912T+lpCHM1a
JykNXK/YnntJtW+sf/2HdJ/lG+y5XK/LGoFBVkw/QIb+aW64f1NWLNWRastK
wlk1EgJi1SYQjLiddMD4IDk7KoAftE6sZpPzchm91VmcS+iMAdExv0n6lnsL
WceSAB0SlwpnqpEykECc56CAwHWYTetyseEKCwuIw4oFhiBing06slWXhF5/
sfLrQfu14IEaUdhOH3qo4KFH75rxvGIZp9W83KWccZKZi4YLhBzpfHMYB1Fr
BpJK4T4IjB3ebpC2d5NfsXbuKtyVq3asnkxImLUwxTKpN9OxLFDit+dSRQE8
S1sTq4+13a6YHZ7RauHVjH6IrN2tSgidnQK8DmQFo0j4QtJK1aPX364wv4XT
4N+/atva19nHn9g1NhAP2RPvRz+3/rWMIE/2Rnw9nmjUaqgvIegHJUfSf7R/
HNIkiGJvbtsvn7Pl0yW02FOWKfj5nKRc99JezwusA/NrUdqXvIQIar6V4bco
XnSO0ATkdra/yT8S0DWhw9xv5/ZqvMjw1TCS28+ERFWADkVQr/hZNNk70gub
Y+e80larfgZ+fgoJ4Iw0OVTHXfiHnCSHvR9DOKZb0fruJ5r76a/x+0K4Qbf9
78+xSTUiDqJYKcToWMo0qTZAdaniYpZxwX8YfLsNeNRqQiAh8uPokVew2Y7r
G9w67QExFbmz+XDoJs6rhb+2LMNh+vFnePDzd/LtT/Tdyzk8su0NwWES7gUh
P7aNtyvvVRk5zSzowdqfC0svSdbH2xV9qE5gjtBx5pHARtS1d4/CxxEh1zKM
3HObn/Zbvp1Ji2VRKScmQ0dgixYpcOvzYRtA5XXGuqPVnFXb9kMtK3jiViMB
fNs+Poy/fgW5urEwqw72uym2XQ95gXYTtALxTzkYUi7BO7l+rWeK88+IcrSe
sL07/l3mUuPrMSS9E0RzdqDFyRet4cxc6n8+ml9DHfgCAJ7idj/99aeysquf
3A17B9/TvPeqn3KC9vyfhdtFvx8TkexellfgguFt4VibznUJLTjMKYNL3eNM
lNhWaZSkoUQIsUDUaN+tkzB0d8NpTT5P8Y67lYgRh6XWT5/WoryOifSMnTYw
LlZjZiTTG3RUVRuO5r+kvPl3llRiULD0mBgU26N94gLHrUyVfF3UtEopDaQD
izW4tWlbFNb0zKSSwV3bl/uf6PYtznxMqxzzkzHnj7vdBLFUBoqtRvyttvvP
zgrGNkt+D/+yeFl78Ox2lS2dZQw/jef6k0tgd4qWJY3A/E98CNFN3NgQG9Yc
pIzzAJDGmK2L+TBQyxI+3l029lo8u1kMbDX4Wjxc4oCbpG+s7FzU6b1OgnLK
64xjv2BTZ72wgY2CVrC/L0SfFXpvHrbSuRc5af7EG9Adgy53lUu8x4spYlHO
vLyORLtVEMQ4MmsgF1vcXGq8DTc1/7heEPMdabiK7UqDb0txFgdl3gg6WTVz
tXYkxIeOHzbscjNdWARlpKGzlMySltYnIWluXgcLBFDfDhbl5fnBgDSc4TDx
9XFrKXeuCrvsgU8q1VupZelY+Vy5HbiG80G4E6tHCUug1r6+WygltCvwlsL1
H7amSGpoVW19URVTfU9o6QtWgg8mP/wwpBsFstl+irK2tGd8CaEzGupZfpFt
FkxndaB4FHvw+bNFDjkxXsrNhDlGliMEM7G4sehsOf+fZrPN4ZoLMdkut8TC
X7uqR+cFx2Amgl5PbDIvcgdAfNIPQ/euh9GTPhApMzoSD9VlJWWm6/iIGY/U
aiElrWtVqPvr0YxinDUPnAstiOJ0+FdV6zk+SwSt6W1iCMvLErsBe+CI9HPM
9ltpApZLvW7cFxJEUBdcjE3N7VqKpiVMy0YuviQGC3s9aj5l/a5MB/Er43R/
mP7r9z536BzOPe2+Yj3FnO9jcND/AR1A9LfUcQ/Lw3msCUilGPtqklsk3syo
l5Bbvp68bsLoZ5KoqcigSqur7x0WEAshZgHdN2XgjJHFCelflOUH6wC9xk3I
Nkbeg3O2iSwmCdk88mnY3hW8Tz18PupjM61dEJ7kf5g7L1hPcF+6kuVhn9D5
M6EoeLnKYMjLCW6Vu1bIke7RIy2Nxu4nH0kkvX2TPv8otbJDH1D61MKWvUX+
RKH26ZtcvhhnwRdjDnQe8z9MaFI4i/tK00NF3uO4Hw6B1AHEWuYdBH0W2XES
MuIgvGHM47GHuBNBFMQ2ul4qqySaOCIT7NpEQ7zMCqr7pay11nnQCXOwozm0
hpA7Q0ngLH02WRvfrBVxp4CfmGzDltHYzMYHeqq9G4c8ZuwP2rPHMksyLUsS
DVYRUqxgjGbDlVltgycMKjrYALATZnu+Ia86DLoDm9XCIBzfWe0myyVXOV4X
n5kfGFHSsPj2nPdgW9XaoKLNcMuGbEVigkVzDkkE1j0aM8K3SD2ohFhZ71IQ
lBC+dIoQ8+qg398hriLL3zCqeHf4gC35wwTFN/tS+yqh1Jo2EERHgjhy7xMp
m22ttZt2BXvfE95nUy4KOHH8RwMOMGZD8sEDoC1qdgxHSaMpWZ2zYbRvHQtj
tGF8n4slkcykphaW18YnE3B6jgRlYSZquXQWnCyAu7enGloPLABav5BIBz5U
lQ2lq9sq6kruFitBku2yZJJ5nX3IV5EoE1nFvCSjG+m120n5x74nAIUvC/Ur
+weIVK1Mer5j0yOT3Xzwbhi2m7h2Nep6sDjBvqB8LWve1xNINb+eZMa85vBS
R13Zp+MWvOOvx2GHs7XsKYcR9wph89e/pn+H38Q6K/ZT+q1T2ae/dILlrgfJ
5QFEMX5clSEul/D7J9HBfp/uTw4eBDwzQgQNb6N5IRDWGtarYt0IGQEVXrcJ
2zjTmRwjCW3lOCjwT7TswOhuCNlsP7b97neymmwlgSyiRt3slpJjgIXRrdSh
uhg56KIvgePecNtBDbu7CqzVXOhJUBzYcpNxWBw7NdVgAPo9D3qh1K7xI25r
uWLikqNIrkjEmiXqe3sBfee+AkUa4avjsa6BQdhhlrtNcmmFJMgA4RwqLjE5
th5Lm6YcNxvcTdVeeQtjXfUVwnrKBVdXhmpZiP0U8U0uZItYgNV78Lqm69XB
XPwq4x2WfETgfknc12TSEsycDmoymRZPYrOTl8bMQOIEsdBSQ9qRibNL4tHl
3CvNnCob1omIggJMvva6MAzvgYjj2qq7YijaRLlVREU6p3ExCXZTMneCLya0
Ca4dHgNFrM1UVDFdOOcxVxtzabHnLDHhvAgPh6zL9gvRzP697cNl54WLKFrt
FpIZikNUhdT9QA4+gdDVomiVheJsFbheMymWJeY7Jkxm+cGmhUYFNwPVEdzG
Jza4C3XzBYqldDjphBpmoOnaPGDQaQgxG/m8Foo8webPGH1Js/t2aONryqNb
mTcygOjho+NV8+ShhLOsGk3Ba+Xa0RQwyLqOwNE22qhwUVp66bYTZL+XjzmN
E5w4n6JcaQLFrPkCAo22a1OCCMmXEIEzUMKfzNeLZBHRVHHSiZ20JDj+V9BP
evxwMnDT6g2Scx4a1FmNKWn6hV4hx9qCXl9M2J+pPv1zlC9CzCeX8KRRnI4k
KJVdoND2tcRGx4VxSdG9BAGVLkii4crFt5bg3AbjI7OP6VyOIJiYHRkcAkLY
l/Q8s0LsNEaQdnxdLjYi/0uCTFjmaMiXgyN0uRVMgICpq+MGqTeCvRP960j2
994E4WHwmC60uge4WUQLllJRMsgRXZVFTYcGGy7HbTnYMEJ3Sxb30Vovgydl
3O2Y8CXs7sx3vjdi3FEi0zWRx8ueV1U5LDxF7S/bj1t9hb4iFxt9pEIGf/zp
Gx7EAhHymGmp/UbgDHgGRiHOUue/5a2sWIahKIksztu0NJqsHbHmOL601puH
Zls06roppPNfooEs3uzI7a0kAGN/sjeys6YJ/XZlap+HKL26aOlNIlU4ohHL
C7FP4q3AFkzy/+TBCNG4JM6k18AeQstbOsyELbut9z9/Fuz0QoXU2nHo449N
lgcVkPSFxPFUH+7ftdGqDahroeUHd9hn27sS8RMOAY7ZJbwNhM0vmGydLL2k
P4DdKdtOgwF4j6IPSOSGDiR6AgusKnB6LPKd/bbEAXN6tNONIfj8Gb1PBMmT
CMlH3BzRsodbGrMHfT9S+vYycqEVbTRTTyzt/jYEpEiNLxNXu0+SMnxZIcR5
ovHnVKiPtoiMz5yjlTCeK/oRXLrEldArECyEscBtrJyhCj9MSuaI7sXNisv/
zSMLbTIXczOHwGRBEjeXH88lOAMWE77uKgrWeUQYdNeJFNu8F2ylHokY7SRT
n1tN/EEJLwc2sJ88aGObcKkR8xtxyCU7zuJmVhb+5VJkQy9IV6sN/P2H/fZY
OdGgyFFY0fwfnkh8jfw5IKIzjEqatvA7/f2Wm/A70tPapl2FYcs1Lz/CJ6/e
6yR5RQc9XnBhffOcQlsS/lSvc2Uu12Cly+xP4Ia3angrKmvtwBC20c0bqkkK
8c+jyEMbhsE6IywSm+a5/0MybBBKnN8dCzuywPS+AFjktTgjmktGkVIhLoLf
h/OqJdECymnam4xU80xq+HEuCDBx5gKH+xivlMwrU9+qubPtpBv9m0bRvxwF
rfy5bmVijMLsCI35acdYZ3G2rLMGqVGJlxh6WMMDQiiSHR+8gu1QZdGLNpXW
UJtqs606HSAdc5RKxuQolbRGsRC+Pxm2YxsZgTaVNQKLczodrpIqvPKpL9E7
UiTSzOJYY4RmaxK9NurFnAcZJomAOAwOr39bykmQZJF8TcKJiwJNuasbMCpI
NrH4tiRcpOJBOlA3kkg0O70ZQ1ofoD9r6PMQYop49tmVHkpDTDSTNqOnte5N
Hu/5PDrCrsd735pcg/Drs6s8EnnYxZ7ohwKxiomLU27C03t/Ys2wbD6Y3EaG
tFarpDOBWj+lhbD3MMkcQRIIbyuoYWGJInK5vc3GEDrItSNkhMUlasV9ZjdY
2PUWPAz4t1yAxAki8nEsPLC1vGz3dAwKB7JVDNJDEhgapBPJBQjFVZ7Nq7Jc
ctPPfr0O9knXooTtt/mSXd4s4UV7cNq/iL++51csKtFnTlgitQCx6k3Y2scd
92BfHHKTn3WZw3/9vneRKJsLACODMGz6LQ1xRRGMurvgHoRVl1gjryUYDrVy
RDOFvXyzhiGALcdYUljlr4mOxjwwSDYpOXsol3izCNDm++JwKUm34eccDdPN
K6LBfM1PflGMNouyqbtvs8A49EYyrUsEY2UqmT6py/Rh1NEaxQ7lwYt8B2pk
9tSuu9k0TisSy0/Y2G2bPOzUd26v6YJu0y2+IxVw79LWXA2vVnq9o/fC8bsU
X9/7DTR/Z6Z5SDvJ346Yjlq6c70twynZzj6YIKMWVu3SnHqDkQze37lQsq9M
ctqW4pT8p1KcOvwjiflHW/dlLTpgHft7Me94Uza0hNuRpMkkhYapaYgI7KVc
hFACwEusqolq1+pdTwe4+exorzQ/MpEpzjkLjimguR8d2XGPRmlQ0wy1pnVU
ox/cSGpbEAKG1UPlJ+4mSTpg8SEnEU8EhHlgqQGGBrHtGpInYUIi1DGKIJM7
RIxEbWIdIW4LrwpLwGJo+ogEbeu57ECtlL6xWtqJXWWQXE4jtlgMpiHeI8/l
is7wxilboDkB3keIiLK1M8LJfvqEkcZsqW4F01qIrV4q6zfDmkWjnNSUC62Q
0SEeWjjDEY8qRyCG5hF5jpK3P3DhSLTFHfnIx26EF4oV9yTM3ew2VPfZkjA9
MdIT/Qs8+e37xOGNTj9g2sXRSew51JAPoOmYUVjikoMI/j4kT7YhOY3IS2Ke
gd1aZs2O+rbU0Pohv6WZtJpoqgFRSeBmAD9u8svb7vGf6hMCXHTc9gUdcBJn
bRCcr0sUagiK2XGoXnvXyu9nZQUvHtiN9hrRYqx8gdn3FifDStFrK1YNYZ1L
Gy5ug+Aif4DfIZgcwc4MzJ5yBOkzMyi4qxFZQyPC0G9j5xocLIw52abl1fBR
by1kdT66rGOSM2obMCKp3Z9ognLDhQ7DsCG/6l5egrvs3SnqlQpt6e5mye2J
TNW9llkxOSe+FJLmTZahOsBBJH2xL2byEX9pzZ2sJCk5CMAzZYoHrk2kAQhY
p6hLLoaNoAAfIJ6EcWvtkiFeCewFUqFxxDxSwnVkxBu9TehJj6R0N/8sQiHI
8JU4h/MVtBANzHF3hCs3MIMKJdyUGQBflKgLr1CQMKIt8Xm0FdcM7q7KWspj
uptcyhqu8ks449SjIeXMOYxCXQeFZIWbNBCszUvVtGZeJtvw3cphj+Qc8kTA
JGZHX1dU87lDL5SJ20jztgR+afXee8mGSSeEKior3KmIW/QAwCNqMi8qqYzK
rpyWJBoh8CgQp5OBgtpRiu7huPxcnhVZzkMuJ1DSgbLwL3ElXEt2ttACnM1V
lJJrRnORZs3oImwUY01JIYeT9DeoYl/UfVukSfCvTS/EcpuI77Vr1uaYQuM9
JoT5kdo+bLrxdjm9XcoW4E5vQEe5JbRpyLoJ2n2gxToX092suKl5iYCRGRvL
lt7HGyRN40nRGMvtSpMGOZQxp7km6XFYQ5s7R4mFWfuReFugJKkw3Q4FSw24
ItnxYqExLj1g920IwtMm5adRtUSjf7iQoc8Ab0tPRHed5MRZN9LnszWZiEe1
k49GkRlHimtxWLSWSk58kj28Gj3SMyT2rHCkPVR7RhrC3WuUEiNRj0H0Swwu
CVj5SxiQEKaVFRYuGiVfHNgkocnBab5cybE9O2u/c7bVjDihTTqxsy/rJota
QDBKmm/CNu7F+ZAgX2kBSkSTbRMppBxxQXSRxnYaJW8taSXvB9EdcamD703L
T58whOmXRH/pWdHW1ehFZ5eFa/nB25VYBmINyWWVzTdh83eVTcRQo959zmIQ
f0qXwPuS7Um3pDMbyXq/C+hYcO49BrNViNzcml1KP3EZkozFinqhYa1EuBrO
/KPfLyQeYxrp0YjO5bwlFkcRMibxO0QjzeqlSTs61orDJkSY8S2FLlCnUvMB
Ja3LmTv04sPcU3CZlswJFSpmsAEa6kmdabHgoOAVxys6BNcv2JyFofqqNdmM
xNlho2DsGjDKc85xu28EGwxhWhpqTf4ubDRC0UHmbhAkqZwKDBzjeo24z1+P
3vA7YcRvbyB5FE0UNs9BAhpvDIVyilBPsfC8sK4dyRToRkbfvR3sj/b29vB/
Q+MdPwoB6KwQWTOwC9Bn+3vI38PF29/bWzIXzITfPbqHsRwbGmi5qv0He3vj
6S1sVWfvFZLZHCya69l4B1e5Ba2DdD64oTnHrQ6S3JjR/iiw5V/rzcyK/bTi
qHpGl1Qrgsv+KD0YpfdH6aNRuv8QrZXyZjYyFMyDS1+vUbWOhLMCCU3sBRQT
J06G1lnlUQBZKuLCVb5Y1xpQySkSpVM8MqubooUGObdQMgvDaFOsaZnnLsRR
spFEnwEPg+LKaNOynXG15HF/pbpekcndUC7RHdAUj4/mlOaKr07mrEO5ki3z
fQEfgY4lsjS7GiQ4PrnLqaDxSlLu/mkgdjQbK78G33QN9f6sfFE+K2+QN8w6
Pzvthj7HMogB9xZhVBKUSPLfmh+EBgVlYFqOsrw8//n6rJw07cnLGX19Yk63
sx40ppGPLihqH6MfW9Odjs/D0Iz9N2fgtkZicInOyJAZXJaIlBGyKkGuB9Wt
hnN2uWPbr0HSCiQI0dE18XerVguSz73o1AZrpqjeUP+whWk7RF7aCLziGnVn
gWDmwlhH2+JYm27wHlMRiWLdEsMqMu2XzJEpibpfNkUmyTFqJRp5CFZ3IR0f
eaigJivMQZrNIsFAGlR+kVwJ+9A6eUIEF/llNrsNSuU5NsyNUWz8yNyaWEZb
YE+sivoDV4BTS0gUOKqiSfsBkCGJdWPxjPFGfWgEujlJd4LetU6sY0MXwpz9
ur0sIB1AF+LowxKUEuTmDmPClnK2mRYzrQ3OvT84KZ1FXW+KFB/Y1hkPA9rJ
KhM7G0ykc7JPZ482JjbFQ3KGV7Zy9XxBqNgIzLGD5hbkkmjSwSn/2NS+qMUt
G3vLeXbLEurLFa1/DN1sxorx7itIMGK5lOprgX1NuF2wQl/CT7yVWuDALYVk
itfOsX1/74Uo32uuiM9uCzdv+hzkdcXcgIsPcmLv/b3UxJODPSL26W462H+w
fz+FAFIP039/kj58aCIKtnOCIgAz7IoHkwu8BOX6bXsC/bHChNLyRgSt9ODB
ayxogNO+/0v6h2Kel0Opemo18+/tISta06RpoMBaLjiyXMIB/OyN/M2yQoos
E+iS1XUxy+sICDolweDeXrrshcGBh8GZ2CcXaMHO4mcgYtSy89oMkuG2OW0J
xFtLGtdg3atSrmsg3MhyaWW1mV9LzhbygyoK5qvroipXHMigJvdgOo+xjIvp
AO8RtYKmPCWKrNlxmEdq7yH0cJQyJi1uk3/fD0RTAFFUViUHR+kz0uzGIMNj
IcNHqpUxxEO68Okbwn68HNBjU+GYMAdXOigJ5259WHc4q9HQuPbteziZH0UE
UcuL633WUfpvE4UPdnQcFwLFsrDRc+Fop5w5o1UPO6oYe9pUZne5+EPhxVYB
2rm5MHxjxpPYrYjMPtWPCjQzbdd2bV8eoQIc9E+LCsYyhT+TCgcSXEO/xuvW
WhtSESpaaxRQoPR6pMmu0plDSkh6JknHwC6K8UrLNbonwpUyV5cZyx9jmBkr
0P5FiXM5n06rw/ZIgilyS/kNuqdegSABel7iH3PCnKogRTxoU3bA099D7hNd
V54BPObQ1uFx06/EJsKL51MNPSb68uHcpjoXeXvw6SFpPQ/vfR7KJ/IBm1OW
xcpnQIhBaBiOq6/YF6Noukhx0VUzxohUxSu1DcYHlaewzHGm/aFNJ7/LfAw9
m2yYaHHv1pV0VhW2eZCkgKqPjbJM87oxBpoV8gr9RuWnLgYo2tE3Wcq8HWLh
tUOMHe4blEu7w3K1g2pMm4Va38LCD/SnWdbYfXDJ05jAy+cJJf6apmR7q8k1
k+QoLJMtrs6R627FlnWRtGFQ85YHjvvktpGsDidGhVB1JVesA40W/yp9IKiJ
XekZe14k3jU6MzDfZfHRtL++Osaesj4THfVYL10QM+hkVTxrIBj66l7cjgUf
jmfRh0Jg9UN+hg/HfpGfeWOqGDMEtI530OlIC41ITIedtZRCjUW8PMFXqXwF
ZBilinzM8vm2/yzlVyRPHVc0jGe5LgsvzTJfFJ2Nuab4zSxWAfKDo43q0mF5
8OzsCeQDNuYQJTVxgkSJaxElAizhJsMca58yv0RjVV9MSdtb1dlNU8p5I9NN
DIqKPuwpUpZptVsP/vV7rrKSsn9D+622c9TUQtIDQ9kgt1yY+jS9QzFcQUaB
EBTKLJM9kloAyJbYQo/23DMYex8LQfyelDUX+XrPjkDt3yoRZ5px75SUQpoZ
uqRdiYHFHi9c9IgrgL1ZuYIC0xvE4l1tSAhXQuNyBgiJAkqPI5AoOwLVVbb2
AXBaHMtnFqBS+Tq7zDl0hnvhcqbwDb+yILQeal6X4XKQJeFx2c53C0MwgjlK
IBxpFo9I4w8Pxg/veTtXZHkJLy1Ew2I1Z7R3lzfT2FYWt+wUDIMe7imdA1m6
FYS7uQr4N6Bk6513VC45PKQWtoT0KZH2OfCMffQSpZ70aEIyOYwIQd1wizLT
++eZD0R36L33f6Gx+HJNerdqpnQ4qkS4lXngKAHZ5s3xRcoziTODEwxytOJX
6fyo+9/2pXzCZPhCL60F5wXtMNLBKjZBCU4PNUZWC+MLSUnvrOEhLkoG+798
284R5QGsFn87vo/4C30hwTTvw7buXLwx5TQ6SfGmQWD1QGDiHaX54op82sUF
yUs+WUnsRvEyxO/6xiqStFuqWjwJ/8k2MIiute6KDR/sXnaPh//2BsA/gshA
MLsorLQoDndeuNKGWqRBcuPB473nSI9URPA3T/bv/RAVDECL2cgT7wLsVs60
DXQRggwiMs3AfbYYx4c9XuW4shA6IbAVap2bI8YCQIJLgSADlhU4tUQIN+ej
VMUl0k8Rmhw5Otnmc9gFH2ve/3aA4gx4NNnbH/4bYCA/0zD/DtK9ty9mcb5I
WKFfn8UztBYoVkD1TrPlwLArii8JoKGBDlrNHE5TG2MsRlK9mYlv9iGkQab0
NqXIrYZ1u+wz9XCo8d735fjCgQsyyzb7KJQmy0pRob4lKT+Q4jlF2AenCBDU
Z0bxOplCs348ivXIxJiiwbz0FLyxwOBAFs7qgNL+GEyReFVfxgJvZCWdPTeC
VSySaiG7XdXXGmEOq1AxFMgMbCGS3fbwIIItW0PKmzFECLVF0i8HY9GVuGkI
O7Lcc+OkLc7GBgRJZJGq910/h0a1u8y7fN7bf3D3+/RlzR5jFSKCHnrME9gp
AunXheHIrf1HyRsN3BcuGS7K0vs5q58vSJzI5y9XUpRXAyqlRjgSvqN8PB4Q
YA1sGDwyJhmEGXicCxmTkkGrYA1XSAn+lvresu13pkzMXUdCaf3NrRFY4qXN
Yn0Suotxvu83GlsYhRy1pBFydSAnjU/4awXYSaFVh3/NfI3sXe7Lg6JZouV0
hCYp40BfHIYZu8JIzpm6nRtU0ycGBh4KldLPVXcd7I3S/eGPmG4Px7pvg/n5
cYV0DYE6cccaosN0cx+kv7P5CZWzYAUIoeQ0TCxiMtlnEZjGS754+od6mKS0
Kwfl5M4zToxxKZ6CBO4Fp/PLLzAeDEOk2A7G3z/RURRlfoZH3iJ9YGW+QXFg
ieGVuf9RsOTtpkpbic2b1UJ1TtcMeWPK9cShRv9OZJ295o2wqKAlEfo+ENI3
tPZxR75X6Fjah9afrWWJfe2LMUhuZx5WYLMCYi7TWb+K+xx0899btOErktXt
veiWgxi5ZFuhi5ySdAk1paz8ssStDuEzWkvPYG5BXyr3r00NNuvzGVcds8hu
KYrVuFqj/b5Hf11at9/DA9k1ccMFRjrRA4ObGNw8EinOJUWHPjk6/uX0/OTd
26fPz0/P3p6cvHzzIgIityMIJrQSBCE8kh44CeXvK14Qv9H3rZHlGMrbmiZ4
SkJQdvd3r/cxl64JH3pYKChk7t8KBfmqby/vT0IYdKYzyB+9O/vCnL0NIO6o
WrZ1re9P3JCwihjZeKVId7qgG+yuS3hBW3rz0zsq6OvtCtq1JJ2ieKkviheU
BQgSQeICoZH04W5xXEPU32kd4tDRyK1L9XLH3wUAbZdlDWrZcbzuFUq4a60f
r7I6AZXdOv7KHcF0DwC30ZenfVm/XB3pyrSrQmvazqxx4j5Pd6jMMP2/u/vW
5TauK93/+yk6dE2ZlABQlO3Epq2kKEqO5UiWQlLWydRUqZpEk+wIRGPQACla
0gPNc8yLnXXfa+/uhuTEOTVz/CMRgcbufV17Xb9PKoeJZxKMFf7MH9d9aVx0
rY4mpi/3rxcKSylDmnapkw0IL2uVeMYq3+iQpEg7yp/+q7qavIoPr74KWjiw
PGfWzcC4kTJ5kasvX3TG61ERk7FG+aOElGAaDk4Giou0dy9fDEyCJv4Mz2zn
4oMTBgYLBsD5pKxSkI2Pnryenco7TkSMiRuvgWwb4kaSMiVUh/pf5yExHT72
mHgDPvbYyxc7oo6dbDJZ4HN+b9dQ0Y0c5YVLJ/qjRwt5BXqQpvF3jjKDpsnt
L845K152CJhDjdNeUwIB1IeH2k8aSNRKfY03dzbPzUtykeMMdedFt2EyLy0x
NCkgKEJei+tdcDx7qkJsC+dQngbjGZOrNqtLn3QzVjhNG8Az3eRQF24ql/RW
fX5daVUVRtmo2jyigeZ2I82WZOBEdVaPh7ujI0h4tp+TZzZv/d5HZfv3WtbK
iJV2i1XJP3aUzLuFPq8WMT9f5sBv7NDUdHeu18cKe7Z18nr9Dq8nZS9gaZpw
ot5Jc6Lxju8LyZEFTRgh/KOjk5NWPoMHbe/2ntQUJ6i7vR5E9T3bJQkTmg3t
AaHr7d3BUNIoKe4pBlLgU8tNoPm6HRnbO0bJ0svqP6vnL+pFdaiRbHJbYGpR
f2stKnyEoIjB5MWypiRKnapNOiJPGCeHv0ZftsF/8aCL774bUNA36+5ms3a/
vIsJzF/cG7SvcNJE1Ox2e4ZeDJmRJ5qQPVBkMDAddKDy6Xi5QACiRJXsysD3
71UGfrdZBCawySsTM4NWIhUITJv55yuFqRU51Gv83CV5SLnSTHMX3J7vPC3i
2k+ydBMkKhzyB/0v2e38KhR9K04Pjx9IY3c63yc/7C7SXfmlH0JXDfuYoSNX
H0L5E64YC88yUoLHG7dQ9GGKWQyd4rhb+vT+VMo4g7DXIlS2iB7tUq18qZ7G
sDn0UF2Nqmwmd+KA+fn948ePHqKN9mk9Yy/B5p5VrDZrLxhmM/aOG0506B7d
El/xu0F4cc+69qx8+/BGWC5FfzbTilfnpGkwiSHqY7t3GJZ5KSgDCKU6KQ6Y
l4+LnSTsvKbjPemq/J90RXjTZN6kLWsdWNo0jHj19nU9f+31v2F5MXQ66KD7
hrLe9yoJrsmNoq4DXYfBB3EYLlerD965iF89v8YUTjD44zPjRj77QOltWDji
oCzo0DGf6KhgaP2yHWM2pqKrusgPp8quFOgOXyhWn2WpnDXimJR83LqZMl3P
hooNRkWjciRFr1sRtJ7rXKcUek3JDRh2vWnGNyUFhBYlMxkGYuOeIMQ3otNi
B6ToVCNaUsBNMW7OWdHSVmVft/dtc0aXpB4wTJtsJ/yNRvuo6pWrPtX/S8V+
VA0XfNUvTlucsCuEmiQ87zNYrWoqmHsNIkGQJVGdn2P+r9a8hN6aF6Jx45YY
3iGZOynjQPNTqo0FNwyXsm+SuUB93lZXFELm5CYFRmxMERxxUhKFXGDHYGw5
zGAD8M/hkXEOpMNKIm0cFlkSQsv2E3OL1HOdFks9z3tKjFEvKTUanwwc2NPW
nogKDWf0KwF79yCE2aKMHDLe/ALBnBUwhOhDc1Yy/ZkBuBKmrUPDS4tOL8vZ
OaEJO3BpiT64zeBgHXWDh54NXugGPzHIckw/xSr4DpoebAtLpyCKTZbfTysX
l72pBb0vnzlD16YUCk6DkiMkpT9lXDuhZAxp2H0LNx7Gsrc0q9gWkTNgatug
U4dWapXCIW9/hDNxXQvMhG53sLftyCpJriiCtDCUdxYUno1wLjJYDYIqqi8x
oRzWycHUGtRqxDNjbi5Jg5bZiWdK4Rv8WAWZVTQbUiSQPxYhdaAfs/W08seC
b3Jc03z0bTp8DoDTHJjI6gB3m6cW26kT0p4QHhP9nS/ae8IY03wUm5VlJMYm
Vm01O1fIiBCVUCSVxgHT9ozroeXbhP+rJLtCZQDH7EKOWRkYdRJf4UTWDm9j
o0j46OmmqGFH5Eac09Y24ice6r7KRnZeKu4EZZ7G816uQl9rjDWB+dAJ/YF/
9JHUHhbb9+9hucUOJwWvQmRKGKAP4SgfOQCyUacVlQZSW5/nzEfCIId1+lIs
KE5eyoQwnwnuiBu4/1IQBXyb5OcelZymUXk1RFJwl/qdFDRlPHgEGhZvgxLu
73MQTqjSxR2gcuv+vzE8vwDD9cIVPaFLnzJ2/Hs2wPpiFg8lXGzbCukLv/m9
cUe5RB4DNVtWbUzwoRuFajwFtL6igjJYjGnVnJ+3kzhMLBNqkzze7Y2bIzwo
dHdYFcQUHW+t5qhSsujIJ+xMa3T3oAOafDWc61ItgwSRVQFT0GtKvpX0t4ty
wW+ixUF1cgZ2gsVH7NpuiTxxpoAtpxUZSBhOtPTkRlIVskI7WIxyxrOvTSLQ
3w2V7/GrCLhKl90VUpdY/HKRtMZQnLh7V9Gn7r5/QIfrqr0DR5tP5E1xt9j+
Cmzkgr7YkU93wML+qk19P/DjyR46SL6cfN3u7MKP1J4mWfHN1/Z3JOGMUIuU
G6R8s4PXw0j9S/62ZnBjQZsWyac1D/k6dGTkiCueokY1ocPnoLRy3cBwL2mn
0Kags+f3o0lg3YV89ec3nd5vCd+LtiFFN7SHTe0lWhtcYb59TPkccZxT/y6e
vTw+KQQzKRiecG8XYWvRLdGSWbD5EQakSvct95BRnLwjtD4PmotNjBF0tGCr
4aliWEOGwvCnW6stXbZ3cGm1dnxfVacjxKltVhVXV09ROSR+65GU7J1drudv
4HQyWBLVdc6pLHxWtHAUGSeM89qYFYqr6xQmH9dLFiqFpfucLfZAsFYKyyUp
cq7cQAvxUas15oVUVxLWlU2KgtKFKB5RuxKbEWuMWip9pOpBpxS0ajRWxUfV
j2V1jkXSVWvohAxugVq+bUO2i0DPVqlpRLNcm0iB4UXJSYz39RpwYiWoEGOt
SNTyK8ypRLq50/XsjbQMytqtJcBa+TdjPYTenWk0WAdU/dssMLPwF1FjKQce
rx5xUpHqWE9nbBUi2zQrsJL4PDRdREb5dlEjNwqdMII55fCZQSFBp77l6owp
rm7bBrq/K1RFkJcFz0FN4KyMDCsuATFmU+AG5bc9yUvtQJu5uHAZTKRyzq+b
2TVJK71BTfpRWimhiyAiXhakGL5p8MTdshatN43aH1E4nVxuELeMV4JMt38n
6iW0GpYksTKmkgmIHBGiU1XwOrKbPQwuR76n24xgPHsTpJphGcsaWHdmbJX5
GzHScAtSahvjsNol7mRDyGXDSOHxzpDECr2L3LSvyCq+54FeEX064UPDeamo
apdfT1UynCGhI8XwHbkG0tL7qJmgthRwKrjW5enj54hxUc1mCLCDtcvwTwSr
owQN2AE7CtvXChI1C9xW8muDpNnjxlgaS9VIgIZWmfXqKZsFjg+uSk6Xvg28
v1gSdeDV54hnR1AcqT+twopcBpq5YeYpokFR7JfpJNefn1K6gFOaKX8ANOXn
Q4wcMeeGL/BtUtTEo25waaQaYJ7NGtNaghR+DEkChXZLv6O4JJsjQ76yJJ0v
7VUWSMJGRdhYTqoPhfa8GlSuXC6aA/roeIJK1B9BA0uYRuTz7wbviJxrpNO7
mPYx1AT5f3EWBp9L0gbZay4zt3EO9BkdfaahJVGEvo59lyiYbqTZyz2PpT6t
uu5A291f2CAHhi/K8A/Cdd2zDUiUKpcy04BSGUFEvvmUq4v3J1bwVlJXI7f/
iFimBBYIT2zUt/iyZOlX0G9cYkZEZzX9evKR80jEOPqTbbGc5tNZpcVXCywa
IpKAILX2mf+zrxig23Af4c7vHsS23Enon3L3ACXl4Ty8Fk2il00nHmXOgoKr
BH0b/jP3mino+bYv7n08PpYkxvYnaPYHVaBbrgc/0Fxn/U3ZbmHsfyzu+V77
oUOvfE5JNvyBTFv4Lsq4hClszz7Psqo7zibOuf8qptvnQ7FqhCcXc06uvvGc
xm2uv1glwLNy+ebQ4JUPFounHOTLprV/7TKp2qFv8uNIAn+YRJ26ujpuLiyu
pTyZ/TSE2t8T6wPLybud9ja9OyYafvylHG3FV9tmGN6USY5Tb8fhVNpu+2gi
40AvrFymb7nik0lbicBglq7QI0r6+LsGx+G3Qv+VHX/gY+isfiBo/NtVd4N+
/N60p464ujAXPOihdYfe114ob7CLoCoim4ukviLcuOxRNoX6MekGPajhV3hQ
6a0pIHjD0FNpF8jqt7y9PElfqi4rpmnIseMVVAu9+q7GxepMGTXZZueKrtRr
BNHFIQQ8SOVSiKpI/1X8aQdLDUN5uGa7IQYe6jYig6LVx/Vw+AztggyqY1+h
cH3orDz3xif1LSKBC2inf2VEawudomatlrp1GR2XQp2CckIj9JzQOylkQAyY
jnjJ3uzmKRVzQDkmMEjW2UE3FSHZqM+HBgZPBs2gjhObBQZlcK59NGzIE4RL
5gyW6NaXhF52q7Xrq6tyWf9ScdTT1eLkJyKPCrkSoeRcfUKtSJ7tmaX4P+zP
nu7/0pLWVc5mWomcKH/ei6PoCvkeXSFPphSIiB6SMXpIxnjjc/5EcaAkjbjW
R5knRX4uRI5jfGY80NarQU+MCxaoi9ZFayROWbbjmlzxCFdv/A5UeV69LQl8
nT3LI2SGVae45IW2yhKL/zT86RWzGQs5lfEBSjYEFhdTPUU5K9VfUhaGcTgp
jtF+PWc32IhcnWlw07hDZHAZvwuOposWYA8tK4KPmDc9YOMClMh0nwQDf944
9yS5z1q60iU2aBD/D6msn4reRQ/TdJR0EtGbNmubTf4zkrt4URo6uYDkpye+
nscYNQ4NzREiQsYTncYNk1AseS8SKveLpmoDvTpT5HgS2MmrBfckovk9iNuT
v+n0llZLSWRkgBnOcSZatHBfNqTsHtwQgWP1kfhAUdvx5oJlFW8K1hrJCkqC
MkzzM/zJYol1vJUsKE1tFNrG7l1k7N44t6YHy8nEQ42HEm496Y4RLClgQ0RU
SETZQDO9pQai9cJU9OTE9VgMTj3Db+F6W5ac+fla/Yev9Vmv0Azk5KXVJtWK
WYaP4DtMpkZrYXtPn0n0z0HLaFAhdClmh83VAu6LVsMvLyQHwwDsKi5X5U9J
7KksFHS6LSc+SXJGq6PdwuCKwgi/e3f0/eFXv/9678MH5Zlq3SkM8RTidY0+
yvSAMkQQI4aLezxCKXA+hoJ23SAYVwt3oTh7XZVvM18tm5nfiXgyEMF+3nRS
zlz+R4pTxxKMMOEskApDaSifi2Q0O5qRsCsZyRTZSOaYagSHD10JXI9OdXcU
Z0Ho5tf3Cbz5P+7gYnLmT6vw3NMIeEen1GM1S4gUxrPVGs+C6FM4sVtxhoQ7
iIgfCHYhTKvFrLmlSIugFwlqBCGpoJ+1bq9GDjSCMJC2oKOCZDRDrjgKIRHj
nntZPadiPiQgoX/I9Lacw1TaTuMrUuUWyjMhyixXdg/GIA1DWxgE/lsJ/Wy4
3oLybMCv8SLx/i/yKHH9Bmc5FIZL85OQXr1AhzMD1CRQYwZKI+xYY9SOBZ0m
gRaT7IeSdfTI3Dm4OfeFkMyIJR0QbGBgGdmpnhKYI0ycxCieLQIUZbyZZYkx
S95xJnuDhDWuCMQVc/lw3azzI4noamqc0oLT3SearHXaUq4ky2CtyZZ0oUyK
VxXdPBdgwS7EmJDXsOuNmYDp65bAonrqUeO04gMO8BRT6oQ8q/KFBlw6Q7vz
E39ghDYsKKMBTybUPue1043N2TknlJ1DXgL4DuFF3n12erqk39BPkJ+LfjH2
+TxjR/QNW+Q4hfbUREw4nJT4PWc4i5gx1xA2Hz625VvdcqxVrUIUijgQ0GO+
xttWpDNFdLgJBylURXblWGFMRY207+Ar/skknx/e52fKDExblPBtgp39tMMJ
MIscRFRcwA6lxovIunRdgwVQzkL+SmXT4TPj8uQ4j4ErPTJIPpRJwZMilcvT
GuZmeWtBQJrd7Ge0g7mXBZO/h9y9zJS71yXuJsU/wwgi+QWwjaVKdtRdQKl3
C0JJH80adTeF/4sAhJIHljHjeadEU2xRJ6GFrdAsBadtC95zARdaNd8i7DG1
HlaN5BfqdM0Z25Z0YZzYdigBrWX6SXXciyebQFtKonWCozrTTRUYgoWtUFLX
+ALEuCxcQnQ+RZljoDcCup2vjOarXFmbQTeqEaI4NRLWZD1tzmB69gUewyvv
rEDIjZnoik/gKzq+h3LEvOcXvRfqczOPssc7cM69xIGYb1L8Detf1DVVZEXe
8qgY8mVVuZxn72JBhlatu6QLdyzOYFCseDVYs86IuUlJgeMUHFYAmqCFDShO
x4uJH+Vh/Mv3Hd5V1XwB81KyEs54kW49PzIgZsocGhAHNv/pARV+EFjf9GLS
1r9U//hwUnOGOc40JBTSDWkl4WJPlh49z+kdLHeRKbGRVGWUtIGtNhk4odP3
iMqeQKw4rKOtI6LaRUQeDO7t3D3jvd+dTX33bv55jt3TfeJ1x4xKnT29Byp0
OvTRE9rdu3adc+nZfvHYfHDPpLQjwvvTHQ4Pnt6MzVU3lgqQiKHIup3WsnEF
TQ+Zn1aO+JrcXjTuwPvb3Qitpbin+tykEGRfxpd16l6IUg+60tMgZ6xQSgNd
/pT00a64OBizTtKEruApQyMbLx1rzLRYgiatbJv+zTpo7MFYfxaYUIQ97VLJ
TEx/cB8x8y8xonYooVEXjpOXtEZBVyx1glP2ptjydsDWqJvrJlcN0UOCvsVw
kJwv5d5AOZE8KFDB0PO4ZEnG+rN7khrA2x5TbDTDa1afV6FJXT7kjuG7lfgU
yJaJl7fbR4JiVlHamSAUTvLti3u24BQBv1nhf8ecZAd781GyWBo5RE+cMi6q
O1qsq94dGno5sUai/xRb86ZuK8TpBBuG8TFPFZk8qkF4ey0ub1t6S6KDLphM
ppJcSPjltG54M4KWhPiI2P4O7kwpLGpA8jMwXGYATYripNEcQ1Y2q/Nz0qdd
DixMkTyzz+UDw2sQkybJZhYYXV6Yac/khrZiuO/M40YbQ34He3nVLKWkx78y
9ksTNQlGmBLhUKuTtAUQd1lCIWhFqOvexHAVxTnE1tv4BiRvYEaX6KCgrLsx
Jq7jxYv6ndgmKfohs3qQyp0lijL6cRKNFXqrpH8MA7siUMst+cYcRzuSppkU
IrkcVjxEZT1v42tQloWid1lcitzK8C+jOqGOTYUSxvLndo3cSsyY6YP82ynx
u0B4kp2B2CGT4nhNTGL268EuYW/QQbJcU/Ig0ZggEnA5MyuPOWCJzJXcSqHo
5Xgj7rUbSb1dVgwDjVWL5D7L3o5xnHOMMGLH+AQT0LsvNwPzA48a+a3EVFnq
YcZXJ7TTI4rP4TjjmZVKhDVyWC8pF69g7moyglzQUqApqreXYO5wNQwlRENr
pyBSz2GdQZ+ATYdmHnqXmF4z4/vbmcheLGtGPD+t+veke/P5bH0G1qSZaE6W
fN52Fk1mipRRdERHEpG2WS+ZopknvcO2yxs8bY4sv0zOWS74TX1ec22oJkGu
qrPLeYPqZNWiKiagoohRCneILosLySRnfE40Un2zErCUmkv2ii3kV9ka2KvI
RLqcVmpRqnXdkFoKrURrt2cl8vx6BeOlhUTJRnc/kRNFEFuCUaAM2WSFBkhM
SrmSYgpZwrMLTTmsEzp5QriqBxi5JHu8v2yzpw5h2efCoMPzhMq8gkPLVuG7
a49S8umf9zVndVQsmll9hifGPpFKAopc82cTLc4zv8lZSQfISJqsVgJrXtC7
t7Q7y13jMPvrGaXC5rM9chnIWoJN90QonF5ibm5xyS4Vy6GejyWAwpBzfo+w
Ksc7X5U5g3KTc4QBpH5uwVzTwbcd9GCRm3qDwZLiWPaqU4aQZ6snLDnutznh
Qk70ZDllvXFNdYR2PGoCfhActJI4GM1JBI9YoaXoCYOO4AmTQKKblslHhdRJ
XV4Y162IsGc3IrizXJ4Ur5IroQyp689RQmPVyryR25CJhkei+3alAZZCLt+0
Yp464ucOofM21jT2BoaHHRmTIldaIvveGZxRKdYlJLpWaIXsko8FSB1xIb7e
lODYvUWop9ug+iT1FI5bRq8tn4MJg70gqNSRT+jA9PO6hZ5O29CHny991aSS
U2qFnFMNdflWe+JfGvSlLuyVvBHrN4hPfPiNWsIbC7CTtb2En1ZzuyacWkPr
LGSGTkaohIjpMRQfRwwIddv4Xz1LFrMyHgSqdaEDn9ABDNk5CQ9Av9GDXCYx
Y7cLkkk2r3PrtInniENRWIPypmpDTG5HOJQk4341tEsHkmuSdtR5kThserJX
GV4NM1jzZPfsgQe+L+/f94G8pKmH8ugDnvbqlRg50D8BfUlrIe0/nuYHsYnX
/EkHN0v+o9jPg7zDQ0/j/URtM0AaOZmGnmXt5jVbMA8cXs3Tar5j/r3+we30
4LKLSvAXkM4ztOJ+AtHSzNrPncOSXDEa/iHHDpxcMAl31TTk1Aky/GmHgXZf
lVdsqNG+c3VtQWPZJyRo27NlvdCQeZroxtFMPsexx+JNxdQau0VB7D2t5+u3
xbt3f/np9fdPnp48PgKZKifMIleESqh+paGjZlErIqdBAqXhMyeZk5Q+M9B2
qpOq+qUhHaImiaGOHcHBX1JaBdGu+vAX7g6GepE4GAtPqqEKuQ1cPEYXO7cn
BJGWh48ISvTcyOcMo3uWgg8hSZ5jKmNir+WkKuW1i7lKVPODEWz0p6Eix2VD
sup0ZSpVX0o+asmJOyqAxAvMTgl/HEiKy7zxsJSXbNOMT4qfmpW79lyLgeLi
3krQeReeCmE3P0VS+5gL2X0TthOsHWqki3LBhBCzZtXuSyhTEk9S34CYMkHv
kOTbRLD2AVY5ePM4zrt3O45gzgFIPMF8ojmeO8Z4Lp8W1irrOZZqOYcwP+1i
uXRa1C0sKQaDfmHHLbDIIV98xm0YCOynyWmiaLNAEd9B/xsYNSn33spgXLSF
g4nx3ZlXS8i5AuaoSY6iUpCkHmdx3XAqhxDBJLz08uqQBXoEnMpyBUcmWFwv
krQYT2EetstiC2FPixdEgs0Ts3HO8Va4pkwI78klgR5+nSdXlj5x5cZXywJE
ZCRKXs/3nFo0OGjZoYpZqO2/+yzbeqpJk8Cu7Edj272JhuSDr0l6IiVdWjp6
zOpwyDUkS1Sd5GVMIQvoyMeEPf8uYvLweQSUJO2ieeIdJclEeBdcgklNEkRO
jCd2o+iFRtE5ZZPlCNYgcTBAQ3tZYgNoi61DHsH7wHWZzHG1b8MpU8HgOInA
jSjEteyBXdPaBePzaxfrJQm6tLPtKPgQkXhc0Ym4aIgFB2bSwwXVvl6dJL9w
/S7IfRAGpuLbgkF4DietiKTYiq3w1Lm9cewSbo8QHFW5nNV0e+XvIIOIzNcS
QaTMGaCEQjBpgnzFh7Vf4NCEx9TgFTnzOUUH87hG5sYP2e83zpmOPdvN2ATW
cqVjLvIx08N9I9aaE38e4fpIgzJ8Y1yhs0SVJb+aq4qTb4nFSeDwPvlaGIVf
G4IpPhKCCRtDMCNDduuNvki4n6Lp1fLzloUcRuZRDxAG0RH95TzOo6JanaFo
Xrey3qRKcD1GDOB4EJMr3CExgLNR9Pp9wmLYzX+xKVwj0kmAeXLQFEGhmoTt
Z9FJx5RXOicaeqKG8GaZVoKQm5reHusi4HNaNe73sOZyIXLF5a3oE3FbWTak
Uk9jxlGoNSxrkPQCSxeV984Pk761kx0WxBInkmH6aY6Lobo9qzudGXtQ7N0T
WJYTcc9yzKn4lTEn9Q/lL6jbj8LXxJR4yqDn6BqWEvr4FM56GqCS8BR8EROb
KXaqUSqMP2usihCIsmBVOdeAVSLLN0Sl2KebxqVczvpAIZEuEGcIUCiKq1WT
oNMVKo/Il3ROxBozQ12i6FN/6EnBN6GtrkxKAlB5gESFkx4HnAwKP6NX/rQ9
W1OVwWTTyg6HdfhqRT3RxkAubXNOY+gFpTqrm87kgg5MimdG6DcIvaj51223
TZHbK0qasAWhei4+rMxlzpZ/vORoUp2equokNCsCLovR+iFTNhyua94KLcLK
ADGQdGxZ/UOziq+g/PAEqIhqUHBbbgnq0JbQSTPMB4GMCP6QV8Kj9MtXhueQ
5i0UOnOcZM8vmN1SkEV4Jqm0P5c+oy7Oj2obBe5uSgUvMBiHaDWY7qERJbwM
064oXioOmp3JNAmLZoExOAy9uunwidGE3XT04pCiOITShHcefNfsMnKTOuQ/
cTF+VbyMomQUPOY7wyJlwxvaaxAF3feSyVowmlWMSonFuVSaTS5CiIojO40Z
koYFr7E9GvhW7rB6dvA3dlUzLGZ95hIs4lVi/gO32JNg7u/SA8uAcMPsPRp5
i/V0HJ5ifGHaodgubh/mNgCBj8eW6XxJU0KT5WKZeZ878LOapZ5jOARJTpfL
Be+15oyC+gYHp6I+FkZk7fCl4mNfcBciKuzr0/UUC/pImxzLh2P+UNwM2ZNy
6Xdy0EQJykglQOWBSQGTNziKY7KknbQk0Y6WHM7sbrNUJa+QV4udtcYyZzpE
kg9l5Qtp/hpMwqo5wzAXprZbpWI1v66XzZywi0W1Pjl8UTyXdzyk4bH1Cp+P
qEN4gXNIjK2h4uT4OR7DP8P/MQwoa6McHnl6xF8ePZ+4drL5E88wW1ScV9rx
5zvnvXSPe6eep6JvCR8UXxTIIoP9fv2fa9i56ytHJXRenmH4ABbmC6qmNqpt
znTw+KORDR0kyl4hbemuUpMbGlh93sZMDtiL9WJGrF6o1Ke/FQjH6kKc4T1t
4dzu4sSCLgit9P0eRRBDSRO8ilW/Y5qB9H09T7J2TfR420G7Duu1C4u1K4bE
GAM6+nLaH399+eSwd4PgFyMbBIkg9MjCfuysNyYZKQKiDlmH9U+veO9qY2B/
ysWbIx6CbuUruo24Ip/D7XrCzMc/4v0gouoKJ888DxrQkjhFNPBkIs7XmMdI
rgySUzwf+I28JkjHj394/vLpIwqQKIKVj66u+nKySzs62p8pBUz4pZlHUGyu
bJBWbobH08OO0STFs8zT5PaRjXzGRTEr5kDHhPmj7w+/uXfv3ggRnEmK730x
uf/hw4i1qFJKwaOxCHts/P3R47++fPzT4d/C9rt3f3oyfjSZwhW+GtfV6nyM
5RNjeN3YkOw+fNjhrLyWQMpP4nwWiUwJcTJh/MjJcgDNPKaqLgrLXOIl1WC2
xhEH9Tk1uYDHMNkB7u47zu2+wyKbU8TFdFN7WyeGjEp/rbhqVb5T6AP6W64T
/wQV8/TREA1cMpYaeYG8ROg2dallaZKVJGySL1IYDUdZ3EtEcBss+CWviTYP
dRZ/i6KE9cnbRrKgenZo0AMuwH2q0pxW0ob5fVGT4fLGxhL3xwy+TN7DEB3a
AntJFiV3uDPTZevMQZdKKlmj0dss7obH+NsD/KlpiA/Aeu6W43D9mILjBRSQ
nGBGGCLOStTBOrLklRlsNuPmSSHcLLd2AklOAS4OGpRW3x9xOHg8XCNqeLGB
U/mtlFcjOuJDU4XYT5eU1XJGhttYs9uxpS9claAArlAaqcVMaUIwAUm/xejU
w36KiUlawYhRqJXcp5QANh0jzgbI6af34QoCi34bc/FGlog3Kh49Pzx+crwz
IsEBn+L/w3YaXzKAYibG2rANd+YItZERXmYjVD12eNYqKqFsyXi8qqfTWXXa
vKX+Y3ILa5y7iNyKt6ZUZZXrC44JStW+8LYlU6f6QjdK8qaqFkEvdEEdYPRn
IukRnMiR4BySQFupNbiignC/Ej6RBE97kZ72XjAA9vo4zaonQwL6chCbcUBY
39c4a29RXZXDKmfd9rYe+GvLANRdqDSAEUd+mwGNxh8p4Rc4EnlDUmAi0aw7
kb5NOkp4KfFd9Xlhx7LWM2l9xo/3PU1h3p/4yu8e9HQkzero/2GEZPuViAUD
I4/Fb/3v62XEElH9YMPvxj2vS36KVyb9OxJv+2kbgKFhkxITRfjtPcI1r0NK
frJHqwq3l0DQjApiNLLM2T3JG3CE0P5EKpTZRxNtXGqNn588v4Yzanga7ENL
nHHVWfHbNFUmDi5BhEjyr7jancrjuMyZqZ2yBCwuZ1/DU1rajFneH0jD5QAj
GchW3o9uDPJybHMfYI189qOUujjfSsD2JG5r6Y2opiMMSveXUi7dqskpJT9y
4wTpTExhqOegNl05P5UlcrAb/pxqda3JGwVWZsoTTBtrBLgyvs+VV03F+sC+
eQBYLYSN1WZPbHMJJCzhDTXLWGk2rucMJrFe8PzaZoxANflsXFUCTaNtM3AM
XPXQwTLS41CaS2tP8e1ERfKPCMKZ3ODv3imQmnSDYQzG2dp/2Em4InqHqrkc
G4ZKxYOnN5Ja1Fblki4gDf9bvlWPaqpGCKrjp4jkPSvPENrMooqpHjopDuBm
vKWLSiF1jDzKFhUVYSwXQBKX1ZIc4pwEz3XUCL3Nv8iAlyZYkOQZgetWYIhK
bThExCjWDBQghMMTnEcmXixCQZL0sLPIYMTsb6BtGIFp8fLFTqwHwOZ8aqaM
bGZsbaaALoQDA/kHMP6I920lZcWM+r7AfCs06BqCwq6vqy4UVJYzGxI4KKWI
ua1WKhNYb+nZwaY0XjUkcTl+5wbJI3e6LB9Bh9MnY812CpwDUpOSjcGDxWhL
rvSozmrBMYmYqqBhXyrieSnvDOEG5QxyCeP6o4YXQMcq0Dg+yoMiBrNkMWdM
eIJ7+PdS8ltMDp3hbP0xZ6sT4xEFOVuaOwkocsGJpfJbQkT63eEEpvc1qG+8
0kQvubOTQJYOdIe1ow5P971vI+M7XtMrIvMqiMHvRsjqnDI0zFXYO79CV5qN
eyiB1Bo6Icw044PekYl6WK1KyZp98nGGv4ebqMIJDDlfDrGh3XrT1R7sOnGb
fnYrOq9mJwhIAkgvx42APS62703+8HaH2VcoAmycY1nqe4T955Wg3JfpssQL
kf37FEc9fPnwyeHnbcBmleSADvZ1FSPa7LwE0X+MsS7a14zCnlT8J2GTI5RK
5J/5+puvv/rwoRC3Hf06xF+rC4ecNhzBoPaxCkGCfjeKAGvpH3AOKf+EkJra
CDsnlwP0e4b2XkG7naJfijYhvWfiq0ngEEdFGEZsRUoZDjkJ/HLq3RT12Zb9
Q1gjBJItcAiApU58EXsfFLKeUC8lu4jC3niDtVfwK5eOH5L3yn1AI9FkKu58
2uUEM2N5VS3TZvKqlBnyOiw5dE2lJv0njrofUph/Ck2aj4auQEmBWlMGQ7tW
yKdY+0s+oGzlKaLJQD3ekoxep5SHi6NVdqm1XKaYyTwq6KgjSwYCxigIqq51
AHucYmeaL0wBXPV+R9ceQj9sLyu8EdlpjsUuCm+D2TjkJIqwJKx5uPjQDs2N
DYe+W6kTMLuyRuGG0WdaZMeg3baFaw4itjqv326Zj9fgJ4jhhFPebN0w2zZY
l1s/gzi/EkbHfnhIIH1P8lNLmKKdx1kbnDOjGb1ZfDUEYhhEkFQ2NjF2G69E
ambX90iqGuLiiTR9/HZRzq2nhLoK8ju2tG39Le4Wbo6Q5An+6zYLP0lmKPuZ
vPYYJl71Wfc1vZ/gBKqEhso9gq/cTt9JfgP/ynGcZehn75W1vSf+CneFx5BC
An8TvSqY9o3Pu/oSQoDHc4LHn5S9nYKuZUYzYfknWzWmxP8pBXCm32+G3YgX
I5wuQjtEg9J1ptMPwlD32gU8vGIDVFBMonoxNLBc68hYwIUE3PL8SLVmMjPF
Do5VYzLi/OZGFJnkg907qSyy3B3XAu1vnCD6xxia4B1/x50iMU27P06VIOpA
+tG3NgkbGJq743iI8Ef818GKZnOJ9ak81XEBB1Q9Yfo+WLHGLJsdBJbM7bTW
um/RrbPd+yfj9c46cXRsneD1Rswe6K7k4hKCT8F79pViRkU93hBkTYz9KXoA
/ZnrEDrDsvS2TYuzodkocB7YUrumvCj4DSSByIHJkZcD0mVdi1yr9GvRdy92
Z6jEW75FpX2DbPSmR/7LBJidalHx4PV7Bd59RmjL8MC41yegkMtEUiVAiAQF
6YxgxjMTtut4hrfJiRGJ8kbBq+gjU+UZgFrqbxJvFU2DYJOS77y5IYc6IU6w
n1ySh2LJudh2CndqdmgffgPFIsrzhEfYlBJVsadmCWT2NhXcUChwWi6wQift
JbKLq0KH5mVuwk5iHXl0VASW/epyI7Q3sGtVf8xR5x2u4kgtP3qGHkGzOFEe
7SsFxTd4evJlsOyg+Us9a+JZWLgdFMNZHNqTLGMuLy0uluWUqzNobjTCr9Z8
txRW5iRQkZdVug8EQOO15+vKOoCcwQJ78S7dyZLNjCF43yAMUJyVbP9hK7Dq
RyfPQ/zr5OkL++5b+1VfAXCD+durDoU0Ob0cHmYSCORivlHOVK2e1HzC9jl0
DMvOu2wf7tc9VxVFyDM4fzZl7nzmxaE7HO1yxsYntpmuU0hYYLBRcmerP0zO
I6FrCuyprA8urydMIKxrDD2CwSlAJqPIfK04dZRWTYdhu+8EFP0nYGcgvago
kgbYr1HYNzwjo2j034nejPTVO6H7Um0um+Ckuf7uFjuWsETZ1ohvJyZ1Zgd1
xgR3009VuRTVv+GgmGwfOM37pgpwnOQpdUkRHo7ZR+adS07/1NBVjEvZFMm0
8ICTb0aF6UTp7rNGsulxjXQmLtlpvo57saybpVOQxfboU569Z+vXKNd9M7EX
3IzLFh+ab6mf3DjhGmHLX+X73J3zfGp7XXc6t34KxQSgQOqheYWHNoEzHvp2
gP+0+9p7MlO86xIfNJOie/qvdItu6lm32j81vHrmULQo7Az5s7hGThbPVSbH
MOMBXmpPkbHxIWEvIs597NT2jrhaxb9KTYzkJnSxyu4c8ow8T37IIldTkVNv
fdxJm/uT7KUnrWiCD19FA4XnoLvlzE6NWwhRW927PEQlarXpVzyiHyI1G1u6
niFHElzU7uUCkGieZC9Lj0Uqqx8UYM0gpMdtdjy8AI5pXElLfcK6v71esc5R
cTXLphSnFrjwZDVjZi6pWTrM7sT19n5Ypm7wtGd3i2+wR+Tmd1Wf1P30l/Xc
u17E5Nw2+5uHrauxudf2FL+IZvThq++bZc9LJLUh7omuFr0jF+nGUP0jdp1S
ue2xlptS4P5INMUNEXytT+2G8sn9yT7ksi+Nmz22muMuHlh9oR4spGbQV4yk
IDPLUVLHIdGycqqpRlUj+4u68sNNtRSv9zZ7thN1uaAauOf29w5Tc5crprel
xrnHwbHSRBVPwsTlamg4pEpLSJh+MKG5Mggs4YH0IQOJ+/Y3qJwtFO9GrqDg
sxMYXBt1TDVemJ9SEv/6VoXHKpEOLpjtf7NUndjaUCxVMjaQI6b1oPTQBSGV
ZWOqXvplw45P2eHQiolzAdbraiYGpDKnVUsaGsikADZnyYzKViDd202rkj+m
ao+C/Im87fMdzvUgY+om5idge2NtD7d0Csz6eds3fcJFMbRckhd3G3iafUzJ
xrEdg11C03S7eZOOAmeb1Mwg0r+wO26PFBv3SBimGIXr4ZjYoZnWaU5bVAdr
d2GbbiI3NL0xejy7UbCBxGte8xtUdloQNX2oV87mIjD9Sa/Q7ZfGAz90UePe
2FYUuJ+x6vARwXopz0RJ2rvx+vcT+ROyWR79BqdwaPVFFyrlraUiD5XZMdT+
6YLz73QOcAoeabzukxRzunARlAPVvo9ctE6/sC/VbOvdOr9WkXCtbthXfRZh
mnDQ+11fm/qduWthR1gyWSf8MFKbYCVFy7TknWzRPjrgIvkiPYw5Pak7pqBr
ShZYkleaEfDx5wkXVdpGNzuimx/x8oVlR3iN5lAQEPuJdgQfscuuIyQhmIc8
JXjMsxVd+JTeKgzMhq4Yf73vUS1HwZf+sJfVWQiCo8Qa2HFkzNEeP1R8SwnE
8gX1TIhr4Fbin4ybcxuGQmJixhtqY3xjSTmh0GzFeO6K3MZG9NgSAw8OHpQ4
B2nRHSVGosnBpTqDXC3shq4IOZULI+BKXtVnrSFSoUNtzlXjiRy5w6NjBCe7
QyQNJaXsGTG83qI+c2VlW8dyB5xE+u1HdXkBPY4V9gSIwUWzSMl4tk95oCv6
t5bHWYLYVjrd/IuWgvBF8e5dMrNjbqP98GG/2KIQwhZWGwgrUIwpYHnWFshH
/J5q1Ojf9hXuj62z5Zqq29Eixn9ysv6dglncCqRv41niKIwBtuCh8UDSMjGM
TKPc43cKpL7d3Ehkmh5ogsBRD8sFtyBw9Xw1kPeZ3HWULWTpbNEQkTfKkzAF
ZPlT/r5Wv+A7fmbPq71FPLGf+p4sASp5C5u0Uiiie9uwZDXZUw+F9z1OWDuM
Pq9YaORlhnWz7hyL7miFpbBjzfFukLKMem4WH66ft/lY4o3T/+4O/ePuwHP2
Xzh22hX89142PvxDNuB73kLveRvE53jFchP6ffwH05TrP97j2vrn4M/fdhiU
emzN0yHEf9yf/OEP9Ml9/uKn3QPfDfhzwxjSf7zPnvsN+8950q4LJCnwH0jy
7vufKDrvi67W8A8Pp+hRfH+7EfqYaT7Cb+79/zRCjgPD21i0wz/2Jvc2reG9
yddf3elaD/+zRylJ1O6k5aMc6tU/PcB/wWhevpA3udHc/4pHg//4XzQa1N/1
TXbGbG1InPzv3oGu5EjuqCNSJVMS53ef8V8MD356uuQ/8a8PRHhOlD+GM6+l
sQmtj4T7N7MuhYTThpJSiZLbI0RPtKfVnEBlhEWGQcVKJhBiKHJSf6X8vWVT
oowJUd3KfNFrEpgFy8IwzH2sFnfwD1y0qcgLrXGlRhZoxypHyKBakC9l4kJo
NDI0PPUlI5dW/D4Ylm9Gt8wqJhmu24dMwEUwB/g8Q74X5+s5afFB4UdiLrtl
3mFOl6ayqFvT22O9vov8fWKCU9niqPMt2p8vJv7h/Al4QPYd55YnSWaUj+q2
Zf/7Ox/dTZrUkHmTZqF4lglQHiQFhA0oraf2nAucaY1wPnAmYWYZsQJjBQLT
O72dl1eg+11QuhPyrJwxl0xEFvG/rle3I6RsnXFVgVoimpNTWl6QukLrldTu
FB7IfZuJz4j6ponhs5L4OAONpDT6xWRAA+gwUp6fwgZ9ju5wbqJjkhfbW4eT
J/wtKrtbgmpEtiFVD/z+m/tff/iww9DJ2o4gYca8f44JIhD1L9WycUBtMeEq
HQbNV4dKMvJ3q4Nr3mC9/Ox1lD64Z1yPYZttS3f+pP3aL/auJHc1lYwP2Bcl
KjO/jXT0O90XydY7YCbohAJzRKFtlBMk6TC8TUaRS2f5tN1XCv8TQV/npVey
ZxDFU4oeiI+5r1yXcraW6+rbwlD3ZIzRJcDiC+TgrN3BLR64ALNtHP5Vdm4m
xVPMg1IJ58dkY5VNi/B2jOTXCUw1xnqDPvwkv5Dj9MnQedTBBoaCHyGV+pL/
treyyUGg4pRCiLPx0MFrifd+GLTPFX+H0T8tb84NAteBELoJ9LIvUSzn75Dc
QM/E6d8r+4dAJ+CGYhJMB6GVcNlv74AYqRZD7Aj9vPfumGmVG58A7zm5U1AV
/vbevXuSHs8tPSuXF/X8RbXEOwdT/+GBxG+a77/37wthW0gOnLow81PInNLd
3ju3dv+oDr3jZyfeDqTRECRmkkgoKZxYNmQMeFyUcUPhmfnZreSvOx0hYMlg
kYAMqsOlZ3ao3IgQIeGO2DMsOuZOekh4egUDq62KvX9zsCJ4/tFTh1idF4qI
WxyjG/WvgqCUQSChIxQBM+RP1Ozwb/nzQxpDLa9Q2PxCAUMJS1NK5WVVTlsi
6Zhdd7GqtDZzG7WpcPji5aj46cmh8wMJ3yq5hteLlfcQMd0ZyBacv7GPU37k
gjL2LuZfODl8sSM4RIbO5CBVqiBwoPhOqamLIEbb9snh5PjZ8fFOARuB0VcZ
y7KNoPyKLCUx8Ag3ovBbPagkO0YM2MM9FGIS6FX5Rqm31fGG2CSEKBWTsEeq
eCRY0qSRBl30jk9a4hkLaO68Vvx5BcXBFTfWq4Qe2+ZPo8vWn9qhzZURTpZ8
1QdWQWdIaSROzzE0MCU0NTpcRhUfs7jrleCytuxJJoMIHpXNyJR0I/uYj+fI
H9a4q7SWCr+it0B7sVaCkA64t6f1xYXrrCVYEg4p9ZdIJD7S4YC5/PI6OATu
4Ejqq0GzG06emiCUq9BwtdllvRA6IGiQEVZa24kpQQSliPXkQtMVvYJhwvie
zx3cE4U31/NWZCXKDREbCdVOJkFg/4eBOyRpYV+Up+THDzI5fgfVrP4Hr0hW
+w9Hxe+/LP7y8HZVmWaW/wZMkfw39wn1jw6y6mM08iFgTETGmJNVKXCRjEbK
Vdgw21n7gSm2QDNDnO/IvZKtOQZ26gXFwfAJRZxLQBKZq505FXHRUvDlpI1e
MC/D54Q3cD4+SBANJumjBPpIkglGiwUDZd9sCAIdFqFeLVZopOPcUJt0lkFT
Pwd9rz5VGVjq7RMzDgsGW4ErJ4q5MdsMHyyekNsSAiyjIisVTbFAYxKeWAV3
99sQ6z262FpF3V2wK8Wl8ln2SSlBnnI/UhaDRK9UMIFaK8jhZ7CqkSqa0VlZ
sSNzNIJJqfiMmezpygXqh5pCSZDv8yiJW8lDueTwCavOgqVO7H6eOA17cbHE
0hPpBNUbcAVzJzqkeLTUtzZgmRMynKhYJNcEAZiSzZtCmAqyRqslyDwNDJeF
s6BeoBRN25CqSMVmk0i55GAK11RG/3krR1MyKCQHky4nXA5+iytaossJr2Up
msW3VKue80bZaf5yNf8B4qtiioYhlrRqfafZJRwdXpK6YWwV1BCmhrYJ2bbF
1GwjULpdxnYcDI9eOOvxFcb5s6zGuuAZaZyg6TrsYcozCwhIs46bEOEMl9id
2S0rKTH8LTZgK1xZ8hKZidAzdYWkllkEkTJ5nqgbAV/37jMxisf4p+QSWDEO
Rwg9PD/ZvOt6hkUhfUek0CMSKIlODliuPLXqAqKnnRukXOneOuCdyh6LZHt5
NwRHG1vbNwZmiIwftQB1tmfNQoUwo8ipSFIoJJyqxDeAj0ql0LxJJwBRWzBd
VKrtSYdEQHeCvamYx2HUbTEyd/R6e0Lu7Sk2eHty5879b77+8IGOP2rdus6H
yojSjTLjfSBfCp+ZfvXB8Ai7vxJhv14QWAnlNYnvKqsP4znmgrp5kOuC1VX+
Gcr/2Q3SKiMjB8IT8WmaNoyrKHVvKNnposw3N9UFnqIV38InLSjQ0/1OlVwu
C0YG90EUXVE2u+MZAXrgHK6v1pY2c44MzATeHm+H4BJMqEBQNUhBLic+naQI
DyV9IhlSoW+Qmn7a+1TGjMOlu1LzpAQjAruEAe8DyPpnYnJtO2eDOgpUePVk
4kuCVubNQ08cZhXU04TuAc+OK7w4nS48jKB7k6+qFR+H/EAcDqTjytESHGKL
2ZyVi4xfMgMsTst84XHRW/1HnLyWwhn/ih8+q+fIuXXYgejbkCKWNHz3QXH/
DivNvSXG8ExIK8eThUv72LfCvtVPmVBf+JPsqU95Y5Kn0zONdzuoiLZJko39
4JO6GiHFt0ydeQ17Z6tgxJal0BswTqicVvU9pORLSLYb2ItU9ehHCZMAyACP
TM3CSNiEC2EdPL0JHEGarmPKqdv4oMVIzGK94LMNbxGU05joxElEIWOSIBpw
Mg6iR0X5apCSI+bkC4eyM0iCIY+rFhTxoQVVeURknggJNiXnirkipizp3r07
2PsKuTZp4tkkkRmXy0BmWoKCcaKb1H0Vu2PmeKTNIreeQ+Yn5pV67rHzBegn
dP0+ejUqxyEJX1wPPKszdka9+0w5DfFbKsJf2LfCHddVzMWAa4S+ZNZwuCkT
BcX2l7rPuAIkfFn8h9nGckXyjz2hjlTkkrGzKkivxa0FiypaKHohgq9MmdVg
diJ3gBK+2fJiWT36qwmAjAv7or3NHYGOB2Y0j1rQpkOiXlc2OeJkoRc/RC8+
zVF29hTHv7ifEhLH9gXTuOVjM/C4A6FjvciGK8evWtrSc6ooGRuSqpansV/Z
I7wF6m4BBS03uk8XSwJdxxbIKXlZK2Qwz52gWYkNIbwipPWQ5itnLRgPkuIC
Msu3IECzt6FjlRrmXkFw0WBIGBLCacW2i7d6tP67HCALLLaPTp5LbUVmLWnp
SlRXysB5kdRxpL+UBd3TPWQ8qsrgJCDhbPCqQmRBqyBNR4iuOfqdZ5ExBHqI
1A1Tcb7DhGzRL7jIj8Aln+lB30JMw3fvaPVK+35sgoAYgV2QVx3atDLf4waz
7YB1qKq0GxyEGmC4LmHrE3BjKQ333btPwY61rmHHGJc0NWnZwckVEXFBKFxI
IAsIfQ9aU9NMt/wiyaMgV6YK5FIp4oS2nYEyECvE7FaxFWLJf1+3nP8VNcQW
nXbLalEyJxFeSgK+AtvdPctBM6bW3MIk+i1BPSslZIs+aoMry4pHAj9KWIBS
kCH5JBS9mMHXM3FCkJhfW5K95almc5uVi9A6bjouiRb9fE51AvC5D4GBrYKy
P9aSDoFBSaceeOiau8WeMXlj0EFXLNmkfZ3AB/T7X90b/0YhgE7nadsXbaEy
kPQHhEhVk4OTzCLZAyHZA4iR1/i9l9X6UHAN/hjzdqaZ4d18U4VLuNXXyXwk
3Uvm44hblEHzyCITdP/wGAevVYz4FDhAPGkMhNwLLTCQ/Mxy8lQwNqoicToG
bxbmmDLKDqk/wg74QDSflfRo0MuiJ+e06qAoatp3/yRGZoK4ZxTHI5lTUlXQ
BqbwrYGCCzECPmcAMOMom3S3b68iMo/eAXOt5yCA/3RpQHpolqDknJOf8Zyu
hqHYug0gmrW/e0KbPR6RTs2QvcbVnDOahR1T8nPidw45vfOY1kfFDyN4e88W
3U9lAcdT+FdpKzsJvFSPXmPZlL0qDfk+mA75CTsx0Ts3raIiwYipmhyUUn5t
YoYMzAw5Io3lokEGAGoLeWPemDaR4MWh7wczyEOWrkgBRNFDvXnlPHIdA8rj
TXPiCB4nFi9SfRGvzUxB5yQBMgCg2/H2DGLBROXWTcAGa8LmZMTFPOl9oTPn
F56R+2Agun16LGktj/e//7MZ1p0WcAulHw74KMQbkD7ry/dFnul7cxSI1Lfh
jk7czRRRtC6kE5Bu6EHlDr2Xg4odC63cGa0EhPj/1xWZ1wT4XTNYOob5wdqd
TkHrUvjfILr6pCiyvMXW+AFBojbN6hLDCSjsJDtKOWFok7NmF+Taq1MEXfMI
lymZtoJ4TRg0TpWVNjIkh45nBKF+rzVQw0GZGKoRpd6FGeorJHqBt8xuNfLT
cTzmb2fFu9/2hcPrtHgHO4YlhOV6xYW5I1X4nQ4Gx3reONpxihkqcVHgdA/P
l7Ytflm2TCiQK+Dcs9sd0QrEWYpozJxgQdGjmPCH1zXrk7HltthKOTu2zLzE
nE/8PCFod8H7VRKm4UC83I2Y8tUmkXwcdB619wJgwHG7IW9rwykD/TEd1Kiz
fjt2gVH78rvvur639+89/BI8kfh8O50Y6MDm260jkgYlT9+3omdF/584L/qp
lAvQU5ppHUP5jObFRZeF3ImYxWgoeeIEMfLiyBfpIgQbsz71zmADcuEdgJ1k
0FgomqaA6u2q0cOY+JnHselNeIBs1UaSKEf3IOrk8SY8pVAXJ7rEk9jZBSQJ
KzM5zR0a4JKaFM9XwndIrBy56OhrkFyJ6v0DGxOnTUBIWmRgR1UgG0SxTSR3
jM2e7EGbfIv0oC/l7+gjTgLqPn1boenSURJdJiyJgIH4adY5U+JdnUt8iOex
X1BOkMCJ5ZDzFoYN3sLaZwKSe8XpH0w5jelH7PJFb2D4x5QSIyx5qEOhXz3U
fXTE3iuXV/LuMx0165PGhMKOrnEMWqLH1A871cBsq6qDzH5HadXm8+HDELZi
me2W1GarM4yLVu3QfLQ6PSTl0jYlaVSsr4AV5Q7HOBIYJAUePpC4DjVPVoUH
AMisiwi6Kg9Z8KQHjCMR073qVs5mUXSfGn6j3jWYpftDVU6XTXPFwGGE04A+
odu0Z7EAa9vQXWqeE1B0NdAnUTK4kiw4NoQwEQNqw3G0XlXSRX0+K55keVTw
/2vEUEgj7AQ1sG4/CJKjCle0M5dC98JP4B5iszXPd1UeGs0mCGqGErcnNqrK
B1fzcz4U5SBh1mPTkrVGrcDDT9DSAsV1/AiZNjkRLc3ppNqDtpwl7kdOAfjD
N1/e19BLdrfkvdZcfx2xsMQpB00JC9lyJPrJ45Pv1fFsaaaRsKCmPy6WmjiM
3aZLEfqDBWPoqa/C3FwtdMXWNmjMx0JKnusaVcU8+43ubqPnwC9v0dfSLFs2
+oQ+FbsIctUzpcIlVp2fYxAErxJOElgIjQJcH/X5beAMkogbFHE76bUrBb1u
11K3D2+jyaA8lvp0Dda65N8GSzaQKSwlJHO1plKalXJZYGGH3LOY8Ad7ogR9
gybCCvE6G4xuxnqJiX5YjobGyBEcTUpbRN1mel0LMW2cZk6YyJvCOET1tiaQ
rgNNMcIfuu0zKra4CiS68xFSp7rRNElNZb1YNmvSagPulos5cRgnyhW2rBwP
kotEORvs1ZvDOSG7HbTnOad9gYEuCTXYU2a9R2bJa9Gq4GG0xknIM79Y3CvY
tfOqmp4iXbq9K1yVkv9jc4F1XoIh1bLFcVUyH3DBuYwYc20kwch2pow62Kgl
EbPOwKdIaNxS7Q4Mbwsn2mSEHvqbJbkbR5nJ0CNZQPhdfxETXcJifTqrz8h7
QgdrSondlmsPsvFOcdysl2cVzWbx8ugp31V3isvVatHu7+5egJBan07Omqvd
i6a5mFW7p6fL3dNZc7p7/cXu0eODR88eT66mv+pXILB268X1l7urs8Vr+GJy
5nqyX/yZfkEfPcOphttyH9cAbGPyFOEXT2u4+lt4GGd7POO/pvDbF0+v7xe7
xcPjR/QcXuTl2Wo/9owWZMK9og5Od2E91lfYw/G0uubm0TPN1hc0+hPoO+Se
vH9/VNy/d/+LgNOIrDt9M9jmU3jWTs4uwcKu11eTZnmxq3/stsszmgvYFsvp
awx63u6iv+uyoq/wn7tw6qrdqOG8lqw47O391xKlPDv7f/3Gy398wWBl4Bq9
led/k/V5frZqaHn2aHn2wj93itjWksRNcpUbQW/MS47ZZP/EIePdOtOpcadn
8gbv9RmtHrxgtz272p1hk7v8BT6zCzcKyLdpy99M6KNlVaXnixIMdYPY5Pe8
tGfir9oLnfbd/3Pv6emjxWr2/Jdm9/HPF2+O/v7zDw//evDjv+Lo/tYb5LgC
85NP8Je2RX7FCe6KMzk0JNEwlUE/2HyCtFJQSFZ6Tu6/7E3/M0/sV7YaWP92
Rh3BN3iPC5a+8Tfj1BejaripuFx5BbqculrlXiZmrdmtpEPzS/Ccd0Ai29As
e4q7HBUblwAQ7bVxd1ex1bSDQQjMSYeScp75FMsSN72Dtauvfv/1HoX10Tg5
+OmgOyt1OS8HZkS1KK08pwbKM0l5J1q9G/ZyaeoEFhpcLokUUtg3qmkADZaZ
5vbu/x41PY3AP/QseFfNFNRjVDuJ04MdVzf1kvnSSnLIotpu7nWZa1S6l9wM
0kpOp+guFd5Y0N4wlYAUJCkGmK/x/LYTyaKhGkauzEXFmNbbiCo2TS6BNdrK
B77PNJFYLQdu3mfeuGz1kaR1cLIQZ7XF8QUd30QjxmTbFI+nNYjr/dSMskSf
q+aaDTa+TEw5LOf4Y94FB+ZHZgX53Wdl+onEMcr1CixkVvYJ+eN8TVXnT+Gu
+susqudgjL0h7w7p7bgmiiZwWTXLW3dcQncqJ8Ur3jmg7Fan4tR7Wi6xbGJZ
vrlqmFEQflFxfFheQPbTz9VFiTl/D198g9SFOOk/waLbx5gKuPn8qcKP1ld5
Qcx/MPeVIn+wDszLif2gjn3ektFzxm67i3U9xdJUidMHDntUJYwYdt91NWsW
V0JNE+fypllj9cKMvItvZKeV8zfFX7Bupvgb7BMkyKlvS9hZf4dV+/GyHBV/
a9bt+g0ItjXqHU/X8MMf2rpstNgMjDSdfky7g558W/wIp7p4clvNL7CE/eea
MCh+LtsabMrrUfEQ3vZq3dMAcUBSCzhIEOor/B9KDeSVDmivIiFwsuRJZQNH
xlfMaCO7ENfl2bO9b0AW4crHiQjpRBxOiuNVtUC/55/BQBsVj5f1WfFofVX+
gnx4PyH13KqED968wVq4elS8KG+qWfEjXEi/3L4ZhYcgMmC3N7ej4lEJVhS8
DY1SMClHxcEVDPvn8hJuD5jHqpnDErbFX2B3NPh2cj+w8wPHBNN+skaLOV5J
I0ea9JjRaOTQ801YHB89LlZVedVqEV6NPj+zHilLgQiHwLoHueHmIp2GIxTw
YAkf0XTcgII9Cj/WV8WrErQ2eOXT5gL6izHDnyvkxZsu0Wv5A439BWxLENgl
ErUfTJcg34t/L+HyBqN/VBxeLvESKefhh3UNykyJQ2qKf4cJh731YzMvybX9
DDqHLdIGeA7LBP/+cT2/bKCBBib9sFy2WF79kAq057BpQSOoS/jdGvGff4Gu
XRYnFVwF8xrnHYcDLzrFvn+//O//mv73f+G6Pv3v/zorMU7Bs35Srto1mBRN
cdKu/16/KW9KSr1UA9suFdxQPO/VtOYcTDx5NagCWEkZNXB3iaH0G4/HBTYU
/i+J91gLmFkCAA==

-->

</rfc>
