<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.38 (Ruby 3.0.2) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-ietf-cats-metric-definition-07" category="std" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.33.0 -->
  <front>
    <title abbrev="CATS Metrics">CATS Metrics Definition</title>
    <seriesInfo name="Internet-Draft" value="draft-ietf-cats-metric-definition-07"/>
    <author initials="K." surname="Yao" fullname="Kehan Yao">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>yaokehan@chinamobile.com</email>
      </address>
    </author>
    <author initials="C." surname="Li" fullname="Cheng Li">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>c.l@huawei.com</email>
      </address>
    </author>
    <author initials="L. M." surname="Contreras" fullname="Luis M. Contreras">
      <organization>Telefonica</organization>
      <address>
        <email>luismiguel.contrerasmurillo@telefonica.com</email>
      </address>
    </author>
    <author initials="J." surname="Ros-Giralt" fullname="Jordi Ros-Giralt">
      <organization>Qualcomm Europe, Inc.</organization>
      <address>
        <email>jros@qti.qualcomm.com</email>
      </address>
    </author>
    <author initials="G." surname="Zeng" fullname="Guanming Zeng">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>zengguanming@huawei.com</email>
      </address>
    </author>
    <date year="2026" month="May" day="08"/>
    <area>Routing</area>
    <workgroup>Computing-Aware Traffic Steering</workgroup>
    <keyword>CATS, metrics</keyword>
    <abstract>
      <?line 99?>

<t>Computing-Aware Traffic Steering (CATS) is a traffic engineering approach that optimizes the steering of traffic to a service instance by considering the dynamic state of computing and network resources. To
enable such decisions, CATS components exchange metrics that describe resource conditions affecting service instance selection. This document focuses on compute and communication metrics for CATS and defines a
hierarchical abstraction of these metrics to improve interoperability, scalability, and operational simplicity. It does not aim to standardize raw infrastructure (Level 0) metrics; instead, it specifies higher-level representations that can be derived from raw measurements using aggregation and normalization functions.</t>
    </abstract>
    <note removeInRFC="true">
      <name>Discussion Venues</name>
      <t>Discussion of this document takes place on the
    Computing-Aware Traffic Steering Working Group mailing list (cats@ietf.org),
    which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/cats/"/>.</t>
      <t>Source for this draft and an issue tracker can be found at
    <eref target="https://github.com/VMatrix1900/draft-cats-metric-definition"/>.</t>
    </note>
  </front>
  <middle>
    <?line 105?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Service providers are deploying computing capabilities across the network for hosting applications such as distributed AI workloads, AR/VR and driverless vehicles, among others. In these deployments, multiple service instances are replicated across various sites to ensure sufficient capacity for maintaining the required Quality of Experience (QoE) expected by the application. To support the selection of these instances, a framework called Computing-Aware Traffic Steering (CATS) is introduced in <xref target="I-D.ietf-cats-framework"/>.</t>
      <t>CATS is a traffic engineering approach that optimizes the steering of traffic to a given service instance by considering the dynamic nature of computing and network resources. To achieve this, CATS components require performance metrics for both communication and compute resources. Since these resources are deployed by multiple providers, standardized metrics are essential to ensure interoperability and enable precise traffic steering decisions, thereby optimizing resource utilization and enhancing overall system performance.</t>
      <t>There are already well-defined network metrics for traffic steering, such as Traffic Engineering (TE) metrics and IGP metrics (e.g., link delay, link delay variation) <xref target="RFC7471"/>, which have long been in use in network systems. In the context of CATS, computing metrics need to be introduced to enable joint TE decisions. <xref target="DMTF"/> defines some fine-grained computing metrics, such as CPU utilization, but using these fine-grained metrics directly lacks scalability.</t>
      <t>This document does not attempt to standardize low-level fine-grained performance metrics. Instead, it organizes computing and communication metrics into three abstraction levels and defines a metric framework based on aggregation and normalization functions. The framework specifies four categories of Level 1 metrics and a normalized Level 2 metric, balancing metric expressiveness with scalability and ease of use.</t>
    </section>
    <section anchor="conventions-and-definitions">
      <name>Conventions and Definitions</name>
      <t>This document uses the following terms defined in <xref target="I-D.ietf-cats-framework"/>:</t>
      <ul spacing="normal">
        <li>
          <t>Computing-Aware Traffic Steering (CATS)</t>
        </li>
        <li>
          <t>Service</t>
        </li>
        <li>
          <t>Service site</t>
        </li>
        <li>
          <t>Service contact instance</t>
        </li>
        <li>
          <t>CATS Service Contact Instance ID (CSCI-ID)</t>
        </li>
        <li>
          <t>CATS Service Metric Agent (C-SMA)</t>
        </li>
        <li>
          <t>CATS Network Metric Agent (C-NMA)</t>
        </li>
        <li>
          <t>CATS Path Selector (C-PS)</t>
        </li>
      </ul>
      <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they appear in all capitals, as shown here.</t>
    </section>
    <section anchor="design-principles">
      <name>Design Principles</name>
      <section anchor="three-level-metrics">
        <name>Three-Level Metrics</name>
        <t>As outlined in <xref target="I-D.ietf-cats-usecases-requirements"/>, the resource model that defines CATS metrics MUST be scalable, ensuring that its implementation remains within a reasonable and sustainable cost. To that end, a CATS system should select the most appropriate metrics for instance selection, recognizing that different metrics may influence outcomes in distinct ways depending on the specific use case.</t>
        <t>Defining metrics requires carefully balancing multiple considerations, including metric diversity, granularity, and rate of change (e.g., update frequency or advertisement churn). An excessive number of
metrics, overly fine granularity, or high update frequency can lead to significant signaling overhead, reducing scalability of the metric distribution protocol. In contrast, metrics that are too few, too
coarse-grained, or updated too infrequently may fail to provide sufficient information to support effective operational decisions.</t>
        <t>Conceptually, it is necessary to define at least two fundamental levels of metrics: one comprising all raw metrics, and the other representing a simplified form---consisting of a single value that encapsulates the overall capability of a service instance.</t>
        <t>However, such a definition may reduce implementation flexibility across diverse CATS use cases. Implementers typically seek balanced approaches that carefully manage trade-offs among encoding complexity, accuracy, scalability, and extensibility.</t>
        <t>To ensure scalability while providing sufficient detail for effective decision-making, this document provides a definition of metrics that incorporates three levels of abstraction:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Level 0: Raw metrics.</strong> These metrics are presented without abstraction, with each metric using its own unit and format as defined by the underlying resource.</t>
          </li>
          <li>
            <t><strong>Level 1: Metrics combined into categories.</strong> These metrics are derived from Level 0 metrics by applying aggregation functions and, optionally, normalization functions to form category-specific metrics, such as computing and communication.</t>
          </li>
          <li>
            <t><strong>Level 2: A single normalized metric.</strong> This metric is computed by aggregating lower-level metrics (Level 0
or Level 1) and applying normalization to produce a single, unitless Level 2 score within a defined range.</t>
          </li>
        </ul>
      </section>
      <section anchor="level-0-raw-metrics">
        <name>Level 0: Raw Metrics</name>
        <t>Level 0 metrics encompass detailed, raw metrics, including but not limited to:</t>
        <ul spacing="normal">
          <li>
            <t>CPU: Base Frequency, boosted frequency, number of cores, core utilization, memory bandwidth, memory size, memory utilization, power consumption.</t>
          </li>
          <li>
            <t>GPU: Frequency, number of render units, memory bandwidth, memory size, memory utilization, core utilization, power consumption.</t>
          </li>
          <li>
            <t>NPU: Computing power, utilization, power consumption.</t>
          </li>
          <li>
            <t>Communication: Throughput, bandwidth, link utilization, loss, delay, jitter, bytes/packets counters, and other network performance indicators.</t>
          </li>
          <li>
            <t>Storage: Available space, read speed, write speed.</t>
          </li>
          <li>
            <t>Service-specific metrics: Requests per second, output tokens per second.</t>
          </li>
        </ul>
        <t>Level 0 metrics serve as foundational data. Some Level 0 metrics are obtained through performance monitoring, some reflect the current operational state, and some are static. They provide the basic information needed to support higher-level metrics, as detailed in the following sections.</t>
        <t>Level 0 metrics can be encoded and exposed using an Application Programming Interface (API), such as a RESTful API, and can be technology- and implementation-specific. Different resources can have their own metrics, each conveying unique information about their status. These metrics generally have units, such as bits per second (bps) or floating-point operations per second (FLOPS).</t>
        <t><xref target="RFC8911"/> and <xref target="RFC8912"/> have defined various network performance metrics and their registries. <xref target="DMTF"/> standardizes a set of computing metrics. These Level 0 raw metrics are not standardized in this document, but can be used as foundational data in CATS to derive higher-level metrics.</t>
      </section>
      <section anchor="level-1-metrics-combined-in-categories">
        <name>Level 1: Metrics Combined in Categories</name>
        <t>Level 1 metrics are grouped into four categories: computing, communication, service, and composed, with the possibility of additional categories being defined in future specifications. For each category, a single Level 1 metric is derived through an aggregation function and, when appropriate, further normalized to
yield a unitless score reflecting the performance of the underlying resources. The Level 1 categories are described as follows:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Computing:</strong> A value derived from aggregating one or more computing-related Level 0 metrics, such as CPU, GPU, and NPU utilization.</t>
          </li>
          <li>
            <t><strong>Communication:</strong> A value derived from aggregating one or more communication-related Level 0 metrics, such as communication throughput.</t>
          </li>
          <li>
            <t><strong>Service:</strong> A value derived from aggregating one or more service-related Level 0 metrics, such as tokens per second and service availability.</t>
          </li>
          <li>
            <t><strong>Composed:</strong> A value derived from aggregating a combination of computing, communication, and service metrics.</t>
          </li>
        </ul>
        <t>Refer to <xref target="aggregation-function"/> and <xref target="normalization-function"/> for the definitions and examples of aggregation functions and normalization functions, respectively. Refer to <xref target="score-meaning"/> for the default policies and guidance provided to implementations.</t>
        <t>Level 1 metrics allow CATS components to focus solely on the metric categories and their simple values, thereby avoiding the need to process solution-specific Level 0 metrics.</t>
      </section>
      <section anchor="level-2-a-single-normalized-metric">
        <name>Level 2: A Single Normalized Metric</name>
        <t>The Level 2 metric is a single, normalized score derived from lower-level metrics (Level 0 and/or Level 1) through the application of aggregation and normalization functions. Different implementations
may apply different functions to characterize the overall performance of the underlying computing and communication resources. By consolidating multiple lower-level metrics into a single score, the Level 2 metric significantly reduces the complexity associated with metric collection and distribution. <xref target="score-meaning"/> further describes default policies for implementations.</t>
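As an illustration, a Level 2 score might be derived from already-normalized Level 1 category scores with a weighted average. This is only a sketch: the choice of weights, the rounding rule, and the function itself are implementation-specific assumptions, not behavior defined by this document.

```python
# Illustrative derivation of a Level 2 score from normalized Level 1
# scores (one per category). Weights and rounding are assumptions;
# real deployments choose their own aggregation function via policy.

def level2_score(level1_scores, weights):
    total = sum(weights)
    m2 = sum(s * w for s, w in zip(level1_scores, weights)) / total
    # Round to an integer and keep the result inside the 0-10 range.
    return max(0, min(10, round(m2)))

# computing=6, communication=8, service=7, weighted toward computing:
# (6*2 + 8*1 + 7*1) / 4 = 6.75, rounded to 7
m2 = level2_score([6, 8, 7], weights=[2, 1, 1])
```

A consumer of the resulting score only sees a single unitless value, which is what makes Level 2 metrics cheap to distribute.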
        <t><xref target="fig-metric-levels"/> provides a summary of the logical relationships between metrics across the three levels of abstraction.</t>
        <figure anchor="fig-metric-levels">
          <name>Logic of CATS Metrics in levels</name>
          <artwork><![CDATA[
                                   +--------+
              Level 2 Metric:      |   M2   |
                                   +---^----+
                                       |
                         +-------------+-----------+------------+
                         |             |           |            |
                     +---+----+        |       +---+----+   +---+----+
 Level 1 Metrics:    |  M1-1  |        |       |  M1-2  |   |  M1-3  | (...)
                     +---^----+        |       +---^----+   +----^---+
                         |             |           |             |
                    +----+---+         |       +---+----+        |
                    |        |         |       |        |        |
                 +--+---+ +--+---+ +---+--+ +--+---+ +--+---+ +--+---+
 Level 0 Metrics:| M0-1 | | M0-2 | | M0-3 | | M0-4 | | M0-5 | | M0-6 | (...)
                 +------+ +------+ +------+ +------+ +------+ +------+

]]></artwork>
        </figure>
      </section>
    </section>
    <section anchor="cats-metrics-framework-and-specification">
      <name>CATS Metrics Framework and Specification</name>
      <t>The CATS metrics framework defines how metrics are encoded and transmitted over the network. The representation should be flexible enough to accommodate various types of metrics along with their respective units and precision levels, yet simple enough to enable easy implementation and deployment across heterogeneous edge environments.</t>
      <t>The design of CATS metrics framework has the following principles:</t>
      <ul spacing="normal">
        <li>
          <t>Semantic granularity and extensibility: It adopts a layered metric abstraction.</t>
        </li>
        <li>
          <t>Metric source: It follows <xref target="RFC9439"/> by introducing a 'Source' field to distinguish metric context.</t>
        </li>
        <li>
          <t>Interoperability and flexibility: It allows implementation-specific aggregation and normalization functions, and adds default policies to ensure consistent cross-vendor interpretation.</t>
        </li>
      </ul>
      <section anchor="cats-metric-fields">
        <name>CATS Metric Fields</name>
        <t>Each CATS metric is expressed as a structured set of fields, with each field describing a specific property of the metric. The following definition introduces the fields used in the CATS metric representations.</t>
        <ul spacing="normal">
          <li>
            <t><strong>Metric_Type</strong>: This field specifies the category or kind of CATS metric being reported, such as computational resources, storage capacity, or network bandwidth. It acts as a label that enables network devices to identify the purpose of the metric.</t>
          </li>
          <li>
            <t><strong>Level</strong>: This field specifies the abstraction level at which the metric is defined, as introduced in <xref target="three-level-metrics"/>. It is used to categorize the metric based on its granularity and scope. Because Level 0 metrics are not standardized in this document, this field takes one of two values: 1 for Level 1 and 2 for Level 2.</t>
          </li>
          <li>
            <t><strong>Format</strong>: This field indicates the data encoding format of the metric, such as unsigned integer (uint) or IEEE 754 floating point (ieee_754_float).</t>
          </li>
          <li>
            <t><strong>Length</strong>: This field indicates the size of the value field measured in octets (bytes). It specifies how many bytes are used to store the value of the metric. The length field is important for memory allocation and data handling, ensuring that the value is stored and retrieved correctly.</t>
          </li>
          <li>
            <t><strong>Unit</strong>: This field defines the measurement units for the metric, such as hertz (Hz) for frequency, bytes (B) for data size, or bits per second (bps) for data transfer rate. It is usually associated with the metric to provide context for the value.</t>
          </li>
          <li>
            <t><strong>Source</strong>: This field describes the origin of the information used to obtain the metric. This field is optional. It may include one or more of the following non-mutually exclusive values:  </t>
            <ul spacing="normal">
              <li>
                <t>'nominal'. Similar to <xref target="RFC9439"/>, a 'nominal' metric indicates that the metric value is statically configured by the underlying devices. For example, a nominal bandwidth can indicate the maximum transmission rate of the involved device.</t>
              </li>
              <li>
                <t>'estimation'. The 'estimation' source indicates that the metric value is computed through an estimation process.</t>
              </li>
              <li>
                <t>'directly measured'. This source indicates that the metric is obtained directly from the underlying device and it is not estimated.</t>
              </li>
              <li>
                <t>'normalization'. The 'normalization' source indicates that the metric value has been normalized. Metrics of this type have no units. This document specifies that the normalized value range for each metric is 0 to 10, where 0 indicates the poorest compute/composed capability and 10 indicates the optimal compute/composed capability.</t>
              </li>
              <li>
                <t>'aggregation'. This source indicates that the metric value is obtained by using an aggregation function.</t>
              </li>
            </ul>
            <t>
Nominal metrics have inherent physical meanings and specific units without any additional processing. Aggregated metrics may or may not have physical meanings, but they retain their significance relative to the directly measured metrics. Normalized metrics, on the other hand, might have physical meanings but lack units.</t>
          </li>
          <li>
            <t><strong>Statistics</strong>: This field provides additional details about the metrics, particularly if there is any pre-computation performed on the metrics before they are collected. This field is optional. It is useful for services that require specific statistics for service instance selection. The 'Statistics' field must be used together with the 'Measurement_Window' parameter to indicate the sampling time interval. There are four kinds of statistics:  </t>
            <ul spacing="normal">
              <li>
                <t>'max'. The maximum value of the data collected over the intervals.</t>
              </li>
              <li>
                <t>'min'. The minimum value of the data collected over the intervals.</t>
              </li>
              <li>
                <t>'mean'. The average value of the data collected over the intervals.</t>
              </li>
              <li>
                <t>'cur'. The current value of the data collected.</t>
              </li>
            </ul>
          </li>
          <li>
            <t><strong>Value</strong>: This field represents the actual numerical value of the metric being measured. It provides the specific data point for the metric in question.</t>
          </li>
        </ul>
        <t>The value assignment and encoding rules for these fields are specified in <xref target="level-metric-representations"/>.</t>
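To make the field definitions above concrete, the following sketch models a CATS metric as an in-memory record. The class and attribute names are assumptions introduced for illustration only; this document specifies the fields, not a concrete data structure or wire encoding.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative in-memory representation of the CATS metric fields.
# Names and types are assumptions for this sketch, not protocol elements.
@dataclass
class CatsMetric:
    metric_type: str              # e.g., "computing_comb"
    level: int                    # 1 for Level 1, 2 for Level 2
    fmt: str                      # e.g., "unsigned integer"
    length: int                   # size of the value field, in octets
    value: float
    unit: Optional[str] = None    # None for normalized (unitless) metrics
    source: List[str] = field(default_factory=list)   # e.g., ["normalization"]
    statistics: Optional[str] = None                  # "max", "min", "mean", "cur"
    measurement_window: Optional[int] = None          # seconds; required with statistics

    def validate(self) -> None:
        # Only Level 1 and Level 2 metrics are represented in this document.
        if self.level not in (1, 2):
            raise ValueError("Level must be 1 or 2")
        # 'Statistics' is only meaningful together with 'Measurement_Window'.
        if self.statistics is not None and self.measurement_window is None:
            raise ValueError("'Statistics' requires 'Measurement_Window'")

m = CatsMetric(metric_type="computing_comb", level=1,
               fmt="unsigned integer", length=1, value=5,
               source=["normalization"])
m.validate()
```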
      </section>
      <section anchor="aggregation-and-normalization-functions">
        <name>Aggregation and Normalization Functions</name>
        <t>In the context of CATS metric processing, aggregation and normalization are two fundamental operations that transform raw and derived metrics into forms suitable for decision-making and comparison across heterogeneous systems.</t>
        <section anchor="aggregation-function">
          <name>Aggregation</name>
          <t>Aggregation functions combine multiple values into a single representative value and can be applied at all metric levels. This document supports the spatial and temporal aggregation defined in <xref target="RFC5835"/>, and further defines cross-category aggregation, which combines metrics of different types into a single value. The following are aggregation examples supported by CATS:</t>
          <ul spacing="normal">
            <li>
              <t>Spatial or temporal aggregation of multiple metrics of the same type to produce a derived metric. In this case, because the input metrics are homogeneous, the resulting metric may retain the same units as the inputs. For example, CPU utilization measurements (expressed in percentage) collected from multiple service instances (spatial aggregation) or averaged over consecutive time intervals (temporal aggregation) can be aggregated to produce a representative CPU utilization metric. Such aggregation concepts are consistent with those described in <xref target="RFC5835"/>.</t>
            </li>
            <li>
              <t>Aggregation of multiple metrics of different types to produce a higher-level metric that captures combined behavior across resource dimensions. In this case, because the input metrics use different units, the resulting metric cannot retain physical units and must be expressed as a unitless value. For example, CPU capacity (expressed in Hz) and available memory (expressed in bytes) can be combined through aggregation to generate a single computing metric that characterizes overall processing capability.</t>
            </li>
          </ul>
          <t>Some common aggregation functions include:</t>
          <ul spacing="normal">
            <li>
              <t>Mean: Computes the arithmetic mean of a set of input values.</t>
            </li>
            <li>
              <t>Minimum / Maximum: Selects the lowest or highest value from a set of input values.</t>
            </li>
            <li>
              <t>Weighted average: Computes an average by applying weights to individual values according to their relative importance or priority.</t>
            </li>
          </ul>
          <t>Aggregation functions are not standardized in this document. They are implementation-specific and controlled by operator policies.</t>
          <figure anchor="fig-agg-funct">
            <name>Aggregation function</name>
            <artwork><![CDATA[
    +-----------+     +-------------------+
    | Metric 1  |---->|                   |
    +-----------+     |    Aggregation    |     +------------+
           ...        |     Function      |---->| Metric n+1 |
    +-----------+     |                   |     +------------+
    | Metric n  |---->|                   |
    +-----------+     +-------------------+

    Input: Multiple values              Output: A single value

]]></artwork>
          </figure>
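The aggregation functions listed above can be sketched as follows. The function names and the sample CPU utilization inputs are assumptions for this example; as stated above, actual aggregation functions are implementation-specific and policy-controlled.

```python
# Illustrative aggregation functions (not standardized by this document).

def agg_mean(values):
    # Arithmetic mean of a set of input values.
    return sum(values) / len(values)

def agg_min(values):
    return min(values)

def agg_max(values):
    return max(values)

def agg_weighted(values, weights):
    # Weighted average; weights reflect relative importance or priority.
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

# Spatial aggregation: CPU utilization (%) reported by three service
# instances at one service site, combined into one representative value.
cpu_util = [40.0, 60.0, 80.0]
site_cpu = agg_mean(cpu_util)     # same unit (%) as the homogeneous inputs
worst_case = agg_max(cpu_util)    # a pessimistic alternative policy
```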
        </section>
        <section anchor="normalization-function">
          <name>Normalization</name>
          <t>Normalization functions convert a metric value (with or without units) into a unitless normalized score. Normalized metrics facilitate composite scoring and ranking, and can be used to produce Level 1 and Level 2 metrics. The following are normalization examples supported by CATS:</t>
          <ul spacing="normal">
            <li>
              <t>Normalizing a single Level 0 metric to generate a Level 1 or Level 2 normalized metric;</t>
            </li>
            <li>
              <t>Normalizing the output of aggregating multiple Level 0 metrics, to generate a Level 1 normalized metric.</t>
            </li>
          </ul>
          <t>Normalization functions are commonly used to transform metric values into a bounded range (e.g., an integer scale from 0 to 10) using techniques such as the sigmoid function and min-max scaling <xref target="Min-max-sigmoid"/>:</t>
          <ul spacing="normal">
            <li>
              <t>Sigmoid function: Smoothly maps input values to a bounded range.</t>
            </li>
            <li>
              <t>Min-max scaling: Rescales values based on known minimum and maximum bounds.</t>
            </li>
          </ul>
          <t>These normalization functions are also not standardized in this document. They are implementation-specific and controlled by operator policies.</t>
          <figure anchor="fig-norm-funct">
            <name>Normalization function</name>
            <artwork><![CDATA[
  +----------+     +------------------------+     +----------+
  | Metric 1 |---->| Normalization Function |---->| Metric 2 |
  +----------+     +------------------------+     +----------+

  Input:  Value with or without units         Output: Unitless value
]]></artwork>
          </figure>
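The two normalization techniques named above can be sketched as follows. The bounds, the sigmoid midpoint, and the steepness parameter are implementation-specific assumptions, consistent with this document's position that normalization functions are policy-controlled rather than standardized.

```python
import math

# Illustrative normalization functions mapping a raw value into the
# unitless 0-10 range used by CATS scores.

def min_max_scale(value, lo, hi):
    # Clamp into [lo, hi], then rescale linearly into [0, 10].
    clamped = max(lo, min(hi, value))
    return 10.0 * (clamped - lo) / (hi - lo)

def sigmoid_scale(value, midpoint, steepness=1.0):
    # Smoothly map any real input into the open interval (0, 10).
    return 10.0 / (1.0 + math.exp(-steepness * (value - midpoint)))

# Example: normalize available bandwidth (Mbps) against expected bounds.
score = min_max_scale(750.0, lo=0.0, hi=1000.0)   # 7.5
```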
        </section>
      </section>
      <section anchor="score-meaning">
        <name>On the Meaning of Scores in Heterogeneous Metrics Systems</name>
        <t>In a system like CATS, where metrics originate from heterogeneous resources---such as compute, communication, and storage---the interpretation of scores requires careful consideration. While normalization functions can convert raw metrics into unitless scores to enable comparison, these scores may not be directly comparable across different implementations. For example, a score of 7 on a scale from 0 to 10 may represent a high-quality resource in one implementation, but only an average one in another.</t>
        <t>To achieve consistent cross-vendor behavior, the default normalization policies defined in this document should be followed by all implementations:</t>
        <ul spacing="normal">
          <li>
            <t>Score directions and semantic mapping:
A common 0-10 numeric range should be used for all normalized scores. Unless otherwise specified by the implementation in accompanying documentation, scores in the range 0-3 indicate low capability (not recommended for steering), 4-7 indicate medium capability (steering optional), and 8-10 indicate high capability (priority for steering). This mapping is normative for all CATS Level 1 and Level 2 metrics defined in this document.</t>
          </li>
          <li>
            <t>Normalization function baseline:
Unless documented otherwise, implementations should use min-max scaling to map the aggregated raw value into the 0-10 range, based on implementation-specific minimum and maximum expected values. Other functions (e.g., sigmoid) are permitted but their parameters must be documented.</t>
          </li>
          <li>
            <t>Measurement window: There is no fixed default measurement window. For illustration, a window of 10 seconds is suggested as an example. Implementations can use their chosen window length, but they must indicate the window length as a parameter (i.e., via the Measurement_Window field defined in the registry entries).</t>
          </li>
        </ul>
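The semantic mapping above can be sketched as a lookup from a normalized score to its capability band. The band labels returned here are illustrative strings for this example, not protocol values.

```python
# Map a normalized 0-10 score to the semantic bands defined above:
# 0-3 low capability (not recommended for steering), 4-7 medium
# (steering optional), 8-10 high (priority for steering).

def capability_band(score: int) -> str:
    if score not in range(0, 11):
        raise ValueError("normalized scores are integers in 0-10")
    if score in range(0, 4):
        return "low"
    if score in range(4, 8):
        return "medium"
    return "high"

bands = [capability_band(s) for s in (2, 5, 9)]
```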
      </section>
      <section anchor="level-metric-representations">
        <name>Level Metric Representations</name>
        <t>This section defines the representation format and constraints for Level 1 and Level 2 metrics respectively, to ensure consistent encoding and interoperability across implementations.</t>
        <section anchor="level-0-metrics">
          <name>Level 0 Metrics</name>
          <t>Level 0 metrics are raw metrics that are not standardized in this document. See <xref target="appendix-level-0"/> for examples of Level 0 metrics developed in the compute and communication industries and other standardization organizations such as the <xref target="DMTF"/>.</t>
        </section>
        <section anchor="level-1-metrics">
          <name>Level 1 Metrics</name>
          <t>Level 1 metrics are derived from Level 0 metrics through the application of aggregation functions and, when appropriate, normalization functions. Depending on how they are formed, Level 1 metrics MAY retain physical units inherited from their inputs or MAY be expressed as unitless values.</t>
          <t>Level 1 metrics are organized into semantic categories such as computing, communication, service, and composed metrics. This categorization provides context and meaning to the resulting metrics and enables consistent interpretation across implementations.</t>
          <t>The 'Source' of a Level 1 metric is 'aggregation', 'normalization', or both.</t>
          <section anchor="combined-computing-metrics">
            <name>Combined Computing Metrics</name>
            <t>The metric type of combined computing metrics is "computing_comb". Its format is an unsigned integer with no unit, occupying one octet. Example:</t>
            <figure anchor="fig-combined-compute-metric">
              <name>Example of a combined Level 1 computing metric</name>
              <artwork><![CDATA[
Fields:
      Metric_type: computing_comb
      Level: Level 1
      Format: unsigned integer
      Length: one octet
      Source: normalization
      Value: 5
]]></artwork>
            </figure>
          </section>
          <section anchor="combined-communication-metrics">
            <name>Combined Communication Metrics</name>
            <t>The metric type of combined communication metrics is "communication_comb". Its format is an unsigned integer with no unit, occupying one octet. Example:</t>
            <figure anchor="fig-combined-communication-metric">
              <name>Example of a combined Level 1 communication metric</name>
              <artwork><![CDATA[
Fields:
      Metric_type: communication_comb
      Level: Level 1
      Format: unsigned integer
      Length: one octet
      Source: normalization
      Value: 1
]]></artwork>
            </figure>
          </section>
          <section anchor="combined-service-metrics">
            <name>Combined Service Metrics</name>
            <t>The metric type of combined service metrics is "service_comb". Its format is an unsigned integer with no unit, encoded in one octet. Example:</t>
            <figure anchor="fig-combined-service-metric">
              <name>Example of a combined Level 1 service metric</name>
              <artwork><![CDATA[
Fields:
      Metric_type: service_comb
      Level: Level 1
      Format: unsigned integer
      Length: one octet
      Source: normalization
      Value: 1
]]></artwork>
            </figure>
          </section>
          <section anchor="combined-composed-metrics">
            <name>Combined Composed Metrics</name>
            <t>The metric type of combined composed metrics is "composed_comb". Its format is an unsigned integer with no unit, encoded in one octet. Example:</t>
            <figure anchor="fig-combined-composed-metric">
              <name>Example of a combined Level 1 composed metric</name>
              <artwork><![CDATA[
Fields:
      Metric_type: composed_comb
      Level: Level 1
      Format: unsigned integer
      Length: one octet
      Source: normalization
      Value: 8
]]></artwork>
            </figure>
          </section>
        </section>
        <section anchor="level-2-metrics">
          <name>Level 2 Metrics</name>
          <t>A Level 2 metric is a single-value, normalized metric that does not carry any inherent physical unit. While each provider may employ its own internal methods to compute this value, all providers must adhere to the representation guidelines defined in this section to ensure consistency and interoperability of the normalized output.</t>
          <t>The Metric Type is "norm_fi". The metric value is encoded as an unsigned integer, carries no unit, and is represented using a single octet. An example is shown below.</t>
          <figure anchor="fig-level-2-metric">
            <name>Example of a normalized Level 2 metric</name>
            <artwork><![CDATA[
Fields:
      Metric_type: norm_fi
      Level: Level 2
      Format: unsigned integer
      Length: one octet
      Source: normalization
      Value: 1
]]></artwork>
          </figure>
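A minimal, non-normative sketch of how a provider might fold per-category Level 1 scores into the single Level 2 value is shown below; the category weights are hypothetical and provider-specific.

```python
# Illustrative Level 2 computation: collapse per-category Level 1
# scores (each already an integer in 0..10) into one normalized
# singleton score. Weights are hypothetical, provider-chosen values.

def level2_score(level1, weights):
    total_w = sum(weights.values())
    combined = sum(level1[c] * weights[c] for c in level1) / total_w
    return round(combined)  # stays within 0..10 since all inputs do

level1 = {"computing_comb": 5, "communication_comb": 1, "service_comb": 1}
weights = {"computing_comb": 0.4, "communication_comb": 0.4, "service_comb": 0.2}
print(level2_score(level1, weights))  # prints 3
```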
        </section>
      </section>
    </section>
    <section anchor="comparison-among-metric-levels">
      <name>Comparison among Metric Levels</name>
      <t>Metrics are progressively consolidated from Level 0 to Level 1 and then to Level 2, with each level offering an increasing degree of abstraction to address the diverse requirements of different services. <xref target="comparison"/> provides a comparative overview of the defined metric levels.</t>
      <table anchor="comparison">
        <name>Comparison among Metrics Levels</name>
        <thead>
          <tr>
            <th align="center">Level</th>
            <th align="left">Encoding Complexity</th>
            <th align="left">Extensibility</th>
            <th align="left">Stability</th>
            <th align="left">Accuracy</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="center">Level 0</td>
            <td align="left">High</td>
            <td align="left">Low</td>
            <td align="left">Low</td>
            <td align="left">High</td>
          </tr>
          <tr>
            <td align="center">Level 1</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
          </tr>
          <tr>
            <td align="center">Level 2</td>
            <td align="left">Low</td>
            <td align="left">High</td>
            <td align="left">High</td>
            <td align="left">Low</td>
          </tr>
        </tbody>
      </table>
      <t>Since Level 0 metrics are raw and service-specific, individual services may define their own metric sets, potentially resulting in hundreds or even thousands of distinct metrics across deployments. This diversity introduces significant complexity in protocol encoding and standardization. Consequently, Level 0 metrics are confined to bespoke implementations tailored to specific service needs, rather than being standardized for broad protocol use. In contrast, Level 1 metrics organize raw data into standardized categories, each consolidated into a single value. This structure makes them more suitable for protocol encoding and standardization. The Level 2 metric takes simplification a step further by consolidating all relevant information into a single normalized value, making it the easiest to encode, transmit, and standardize.</t>
      <t>Therefore, from the perspective of encoding complexity, Level 1 and Level 2 metrics are recommended.</t>
      <t>When considering extensibility, Level 0 metrics allow new services to define their own custom metrics. However, this flexibility requires corresponding protocol extensions, and the proliferation of metric types can introduce significant overhead, ultimately reducing the protocol's extensibility. In contrast, Level 1 metrics introduce only a limited set of standardized categories, making protocol extensions more manageable. Level 2 metrics go even further by consolidating all information into a single normalized value, placing the least burden on the protocol.</t>
      <t>Therefore, from an extensibility standpoint, Level 1 and Level 2 metrics are recommended.</t>
      <t>Regarding stability, Level 0 raw metrics may require frequent protocol extensions as new metrics are introduced, leading to an unstable and evolving protocol format. For this reason, standardizing Level 0 metrics within the protocol is not recommended. In contrast, Level 1 metrics involve only a limited set of predefined categories, and Level 2 metrics rely on a single consolidated value, both of which contribute to a more stable and maintainable protocol design.</t>
      <t>Therefore, from a stability standpoint, Level 1 and Level 2 metrics are preferred.</t>
      <t>In conclusion, for CATS, Level 2 metrics are recommended due to their simplicity and minimal protocol overhead. If more advanced scheduling capabilities are required, Level 1 metrics offer a balanced approach with manageable complexity. While Level 0 metrics are the most detailed and dynamic, their high overhead makes them unsuitable for direct transmission to network devices and thus not recommended for standard protocol integration.</t>
    </section>
    <section anchor="cats-metrics-registry">
      <name>CATS Metric Registry Entries</name>
      <t>This section defines the formal registry entries for one CATS Level 2 metric and four Level 1 metrics, intended for registration with IANA. By providing a common template that specifies the metric's summary, definition, method of measurement, output, and administrative items, this section ensures interoperability among different implementations.</t>
      <section anchor="cats-level-2-metric-registry">
        <name>CATS Level 2 Metric Registry Entry</name>
        <t>This section gives an initial Registry Entry for the CATS Level 2 metric.</t>
        <section anchor="summary">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Change Controller, and Metric Version.</t>
          <section anchor="id-identifier">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name">
            <name>Name</name>
            <t>Norm_Passive_CATS-Level 2_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Norm: Metric type (Normalized Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-Level 2: Metric level (CATS Metric Framework Level 2)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value</t>
              </li>
            </ul>
          </section>
          <section anchor="uri">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description">
            <name>Description</name>
            <t>This metric represents a single normalized score used within CATS (Level 2). It is derived by aggregating one or more CATS Level 0 and/or Level 1 metrics, followed by a normalization process that produces a unitless value. The resulting score provides a concise assessment of the overall capability of a service instance, enabling rapid comparison across instances and supporting efficient traffic steering decisions.</t>
          </section>
          <section anchor="change-controller">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition">
          <name>Metric Definition</name>
          <section anchor="reference-definition">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/>
Core referenced sections: Section 3.4 (Level 2 Metric Definition), Section 4.2 (Aggregation and Normalization Functions)</t>
          </section>
          <section anchor="fixed-parameters">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest capability, 10 indicates the optimal capability)</t>
              </li>
              <li>
                <t>Data precision: non-negative integer</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect Level 0 service and compute raw metrics using platform-specific management protocols or tools (e.g., Prometheus <xref target="Prometheus"/> in Kubernetes). Collect Level 0 network performance raw metrics using existing standardized protocols (e.g., NETCONF <xref target="RFC6241"/>, IPFIX <xref target="RFC7011"/>).</t>
            <t>Aggregation logic: Refer to <xref target="aggregation-function"/>.</t>
            <t>Normalization logic: Refer to <xref target="normalization-function"/>.</t>
            <t>The reference method aggregates and normalizes Level 0 metrics to generate Level 1 metrics in the different categories, and then combines those into a single Level 2 score as the final normalized output.</t>
          </section>
          <section anchor="packet-stream-generation">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect Level 0 metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of a CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address is one example of such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles">
            <name>Roles</name>
            <t>C-SMA: Collects Level 0 service and compute raw metrics, and optionally calculates Level 1 metrics according to service-specific strategies.</t>
            <t>C-NMA: Collects Level 0 network performance raw metrics, and optionally calculates Level 1 metrics according to service-specific strategies.</t>
            <t>C-PS: Aggregates all Level 1 metrics collected from C-NMA and C-SMA to calculate the Level 2 metric.</t>
          </section>
        </section>
        <section anchor="output-level-2">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-1">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.3</t>
            <t>Score semantics: 0-3 (Low capability, not recommended for steering), 4-7 (Medium capability, optional for steering), 8-10 (High capability, priority for steering)</t>
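The band boundaries above can be restated as a small, non-normative helper; the returned labels paraphrase the score semantics and are not protocol values.

```python
# Non-normative helper mirroring the score semantics:
# 0-3 low, 4-7 medium, 8-10 high capability.

def steering_class(score):
    """Map an integer Level 2 score in 0..10 to its steering band."""
    if score in range(0, 4):
        return "low capability, not recommended for steering"
    if score in range(4, 8):
        return "medium capability, optional for steering"
    if score in range(8, 11):
        return "high capability, priority for steering"
    raise ValueError("score must be an integer in 0..10")

print(steering_class(9))  # prints "high capability, priority for steering"
```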
          </section>
          <section anchor="metric-units">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on standard test sets (fixed workload) to ensure the output score deviation of C-SMA and C-NMA is lower than 0.1 (one abnormal score in every ten test rounds).</t>
          </section>
        </section>
        <section anchor="administrative-items">
          <name>Administrative Items</name>
          <section anchor="status">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
      <section anchor="cats-level-1-computing-metric">
        <name>CATS Level 1 Metric Registry Entry: Computing</name>
        <t>This section gives an initial Registry Entry for the CATS Level 1 metric in the <em>computing</em> category.</t>
        <section anchor="summary-1">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Change Controller, and Metric Version.</t>
          <section anchor="id-identifier-1">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name-1">
            <name>Name</name>
            <t>Comb_Passive_CATS-Level 1_Computing_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Comb: Metric type (Combined Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-Level 1: Metric level (CATS Metric Framework Level 1)</t>
              </li>
              <li>
                <t>Computing: Metric category (Computing)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value for the computing category</t>
              </li>
            </ul>
          </section>
          <section anchor="uri-1">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description-1">
            <name>Description</name>
            <t>This metric represents a single normalized score for the <em>computing</em> category within CATS (Level 1). It is derived from one or more computing-related Level 0 metrics (e.g., CPU/GPU/NPU utilization, CPU frequency, memory utilization, or other computing resource indicators) by applying an implementation-specific aggregation function over the selected Level 0 computing metrics and then applying a normalization function to produce a unitless score.</t>
            <t>The resulting score provides a concise indication of the relative computing capability (or headroom) of a service contact instance for the purpose of instance selection and traffic steering. Higher values indicate better computing capability according to the provider's normalization strategy.</t>
          </section>
          <section anchor="change-controller-1">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version-1">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition-1">
          <name>Metric Definition</name>
          <section anchor="reference-definition-2">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/></t>
            <t>Core referenced sections: Section 3.3 (Level 1 Metric Definition), Section 4.2 (Aggregation and Normalization Functions), Section 4.4.2 (Level 1 Metric Representations)</t>
          </section>
          <section anchor="fixed-parameters-1">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest computing capability, 10 indicates the optimal computing capability)</t>
              </li>
              <li>
                <t>Data precision: non-negative integer</t>
              </li>
              <li>
                <t>Metric type: "computing_comb"</t>
              </li>
              <li>
                <t>Level: Level 1</t>
              </li>
              <li>
                <t>Metric units: Unitless</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement-1">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods-1">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect computing-related Level 0 raw metrics (e.g., CPU/GPU/NPU, memory, and relevant platform counters) using platform-specific management protocols or tools (e.g., Prometheus <xref target="Prometheus"/> in Kubernetes or equivalent telemetry systems).</t>
            <t>Aggregation logic (within computing category): Refer to <xref target="aggregation-function"/> to combine selected Level 0 computing metrics into a single intermediate value prior to normalization. The selection of Level 0 computing metrics and any weights used are implementation-specific.</t>
            <t>Normalization logic: Refer to <xref target="normalization-function"/> to map the aggregated (or directly selected) computing value into the fixed score range.</t>
            <t>The reference method aggregates and normalizes Level 0 computing metrics to generate a single Level 1 computing score ("computing_comb").</t>
          </section>
          <section anchor="packet-stream-generation-1">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details-1">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution-1">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect underlying Level 0 computing metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format-1">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of a CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address is one example of such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles-1">
            <name>Roles</name>
            <t>C-SMA: Collects Level 0 compute raw metrics and calculates the Level 1 compute normalized score ("computing_comb") according to service/provider-specific aggregation and normalization strategies.</t>
            <t>C-NMA: Not required for this metric.</t>
          </section>
        </section>
        <section anchor="output">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type-1">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-3">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.2</t>
            <t>Score semantics: 0-3 (Low compute capability, not recommended for steering), 4-7 (Medium compute capability, optional for steering), 8-10 (High compute capability, priority for steering)</t>
          </section>
          <section anchor="metric-units-1">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration-1">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on representative compute workloads (fixed test workload profiles) to align the mapping from Level 0 computing metrics to the Level 1 score, such that score deviation across measurement agents within the same administrative domain is minimized (e.g., less than 0.1 over repeated test rounds).</t>
          </section>
        </section>
        <section anchor="administrative-items-1">
          <name>Administrative Items</name>
          <section anchor="status-1">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester-1">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision-1">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date-1">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks-1">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
      <section anchor="cats-level-1-communication-metric">
        <name>CATS Level 1 Metric Registry Entry: Communication</name>
        <t>This section gives an initial Registry Entry for the CATS Level 1 metric in the <em>communication</em> category.</t>
        <section anchor="summary-2">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Change Controller, and Metric Version.</t>
          <section anchor="id-identifier-2">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name-2">
            <name>Name</name>
            <t>Comb_Passive_CATS-Level 1_Communication_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Comb: Metric type (Combined Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-Level 1: Metric level (CATS Metric Framework Level 1)</t>
              </li>
              <li>
                <t>Communication: Metric category (Communication)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value for the communication category</t>
              </li>
            </ul>
          </section>
          <section anchor="uri-2">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description-2">
            <name>Description</name>
            <t>This metric represents a single normalized score for the <em>communication</em> category within CATS (Level 1). It is derived from one or more communication-related Level 0 metrics (e.g., throughput, bandwidth, link utilization, loss, delay, jitter, bytes/packets counters, and other network performance indicators) by applying an implementation-specific aggregation function over the selected Level 0 communication metrics and then applying a normalization function to produce a unitless score.</t>
            <t>The resulting score provides a concise indication of the relative communication capability (or headroom) associated with reaching a service contact instance for the purpose of instance selection and traffic steering. Higher values indicate better communication capability according to the provider's normalization strategy.</t>
          </section>
          <section anchor="change-controller-2">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version-2">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition-2">
          <name>Metric Definition</name>
          <section anchor="reference-definition-4">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/></t>
            <t>Core referenced sections: Section 3.3 (Level 1 Metric Definition), Section 4.2 (Aggregation and Normalization Functions), Section 4.4.2 (Level 1 Metric Representations)</t>
          </section>
          <section anchor="fixed-parameters-2">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest communication capability, 10 indicates the optimal communication capability)</t>
              </li>
              <li>
                <t>Data precision: non-negative integer</t>
              </li>
              <li>
                <t>Metric type: "communication_comb"</t>
              </li>
              <li>
                <t>Level: Level 1</t>
              </li>
              <li>
                <t>Metric units: Unitless</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement-2">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods-2">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect communication-related Level 0 raw metrics using existing standardized protocols and telemetry systems (e.g., NETCONF <xref target="RFC6241"/>, IPFIX <xref target="RFC7011"/>), and/or using network performance metric definitions and registries such as <xref target="RFC8911"/>, <xref target="RFC8912"/>, and <xref target="RFC9439"/> where applicable.</t>
            <t>Aggregation logic (within communication category): Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.1 (e.g., Weighted Average Aggregation) to combine selected Level 0 communication metrics into a single intermediate value prior to normalization. The selection of Level 0 communication metrics and any weights used are implementation-specific.</t>
            <t>Normalization logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.2 (e.g., Sigmoid Normalization or Min-max scaling) to map the aggregated (or directly selected) communication value into the fixed score range.</t>
            <t>The reference method aggregates and normalizes Level 0 communication metrics to generate a single Level 1 communication score ("communication_comb"). No cross-category aggregation is performed for this metric (i.e., it does not incorporate compute or service metrics).</t>
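As a non-normative illustration of the sigmoid option mentioned above, a single Level 0 communication input such as one-way delay could be mapped into the fixed score range as follows; the midpoint and steepness parameters are hypothetical provider tuning choices.

```python
import math

# Non-normative sketch of sigmoid normalization for one communication
# input (one-way delay, in milliseconds). Midpoint and steepness are
# hypothetical, provider-chosen parameters; lower delay yields a
# higher score within the fixed 0..10 range.

def sigmoid_score(delay_ms, midpoint_ms=50.0, steepness=0.1, score_max=10):
    goodness = 1.0 / (1.0 + math.exp(steepness * (delay_ms - midpoint_ms)))
    return round(goodness * score_max)

print(sigmoid_score(20.0))   # low delay, high score: prints 10
print(sigmoid_score(120.0))  # high delay, low score: prints 0
```

In practice several such per-input scores would first be combined by the weighted-average aggregation before a single normalization step, per the referenced sections.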
          </section>
          <section anchor="packet-stream-generation-2">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details-2">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution-2">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect underlying Level 0 communication metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format-2">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address can be an example of identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles-2">
            <name>Roles</name>
            <t>C-NMA: Collects Level 0 communication raw metrics and calculates the Level 1 communication normalized score ("communication_comb") according to provider-specific aggregation and normalization strategies.</t>
            <t>C-SMA: Not required for this metric.</t>
          </section>
        </section>
        <section anchor="output-1">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type-2">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-5">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.2</t>
            <t>Score semantics: 0-3 (Low communication capability, not recommended for steering), 4-7 (Medium communication capability, optional for steering), 8-10 (High communication capability, priority for steering)</t>
          </section>
          <section anchor="metric-units-2">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration-2">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on representative network test profiles (e.g., fixed traffic mixes and path conditions) to align the mapping from Level 0 communication metrics to the Level 1 score, such that score deviation across measurement agents within the same administrative domain is minimized (e.g., less than 0.1 over repeated test rounds).</t>
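<t>As a non-normative illustration of the calibration goal above, the cross-agent score deviation can be checked as follows. The example scores and the spread statistic are assumptions for illustration; an operator may use any equivalent deviation measure.</t>

```python
# Non-normative sketch: verifying that scores reported by measurement
# agents in the same administrative domain deviate by less than 0.1 on
# average over repeated benchmark rounds. Scores are illustrative.

def mean_score_spread(rounds):
    """Average max-min spread of per-agent scores across test rounds."""
    spreads = [max(r) - min(r) for r in rounds]
    return sum(spreads) / len(spreads)

# Each inner list holds the scores produced by the domain's agents for
# one benchmark round under an identical network test profile.
rounds = [[8.00, 8.05, 8.02], [7.90, 7.95, 7.92], [8.10, 8.12, 8.08]]
assert 0.1 > mean_score_spread(rounds)
```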
          </section>
        </section>
        <section anchor="administrative-items-2">
          <name>Administrative Items</name>
          <section anchor="status-2">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester-2">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision-2">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date-2">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks-2">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
      <section anchor="cats-level-1-service-metric">
        <name>CATS Level 1 Metric Registry Entry: Service</name>
        <t>This section gives an initial Registry Entry for the CATS Level 1 metric in the <em>service</em> category.</t>
        <section anchor="summary-3">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Metric Controller, and Metric Version.</t>
          <section anchor="id-identifier-3">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name-3">
            <name>Name</name>
            <t>Comb_Passive_CATS-Level 1_Service_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Comb: Metric type (Combined Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-Level 1: Metric level (CATS Metric Framework Level 1)</t>
              </li>
              <li>
                <t>Service: Metric category (Service)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value for the service category</t>
              </li>
            </ul>
          </section>
          <section anchor="uri-3">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description-3">
            <name>Description</name>
            <t>This metric represents a single normalized score for the <em>service</em> category within CATS (Level 1). It is derived from one or more service-related Level 0 metrics that characterize the health and performance of the service instance itself (e.g., service availability, request success rate, admission/overload indicators, tokens per second and/or requests per second, application-level queue depth, and other service KPIs). An implementation-specific aggregation function is applied over the selected Level 0 service metrics, followed by a normalization function that produces a unitless score.</t>
            <t>The resulting score provides a concise indication of the relative service capability (or headroom) of a service contact instance for the purpose of instance selection and traffic steering. Higher values indicate better service capability according to the provider's normalization strategy.</t>
          </section>
          <section anchor="change-controller-3">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version-3">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition-3">
          <name>Metric Definition</name>
          <section anchor="reference-definition-6">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/></t>
            <t>Core referenced sections: Section 3.3 (Level 1 Metric Definition), Section 4.2 (Aggregation and Normalization Functions), Section 4.4.2 (Level 1 Metric Representations)</t>
          </section>
          <section anchor="fixed-parameters-3">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest service capability, 10 indicates the optimal service capability)</t>
              </li>
              <li>
                <t>Data precision: non-negative integer</t>
              </li>
              <li>
                <t>Metric type: "service_comb"</t>
              </li>
              <li>
                <t>Level: Level 1</t>
              </li>
              <li>
                <t>Metric units: Unitless</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement-3">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods-3">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect service-related Level 0 raw metrics from the service runtime and service management plane using platform-specific telemetry systems (e.g., Prometheus <xref target="Prometheus"/> in Kubernetes or equivalent monitoring/observability tools). These metrics are service-dependent and may include availability/health status, success/error rates, overload or admission control signals, and throughput indicators (e.g., tokens per second for AI inference services), among others.</t>
            <t>Aggregation logic (within service category): Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.1 (e.g., Weighted Average Aggregation) to combine selected Level 0 service metrics into a single intermediate value prior to normalization. The selection of Level 0 service metrics, any weights used, and any gating logic (e.g., forcing the score to a low value when the instance is unhealthy) are implementation-specific.</t>
            <t>Normalization logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.2 (e.g., Sigmoid Normalization or Min-max scaling) to map the aggregated (or directly selected) service value into the fixed score range.</t>
            <t>The reference method aggregates and normalizes Level 0 service metrics to generate a single Level 1 service score ("service_comb"). No cross-category aggregation is performed for this metric (i.e., it does not incorporate compute or communication metrics).</t>
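<t>As a non-normative illustration, the service-category aggregation, gating, and normalization described above can be sketched as follows. The metric names, weights, throughput bound, and gating rule are assumptions for illustration only; real selections are implementation-specific.</t>

```python
# Non-normative sketch of a service_comb computation, including the
# gating logic permitted above (forcing the score to the lowest value
# when the instance is unhealthy). Weights and bounds are illustrative.

def normalize_minmax(value, lo, hi):
    """Min-max scaling onto the fixed 0-10 integer score range."""
    clipped = min(max(value, lo), hi)
    return round(10 * (clipped - lo) / (hi - lo))

def service_comb(healthy, success_rate, tokens_per_s, max_tokens_per_s=2000.0):
    # Gating: an unhealthy instance is forced to the lowest score.
    if not healthy:
        return 0
    # Pre-scale throughput to [0, 1], then take a weighted average of
    # the selected Level 0 service metrics (1.0 = best for each input).
    throughput = min(tokens_per_s / max_tokens_per_s, 1.0)
    aggregated = 0.7 * success_rate + 0.3 * throughput
    return normalize_minmax(aggregated, 0.0, 1.0)

score = service_comb(healthy=True, success_rate=0.99, tokens_per_s=1500.0)
```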
          </section>
          <section anchor="packet-stream-generation-3">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details-3">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution-3">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect underlying Level 0 service metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format-3">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of a CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address is one example of such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles-3">
            <name>Roles</name>
            <t>Service contact instance: Collects Level 0 service raw metrics and calculates the Level 1 service normalized score ("service_comb") according to service/provider-specific aggregation and normalization strategies.</t>
            <t>C-NMA: Not required for this metric.</t>
          </section>
        </section>
        <section anchor="output-2">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type-3">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-7">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.2</t>
            <t>Score semantics: 0-3 (Low service capability, not recommended for steering), 4-7 (Medium service capability, optional for steering), 8-10 (High service capability, priority for steering)</t>
          </section>
          <section anchor="metric-units-3">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration-3">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on representative service workload profiles (fixed request mixes and known-good baselines) to align the mapping from Level 0 service metrics to the Level 1 score, such that score deviation across measurement agents within the same administrative domain is minimized (e.g., less than 0.1 over repeated test rounds). Calibration MAY include failure/overload scenarios (e.g., simulated dependency failures or saturation) to ensure score behavior is consistent with operational intent.</t>
          </section>
        </section>
        <section anchor="administrative-items-3">
          <name>Administrative Items</name>
          <section anchor="status-3">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester-3">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision-3">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date-3">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks-3">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
      <section anchor="cats-level-1-composed-metric">
        <name>CATS Level 1 Metric Registry Entry: Composed</name>
        <t>This section gives an initial Registry Entry for the CATS Level 1 metric in the <em>composed</em> category.</t>
        <section anchor="summary-4">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Metric Controller, and Metric Version.</t>
          <section anchor="id-identifier-4">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name-4">
            <name>Name</name>
            <t>Comb_Passive_CATS-Level 1_Composed_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Comb: Metric type (Combined Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-Level 1: Metric level (CATS Metric Framework Level 1)</t>
              </li>
              <li>
                <t>Composed: Metric category (Composed)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value for the composed category</t>
              </li>
            </ul>
          </section>
          <section anchor="uri-4">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description-4">
            <name>Description</name>
            <t>This metric represents a single normalized score for the <em>composed</em> category within CATS (Level 1). A composed metric is derived by combining multiple lower-level metrics that may span different categories (e.g., compute, communication, and service) and/or multiple components along the request path.</t>
            <t>Typical examples of composed metrics include (but are not limited to) end-to-end delay, application-level response time, or other synthesized indicators that are computed as a function of multiple contributing factors (e.g., the sum of compute processing delay and network transmission delay along the selected path).</t>
            <t>The composed Level 1 score is obtained by applying an implementation-specific aggregation function over the selected contributing Level 0 metrics (and/or previously computed Level 1 category metrics), followed by a normalization function that yields a unitless score. Higher values indicate better composed capability according to the provider's normalization strategy.</t>
          </section>
          <section anchor="change-controller-4">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version-4">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition-4">
          <name>Metric Definition</name>
          <section anchor="reference-definition-8">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/></t>
            <t>Core referenced sections: Section 3.3 (Level 1 Metric Definition), Section 4.2 (Aggregation and Normalization Functions), Section 4.4.2 (Level 1 Metric Representations)</t>
          </section>
          <section anchor="fixed-parameters-4">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest composed capability, 10 indicates the optimal composed capability)</t>
              </li>
              <li>
                <t>Data precision: non-negative integer</t>
              </li>
              <li>
                <t>Metric type: "composed_comb"</t>
              </li>
              <li>
                <t>Level: Level 1</t>
              </li>
              <li>
                <t>Metric units: Unitless</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement-4">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods-4">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect contributing Level 0 raw metrics from the relevant sources across categories. For example, compute- and service-related Level 0 metrics may be collected by a C-SMA using platform-specific telemetry systems (e.g., Prometheus <xref target="Prometheus"/>), while communication-related Level 0 metrics may be collected by a C-NMA using network telemetry and protocols (e.g., NETCONF <xref target="RFC6241"/>, IPFIX <xref target="RFC7011"/>), and/or using network performance metric definitions and registries such as <xref target="RFC8911"/>, <xref target="RFC8912"/>, and <xref target="RFC9439"/> where applicable.</t>
            <t>Aggregation logic (within composed category): Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.1 (e.g., Weighted Average Aggregation) to combine selected contributing metrics into a single intermediate value prior to normalization. The aggregation function MAY combine Level 0 metrics directly, and/or MAY take as input one or more Level 1 category metrics (e.g., "computing_comb" and "communication_comb"). The selection of contributing metrics, any weights used, and the composition model (e.g., sum of delays, bottleneck/maximum, or weighted utility) are implementation-specific.</t>
            <t>Normalization logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.2 (e.g., Sigmoid Normalization or Min-max scaling) to map the aggregated composed value into the fixed score range.</t>
            <t>The reference method aggregates and normalizes the selected contributing metrics to generate a single Level 1 composed score ("composed_comb").</t>
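<t>As a non-normative illustration, the "sum of delays" composition model named above, followed by normalization, can be sketched as follows. The delay values and normalization bounds are assumptions for illustration only; the composition model and scaling are implementation-specific.</t>

```python
# Non-normative sketch: composing an end-to-end delay from contributing
# delays along the request path, then applying an inverted min-max
# normalization so that lower delay yields a higher score. All values
# and bounds are illustrative, not part of this specification.

def compose_end_to_end_delay(compute_delay_ms, network_delay_ms):
    """Compose contributing delays along the request path."""
    return compute_delay_ms + network_delay_ms

def normalize_inverted_minmax(delay_ms, best_ms, worst_ms):
    """Map a delay onto 0-10; lower delay -> higher score."""
    clipped = min(max(delay_ms, best_ms), worst_ms)
    return round(10 * (worst_ms - clipped) / (worst_ms - best_ms))

e2e = compose_end_to_end_delay(compute_delay_ms=12.0, network_delay_ms=8.0)
composed_comb = normalize_inverted_minmax(e2e, best_ms=5.0, worst_ms=105.0)
```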
          </section>
          <section anchor="packet-stream-generation-4">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details-4">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution-4">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect underlying contributing metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format-4">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of a CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address is one example of such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles-4">
            <name>Roles</name>
            <t>C-SMA: Collects Level 0 service and compute raw metrics that may contribute to the composed score, and MAY calculate the Level 1 composed score ("composed_comb") when it has access to the required inputs.</t>
            <t>C-NMA: Collects Level 0 communication raw metrics that may contribute to the composed score, and MAY calculate the Level 1 composed score ("composed_comb") when it has access to the required inputs.</t>
            <t>CATS Controller (or other CATS component): MAY compute the Level 1 composed score when the contributing metrics originate from multiple agents and are combined at a common computation point.</t>
          </section>
        </section>
        <section anchor="output-3">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type-4">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-9">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.2</t>
            <t>Score semantics: 0-3 (Low composed capability, not recommended for steering), 4-7 (Medium composed capability, optional for steering), 8-10 (High composed capability, priority for steering)</t>
          </section>
          <section anchor="metric-units-4">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration-4">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on representative end-to-end test profiles (fixed request mixes and controlled network/compute conditions) to align the mapping from contributing metrics to the Level 1 composed score. The calibration goal is to minimize score deviation across measurement agents and computation points within the same administrative domain (e.g., less than 0.1 over repeated test rounds). Calibration MAY include failure and saturation scenarios (e.g., compute overload, network congestion, and dependency failures) to ensure the composed score behavior is consistent with operational intent.</t>
          </section>
        </section>
        <section anchor="administrative-items-4">
          <name>Administrative Items</name>
          <section anchor="status-4">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester-4">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision-4">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date-4">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks-4">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>The CATS metrics defined in this document are dynamic and potentially sensitive. To prevent stability attacks (e.g., rapid metric churn), implementations MUST support aggregation, dampening, and threshold-triggered updates. To protect against disclosure or tampering, metric collection and distribution MUST use encryption, integrity protection, and authentication among C-SMA, C-NMA, and receivers. C-SMAs MUST authenticate the service instances they report on. False reporting SHOULD be mitigated via secondary validation.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document defines several CATS metric registry entries. IANA is requested to create a new registry titled "CATS Metrics" under a new "Computing-Aware Traffic Steering (CATS)" heading.</t>
      <t>The initial entries for this registry are defined in <xref target="cats-metrics-registry"/> as follows:</t>
      <t><xref target="cats-level-2-metric-registry"/>: CATS Level 2 Metric Registry Entry</t>
      <t><xref target="cats-level-1-computing-metric"/>: CATS Level 1 Metric Registry Entry: Computing</t>
      <t><xref target="cats-level-1-communication-metric"/>: CATS Level 1 Metric Registry Entry: Communication</t>
      <t><xref target="cats-level-1-service-metric"/>: CATS Level 1 Metric Registry Entry: Service</t>
      <t><xref target="cats-level-1-composed-metric"/>: CATS Level 1 Metric Registry Entry: Composed</t>
      <t>For each entry, IANA is requested to assign a unique Identifier (defined in each subsection) from the registry's assignment pool.</t>
      <t>All metric entries have the following common attributes: Name, URI, Description, Change Controller (IETF), and Version. The naming convention and structure follow the definitions in each respective subsection of <xref target="cats-metrics-registry"/>.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2119" xml:base="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2119.xml">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC5835">
          <front>
            <title>Framework for Metric Composition</title>
            <author fullname="A. Morton" initials="A." role="editor" surname="Morton"/>
            <author fullname="S. Van den Berghe" initials="S." role="editor" surname="Van den Berghe"/>
            <date month="April" year="2010"/>
            <abstract>
              <t>This memo describes a detailed framework for composing and aggregating metrics (both in time and in space) originally defined by the IP Performance Metrics (IPPM), RFC 2330, and developed by the IETF. This new framework memo describes the generic composition and aggregation mechanisms. The memo provides a basis for additional documents that implement the framework to define detailed compositions and aggregations of metrics that are useful in practice. This document is not an Internet Standards Track specification; it is published for informational purposes.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5835"/>
          <seriesInfo name="DOI" value="10.17487/RFC5835"/>
        </reference>
        <reference anchor="RFC6241">
          <front>
            <title>Network Configuration Protocol (NETCONF)</title>
            <author fullname="R. Enns" initials="R." role="editor" surname="Enns"/>
            <author fullname="M. Bjorklund" initials="M." role="editor" surname="Bjorklund"/>
            <author fullname="J. Schoenwaelder" initials="J." role="editor" surname="Schoenwaelder"/>
            <author fullname="A. Bierman" initials="A." role="editor" surname="Bierman"/>
            <date month="June" year="2011"/>
            <abstract>
              <t>The Network Configuration Protocol (NETCONF) defined in this document provides mechanisms to install, manipulate, and delete the configuration of network devices. It uses an Extensible Markup Language (XML)-based data encoding for the configuration data as well as the protocol messages. The NETCONF protocol operations are realized as remote procedure calls (RPCs). This document obsoletes RFC 4741. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6241"/>
          <seriesInfo name="DOI" value="10.17487/RFC6241"/>
        </reference>
        <reference anchor="RFC7011">
          <front>
            <title>Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of Flow Information</title>
            <author fullname="B. Claise" initials="B." role="editor" surname="Claise"/>
            <author fullname="B. Trammell" initials="B." role="editor" surname="Trammell"/>
            <author fullname="P. Aitken" initials="P." surname="Aitken"/>
            <date month="September" year="2013"/>
            <abstract>
              <t>This document specifies the IP Flow Information Export (IPFIX) protocol, which serves as a means for transmitting Traffic Flow information over the network. In order to transmit Traffic Flow information from an Exporting Process to a Collecting Process, a common representation of flow data and a standard means of communicating them are required. This document describes how the IPFIX Data and Template Records are carried over a number of transport protocols from an IPFIX Exporting Process to an IPFIX Collecting Process. This document obsoletes RFC 5101.</t>
            </abstract>
          </front>
          <seriesInfo name="STD" value="77"/>
          <seriesInfo name="RFC" value="7011"/>
          <seriesInfo name="DOI" value="10.17487/RFC7011"/>
        </reference>
        <reference anchor="RFC7471">
          <front>
            <title>OSPF Traffic Engineering (TE) Metric Extensions</title>
            <author fullname="S. Giacalone" initials="S." surname="Giacalone"/>
            <author fullname="D. Ward" initials="D." surname="Ward"/>
            <author fullname="J. Drake" initials="J." surname="Drake"/>
            <author fullname="A. Atlas" initials="A." surname="Atlas"/>
            <author fullname="S. Previdi" initials="S." surname="Previdi"/>
            <date month="March" year="2015"/>
            <abstract>
              <t>In certain networks, such as, but not limited to, financial information networks (e.g., stock market data providers), network performance information (e.g., link propagation delay) is becoming critical to data path selection.</t>
              <t>This document describes common extensions to RFC 3630 "Traffic Engineering (TE) Extensions to OSPF Version 2" and RFC 5329 "Traffic Engineering Extensions to OSPF Version 3" to enable network performance information to be distributed in a scalable fashion. The information distributed using OSPF TE Metric Extensions can then be used to make path selection decisions based on network performance.</t>
              <t>Note that this document only covers the mechanisms by which network performance information is distributed. The mechanisms for measuring network performance information or using that information, once distributed, are outside the scope of this document.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7471"/>
          <seriesInfo name="DOI" value="10.17487/RFC7471"/>
        </reference>
        <reference anchor="RFC8174" xml:base="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8174.xml">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
        <reference anchor="RFC8911">
          <front>
            <title>Registry for Performance Metrics</title>
            <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
            <author fullname="B. Claise" initials="B." surname="Claise"/>
            <author fullname="P. Eardley" initials="P." surname="Eardley"/>
            <author fullname="A. Morton" initials="A." surname="Morton"/>
            <author fullname="A. Akhter" initials="A." surname="Akhter"/>
            <date month="November" year="2021"/>
            <abstract>
              <t>This document defines the format for the IANA Registry of Performance
Metrics. This document also gives a set of guidelines for Registered
Performance Metric requesters and reviewers.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8911"/>
          <seriesInfo name="DOI" value="10.17487/RFC8911"/>
        </reference>
        <reference anchor="RFC8912">
          <front>
            <title>Initial Performance Metrics Registry Entries</title>
            <author fullname="A. Morton" initials="A." surname="Morton"/>
            <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
            <author fullname="P. Eardley" initials="P." surname="Eardley"/>
            <author fullname="K. D'Souza" initials="K." surname="D'Souza"/>
            <date month="November" year="2021"/>
            <abstract>
              <t>This memo defines the set of initial entries for the IANA Registry of
Performance Metrics. The set includes UDP Round-Trip Latency and
Loss, Packet Delay Variation, DNS Response Latency and Loss, UDP
Poisson One-Way Delay and Loss, UDP Periodic One-Way Delay and Loss,
ICMP Round-Trip Latency and Loss, and TCP Round-Trip Delay and Loss.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8912"/>
          <seriesInfo name="DOI" value="10.17487/RFC8912"/>
        </reference>
        <reference anchor="RFC9439">
          <front>
            <title>Application-Layer Traffic Optimization (ALTO) Performance Cost Metrics</title>
            <author fullname="Q. Wu" initials="Q." surname="Wu"/>
            <author fullname="Y. Yang" initials="Y." surname="Yang"/>
            <author fullname="Y. Lee" initials="Y." surname="Lee"/>
            <author fullname="D. Dhody" initials="D." surname="Dhody"/>
            <author fullname="S. Randriamasy" initials="S." surname="Randriamasy"/>
            <author fullname="L. Contreras" initials="L." surname="Contreras"/>
            <date month="August" year="2023"/>
            <abstract>
              <t>The cost metric is a basic concept in Application-Layer Traffic
Optimization (ALTO), and different applications may use different
types of cost metrics. Since the ALTO base protocol (RFC 7285)
defines only a single cost metric (namely, the generic "routingcost"
metric), if an application wants to issue a cost map or an endpoint
cost request in order to identify a resource provider that offers
better performance metrics (e.g., lower delay or loss rate), the base
protocol does not define the cost metric to be used.</t>
              <t>This document addresses this issue by extending the specification to
provide a variety of network performance metrics, including network
delay, delay variation (a.k.a. jitter), packet loss rate, hop count,
and bandwidth.</t>
              <t>There are multiple sources (e.g., estimations based on measurements
or a Service Level Agreement) available for deriving a performance
metric. This document introduces an additional "cost-context" field
to the ALTO "cost-type" field to convey the source of a performance
metric.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9439"/>
          <seriesInfo name="DOI" value="10.17487/RFC9439"/>
        </reference>
        <reference anchor="RFC9911">
          <front>
            <title>Common YANG Data Types</title>
            <author fullname="J. Schönwälder" initials="J." role="editor" surname="Schönwälder"/>
            <date month="December" year="2025"/>
            <abstract>
              <t>This document defines a collection of common data types to be used with the YANG data modeling language. It includes several new type definitions and obsoletes RFC 6991.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9911"/>
          <seriesInfo name="DOI" value="10.17487/RFC9911"/>
        </reference>
        <reference anchor="I-D.ietf-cats-framework">
          <front>
            <title>A Framework for Computing-Aware Traffic Steering (CATS)</title>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Zongpeng Du" initials="Z." surname="Du">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Mohamed Boucadair" initials="M." surname="Boucadair">
              <organization>Orange</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="John Drake" initials="J." surname="Drake">
              <organization>Independent</organization>
            </author>
            <date day="2" month="April" year="2026"/>
            <abstract>
              <t>   This document describes a framework for Computing-Aware Traffic
   Steering (CATS).  Specifically, the document identifies a set of CATS
   functional components, describes their interactions, and provides
   illustrative workflows of the control and data planes.  The framework
   covers only the case of a single service provider.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-framework-24"/>
        </reference>
        <reference anchor="I-D.ietf-cats-metric-definition">
          <front>
            <title>CATS Metrics Definition</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Jordi Ros-Giralt" initials="J." surname="Ros-Giralt">
              <organization>Qualcomm Europe, Inc.</organization>
            </author>
            <author fullname="Guanming Zeng" initials="G." surname="Zeng">
              <organization>Huawei Technologies</organization>
            </author>
            <date day="1" month="March" year="2026"/>
            <abstract>
              <t>   Computing-Aware Traffic Steering (CATS) is a traffic engineering
   approach that optimizes the steering of traffic to a given service
   instance by considering the dynamic nature of computing and network
   resources.  In order to consider the computing and network resources,
   a system needs to share information (metrics) that describes the
   state of the resources.  Metrics from network domain have been in use
   in network systems for a long time.  This document defines a set of
   metrics from the computing domain used for CATS.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-metric-definition-06"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="I-D.ietf-cats-usecases-requirements">
          <front>
            <title>Computing-Aware Traffic Steering (CATS) Problem Statement, Use Cases, and Requirements</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Hang Shi" initials="H." surname="Shi">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuai Zhang" initials="S." surname="Zhang">
              <organization>China Unicom</organization>
            </author>
            <author fullname="Qing An" initials="Q." surname="An">
              <organization>Alibaba Group</organization>
            </author>
            <date day="2" month="February" year="2026"/>
            <abstract>
              <t>   Distributed computing enhances service response time and energy
   efficiency by utilizing diverse computing facilities for compute-
   intensive and delay-sensitive services.  To optimize throughput and
   response time, "Computing-Aware Traffic Steering" (CATS) selects
   servers and directs traffic based on compute capabilities and
   resources, rather than static dispatch or connectivity metrics alone.
   This document outlines the problem statement and scenarios for CATS
   within a single domain, and drives requirements for the CATS
   framework.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-usecases-requirements-14"/>
        </reference>
        <reference anchor="performance-metrics" target="https://www.iana.org/assignments/performance-metrics/performance-metrics.xhtml">
          <front>
            <title>performance-metrics</title>
            <author>
              <organization/>
            </author>
            <date year="2020" month="March" day="19"/>
          </front>
        </reference>
        <reference anchor="DMTF" target="https://www.dmtf.org/">
          <front>
            <title>DMTF</title>
            <author>
              <organization/>
            </author>
            <date year="1998"/>
          </front>
        </reference>
        <reference anchor="Prometheus" target="https://prometheus.io/">
          <front>
            <title>Prometheus</title>
            <author>
              <organization/>
            </author>
            <date year="2012"/>
          </front>
        </reference>
        <reference anchor="Min-max-sigmoid" target="https://doi.org/10.1016/C2013-0-18660-6">
          <front>
            <title>Data Mining: Concepts and Techniques (Fourth Edition)</title>
            <author>
              <organization/>
            </author>
            <date year="2023"/>
          </front>
        </reference>
      </references>
    </references>
    <?line 1236?>

<section anchor="appendix-level-0">
      <name>Level 0 Metric Examples</name>
      <t>Several definitions have been developed within the compute and communication industries, as well as through various standardization efforts---such as those by the <xref target="DMTF"/>---that can serve as Level 0 metrics. This section provides illustrative examples.</t>
      <section anchor="compute-raw-metrics">
        <name>Compute Raw Metrics</name>
        <t>This section uses CPU frequency as an example to illustrate the representation of raw computing metrics. The metric type is labeled compute_CPU_frequency, with the unit specified in GHz. The format is floating point, occupying four octets. The corresponding metric fields are defined as follows:</t>
        <figure anchor="fig-compute-raw-metric">
          <name>An Example of Compute Raw Metrics</name>
          <artwork><![CDATA[
Fields:
      Metric_Type: compute_CPU_frequency
      Level: Level 0
      Format: floating point
      Length: four octets
      Unit: GHz
      Source: nominal
      Value: 2.2
]]></artwork>
        </figure>
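The fields above can be exercised with a minimal sketch. The snippet below packs the four-octet floating-point value from the example into network byte order and decodes it again; the wire layout is illustrative only, not a normative CATS encoding.

```python
import struct

# Hypothetical sketch: serialize the Level 0 compute_CPU_frequency
# metric (4-octet floating point, unit GHz) in network byte order.
value_ghz = 2.2
wire = struct.pack("!f", value_ghz)        # 4 octets, big-endian IEEE 754
assert len(wire) == 4
decoded = struct.unpack("!f", wire)[0]
# Single-precision floats are approximate, so compare with a tolerance.
assert abs(decoded - value_ghz) < 1e-6
```

Note that a four-octet (single-precision) float carries roughly seven significant decimal digits, which is ample for a nominal CPU frequency.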
      </section>
      <section anchor="communication-raw-metrics">
        <name>Communication Raw Metrics</name>
        <t>This section uses the total transmitted bytes (TxBytes) as an example to illustrate the representation of raw communication metrics. The metric is named "communication type_TxBytes". The unit is megabytes (MB), the format is an unsigned integer, and the value occupies four octets. The source of the metric is "Directly measured" and the statistic is "mean". Example:</t>
        <figure anchor="fig-network-raw-metric">
          <name>An Example of Communication Raw Metrics</name>
          <artwork><![CDATA[
Fields:
      Metric_Type: "communication type_TxBytes"
      Level: Level 0
      Format: unsigned integer
      Length: four octets
      Unit: MB
      Source: Directly measured
      Statistics: mean
      Value: 100
]]></artwork>
        </figure>
      </section>
      <section anchor="delay-raw-metrics">
        <name>Delay Raw Metrics</name>
        <t>Delay is a synthesized metric influenced by computing, storage access, and network transmission. It usually refers to the overall processing duration between the arrival time of a specific service request and the departure time of the corresponding service response. The metric is named "delay_raw". The format is floating point, the unit is microseconds, and the value occupies four octets. For example:</t>
        <figure anchor="fig-delay-raw-metric">
          <name>An Example of Delay Raw Metrics</name>
          <artwork><![CDATA[
Fields:
      Metric_Type: "delay_raw"
      Level: Level 0
      Format: floating point
      Length: four octets
      Unit: microsecond
      Source: aggregation
      Statistics: max
      Value: 231.5
]]></artwork>
        </figure>
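Since the example reports the "max" statistic, a reporting agent would reduce a window of per-request delays before serializing. The sketch below is illustrative only: the sample values are invented, and the four-octet wire layout is not a normative CATS encoding.

```python
import struct

# Hypothetical sketch: delay_raw with Statistics "max" takes the
# maximum over a window of per-request delays (in microseconds)
# and packs it as a 4-octet float in network byte order.
samples_us = [120.3, 231.5, 98.7, 180.0]   # invented sample window
reported = max(samples_us)                 # 231.5, matching the example
wire = struct.pack("!f", reported)
assert len(wire) == 4
```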
      </section>
    </section>
    <section anchor="contributors" numbered="false" toc="include" removeInRFC="false">
      <name>Contributors</name>
      <contact initials="M." surname="Boucadair" fullname="Mohamed Boucadair">
        <organization>Orange</organization>
        <address>
          <email>mohamed.boucadair@orange.com</email>
        </address>
      </contact>
      <contact initials="Z." surname="Du" fullname="Zongpeng Du">
        <organization>China Mobile</organization>
        <address>
          <email>duzongpeng@chinamobile.com</email>
        </address>
      </contact>
      <contact initials="H." surname="Shi" fullname="Hang Shi">
        <organization>Huawei</organization>
        <address>
          <email>shihang9@huawei.com</email>
        </address>
      </contact>
    </section>
  </back>
  <!-- ##markdown-source:
KFanj3fLIk2t929Ypan13h8ukOXhXo1zK9viWFPdBxZxum8LUt3oHMs2o2L7
dmG70Z/OIo/ptlA0jyQ4dYvAl5Oi3j65aTzsfYfA2M20sax2IMxmsEiobGAd
LKDkArqyOGpHoKx5D0BDDv13xsl6d1A81LTRjuQiZ0Yc2BwmqW29IbkhVoM0
s3xaM7MgN/Gdoewde5erl3iRLs4eWDMnpBLfcBe4xhVYZa/tEvIdbOIDT5d1
gahZ8/7yk5enZ3oPm+/UDaIZGGIGwXibjGjKZZ7CzgVFsTCoq+o13lRbynBg
pFNsI0bbCa93n6Y5sQA6kdhYQY3pgCy+wlzkmc48KHDB8GLtYiPhNL68FOcp
XVkOjGtMoq/sReKU60hGzYDxBy2MOzV0k/yIf5TJe2+bzqMM5J7gFcdEI3SA
n8RpaeQLlCenPzx7+eMjtGBXsD7sLYFAEHMrBpUHaolug5aLVym412YSnxH0
XtXS0O17Pvu0LlgdcYN0CzIzPSFj08Kwu4S3Ltt3KpTw4Ct7obFyhx0PeXTn
yBYiPrxAhlSf6FSv5KO4Wn+HjgdgDj9zuIZwjXftq9zNLH0TdzvWv7ycdl4x
+w4xAYbWyzHiytOrLjt9N5bo8JbLUhsNdNyiYVu4/kaOrsbaVedu1KB7rdVo
4xTf9c2J29I5VT/6frOZ4hu9HuGTeB8yrieYEZ08xkKVvTr42o9P73orTe2U
9UTg3r6PlHL/d0tpixPDc7re/DDVqJ9lKlAdvE2ZP9hTJiMUxB8b72CIeaH5
ICbfCoBEuxgAYTDRxuRJ82cctIYdjGJW5RQMFeydutD+aSg+wKiTxbgdThbz
duy80azdyvQw395wOASrafqa1Il6LhrW11jj5R2wbEDnJm9knffeoTfLcsIf
C9FqYgxG5OA50K0z395QHS+WiecdgaNZM0I6wK14YWAd4lIT0kGaFQmhGLbg
Db9m5rDlYW7DoeKqYCWCpJxsqL/Ly0cnZ0/evYPf+YhZzPnmBAE28EJcAy9N
xB5mAle3tkaARl9HnKUi0/FQ/EayCaiUMrzPgqKjDnUAhrYdGGFPPyCE64ee
o5Ug/nAt3kxpDHijUTwxKUdghdSvoPNX3mUaZPJgNxhssX4R7Zjvf/iZG2VH
RXU0yEUwx0h4kQEpcUMxVvOCw8UzN7RoLoFHT/QG0vWXX37pPaFnxr2I/jHx
Xp1RyKhz5PJgEDvaky8Vf9BxsqFr38gW1XLMN37n08qA08G/oOcxxlnL36cU
OMG4FmzDOJVv/46zHUcH4H3huC/H0Z15shhq9ATWZqiLgGruf3YOM903fDF9
m0d23vWIfRrFRbdzUaW3tgO7YFhKQt8VxzwQ3dw9e/Mdfui32QtMqIstjLUV
oIDF5fY4iYBSfaDhEO0mrnslD+4wPxBXwchPwKiLuIXdk+/6I1kjPokgySoS
MqTDoxewCWB1pvV6Ez2UZRIYnS91EV/dJXHsPNLTAOL5zHYseo7HYbBCFoYc
4El4IIPxyapcz4Cd0/wnT/MmfNic4Q058eS7BiO2Zqi/2+mN8acs5NT9vb2A
U8WLuhGndnMj8usduroakywCNuWvKEfodZJRmNZPE5H+LpYJCOcEVdU8rTkK
P9k4mTaAFcsprsQQ1WBroscoelnW5Jpwxgch/tat5mujg7QRdTcn0JgRMCou
CjyhxRAnnzK1dxpqZrfgAcpR4HLGBSlhfalqST/3LufP6Klot31ozK9gIXau
FrSCaMH7pd1R4J6Bn68ALI4LvqcdgzaK2zNejPcGnO5G9MEErDfwBn97zl8X
Z8dvGiL4wf7oy4C1afTXM3aLb4Gh/z8Q9yC3rfkAAA==

-->

</rfc>
