<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.30 (Ruby 3.4.8) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-li-cats-idn-00" category="info" consensus="true" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="IDN">A Framework of Intelligence Delivery Network (IDN) for Deep Learning Inference</title>
    <seriesInfo name="Internet-Draft" value="draft-li-cats-idn-00"/>
    <author fullname="Qing Li">
      <organization>Pengcheng Laboratory</organization>
      <address>
        <email>liq@pcl.ac.cn</email>
      </address>
    </author>
    <author fullname="Hanling Wang">
      <organization>Pengcheng Laboratory</organization>
      <address>
        <email>wanghl03@pcl.ac.cn</email>
      </address>
    </author>
    <author fullname="Yong Jiang">
      <organization>Tsinghua Shenzhen International Graduate School &amp; Pengcheng Laboratory</organization>
      <address>
        <email>jiangy@sz.tsinghua.edu.cn</email>
      </address>
    </author>
    <author fullname="Mingwei Xu">
      <organization>Tsinghua University</organization>
      <address>
        <email>xumw@tsinghua.edu.cn</email>
      </address>
    </author>
    <date year="2026" month="March" day="02"/>
    <area>Routing</area>
    <workgroup>Computing-Aware Traffic Steering</workgroup>
    <keyword>intelligence delivery network</keyword>
    <keyword>deep learning inference</keyword>
    <keyword>distributed system</keyword>
    <abstract>
      <?line 116?>

<t>The rapid growth of deep learning inference workloads is placing increasing pressure on existing Internet and computing infrastructures. To support latency-aware, privacy-enhanced, and scalable inference services, this document introduces the concept of an Intelligence Delivery Network (IDN), in which models with different inference capabilities are deployed across geographically distributed servers and selected to serve inference requests. This document describes the challenges motivating such networks, presents an architectural framework, and defines a common vocabulary for discussing such systems. It does not specify protocol details, which are left to future documents.</t>
    </abstract>
    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        The latest revision of this draft can be found at <eref target="https://kongyanye.github.io/draft-li-cats-idn/draft-li-cats-idn.html"/>.
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-li-cats-idn/"/>.
      </t>
      <t>
        Discussion of this document takes place on the
        Computing-Aware Traffic Steering Working Group mailing list (<eref target="mailto:cats@ietf.org"/>),
        which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/cats/"/>.
        Subscribe at <eref target="https://www.ietf.org/mailman/listinfo/cats/"/>.
      </t>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/kongyanye/draft-li-cats-idn"/>.</t>
    </note>
  </front>
  <middle>
    <?line 120?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>The increasing deployment of deep learning models has led to rapid growth in inference workloads across the Internet. Unlike model training, which is typically performed in centralized data centers using batch-oriented processing, inference workloads are often latency-sensitive, geographically distributed, and closely coupled with user data. These characteristics introduce new requirements that were not primary design considerations in earlier Internet and computing architectures.</t>
      <t>Current approaches to deep learning inference largely rely on centralized or regionally centralized cloud infrastructures. In such systems, user data is transmitted to a limited number of locations where inference is performed. While effective for certain applications, this model can introduce challenges related to end-to-end latency, scalability, and privacy <xref target="RFC9556"/>. As inference demand continues to grow and applications increasingly require real-time responses, these limitations become more significant.</t>
      <t>At the same time, advances in model compression, quantization, distillation, and specialization have enabled inference capabilities to be represented in models of varying size and complexity. As a result, it has become feasible to deploy different models at different locations in the network, ranging from large, general-purpose models in cloud data centers to smaller, task-oriented models on edge devices. These developments motivate a shift from data-centric inference processing toward a model-centric approach to inference delivery.</t>
<t>Inspired by the architectural principles of Content Delivery Networks (CDNs), this document introduces the concept of the Intelligence Delivery Network (IDN). An IDN is a network architecture in which inference capabilities, encoded in trained deep learning models, are deployed across a set of interconnected nodes. Analogous to how CDNs cache content closer to users to improve delivery performance, IDNs place model-encoded intelligence closer to inference request sources in order to reduce latency and improve scalability. In this framework, inference requests are served by appropriate model instances based on factors such as task requirements, locality, and system conditions, rather than being uniformly directed to a single centralized location.</t>
      <t>The remainder of this document is organized as follows. Section 2 provides background and discusses the challenges motivating IDNs. Section 3 presents the IDN architectural framework. Section 4 defines the terminology used in IDNs and explains the relationships among these terms. Section 5 discusses security, privacy, and trust considerations.</t>
    </section>
    <section anchor="background-and-challenges">
      <name>Background and Challenges</name>
      <t>This section provides background on deep learning inference deployment and identifies challenges that motivate the Intelligence Delivery Network (IDN).</t>
      <section anchor="characteristics-of-deep-learning-inference-workloads">
        <name>Characteristics of Deep Learning Inference Workloads</name>
        <t>Deep learning inference workloads differ from traditional Internet services and deep learning training workloads in several important aspects. First, inference requests are often generated by interactive applications (e.g., conversational interfaces and programming assistants) and therefore tend to be latency-sensitive. Second, they are geographically distributed, reflecting the locations of end users and data sources. Third, inference usually operates directly on user-generated or user-specific data, increasing sensitivity to data locality and privacy requirements.</t>
        <t>Inference workloads are also heterogeneous <xref target="Elf"/>. Different applications impose different requirements in terms of model accuracy, response time, resource consumption, and availability. A single inference service may involve a mix of simple tasks that can be handled by lightweight models and more complex tasks that require larger or more capable models. This diversity complicates uniform deployment and execution strategies.</t>
      </section>
      <section anchor="limitations-of-existing-computing-infrastructures">
        <name>Limitations of Existing Computing Infrastructures</name>
        <t>Today, large-scale deep learning models, exemplified by large language models (LLMs), have become dominant solutions across a wide range of application domains. Due to their substantial computational and memory requirements, these models are typically deployed in centralized or regionally centralized cloud infrastructures. In this deployment paradigm, inference requests and associated data are transmitted from clients to a limited number of data centers where the models are hosted and executed. This approach benefits from operational simplicity and centralized management, and it aligns well with existing cloud computing practices.</t>
        <t>However, centralized inference deployment also exhibits limitations. Routing all inference requests to a limited set of locations can increase end-to-end latency, particularly for users far from data center locations. Centralized processing may create scalability bottlenecks during demand spikes, and it can increase network traffic as raw data has to be transmitted over wide-area networks. In addition, transferring user data to centralized locations may raise privacy, regulatory, or policy concerns in certain environments.</t>
      </section>
      <section anchor="emerging-capabilities-in-model-deployment">
        <name>Emerging Capabilities in Model Deployment</name>
        <t>Recent advances in deep learning have enabled inference capabilities to be packaged in models of varying size, complexity, and specialization. Techniques such as model compression, quantization, distillation, and task-specific fine-tuning allow smaller models to approximate the behavior of larger models for specific tasks or domains. These developments make it possible to deploy inference models on a broader range of platforms and locations, including regional servers and edge devices.</t>
        <t>As a result, inference capability is no longer inherently tied to a small number of large data centers. Instead, it can be distributed across multiple layers of the network, with different models providing different levels of capability and performance. This flexibility creates opportunities for architectures that place inference closer to data sources or users while retaining access to more capable models when needed.</t>
      </section>
      <section anchor="architectural-challenges">
        <name>Architectural Challenges</name>
<t>Despite these advances, several architectural challenges remain unaddressed by existing deployment paradigms:</t>
        <ul spacing="normal">
          <li>
            <t>Capability Placement: Determining where different inference capabilities should be deployed to balance latency, accuracy, resource usage, and operational cost.</t>
          </li>
          <li>
            <t>Request Selection and Steering: Selecting appropriate model instances for individual inference requests based on task requirements, locality, and system conditions.</t>
          </li>
          <li>
            <t>Scalability: Supporting large and dynamic inference demand without introducing centralized bottlenecks.</t>
          </li>
          <li>
            <t>Data Locality and Privacy: Limiting unnecessary data movement while meeting privacy and regulatory requirements.</t>
          </li>
          <li>
            <t>Operational Heterogeneity: Managing inference services across diverse hardware platforms, network conditions, and administrative domains.</t>
          </li>
        </ul>
        <t>These challenges motivate the need for a new architectural framework that treats inference capability as a distributable, cacheable, and selectable entity within the network, rather than binding inference execution to a small number of centralized locations.</t>
      </section>
    </section>
    <section anchor="intelligence-delivery-network-framework">
      <name>Intelligence Delivery Network Framework</name>
<t>This section presents the architectural framework for the Intelligence Delivery Network (IDN). The framework describes how inference capabilities are organized, deployed, and selected across interconnected nodes, and how these components relate to one another at a conceptual level.</t>
      <section anchor="design-principles">
        <name>Design Principles</name>
        <t>The IDN framework is guided by the following design principles:</t>
        <ul spacing="normal">
          <li>
            <t>Capability-Centric Delivery: Inference capability, encoded in trained models, is treated as a first-class entity that can be deployed, selected, and reused independently of specific compute resources.</t>
          </li>
          <li>
            <t>Hierarchical and Distributed Deployment: Inference capabilities are deployed across multiple layers of the network, including centralized data centers, regional infrastructure, and edge devices, to balance performance, cost, and scalability.</t>
          </li>
          <li>
            <t>Locality Awareness: Placement and selection of inference capabilities should account for network proximity, data locality, and latency sensitivity.</t>
          </li>
          <li>
            <t>Reuse and Efficiency: The framework encourages reuse of inference results, intermediate representations, and distilled capabilities to reduce redundant computation and unnecessary data movement.</t>
          </li>
          <li>
            <t>Incremental Evolution: The framework accommodates continuous evolution of inference capabilities, including updates, specialization, and replacement, without requiring global redeployment.</t>
          </li>
        </ul>
      </section>
      <section anchor="high-level-architecture">
        <name>High-Level Architecture</name>
        <artwork><![CDATA[
                   +---------------------------+
                   |   Central Data Centers    |
                   |  (General-Purpose Models) |
                   |                           |
                   | - Large foundation        |
                   |     inference models      |
                   | - Capability distillation |
                   +-------------+-------------+
                                 |
                                 |
                       Capability Distribution
                                 |
                                 |
             +-------------------+--------------------+
             |                                        |
+------------+-------------+             +------------+-------------+
|    Regional IDN Nodes    |             |    Regional IDN Nodes    |
|(Specialized Capabilities)|             |(Specialized Capabilities)|
|                          |             |                          |
| - Domain-specific models |             |  - Task-oriented models  |
| - Cached popular skills  |             |  - Cached popular skills |
+------------+-------------+             +------------+-------------+
             |                                        |
    Inference|                                        | Inference
    Requests |                                        | Requests
             |                                        |
+------------+-------------+            +-------------+-------------+
|    Edge / Access Nodes   |            |     Edge / Access Nodes   |
|(Lightweight Capabilities)|            | (Lightweight Capabilities)|
|                          |            |                           |
|  - Latency-sensitive     |            |   - Privacy-sensitive     |
|      inference tasks     |            |       inference tasks     |
+------------+-------------+            +-------------+-------------+
             |                                        |
             +-------------------+--------------------+
                                 | 
                                 | 
                             End Users
]]></artwork>
        <t>At a high level, an IDN consists of a set of interconnected nodes capable of hosting and executing inference models. These nodes may belong to different administrative domains and may operate under diverse hardware and network conditions. Each node may host one or more inference capabilities, represented by trained models.</t>
        <t>Inference requests enter the IDN from clients or upstream systems and are directed to appropriate nodes based on capability requirements and operational considerations. Rather than uniformly forwarding requests to a centralized location, the IDN enables selection among multiple candidate nodes that offer suitable inference capabilities.</t>
        <t>The architecture allows inference capabilities to be replicated, cached, or specialized at different locations. Popular or frequently invoked capabilities may be deployed closer to request sources, while less frequently used or more complex capabilities may remain centralized.</t>
      </section>
      <section anchor="inference-capability-representation">
        <name>Inference Capability Representation</name>
        <t>In the IDN framework, inference capability refers to the ability of a model to perform a particular class of inference tasks with defined accuracy, latency, and resource characteristics. Capabilities may differ in scope and complexity, ranging from general-purpose models to highly specialized or task-specific models.</t>
        <t>Inference capabilities may be derived from larger models through techniques such as distillation, compression, fine-tuning, or specialization. In addition, analysis of usage patterns across users or applications may identify frequently invoked tasks or “hot” capabilities. Such capabilities can be decomposed or extracted from more general models and represented as smaller, specialized models that retain task-level accuracy while reducing computational and memory requirements.</t>
        <t>Each inference capability is associated with a set of descriptive attributes, such as supported tasks, expected resource usage, accuracy characteristics, performance profiles, and update frequency. These attributes provide an abstract description of the capability independent of its deployment location and serve as inputs to placement, distribution, and request selection decisions within the IDN.</t>
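        <t>As a purely illustrative sketch (not part of this framework), the descriptive attributes above can be thought of as a structured record. Every field name and value below is a hypothetical assumption of this example; this document does not define a capability description format.</t>
        <sourcecode type="python"><![CDATA[
from dataclasses import dataclass

# Hypothetical capability descriptor; all field names are
# illustrative assumptions, not a defined format.
@dataclass(frozen=True)
class CapabilityDescriptor:
    capability_id: str       # stable identifier for the capability
    supported_tasks: tuple   # e.g. ("summarization", "translation")
    accuracy: float          # task-level accuracy estimate in [0, 1]
    latency_ms_p50: float    # expected median inference latency
    memory_mb: int           # expected resident memory footprint
    version: str = "1.0.0"   # update/versioning information

desc = CapabilityDescriptor(
    capability_id="summarize-news",
    supported_tasks=("summarization",),
    accuracy=0.92,
    latency_ms_p50=45.0,
    memory_mb=512,
)
]]></sourcecode>
        <t>Such a descriptor is deployment-independent: the same record can describe a capability hosted centrally, regionally, or at the edge, which is what allows it to feed placement and steering decisions.</t>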
      </section>
      <section anchor="capability-placement-and-distribution">
        <name>Capability Placement and Distribution</name>
        <t>The IDN framework separates the concept of capability placement from inference execution. Placement refers to decisions about where inference capabilities are deployed within the network, while execution refers to the processing of individual inference requests by deployed capability instances.</t>
        <t>Based on capability attributes and observed demand, capabilities may be proactively deployed in anticipation of expected usage or dynamically replicated in response to changing access patterns. Frequently used or latency-sensitive capabilities may be placed closer to inference request sources, such as regional or edge nodes, while less frequently used or more resource-intensive capabilities may remain at centralized locations.</t>
        <t>The ability to represent popular or task-specific capabilities as smaller models enables selective distribution of such capabilities to locations closer to users. This approach allows the framework to reduce inference latency and resource consumption while preserving access to larger, more general models when higher capability is required. The placement and distribution of inference capabilities are conceptually analogous to the distribution of popular content in Content Delivery Networks (CDNs).</t>
        <t>The framework allows multiple capability instances providing the same task to exist at different locations, potentially with different operational characteristics, enabling flexible trade-offs among performance, cost, and scalability.</t>
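        <t>The placement behavior described above can be sketched as a simple heuristic that maps demand, latency sensitivity, and resource footprint to a deployment tier. The thresholds and tier names below are illustrative assumptions of this example, not recommendations of the framework.</t>
        <sourcecode type="python"><![CDATA[
# Hypothetical placement heuristic: hot or latency-critical
# capabilities that fit on edge nodes move toward the edge;
# others stay regional or central.  All thresholds are
# illustrative assumptions.
def place_capability(requests_per_min: float,
                     latency_budget_ms: float,
                     memory_mb: int,
                     edge_memory_limit_mb: int = 1024) -> str:
    if memory_mb > edge_memory_limit_mb:
        return "central"   # too resource-intensive for edge nodes
    if latency_budget_ms < 50 or requests_per_min > 1000:
        return "edge"      # latency-sensitive or frequently used
    if requests_per_min > 100:
        return "regional"
    return "central"
]]></sourcecode>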
      </section>
      <section anchor="inference-request-handling">
        <name>Inference Request Handling</name>
        <t>When an inference request is issued, the IDN framework enables selection of an appropriate capability instance based on factors such as task requirements, locality, availability, and current system conditions. This selection may involve an explicit or implicit request resolution step, in which a client or intermediary is directed to an appropriate IDN node that hosts a suitable inference capability. Such resolution may be performed using existing Internet mechanisms or application-layer services.</t>
        <t>The framework does not prescribe specific routing or selection mechanisms. Instead, it defines the architectural context in which such mechanisms operate, enabling flexible request steering across distributed capability instances.</t>
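        <t>As one hedged example of what such a selection step might look like, the sketch below filters candidate nodes by a latency budget and then prefers the most accurate, least loaded instance. The candidate fields are assumptions of this example; the framework does not define a steering algorithm.</t>
        <sourcecode type="python"><![CDATA[
# Illustrative steering sketch: among candidate nodes hosting a
# suitable capability, keep those within the latency budget,
# then prefer the most accurate, least loaded instance.  The
# candidate fields ("rtt_ms", "queue_ms", "accuracy") are
# assumptions for this example only.
def steer_request(candidates, latency_budget_ms):
    feasible = [c for c in candidates
                if c["rtt_ms"] + c["queue_ms"] <= latency_budget_ms]
    if not feasible:
        return None  # e.g. fall back to a central node
    return max(feasible, key=lambda c: (c["accuracy"], -c["queue_ms"]))
]]></sourcecode>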
      </section>
      <section anchor="caching-and-capability-reuse">
        <name>Caching and Capability Reuse</name>
        <t>To improve efficiency and scalability, the IDN framework supports caching and reuse at multiple levels. Inference results or intermediate representations may be reused across similar requests, including requests from different users, when appropriate <xref target="KVShare"/>. This reuse can reduce repeated computation for common or popular inference tasks.</t>
        <t>Caching and reuse decisions are influenced by factors such as workload characteristics, resource constraints, and consistency requirements.</t>
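        <t>A minimal sketch of result reuse, assuming exact-match inputs: results are keyed by capability, version, and input, so a change in any of the three invalidates reuse. This sketch deliberately omits the isolation and privacy constraints on cross-user reuse discussed later in this document.</t>
        <sourcecode type="python"><![CDATA[
import hashlib

# Illustrative reuse sketch: complete inference outputs are
# cached, keyed by capability, version, and exact input.
def cache_key(capability_id: str, version: str, inputs: str) -> str:
    raw = f"{capability_id}|{version}|{inputs}".encode()
    return hashlib.sha256(raw).hexdigest()

cache = {}

def infer_with_reuse(capability_id, version, inputs, run_model):
    key = cache_key(capability_id, version, inputs)
    if key not in cache:          # only compute on a cache miss
        cache[key] = run_model(inputs)
    return cache[key]
]]></sourcecode>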
      </section>
      <section anchor="model-evolution-and-lifecycle">
        <name>Model Evolution and Lifecycle</name>
        <t>Inference capabilities within an IDN are expected to evolve over time. Models may be updated, replaced, refined, or specialized as new data becomes available, as usage patterns change, or as application requirements evolve. The framework supports incremental updates and the coexistence of multiple capability versions, enabling gradual transitions rather than requiring global or disruptive replacements.</t>
        <t>Model evolution may be driven by multiple sources. Updates can be produced centrally, for example through cloud-side retraining, refinement, or distillation of models, and subsequently distributed to appropriate locations within the IDN. In addition, where permitted by policy and regulatory constraints, inference capabilities deployed at edge or near-user locations may be locally adapted using user-provided or locally observed data. Such local adaptation may follow federated or privacy-preserving learning approaches, in which locally derived updates contribute to global model improvement without requiring raw data to leave the local environment.</t>
        <t>Lifecycle management of inference capabilities includes deployment, update, versioning, deprecation, and removal. Different versions of a capability may coexist at the same or different locations, allowing the framework to balance stability, performance, and innovation. While this document does not specify how lifecycle management processes are implemented, it assumes that mechanisms for controlled rollout, compatibility management, and rollback are necessary to ensure operational stability and consistency within an IDN.</t>
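        <t>The coexistence of capability versions described above can be illustrated with a simple selection rule: a request pins a major version for stability while receiving the newest compatible release. The semantic-versioning convention used here is an assumption of this example, not a requirement of the framework.</t>
        <sourcecode type="python"><![CDATA[
# Illustrative version-coexistence sketch: several versions of
# one capability are deployed at once; selection stays within a
# pinned major version and takes the newest compatible release.
def select_version(available, required_major):
    compatible = [v for v in available if v[0] == required_major]
    return max(compatible) if compatible else None

deployed = [(1, 0, 0), (1, 2, 1), (2, 0, 0)]  # (major, minor, patch)
]]></sourcecode>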
      </section>
    </section>
    <section anchor="terminology">
      <name>Terminology</name>
      <t>This section defines the terminology used throughout this document. The terms defined here are intended to provide a common vocabulary for discussing IDNs.</t>
      <t>Phrases in upper-case refer to other defined terms.</t>
      <dl spacing="normal" newline="true">
        <dt>CACHE AND REUSE</dt>
        <dd>CACHE AND REUSE refers to mechanisms that reduce redundant inference execution by reusing inference results, intermediate representations, or derived capabilities across multiple requests or users. Caching may apply to complete inference outputs or to partial results, depending on workload characteristics and policy constraints.</dd>
        <dt>CAPABILITY DISTRIBUTION</dt>
        <dd>CAPABILITY DISTRIBUTION refers to the dissemination of INFERENCE CAPABILITIES or their derived forms across IDN nodes. Distribution may involve replication, specialization, or relocation of capabilities to support demand and performance objectives.</dd>
        <dt>CAPABILITY PLACEMENT</dt>
        <dd>CAPABILITY PLACEMENT refers to the process of determining where INFERENCE CAPABILITIES or MODEL INSTANCES are deployed within an IDN. Placement decisions may consider factors such as demand, locality, resource availability, and operational cost.</dd>
        <dt>IDN</dt>
        <dd>Intelligence Delivery Network. An IDN is a network architecture in which INTELLIGENCE, encoded in deep learning models, is deployed, distributed, and selected across interconnected nodes to serve INFERENCE REQUESTS. An IDN enables INFERENCE CAPABILITIES to be placed at different locations in the network and selected based on task requirements, locality, and operational conditions.</dd>
        <dt>IDN NODE</dt>
        <dd>A network-accessible entity that hosts one or more MODEL INSTANCES and is capable of executing INFERENCE REQUESTS. IDN nodes may be located in cloud data centers, regional infrastructure, or edge environments and may operate under different administrative domains.</dd>
        <dt>INFERENCE CAPABILITY</dt>
        <dd>The ability of a model to perform a defined class of inference tasks with specified accuracy, performance, and resource characteristics. An INFERENCE CAPABILITY may be provided by a single model or by a set of related model artifacts derived from a common source.</dd>
        <dt>INFERENCE REQUEST</dt>
        <dd>A request to perform inference using a specified or implied INFERENCE CAPABILITY on provided input data. An INFERENCE REQUEST may include task-specific parameters, quality requirements, or constraints relevant to capability selection.</dd>
        <dt>INFERENCE REQUEST STEERING</dt>
        <dd>The process of selecting and directing an INFERENCE REQUEST to an appropriate MODEL INSTANCE within an IDN. Steering decisions may take into account task requirements, proximity, system load, and policy constraints.</dd>
        <dt>INTELLIGENCE</dt>
        <dd>INTELLIGENCE refers to INFERENCE CAPABILITY encoded in a trained deep learning model. This includes the ability to perform specific tasks, reasoning functions, or predictions based on input data. In this document, INTELLIGENCE is treated as a distributable and reusable capability.</dd>
        <dt>MODEL EVOLUTION</dt>
        <dd>MODEL EVOLUTION describes the process by which INFERENCE CAPABILITIES change over time, including updates, specialization, replacement, or deprecation of models. Model evolution may result in multiple versions of a capability coexisting within an IDN.</dd>
        <dt>MODEL INSTANCE</dt>
        <dd>A deployed realization of an INFERENCE CAPABILITY at a specific network node. Multiple MODEL INSTANCES may provide the same INFERENCE CAPABILITY and may differ in operational characteristics such as latency, capacity, or availability.</dd>
      </dl>
    </section>
    <section anchor="security-privacy-and-trust-considerations">
      <name>Security, Privacy, and Trust Considerations</name>
      <t>IDNs introduce new security, privacy, and trust considerations by distributing inference capabilities and processing across multiple network locations and administrative domains. This section discusses these considerations at an architectural level.</t>
      <section anchor="data-privacy-and-locality">
        <name>Data Privacy and Locality</name>
        <t>Inference workloads frequently operate on user-generated or user-specific data, which may be sensitive in nature. Transmitting such data across the network to centralized locations can raise privacy, regulatory, or policy concerns. IDNs may mitigate some of these concerns by enabling inference to be performed closer to data sources, thereby reducing unnecessary data movement.</t>
        <t>However, distributing inference capabilities across multiple nodes also increases the number of locations where data may be processed. This expansion of the processing surface requires careful consideration of data handling policies, access controls, and compliance with applicable regulations. The IDN framework does not mandate specific privacy mechanisms but assumes that privacy requirements influence capability placement and request steering decisions.</t>
      </section>
      <section anchor="model-integrity-and-authenticity">
        <name>Model Integrity and Authenticity</name>
        <t>Inference capabilities in an IDN are encoded in trained models that may be distributed, replicated, or derived across the network. Ensuring the integrity and authenticity of these models is critical, as tampered or malicious models could produce incorrect or harmful inference results.</t>
        <t>Architectural considerations include the ability to verify that a capability instance corresponds to an authorized and unmodified model, and to ensure that model updates or derived capabilities originate from trusted sources. The framework assumes the existence of mechanisms to support model provenance and integrity, but does not specify how such mechanisms are implemented.</t>
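        <t>As a minimal illustration of such verification, a node could compare a received model artifact's digest against a value published by a trusted source. A real deployment would verify a signature over that digest and track provenance; this sketch omits both.</t>
        <sourcecode type="python"><![CDATA[
import hashlib

# Illustrative integrity check only: compares a model artifact's
# SHA-256 digest with an expected value obtained out of band
# from a trusted source.  Signature verification and provenance
# tracking are omitted from this sketch.
def verify_model(artifact: bytes, expected_sha256: str) -> bool:
    return hashlib.sha256(artifact).hexdigest() == expected_sha256
]]></sourcecode>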
      </section>
      <section anchor="trust-across-administrative-domains">
        <name>Trust Across Administrative Domains</name>
        <t>IDNs may span multiple administrative domains, particularly when inference capabilities are deployed across cloud, edge, and user-managed environments. In such scenarios, each administrative domain may operate under distinct operational policies, trust assumptions, and security controls. As a result, inference requests and inference capabilities may traverse trust boundaries, requiring explicit consideration of trust relationships among participating entities.</t>
        <t>Trust considerations include determining which nodes are permitted to host or execute specific inference capabilities, which entities are authorized to distribute or update models, and under what conditions inference requests may be forwarded across domains. The IDN framework assumes that trust relationships influence capability distribution and request steering, but does not define trust establishment or enforcement mechanisms.</t>
      </section>
      <section anchor="inference-result-reuse-and-isolation">
        <name>Inference Result Reuse and Isolation</name>
        <t>Caching and reuse of inference results or intermediate representations can improve efficiency and scalability, but may introduce security and privacy risks if not properly managed. Reuse across different users or contexts may lead to unintended information disclosure or inference of sensitive attributes.</t>
        <t>Architectural considerations include ensuring appropriate isolation between inference contexts, defining conditions under which reuse is permissible, and preventing cross-user data leakage. The framework treats reuse as an optional optimization whose applicability depends on workload characteristics and policy constraints.</t>
      </section>
      <section anchor="availability-and-abuse-considerations">
        <name>Availability and Abuse Considerations</name>
        <t>As inference capabilities become more widely distributed, IDN nodes may be exposed to abuse, misuse, or denial-of-service attacks. Concentration of popular capabilities at specific nodes may create attractive targets for attacks that aim to degrade service availability.</t>
        <t>The IDN framework recognizes the need to consider resilience and robustness in the presence of such threats, including the ability to distribute load, replicate capabilities, or redirect requests in response to failures or attacks.</t>
      </section>
    </section>
    <section anchor="acknowledgments">
      <name>Acknowledgments</name>
      <t>The authors would like to thank colleagues and reviewers in the community who provided feedback on the early version of this draft.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-informative-references">
      <name>Informative References</name>
      <reference anchor="RFC9556" target="https://www.rfc-editor.org/info/rfc9556">
        <front>
          <title>Internet of Things (IoT) Edge Challenges and Functions</title>
          <author fullname="J. Hong" initials="J." surname="Hong"/>
          <author fullname="Y-G. Hong" initials="Y-G." surname="Hong"/>
          <author fullname="X. de Foy" initials="X." surname="de Foy"/>
          <author fullname="M. Kovatsch" initials="M." surname="Kovatsch"/>
          <author fullname="E. Schooler" initials="E." surname="Schooler"/>
          <author fullname="D. Kutscher" initials="D." surname="Kutscher"/>
          <date month="April" year="2024"/>
          <abstract>
            <t>Many Internet of Things (IoT) applications have requirements that cannot be satisfied by centralized cloud-based systems (i.e., cloud computing). These include time sensitivity, data volume, connectivity cost, operation in the face of intermittent services, privacy, and security. As a result, IoT is driving the Internet toward edge computing. This document outlines the requirements of the emerging IoT edge and its challenges. It presents a general model and major components of the IoT edge to provide a common basis for future discussions in the Thing-to-Thing Research Group (T2TRG) and other IRTF and IETF groups. This document is a product of the IRTF T2TRG.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="9556"/>
        <seriesInfo name="DOI" value="10.17487/RFC9556"/>
      </reference>
      <reference anchor="Elf" target="https://dl.acm.org/doi/abs/10.1145/3447993.3448628">
        <front>
          <title>Elf: accelerate high-resolution mobile deep vision with content-aware parallel offloading</title>
          <author initials="W." surname="Zhang" fullname="Wuyang Zhang">
            <organization/>
          </author>
          <author initials="Z." surname="He" fullname="Zhezhi He">
            <organization/>
          </author>
          <author initials="L." surname="Liu" fullname="Luyang Liu">
            <organization/>
          </author>
          <author initials="Z." surname="Jia" fullname="Zhenhua Jia">
            <organization/>
          </author>
          <author initials="Y." surname="Liu" fullname="Yunxin Liu">
            <organization/>
          </author>
          <author initials="M." surname="Gruteser" fullname="Marco Gruteser">
            <organization/>
          </author>
          <author initials="D." surname="Raychaudhuri" fullname="Dipankar Raychaudhuri">
            <organization/>
          </author>
          <author initials="Y." surname="Zhang" fullname="Yanyong Zhang">
            <organization/>
          </author>
          <date year="2021" month="September"/>
        </front>
        <seriesInfo name="DOI" value="10.1145/3447993.3448628"/>
        <refcontent>Proceedings of the 27th Annual International Conference on Mobile Computing and Networking</refcontent>
      </reference>
      <reference anchor="KVShare" target="https://arxiv.org/abs/2503.16525">
        <front>
          <title>KVShare: An LLM Service System with Efficient and Effective Multi-Tenant KV Cache Reuse</title>
          <author initials="H." surname="Yang" fullname="Huan Yang">
            <organization/>
          </author>
          <author initials="R." surname="Zhang" fullname="Renji Zhang">
            <organization/>
          </author>
          <author initials="M." surname="Huang" fullname="Mingzhe Huang">
            <organization/>
          </author>
          <author initials="W." surname="Wang" fullname="Weijun Wang">
            <organization/>
          </author>
          <author initials="Y." surname="Tang" fullname="Yin Tang">
            <organization/>
          </author>
          <author initials="Y." surname="Li" fullname="Yuanchun Li">
            <organization/>
          </author>
          <author initials="Y." surname="Liu" fullname="Yunxin Liu">
            <organization/>
          </author>
          <author initials="D." surname="Zhang" fullname="Deyu Zhang">
            <organization/>
          </author>
          <date year="2025" month="May"/>
        </front>
        <refcontent>arXiv preprint arXiv:2503.16525</refcontent>
      </reference>
    </references>
  </back>

</rfc>
