+++ /dev/null
-
-
-
-
-Benchmarking Working Group M. Konstantynowicz
-Internet-Draft V. Polak
-Intended status: Informational Cisco Systems
-Expires: 6 March 2026 2 September 2025
-
-
- Multiple Loss Ratio Search
- draft-ietf-bmwg-mlrsearch-12
-
-Abstract
-
- This document specifies extensions to "Benchmarking Methodology for
- Network Interconnect Devices" (RFC 2544) throughput search by
- defining a new methodology called Multiple Loss Ratio search
- (MLRsearch). MLRsearch aims to minimize search duration, support
- multiple loss ratio searches, and improve result repeatability and
- comparability.
-
- MLRsearch is motivated by the pressing need to address the challenges
- of evaluating and testing the various data plane solutions,
- especially in software-based networking systems based on Commercial
- Off-the-Shelf (COTS) CPU hardware vs purpose-built ASIC / NPU / FPGA
- hardware.
-
-Status of This Memo
-
- This Internet-Draft is submitted in full conformance with the
- provisions of BCP 78 and BCP 79.
-
- Internet-Drafts are working documents of the Internet Engineering
- Task Force (IETF). Note that other groups may also distribute
- working documents as Internet-Drafts. The list of current Internet-
- Drafts is at https://datatracker.ietf.org/drafts/current/.
-
- Internet-Drafts are draft documents valid for a maximum of six months
- and may be updated, replaced, or obsoleted by other documents at any
- time. It is inappropriate to use Internet-Drafts as reference
- material or to cite them other than as "work in progress."
-
- This Internet-Draft will expire on 6 March 2026.
-
-Copyright Notice
-
- Copyright (c) 2025 IETF Trust and the persons identified as the
- document authors. All rights reserved.
-
-
-
-
-
-
-Konstantynowicz & Polak Expires 6 March 2026 [Page 1]
-\f
-Internet-Draft MLRsearch September 2025
-
-
- This document is subject to BCP 78 and the IETF Trust's Legal
- Provisions Relating to IETF Documents (https://trustee.ietf.org/
- license-info) in effect on the date of publication of this document.
- Please review these documents carefully, as they describe your rights
- and restrictions with respect to this document. Code Components
- extracted from this document must include Revised BSD License text as
- described in Section 4.e of the Trust Legal Provisions and are
- provided without warranty as described in the Revised BSD License.
-
-Table of Contents
-
- 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4
- 1.1. Purpose . . . . . . . . . . . . . . . . . . . . . . . . . 4
- 1.2. Positioning within BMWG Methodologies . . . . . . . . . . 6
- 2. Overview of RFC 2544 Problems . . . . . . . . . . . . . . . . 7
- 2.1. Long Search Duration . . . . . . . . . . . . . . . . . . 7
- 2.2. DUT in SUT . . . . . . . . . . . . . . . . . . . . . . . 8
- 2.3. Repeatability and Comparability . . . . . . . . . . . . . 10
- 2.4. Throughput with Non-Zero Loss . . . . . . . . . . . . . . 11
- 2.5. Inconsistent Trial Results . . . . . . . . . . . . . . . 12
- 3. Requirements Language . . . . . . . . . . . . . . . . . . . . 13
- 4. MLRsearch Specification . . . . . . . . . . . . . . . . . . . 13
- 4.1. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . 14
- 4.1.1. Relationship to RFC 2544 . . . . . . . . . . . . . . 14
- 4.1.2. Applicability of Other Specifications . . . . . . . . 15
- 4.1.3. Out of Scope . . . . . . . . . . . . . . . . . . . . 15
- 4.2. Architecture Overview . . . . . . . . . . . . . . . . . . 15
- 4.2.1. Test Report . . . . . . . . . . . . . . . . . . . . . 17
- 4.2.2. Behavior Correctness . . . . . . . . . . . . . . . . 17
- 4.3. Quantities . . . . . . . . . . . . . . . . . . . . . . . 17
- 4.3.1. Current and Final Values . . . . . . . . . . . . . . 18
- 4.4. Existing Terms . . . . . . . . . . . . . . . . . . . . . 18
- 4.4.1. SUT . . . . . . . . . . . . . . . . . . . . . . . . . 18
- 4.4.2. DUT . . . . . . . . . . . . . . . . . . . . . . . . . 19
- 4.4.3. Trial . . . . . . . . . . . . . . . . . . . . . . . . 19
- 4.5. Trial Terms . . . . . . . . . . . . . . . . . . . . . . . 21
- 4.5.1. Trial Duration . . . . . . . . . . . . . . . . . . . 21
- 4.5.2. Trial Load . . . . . . . . . . . . . . . . . . . . . 21
- 4.5.3. Trial Input . . . . . . . . . . . . . . . . . . . . . 23
- 4.5.4. Traffic Profile . . . . . . . . . . . . . . . . . . . 23
- 4.5.5. Trial Forwarding Ratio . . . . . . . . . . . . . . . 25
- 4.5.6. Trial Loss Ratio . . . . . . . . . . . . . . . . . . 25
- 4.5.7. Trial Forwarding Rate . . . . . . . . . . . . . . . . 26
- 4.5.8. Trial Effective Duration . . . . . . . . . . . . . . 27
- 4.5.9. Trial Output . . . . . . . . . . . . . . . . . . . . 27
- 4.5.10. Trial Result . . . . . . . . . . . . . . . . . . . . 28
- 4.6. Goal Terms . . . . . . . . . . . . . . . . . . . . . . . 28
- 4.6.1. Goal Final Trial Duration . . . . . . . . . . . . . . 28
- 4.6.2. Goal Duration Sum . . . . . . . . . . . . . . . . . . 29
- 4.6.3. Goal Loss Ratio . . . . . . . . . . . . . . . . . . . 29
- 4.6.4. Goal Exceed Ratio . . . . . . . . . . . . . . . . . . 30
- 4.6.5. Goal Width . . . . . . . . . . . . . . . . . . . . . 30
- 4.6.6. Goal Initial Trial Duration . . . . . . . . . . . . . 31
- 4.6.7. Search Goal . . . . . . . . . . . . . . . . . . . . . 32
- 4.6.8. Controller Input . . . . . . . . . . . . . . . . . . 32
- 4.7. Auxiliary Terms . . . . . . . . . . . . . . . . . . . . . 34
- 4.7.1. Trial Classification . . . . . . . . . . . . . . . . 34
- 4.7.2. Load Classification . . . . . . . . . . . . . . . . . 35
- 4.8. Result Terms . . . . . . . . . . . . . . . . . . . . . . 37
- 4.8.1. Relevant Upper Bound . . . . . . . . . . . . . . . . 37
- 4.8.2. Relevant Lower Bound . . . . . . . . . . . . . . . . 38
- 4.8.3. Conditional Throughput . . . . . . . . . . . . . . . 38
- 4.8.4. Goal Results . . . . . . . . . . . . . . . . . . . . 39
- 4.8.5. Search Result . . . . . . . . . . . . . . . . . . . . 40
- 4.8.6. Controller Output . . . . . . . . . . . . . . . . . . 41
- 4.9. Architecture Terms . . . . . . . . . . . . . . . . . . . 41
- 4.9.1. Measurer . . . . . . . . . . . . . . . . . . . . . . 42
- 4.9.2. Controller . . . . . . . . . . . . . . . . . . . . . 42
- 4.9.3. Manager . . . . . . . . . . . . . . . . . . . . . . . 43
- 4.10. Compliance . . . . . . . . . . . . . . . . . . . . . . . 44
- 4.10.1. Test Procedure Compliant with MLRsearch . . . . . . 44
- 4.10.2. MLRsearch Compliant with RFC 2544 . . . . . . . . . 45
- 4.10.3. MLRsearch Compliant with TST009 . . . . . . . . . . 45
- 5. Methodology Rationale and Design Considerations . . . . . . . 46
- 5.1. Binary Search . . . . . . . . . . . . . . . . . . . . . . 46
- 5.2. Stopping Conditions and Precision . . . . . . . . . . . . 47
- 5.3. Loss Ratios and Loss Inversion . . . . . . . . . . . . . 47
- 5.3.1. Single Goal and Hard Bounds . . . . . . . . . . . . . 47
- 5.3.2. Multiple Goals and Loss Inversion . . . . . . . . . . 47
- 5.3.3. Conservativeness and Relevant Bounds . . . . . . . . 48
- 5.3.4. Consequences . . . . . . . . . . . . . . . . . . . . 48
- 5.4. Exceed Ratio and Multiple Trials . . . . . . . . . . . . 49
- 5.5. Short Trials and Duration Selection . . . . . . . . . . . 50
- 5.6. Generalized Throughput . . . . . . . . . . . . . . . . . 50
- 5.6.1. Hard Performance Limit . . . . . . . . . . . . . . . 51
- 5.6.2. Performance Variability . . . . . . . . . . . . . . . 51
- 6. MLRsearch Logic and Example . . . . . . . . . . . . . . . . . 52
- 6.1. Load Classification Logic . . . . . . . . . . . . . . . . 52
- 6.2. Conditional Throughput Logic . . . . . . . . . . . . . . 54
- 6.2.1. Conditional Throughput and Load Classification . . . 55
- 6.3. SUT Behaviors . . . . . . . . . . . . . . . . . . . . . . 55
- 6.3.1. Expert Predictions . . . . . . . . . . . . . . . . . 55
- 6.3.2. Exceed Probability . . . . . . . . . . . . . . . . . 56
- 6.3.3. Trial Duration Dependence . . . . . . . . . . . . . . 56
- 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 57
- 8. Security Considerations . . . . . . . . . . . . . . . . . . . 57
- 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 57
- 10. References . . . . . . . . . . . . . . . . . . . . . . . . . 58
- 10.1. Normative References . . . . . . . . . . . . . . . . . . 58
- 10.2. Informative References . . . . . . . . . . . . . . . . . 58
- Appendix A. Load Classification Code . . . . . . . . . . . . . . 60
- Appendix B. Conditional Throughput Code . . . . . . . . . . . . 61
- Appendix C. Example Search . . . . . . . . . . . . . . . . . . . 63
- C.1. Example Goals . . . . . . . . . . . . . . . . . . . . . . 64
- C.2. Example Trial Results . . . . . . . . . . . . . . . . . . 65
- C.3. Load Classification Computations . . . . . . . . . . . . 66
- C.3.1. Point 1 . . . . . . . . . . . . . . . . . . . . . . . 66
- C.3.2. Point 2 . . . . . . . . . . . . . . . . . . . . . . . 67
- C.3.3. Point 3 . . . . . . . . . . . . . . . . . . . . . . . 69
- C.3.4. Point 4 . . . . . . . . . . . . . . . . . . . . . . . 70
- C.3.5. Point 5 . . . . . . . . . . . . . . . . . . . . . . . 71
- C.3.6. Point 6 . . . . . . . . . . . . . . . . . . . . . . . 73
- C.4. Conditional Throughput Computations . . . . . . . . . . . 74
- C.4.1. Goal 2 . . . . . . . . . . . . . . . . . . . . . . . 74
- C.4.2. Goal 3 . . . . . . . . . . . . . . . . . . . . . . . 75
- C.4.3. Goal 4 . . . . . . . . . . . . . . . . . . . . . . . 76
- Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 78
-
-1. Introduction
-
- This document describes the Multiple Loss Ratio search (MLRsearch)
- methodology, optimized for determining data plane throughput in
- software-based networking functions running on commodity systems with
- x86/ARM CPUs (vs purpose-built ASIC / NPU / FPGA). Such network
-   functions can be deployed on a dedicated physical appliance (e.g., a
-   standalone hardware device) or as a virtual appliance (e.g., a
-   Virtual Network Function running on shared servers in the compute
-   cloud).
-
-1.1. Purpose
-
- The purpose of this document is to describe the Multiple Loss Ratio
- search (MLRsearch) methodology, optimized for determining data plane
- throughput in software-based networking devices and functions.
-
-   Applying the vanilla throughput binary search, as specified for
-   example in [TST009] and [RFC2544], to software devices under test
-   (DUTs) results in several problems:
-
-   *  Binary search takes a long time, as most trials are performed at
-      loads far from the eventually discovered throughput.
-
- * The required final trial duration and pauses between trials
- prolong the overall search duration.
-
-   *  Software DUTs show noisy trial results, leading to a wide spread
-      of discovered throughput values.
-
-   *  Throughput requires a loss of exactly zero frames, but industry
-      best practices frequently allow a low but non-zero loss
-      tolerance ([Y.1564], test-equipment manuals).
-
-   *  The definition of throughput is not clear when trial results are
-      inconsistent: when successive trials at the same (or even a
-      higher) offered load yield different loss ratios, the classical
-      [RFC1242] / [RFC2544] throughput metric can no longer be pinned
-      to a single, unambiguous value.
-
- To address these problems, early MLRsearch implementations employed
- the following enhancements:
-
- 1. Allow multiple short trials instead of one big trial per load.
-
- * Optionally, tolerate a percentage of trial results with higher
- loss.
-
- 2. Allow searching for multiple Search Goals, with differing loss
- ratios.
-
- * Any trial result can affect each Search Goal in principle.
-
-   3.  Insert multiple coarse targets for each Search Goal; earlier
-       targets spend less time on trials.
-
- * Earlier targets also aim for lesser precision.
-
- * Use Forwarding Rate (FR) at Maximum Offered Load (FRMOL), as
- defined in Section 3.6.2 of [RFC2285], to initialize bounds.
-
- 4. Be careful when dealing with inconsistent trial results.
-
- * Reported throughput is smaller than the smallest load with
- high loss.
-
- * Smaller load candidates are measured first.
-
- 5. Apply several time-saving load selection heuristics that
- deliberately prevent the bounds from narrowing unnecessarily.
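
Enhancement 1 above (multiple short trials per load, with a tolerated fraction of high-loss trials) can be illustrated with a minimal sketch. This is not the normative Load Classification logic defined later in this document; the function name and all numbers are hypothetical.

```python
def load_passes(trial_loss_ratios, goal_loss_ratio, goal_exceed_ratio):
    """Illustrative only: a load 'passes' if the fraction of trials
    whose loss ratio exceeds the goal stays within the exceed ratio."""
    bad = sum(1 for r in trial_loss_ratios if r > goal_loss_ratio)
    return bad / len(trial_loss_ratios) <= goal_exceed_ratio

# Ten short trials at one load; a single trial saw a loss burst.
ratios = [0.0] * 9 + [0.002]
print(load_passes(ratios, goal_loss_ratio=0.001, goal_exceed_ratio=0.5))  # True
print(load_passes(ratios, goal_loss_ratio=0.001, goal_exceed_ratio=0.0))  # False
```

With a zero exceed ratio every trial must meet the goal, mimicking the strictness of a single long trial; a non-zero exceed ratio tolerates occasional noisy trials.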
-
-   Enhancements 1, 2 and partly 4 are formalized as the MLRsearch
-   Specification within this document; other implementation details are
-   out of scope.
-
- The remaining enhancements are treated as implementation details,
- thus achieving high comparability without limiting future
- improvements.
-
- MLRsearch configuration supports both conservative settings and
- aggressive settings. Conservative enough settings lead to results
- unconditionally compliant with [RFC2544], but without much
-   improvement in search duration and repeatability - see MLRsearch
- Compliant with RFC 2544 (Section 4.10.2). Conversely, aggressive
- settings lead to shorter search durations and better repeatability,
- but the results are not compliant with [RFC2544]. Exact settings are
- not specified, but see the discussion in Overview of RFC 2544
- Problems (Section 2) for the impact of different settings on result
- quality.
-
- This document does not change or obsolete any part of [RFC2544].
-
-1.2. Positioning within BMWG Methodologies
-
- The Benchmarking Methodology Working Group (BMWG) produces
- recommendations (RFCs) that describe various benchmarking
- methodologies for use in a controlled laboratory environment. A
- large number of these benchmarks are based on the terminology from
- [RFC1242] and the foundational methodology from [RFC2544]. A common
- pattern has emerged where BMWG documents reference the methodology of
- [RFC2544] and augment it with specific requirements for testing
- particular network systems or protocols, without modifying the core
- benchmark definitions.
-
- While BMWG documents are formally recommendations, they are widely
- treated as industry norms to ensure the comparability of results
- between different labs. The set of benchmarks defined in [RFC2544],
- in particular, became a de facto standard for performance testing.
- In this context, the MLRsearch Specification formally defines a new
- class of benchmarks that fits within the wider [RFC2544] framework
- (see Scope (Section 4.1)).
-
- A primary consideration in the design of MLRsearch is the trade-off
- between configurability and comparability. The methodology's
- flexibility, especially the ability to define various sets of Search
-   Goals to support both single-goal and multiple-goal benchmarks in a
-   unified way, is powerful for detailed characterization and internal
-   testing. However, this same flexibility is detrimental to inter-lab
- comparability unless a specific, common set of Search Goals is agreed
- upon.
-
-   Therefore, MLRsearch should be seen neither as a direct extension
-   of, nor as a replacement for, the [RFC2544] Throughput benchmark.
-   Instead, this
- document provides a foundational methodology that future BMWG
- documents can use to define new, specific, and comparable benchmarks
- by mandating particular Search Goal configurations. For operators of
- existing test procedures, it is worth noting that many test setups
- measuring [RFC2544] Throughput can be adapted to produce results
- compliant with the MLRsearch Specification, often without affecting
- Trials, merely by augmenting the content of the final test report.
-
-2. Overview of RFC 2544 Problems
-
- This section describes the problems affecting usability of various
-   performance testing methodologies, mainly binary search for
-   throughput unconditionally compliant with [RFC2544].
-
-2.1. Long Search Duration
-
- The proliferation of software DUTs, with frequent software updates
- and a number of different frame processing modes and configurations,
- has increased both the number of performance tests required to verify
- the DUT update and the frequency of running those tests. This makes
- the overall test execution time even more important than before.
-
- The throughput definition per [RFC2544] restricts the potential for
- time-efficiency improvements. The bisection method, when used in a
- manner unconditionally compliant with [RFC2544], is excessively slow
- due to two main factors.
-
- Firstly, a significant amount of time is spent on trials with loads
- that, in retrospect, are far from the final determined throughput.
-
-   Secondly, [RFC2544] does not specify any stopping condition for the
-   throughput search, so users of testing equipment implementing the
-   procedure do have a limited trade-off between search duration and
-   achieved precision. However, each additional full 60-second trial
-   only doubles the precision.
-
- As such, not many trials can be removed without a substantial loss of
- precision.
-
-   For reference, here is a brief reminder of the [RFC2544] throughput
-   binary search (bisection), based on Sections 24 and 26 of [RFC2544]:
-
- * Set Max = line-rate and Min = a proven loss-free load.
-
- * Run a single 60-s trial at the midpoint.
-
-   *  Zero loss -> midpoint becomes new Min; any loss -> new Max.
-
- * Repeat until the Max-Min gap meets the desired precision, then
- report the highest zero-loss rate for every mandatory frame size.
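
The bisection steps above can be sketched as follows. This is a hedged illustration only: `run_trial` is a stub standing in for a real 60-second trial on a tester, and the stub SUT's 7.3 Mpps loss-free limit is hypothetical.

```python
def rfc2544_bisect(run_trial, min_rate, max_rate, precision):
    """Return the highest zero-loss rate found, to within `precision`.
    `min_rate` is assumed to be a proven loss-free load."""
    best = min_rate
    while max_rate - min_rate > precision:
        mid = (min_rate + max_rate) / 2.0
        if run_trial(mid) == 0:   # zero loss -> midpoint becomes new Min
            min_rate = best = mid
        else:                     # any loss -> midpoint becomes new Max
            max_rate = mid
    return best

# Stub SUT: loses frames only above 7.3e6 frames per second.
trial = lambda load: 0 if load <= 7.3e6 else 1
print(rfc2544_bisect(trial, 0.0, 10e6, 1e3))  # a zero-loss rate within 1e3 of the stub limit
```

Note how every halving of the remaining interval costs one trial, which is the time-efficiency limitation discussed above.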
-
-2.2. DUT in SUT
-
- [RFC2285] defines:
-
- DUT as:
-
- * The network frame forwarding device to which stimulus is offered
-      and response measured (Section 3.1.1 of [RFC2285]).
-
- SUT as:
-
- * The collective set of network devices as a single entity to which
-      stimulus is offered and response measured (Section 3.1.2 of
-      [RFC2285]).
-
- Section 19 of [RFC2544] specifies a test setup with an external
- tester stimulating the networking system, treating it either as a
- single Device Under Test (DUT), or as a system of devices, a System
- Under Test (SUT).
-
- For software-based data-plane forwarding running on commodity x86/ARM
- CPUs, the SUT comprises not only the forwarding application itself,
- the DUT, but the entire execution environment: host hardware,
- firmware and kernel/hypervisor services, as well as any other
- software workloads that share the same CPUs, memory and I/O
- resources.
-
- Given that a SUT is a shared multi-tenant environment, the DUT might
- inadvertently experience interference from the operating system or
- from other software operating on the same server.
-
- Some of this interference can be mitigated. For instance, in multi-
- core CPU systems, pinning DUT program threads to specific CPU cores
- and isolating those cores can prevent context switching.
-
- Despite taking all feasible precautions, some adverse effects may
- still impact the DUT's network performance. In this document, these
- effects are collectively referred to as SUT noise, even if the
- effects are not as unpredictable as what other engineering
- disciplines call noise.
-
- A DUT can also exhibit fluctuating performance itself, for reasons
- not related to the rest of SUT. For example, this can be due to
- pauses in execution as needed for internal stateful processing. In
- many cases this may be an expected per-design behavior, as it would
- be observable even in a hypothetical scenario where all sources of
- SUT noise are eliminated. Such behavior affects trial results in a
- way similar to SUT noise. As the two phenomena are hard to
- distinguish, in this document the term 'noise' is used to encompass
- both the internal performance fluctuations of the DUT and the genuine
- noise of the SUT.
-
- A simple model of SUT performance consists of an idealized noiseless
- performance, and additional noise effects. For a specific SUT, the
- noiseless performance is assumed to be constant, with all observed
- performance variations being attributed to noise. The impact of the
- noise can vary in time, sometimes wildly, even within a single trial.
- The noise can sometimes be negligible, but frequently it lowers the
- observed SUT performance as observed in trial results.
-
- In this simple model, a SUT does not have a single performance value,
- it has a spectrum. One end of the spectrum is the idealized
- noiseless performance value, the other end can be called a noiseful
- performance. In practice, trial results close to the noiseful end of
- the spectrum happen only rarely. The worse a possible performance
- value is, the more rarely it is seen in a trial. Therefore, the
- extreme noiseful end of the SUT spectrum is not observable among
- trial results.
-
- Furthermore, the extreme noiseless end of the SUT spectrum is
- unlikely to be observable, this time because minor noise events
- almost always occur during each trial, nudging the measured
- performance slightly below the theoretical maximum.
-
- Unless specified otherwise, this document's focus is on the
- potentially observable ends of the SUT performance spectrum, as
- opposed to the extreme ones.
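
The spectrum model described above can be made concrete with a toy simulation. All rates and probabilities below are hypothetical, chosen only to show why neither extreme end of the spectrum is observed in practice.

```python
import random

random.seed(42)  # deterministic toy run

NOISELESS = 10.0e6  # idealized noiseless performance, frames per second

def trial_performance():
    perf = NOISELESS
    perf -= random.uniform(0.0, 0.02) * NOISELESS  # minor, ever-present noise
    if random.random() < 0.05:                     # rare major noise event
        perf -= 0.30 * NOISELESS
    return perf

samples = [trial_performance() for _ in range(1000)]
print(f"best observed : {max(samples):.4e}")  # slightly below NOISELESS
print(f"worst observed: {min(samples):.4e}")  # far below, seen only rarely
```

Even the best observed trial stays below the idealized noiseless value (minor noise always occurs), while the worst observed values still sit well above the theoretical noiseful extreme.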
-
- When focusing on the DUT, the benchmarking effort should ideally aim
- to eliminate only the SUT noise from SUT measurements. However, this
- is currently not feasible in practice, as there are no realistic
-   enough models capable of distinguishing SUT noise from DUT
- fluctuations (based on the available literature at the time of
- writing).
-
-   Provided the SUT execution environment and any co-resident workloads
- place only negligible demands on SUT shared resources, so that the
- DUT remains the principal performance limiter, the DUT's ideal
- noiseless performance is defined as the noiseless end of the SUT
- performance spectrum.
-
- Note that by this definition, DUT noiseless performance also
- minimizes the impact of DUT fluctuations, as much as realistically
- possible for a given trial duration.
-
- The MLRsearch methodology aims to solve the DUT in SUT problem by
- estimating the noiseless end of the SUT performance spectrum using a
- limited number of trial results.
-
- Improvements to the throughput search algorithm, aimed at better
- dealing with software networking SUT and DUT setups, should adopt
-   methods that explicitly model SUT-generated noise, enabling the
-   derivation of surrogate metrics (proxies) that approximate the DUT
-   noiseless performance across a range of SUT noise-tolerance levels.
-
-2.3. Repeatability and Comparability
-
- [RFC2544] does not suggest repeating throughput search. Also, note
-   that from a single discovered throughput value, it cannot be
- determined how repeatable that value is. Unsatisfactory
- repeatability then leads to unacceptable comparability, as different
- benchmarking teams may obtain varying throughput values for the same
- SUT, exceeding the expected differences from search precision.
- Repeatability is important also when the test procedure is kept the
- same, but SUT is varied in small ways. For example, during
- development of software-based DUTs, repeatability is needed to detect
- small regressions.
-
-   [RFC2544] throughput requirements (a 60-second trial and no
-   tolerance of even a single frame loss) affect the throughput result
-   as follows:
-
- The SUT behavior close to the noiseful end of its performance
- spectrum consists of rare occasions of significantly low performance,
- but the long trial duration makes those occasions not so rare on the
- trial level. Therefore, the binary search results tend to spread
- away from the noiseless end of SUT performance spectrum, more
- frequently and more widely than shorter trials would, thus causing
- unacceptable throughput repeatability.
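
The effect described above can be quantified with a simple Poisson estimate; the noise event rate used here is hypothetical.

```python
import math

# Suppose significant noise events arrive at an average rate of one
# per 100 seconds (hypothetical).  Under a Poisson model, the chance
# that a trial of a given duration contains at least one event is:
RATE = 1.0 / 100.0  # events per second

def p_event(duration_s):
    return 1.0 - math.exp(-RATE * duration_s)

print(f"  1 s trial: {p_event(1):.1%}")   # ~1.0%
print(f" 60 s trial: {p_event(60):.1%}")  # ~45.1%
```

An event that is rare per unit of time is thus quite common at the level of a full 60-second trial, which is what spreads binary search results away from the noiseless end of the spectrum.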
-
- The repeatability problem can be better addressed by defining a
- search procedure that identifies a consistent level of performance,
- even if it does not meet the strict definition of throughput in
- [RFC2544].
-
- According to the SUT performance spectrum model, better repeatability
- will be at the noiseless end of the spectrum. Therefore, solutions
- to the DUT in SUT problem will help also with the repeatability
- problem.
-
-   Conversely, any alteration to the [RFC2544] throughput search that
-   improves repeatability can be considered to make the result less
-   dependent on the SUT noise.
-
- An alternative option is to simply run a search multiple times, and
- report some statistics (e.g., average and standard deviation, and/or
- percentiles like p95).
-
- This can be used for a subset of tests deemed more important, but it
- makes the search duration problem even more pronounced.
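
A minimal sketch of such reporting, using hypothetical throughput samples from ten repeated searches:

```python
import statistics

# Hypothetical discovered throughputs (Mpps) from ten search repetitions.
results = [9.1, 9.3, 8.7, 9.2, 9.0, 9.3, 8.9, 9.1, 6.8, 9.2]

print(f"average: {statistics.mean(results):.2f} Mpps")
print(f"stdev  : {statistics.stdev(results):.2f} Mpps")
print(f"worst  : {min(results):.1f} Mpps")  # one outlier run dominates the spread
```

Even one noisy repetition (6.8 here) inflates the standard deviation, which is exactly the kind of information a single search result cannot provide.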
-
-2.4. Throughput with Non-Zero Loss
-
- Section 3.17 of [RFC1242] defines throughput as: The maximum rate at
- which none of the offered frames are dropped by the device.
-
- Then, it says: Since even the loss of one frame in a data stream can
- cause significant delays while waiting for the higher-level
- protocols to time out, it is useful to know the actual maximum
- data rate that the device can support.
-
- However, many benchmarking teams accept a low, non-zero loss ratio as
- the goal for their load search.
-
- Motivations are many:
-
- * Networking protocols tolerate frame loss better, compared to the
- time when [RFC1242] and [RFC2544] were specified.
-
- * Increased link speeds require trials sending more frames within
- the same duration, increasing the chance of a small SUT
- performance fluctuation being enough to cause frame loss.
-
- * Because noise-related drops usually arrive in small bursts, their
- impact on the trial's overall frame loss ratio is diluted by the
- longer intervals in which the SUT operates close to its noiseless
- performance; consequently, the averaged Trial Loss Ratio can still
- end up below the specified Goal Loss Ratio value.
-
- * If an approximation of the SUT noise impact on the Trial Loss
- Ratio is known, it can be set as the Goal Loss Ratio (see
- definitions of Trial and Goal terms in Trial Terms (Section 4.5)
- and Goal Terms (Section 4.6)).
-
- * For more information, see an earlier draft [Lencze-Shima]
- (Section 5) and references there.
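
The burst-dilution motivation above reduces to simple arithmetic; all numbers are hypothetical.

```python
# Hypothetical: a 60-second trial at 10 Mpps offers 600 million frames.
# A single 5 ms noise event at full load drops 50 thousand frames.
offered = 10e6 * 60        # 600e6 frames offered during the whole trial
burst_drop = 10e6 * 0.005  # 50e3 frames lost during the burst
loss_ratio = burst_drop / offered
print(f"Trial Loss Ratio: {loss_ratio:.2e}")  # 8.33e-05, below a 1e-4 goal
```

A loss that is total for 5 milliseconds is averaged down by roughly four orders of magnitude over the full trial duration, letting the Trial Loss Ratio stay below the Goal Loss Ratio.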
-
- Regardless of the validity of all similar motivations, support for
- non-zero loss goals makes a search algorithm more user-friendly.
- [RFC2544] throughput is not user-friendly in this regard.
-
- Furthermore, allowing users to specify multiple loss ratio values,
- and enabling a single search to find all relevant bounds,
- significantly enhances the usefulness of the search algorithm.
-
- Searching for multiple Search Goals also helps to describe the SUT
- performance spectrum better than the result of a single Search Goal.
- For example, the repeated wide gap between zero and non-zero loss
- loads indicates the noise has a large impact on the observed
- performance, which is not evident from a single goal load search
- procedure result.
-
- It is easy to modify the vanilla bisection to find a lower bound for
- the load that satisfies a non-zero Goal Loss Ratio. But it is not
- that obvious how to search for multiple goals at once, hence the
- support for multiple Search Goals remains a problem.
-
- At the time of writing there does not seem to be a consensus in the
- industry on which ratio value is the best. For users, performance of
- higher protocol layers is important, for example, goodput of TCP
- connection (TCP throughput, [RFC6349]), but relationship between
- goodput and loss ratio is not simple. Refer to [Lencze-Kovacs-Shima]
- for examples of various corner cases, Section 3 of [RFC6349] for loss
- ratios acceptable for an accurate measurement of TCP throughput, and
- [Ott-Mathis-Semke-Mahdavi] for models and calculations of TCP
- performance in presence of packet loss.
-
-2.5. Inconsistent Trial Results
-
- While performing throughput search by executing a sequence of
- measurement trials, there is a risk of encountering inconsistencies
- between trial results.
-
- Examples include, but are not limited to:
-
- * A trial at the same load (same or different trial duration)
- results in a different Trial Loss Ratio.
-
- * A trial at a larger load (same or different trial duration)
- results in a lower Trial Loss Ratio.
-
- The plain bisection never encounters inconsistent trials. But
- [RFC2544] hints about the possibility of inconsistent trial results,
- in two places in its text. The first place is Section 24 of
- [RFC2544], where full trial durations are required, presumably
- because they can be inconsistent with the results from short trial
- durations. The second place is Section 26.3 of [RFC2544], where two
- successive zero-loss trials are recommended, presumably because after
- one zero-loss trial there can be a subsequent inconsistent non-zero-
- loss trial.
-
- A robust throughput search algorithm needs to decide how to continue
- the search in the presence of such inconsistencies. Definitions of
- throughput in [RFC1242] and [RFC2544] are not specific enough to
- imply a unique way of handling such inconsistencies.
-
- Ideally, there will be a definition of a new quantity which both
- generalizes throughput for non-zero Goal Loss Ratio values (and other
- possible repeatability enhancements), while being precise enough to
- force a specific way to resolve trial result inconsistencies. But
- until such a definition is agreed upon, the correct way to handle
- inconsistent trial results remains an open problem.
-
- Relevant Lower Bound is the MLRsearch term that addresses this
- problem.
-
-3. Requirements Language
-
- The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
- "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
- "OPTIONAL" in this document are to be interpreted as described in BCP
- 14, [RFC2119] and [RFC8174] when, and only when, they appear in all
- capitals, as shown here.
-
- This document is categorized as an Informational RFC. While it does
- not mandate the adoption of the MLRsearch methodology, it uses the
- normative language of BCP 14 to provide an unambiguous specification.
- This ensures that if a test procedure or test report claims
- compliance with the MLRsearch Specification, it MUST adhere to all
- the absolute requirements defined herein. The use of normative
- language is intended to promote repeatable and comparable results
- among those who choose to implement this methodology.
-
-4. MLRsearch Specification
-
- This chapter provides all technical definitions needed for evaluating
- whether a particular test procedure complies with MLRsearch
- Specification.
-
-
- Some terms used in the specification are capitalized. It is just a
-   stylistic choice for this document, reminding the reader that the
-   term is introduced, defined, or explained elsewhere in the document.
-   Lower
- case variants are equally valid.
-
- This document does not separate terminology from methodology. Terms
- are fully specified and discussed in their own subsections, under
- sections titled "Terms". This way, the list of terms is visible in
- the table of contents.
-
- Each per-term subsection contains a short _Definition_ paragraph
- containing a minimal definition and all strict requirements, followed
- by _Discussion_ paragraphs focusing on important consequences and
- recommendations. Requirements about how other components can use the
- defined quantity are also included in the discussion.
-
-4.1. Scope
-
- This document specifies the Multiple Loss Ratio search (MLRsearch)
- methodology. The MLRsearch Specification details a new class of
- benchmarks by listing all terminology definitions and methodology
- requirements. The definitions support "multi-goal" benchmarks, with
- "single-goal" as a subset.
-
- The normative scope of this specification includes:
-
- * The terminology for all required quantities and their attributes.
-
- * An abstract architecture consisting of functional components
- (Manager, Controller, Measurer) and the requirements for their
- inputs and outputs.
-
- * The required structure and attributes of the Controller Input,
- including one or more Search Goal instances.
-
- * The required logic for Load Classification, which determines
- whether a given Trial Load qualifies as a Lower Bound or an Upper
- Bound for a Search Goal.
-
- * The required structure and attributes of the Controller Output,
- including a Goal Result for each Search Goal.
-
-4.1.1. Relationship to RFC 2544
-
- MLRsearch Specification is an independent methodology and does not
- change or obsolete any part of [RFC2544].
-
- This specification permits deviations from the Trial procedure as
- described in [RFC2544]. Any deviation from the [RFC2544] procedure
- must be documented explicitly in the Test Report, and such variations
- remain outside the scope of the original [RFC2544] benchmarks.
-
- A specific single-goal MLRsearch benchmark can be configured to be
- compliant with [RFC2544] Throughput, and most procedures reporting
- [RFC2544] Throughput can also be adapted to satisfy the MLRsearch
- requirements for a specific Search Goal.
-
-4.1.2. Applicability of Other Specifications
-
- Methodology extensions from other BMWG documents that specify details
- for testing particular DUTs, configurations, or protocols (e.g., by
- defining a particular Traffic Profile) are considered orthogonal to
- MLRsearch and are applicable to a benchmark conducted using MLRsearch
- methodology.
-
-4.1.3. Out of Scope
-
- The following aspects are explicitly out of the normative scope of
- this document:
-
- * This specification does not mandate or recommend any single,
- universal Search Goal configuration for all use cases. The
- selection of Search Goal parameters is left to the operator of the
- test procedure or may be defined by future specifications.
-
- * The internal heuristics or algorithms used by the Controller to
- select Trial Input values (e.g., the load selection strategy) are
- considered implementation details.
-
- * The potential for, and the effects of, interference between
- different Search Goal instances within a multiple-goal search are
- considered outside the normative scope of this specification.
-
-4.2. Architecture Overview
-
- Although the normative text references only terminology that has
- already been introduced, explanatory passages beside it sometimes
- benefit from terms that are defined later in the document. To keep
- the initial read-through clear, this informative section offers a
- concise, top-down sketch of the complete MLRsearch architecture.
-
- The architecture is modelled as a set of abstract, interacting
- components. Information exchange between components is expressed in
- an imperative-programming style: one component "calls" another,
- supplying inputs (arguments) and receiving outputs (return values).
-
- This notation is purely conceptual; actual implementations need not
- exchange explicit messages. When the text contrasts alternative
- behaviours, it refers to the different implementations of the same
- component.
-
- A test procedure is considered compliant with the MLRsearch
- Specification if it can be conceptually decomposed into the abstract
- components defined herein, and each component satisfies the
- requirements defined for its corresponding MLRsearch element.
-
- The Measurer component performs Trials, the Controller component
- selects Trial Durations and Loads, and the Manager component
- pre-configures the involved entities and produces the Test Report.
- The Test Report explicitly states Search Goals (as Controller Input)
- and corresponding Goal Results (Controller Output).
-
- This constitutes one benchmark (single-goal or multi-goal). Repeated
- or slightly differing benchmarks are realized by calling the
- Controller once for each benchmark.
-
- The Manager calls the Controller once, and the Controller then
- invokes the Measurer repeatedly until the Controller decides it has
- enough information to return its outputs.
-
- The part during which the Controller invokes the Measurer is termed
- the Search. Any work the Manager performs either before invoking the
- Controller or after the Controller returns falls outside the scope
- of the Search.
-
- MLRsearch Specification prescribes Regular Search Results and
- recommends corresponding search completion conditions.
-
- Irregular Search Results are also allowed; they have different
- requirements, and their corresponding stopping conditions are out of
- scope.
-
- Search Results are based on Load Classification. Once measured
- sufficiently, a chosen Load either achieves or fails each Search
- Goal (separately), thus becoming a Lower Bound or an Upper Bound for
- that Search Goal.
-
- When the Relevant Lower Bound is close enough to the Relevant Upper
- Bound according to Goal Width, the Regular Goal Result is found.
- The Search stops when all Regular Goal Results are found, or when
- some Search Goals are proven to have only Irregular Goal Results.
-
-4.2.1. Test Report
-
- A primary responsibility of the Manager is to produce a Test Report,
- which serves as the final and formal output of the test procedure.
-
- This document does not provide a single, complete, normative
- definition of the structure of the Test Report. For example, a Test
- Report may contain results for a single benchmark, or it may
- aggregate the results of many benchmarks.
-
- Instead, normative requirements for the content of the Test Report
- are specified throughout this document in conjunction with the
- definitions of the quantities and procedures to which they apply.
- Readers should note that any clause requiring a value to be
- "reported" or "stated in the test report" constitutes a normative
- requirement on the content of this final artifact.
-
- Even where not stated explicitly, the "Reporting format" paragraphs
- in [RFC2544] sections are still requirements on the Test Report if
- they apply to an MLRsearch benchmark.
-
-4.2.2. Behavior Correctness
-
- MLRsearch Specification by itself does not guarantee that the Search
- ends in finite time, as the freedom the Controller has for Load
- selection also allows for clearly deficient choices.
-
- For deeper insights on these matters, refer to [FDio-CSIT-MLRsearch].
-
- The primary MLRsearch implementation, used as the prototype for this
- specification, is [PyPI-MLRsearch].
-
-4.3. Quantities
-
- MLRsearch Specification uses a number of specific quantities, some
- of which can be expressed in several different units.
-
- In general, MLRsearch Specification does not require particular units
- to be used, but it is REQUIRED for the test report to state all the
- units. For example, ratio quantities can be dimensionless numbers
- between zero and one, but may be expressed as percentages instead.
-
- For convenience, a group of quantities can be treated as a composite
- quantity. One constituent of a composite quantity is called an
- attribute. A group of attribute values is called an instance of that
- composite quantity.
-
- Some attributes may depend on others and can be calculated from other
- attributes. Such quantities are called derived quantities.
-
-4.3.1. Current and Final Values
-
- Some quantities are defined in a way that makes it possible to
- compute their values in the middle of a Search. Other quantities are
- specified so that their values can be computed only after a Search
- ends. Some quantities are important only after a Search has ended,
- but their values are also computable before the Search ends.
-
- For a quantity that is computable before a Search ends, the
- adjective *current* is used to mark a value of that quantity
- available before the Search ends. When such a value is relevant for
- the search result, the adjective *final* is used to denote the value
- of that quantity at the end of the Search.
-
- If a time evolution of such a dynamic quantity is guided by
- configuration quantities, those adjectives can be used to distinguish
- quantities. For example, if the current value of "duration" (dynamic
- quantity) increases from "initial duration" to "final duration"
- (configuration quantities), all the quoted names denote separate but
- related quantities. As the naming suggests, the final value of
- "duration" is expected to be equal to "final duration" value.
-
-4.4. Existing Terms
-
- This specification relies on the following three documents that
- should be consulted before attempting to make use of this document:
-
- * "Benchmarking Terminology for Network Interconnect Devices"
- [RFC1242] contains basic term definitions.
-
- * "Benchmarking Terminology for LAN Switching Devices" [RFC2285]
- adds more terms and discussions, describing some known network
- benchmarking situations in a more precise way.
-
- * "Benchmarking Methodology for Network Interconnect Devices"
- [RFC2544] contains discussions about terms and additional
- methodology requirements.
-
- Definitions of some central terms from the above documents are
- copied and discussed in the following subsections.
-
-4.4.1. SUT
-
- Defined in Section 3.1.2 of [RFC2285] as follows.
-
- Definition:
-
- The collective set of network devices to which stimulus is offered
- as a single entity and response measured.
-
- Discussion:
-
- An SUT consisting of a single network device is allowed by this
- definition.
-
- In software-based networking, an SUT may comprise a multitude of
- networking applications and the entire host hardware and software
- execution environment.
-
- The SUT is the only entity that can be benchmarked directly, even
- though only the performance of some sub-components is of interest.
-
-4.4.2. DUT
-
- Defined in Section 3.1.1 of [RFC2285] as follows.
-
- Definition:
-
- The network forwarding device to which stimulus is offered and
- response measured.
-
- Discussion:
-
- Contrary to the SUT, the DUT stimulus and response are frequently
- initiated and observed only indirectly, on different parts of the
- SUT.
-
- The DUT, as a sub-component of the SUT, is only indirectly mentioned
- in the MLRsearch Specification, but is of key relevance for its
- motivation. The device can represent software-based networking
- functions running on commodity x86/ARM CPUs (vs purpose-built ASIC /
- NPU / FPGA).
-
- A well-designed SUT should have the primary DUT as its performance
- bottleneck. The ways to achieve that are outside of the MLRsearch
- Specification scope.
-
-4.4.3. Trial
-
- A trial is the part of the test described in Section 23 of [RFC2544].
-
- Definition:
-
- A particular test consists of multiple trials. Each trial returns
- one piece of information, for example the loss rate at a
- particular input frame rate. Each trial consists of a number of
- phases:
-
- a) If the DUT is a router, send the routing update to the "input"
- port and pause two seconds to be sure that the routing has
- settled.
-
- b) Send the "learning frames" to the "output" port and wait 2
- seconds to be sure that the learning has settled. Bridge learning
- frames are frames with source addresses that are the same as the
- destination addresses used by the test frames. Learning frames
- for other protocols are used to prime the address resolution
- tables in the DUT. The formats of the learning frame that should
- be used are shown in the Test Frame Formats document.
-
- c) Run the test trial.
-
- d) Wait for two seconds for any residual frames to be received.
-
- e) Wait for at least five seconds for the DUT to restabilize.
-
- Discussion:
-
- The traffic is sent only in phase c) and received in phases c) and
- d).
-
- Trials are the only stimuli the SUT is expected to experience
- during the Search.
-
- In some discussion paragraphs, it is useful to consider the
- traffic as sent and received by a tester, as implicitly defined in
- Section 6 of [RFC2544].
-
- The definition describes some traits, not using capitalized verbs
- to signify strength of the requirements. For the purposes of the
- MLRsearch Specification, the test procedure MAY deviate from the
- [RFC2544] description, but any such deviation MUST be described
- explicitly in the Test Report. It is still RECOMMENDED to not
- deviate from the description, as any deviation weakens
- comparability.
-
- An example of deviation from [RFC2544] is using shorter wait
- times, compared to those described in phases a), b), d) and e).
-
- The [RFC2544] document itself seems to be treating phase b) as any
- type of configuration that cannot be configured only once (by
- Manager, before Search starts), as some crucial SUT state could
- time-out during the Search. It is RECOMMENDED to interpret the
- "learning frames" to be any such time-sensitive per-trial
- configuration method, with bridge MAC learning being only one
- possible example. Appendix C.2.4.1 of [RFC2544] lists another
- example: ARP with a wait time of 5 seconds.
-
- Some methodologies describe recurring tests. If those are based
- on Trials, they are treated as multiple independent Trials.
-
-4.5. Trial Terms
-
- This section defines new terms and redefines existing terms for
- quantities relevant as inputs or outputs of a Trial, as used by the
- Measurer component. This also includes any derived quantities
- related to the results of one Trial.
-
-4.5.1. Trial Duration
-
- Definition:
-
- Trial Duration is the intended duration of the phase c) of a
- Trial.
-
- Discussion:
-
- The value MUST be positive.
-
- While any positive real value may be provided, some Measurer
- implementations MAY limit possible values, e.g., by rounding down
- to the nearest integer in seconds. In that case, it is RECOMMENDED
- to communicate such limits to the Controller so that the Controller
- only selects accepted values.
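-
- A minimal sketch of such a limitation, assuming a hypothetical
- Measurer that rounds requested durations down to whole seconds (the
- function name, the rounding rule, and the one-second minimum are all
- assumptions for illustration, not requirements of this
- specification):

```python
import math

def accepted_trial_duration(requested: float) -> float:
    """Hypothetical Measurer limitation: round the requested Trial
    Duration down to a whole number of seconds (minimum one second).

    A Controller aware of this limit would request only values that
    survive the rounding unchanged.
    """
    if requested <= 0.0:
        raise ValueError("Trial Duration MUST be positive")
    return float(max(1, math.floor(requested)))

print(accepted_trial_duration(30.7))  # 30.0
```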
-
-4.5.2. Trial Load
-
- Definition:
-
- Trial Load is the per-interface Intended Load for a Trial.
-
- Discussion:
-
- Trial Load is equivalent to the quantities defined as constant
- load (Section 3.4 of [RFC1242]), data rate (Section 14 of
- [RFC2544]), and Intended Load (Section 3.5.1 of [RFC2285]), in the
- sense that all three definitions specify that this value applies
- to one (input or output) interface.
-
- For specification purposes, it is assumed that this is a constant
- load by default, as specified in Section 3.4 of [RFC1242].
- Informally, Trial Load is a single number that can "scale" any
- traffic pattern as long as the intuition of load intended against
- a single interface can be applied.
-
- It MAY be possible to use a Trial Load value to describe a non-
- constant traffic (using average load when the traffic consists of
- repeated bursts of frames, e.g., as suggested in Section 21 of
- [RFC2544]). In the case of a non-constant load, the Test Report
- MUST explicitly mention how exactly non-constant the traffic is
- and how it reacts to the Trial Load value. But the rest of the
- MLRsearch Specification assumes that is not the case, to avoid
- discussing corner cases (e.g., which values are possible within
- medium limitations).
-
- Similarly, traffic patterns where different interfaces are subject
- to different loads MAY be described by a single Trial Load value
- (e.g., using the largest load among interfaces), but again the Test
- Report MUST explicitly describe how the traffic pattern reacts to
- the Trial Load value, and this specification does not discuss all
- the implications of that approach.
-
- In the common case of bidirectional traffic, as described in
- Section 14 (Bidirectional Traffic) of [RFC2544], Trial Load is the
- data rate per direction, half of the aggregate data rate.
-
- Traffic patterns where a single Trial Load does not describe their
- scaling cannot be used for MLRsearch benchmarks.
-
- Similarly to Trial Duration, some Measurers MAY limit the possible
- values of Trial Load. Contrary to Trial Duration, documenting
- such behavior in the test report is OPTIONAL. This is because the
- load differences are negligible (and frequently undocumented) in
- practice.
-
- The Controller MAY select Trial Load and Trial Duration values in
- a way that would not be possible to achieve using any integer
- number of data frames.
-
- If a particular Trial Load value is not tied to a single Trial,
- e.g., if there are no Trials yet or if there are multiple Trials,
- this document uses a shorthand *Load*.
-
- The test report MAY present the aggregate load across multiple
- interfaces, treating it as the same quantity expressed using
- different units. Each reported Trial Load value MUST state
- unambiguously whether it refers to (i) a single interface, (ii) a
- specified subset of interfaces (such as all logical interfaces
- mapped to one physical port), or (iii) the total across every
- interface. For any aggregate load value, the report MUST also
- give the fixed conversion factor that links the per-interface and
- multi-interface load values.
-
- The per-interface value remains the primary unit, consistent with
- prevailing practice in [RFC1242], [RFC2544], and [RFC2285].
-
- The last paragraph also applies to other terms related to Load.
-
- For example, tests with symmetric bidirectional traffic can report
- load-related values as "bidirectional load" (double of
- "unidirectional load").
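-
- For symmetric bidirectional traffic, the conversion between the
- primary per-interface unit and the aggregate unit is a fixed factor
- of two. A sketch (the function name is illustrative only):

```python
def bidirectional_load(unidirectional_load: float) -> float:
    """Aggregate "bidirectional load" is double of the primary
    per-interface "unidirectional load".

    The fixed conversion factor (here 2) linking per-interface and
    multi-interface load values MUST be stated in the Test Report.
    """
    return 2.0 * unidirectional_load

# 5.0 Mfps per direction is reported as 10.0 Mfps bidirectional.
print(bidirectional_load(5.0e6))  # 10000000.0
```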
-
-4.5.3. Trial Input
-
- Definition:
-
- Trial Input is a composite quantity, consisting of two attributes:
- Trial Duration and Trial Load.
-
- Discussion:
-
- When talking about multiple Trials, it is common to say "Trial
- Inputs" to denote all corresponding Trial Input instances.
-
- A Trial Input instance acts as the input for one call of the
- Measurer component.
-
- Contrary to other composite quantities, MLRsearch implementations
- MUST NOT add optional attributes into Trial Input. This improves
- interoperability between various implementations of a Controller
- and a Measurer.
-
- Note that both attributes are *intended* quantities, as only those
- can be fully controlled by the Controller. The actual offered
- quantities, as realized by the Measurer, can be different (and must
- be different if the intended ones do not multiply into an integer
- number of frames), but questions around those offered quantities
- are generally outside of the scope of this document.
-
-4.5.4. Traffic Profile
-
- Definition:
-
- Traffic Profile is a composite quantity containing all attributes
- other than Trial Load and Trial Duration, that are needed for
- unique determination of the Trial to be performed.
-
- Discussion:
-
- All the attributes are assumed to be constant during the Search,
- and the composite is configured on the Measurer by the Manager
- before the Search starts. This is why the traffic profile is not
- part of the Trial Input.
-
- Specification of traffic properties included in the Traffic
- Profile is the responsibility of the Manager, but the specific
- configuration mechanisms are outside of the scope of this
- document.
-
- Informally, implementations of the Manager and the Measurer must
- be aware of their common set of capabilities, so that Traffic
- Profile instance uniquely defines the traffic during the Search.
- Typically, Manager and Measurer implementations are tightly
- integrated.
-
- Integration efforts between independent Manager and Measurer
- implementations are outside of the scope of this document. An
- example standardization effort is [Vassilev].
-
- Examples of traffic properties include:
-
- * Data link frame size: fixed sizes as listed in Section 3.5 of
- [RFC1242] and in Section 9 of [RFC2544], or IMIX mixed sizes as
- defined in [RFC6985].
-
- * Frame formats and protocol addresses: Sections 8 and 12 and
- Appendix C of [RFC2544].
-
- * Symmetric bidirectional traffic: Section 14 of [RFC2544].
-
- Other traffic properties that need to be somehow specified in
- Traffic Profile, and MUST be mentioned in Test Report if they
- apply to the benchmark, include:
-
-
- * bidirectional traffic from Section 14 of [RFC2544],
-
- * fully meshed traffic from Section 3.3.3 of [RFC2285],
-
- * modifiers from Section 11 of [RFC2544],
-
- * IP version mixing from Section 5.3 of [RFC8219].
-
-4.5.5. Trial Forwarding Ratio
-
- Definition:
-
- The Trial Forwarding Ratio is a dimensionless floating point
- value. It MUST range between 0.0 and 1.0, both inclusive. It is
- calculated by dividing the number of frames successfully forwarded
- by the SUT by the total number of frames expected to be forwarded
- during the trial.
-
- Discussion:
-
- For most Traffic Profiles, "expected to be forwarded" means
- "intended to get received by SUT from tester". This SHOULD be the
- default interpretation. If this is not the case, the test report
- MUST describe the Traffic Profile in detail sufficient to imply how
- the Trial Forwarding Ratio should be calculated.
-
- Trial Forwarding Ratio MAY be expressed in other units (e.g., as a
- percentage) in the test report.
-
- Note that, contrary to Load terms, frame counts used to compute
- Trial Forwarding Ratio are generally aggregates over all SUT
- output interfaces, as most test procedures verify all outgoing
- frames. The procedure for [RFC2544] Throughput counts received
- frames, so it implicitly uses bidirectional counts for
- bidirectional traffic, even though the final value is a "rate" that
- is still per-interface.
-
- For example, in a test with symmetric bidirectional traffic, if
- one direction is forwarded without losses, but the opposite
- direction does not forward at all, the Trial Forwarding Ratio
- would be 0.5 (50%).
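-
- The bidirectional example above can be reproduced with a short
- sketch using aggregate frame counts (function and variable names are
- illustrative only):

```python
def trial_forwarding_ratio(frames_forwarded: int,
                           frames_expected: int) -> float:
    """Forwarded divided by expected, counts aggregated over all
    SUT output interfaces."""
    ratio = frames_forwarded / frames_expected
    if not 0.0 <= ratio <= 1.0:
        raise ValueError("ratio MUST be between 0.0 and 1.0 inclusive")
    return ratio

# Symmetric bidirectional traffic, 1 million frames per direction:
# one direction forwarded without loss, the other not forwarded at
# all, so 1 million frames forwarded out of 2 million expected.
print(trial_forwarding_ratio(1_000_000, 2_000_000))  # 0.5
```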
-
- In future extensions, more general ways to compute Trial
- Forwarding Ratio may be allowed, but the current MLRsearch
- Specification relies on this specific averaged counters approach.
-
-4.5.6. Trial Loss Ratio
-
- Definition:
-
- The Trial Loss Ratio is equal to one minus the Trial Forwarding
- Ratio.
-
- Discussion:
-
- 100% minus the Trial Forwarding Ratio, when expressed as a
- percentage.
-
- This is almost identical to Frame Loss Rate of Section 3.6 of
- [RFC1242]. The only minor differences are that Trial Loss Ratio
- does not need to be expressed as a percentage, and Trial Loss
- Ratio is explicitly based on averaged frame counts when more than
- one data stream is present.
-
-4.5.7. Trial Forwarding Rate
-
- Definition:
-
- The Trial Forwarding Rate is a derived quantity, calculated by
- multiplying the Trial Load by the Trial Forwarding Ratio.
-
- Discussion:
-
- This quantity differs from the Forwarding Rate described in
- Section 3.6.1 of [RFC2285]. Under the RFC 2285 method, each
- output interface is measured separately, so every interface may
- report a distinct rate. The Trial Forwarding Rate, by contrast,
- uses a single set of frame counts and therefore yields one value
- that represents the whole system, while still preserving the
- direct link to the per-interface load.
-
- When the Traffic Profile is symmetric and bidirectional, as
- defined in Section 14 of [RFC2544], the Trial Forwarding Rate is
- numerically equal to the arithmetic average of the individual per-
- interface forwarding rates that would be produced by the RFC 2285
- procedure.
-
- For more complex traffic patterns, such as many-to-one as
- mentioned in Section 3.3.2 Partially Meshed Traffic of [RFC2285],
- the meaning of Trial Forwarding Rate is less straightforward. For
- example, if two input interfaces receive one million frames per
- second each, and a single interface outputs 1.4 million frames per
- second (fps), Trial Load is 1 million fps, Trial Loss Ratio is
- 30%, and Trial Forwarding Rate is 0.7 million fps.
-
- Because this rate is anchored to the Load defined for one
- interface, a test report MAY show it either as the single averaged
- figure just described, or as the sum of the separate per-interface
- forwarding rates. For the example above, the aggregate trial
- forwarding rate is 1.4 million fps.
-
-4.5.8. Trial Effective Duration
-
- Definition:
-
- Trial Effective Duration is a time quantity related to a Trial, by
- default equal to the Trial Duration.
-
- Discussion:
-
- This is an optional feature. If the Measurer does not return any
- Trial Effective Duration value, the Controller MUST use the Trial
- Duration value instead.
-
- Trial Effective Duration may be any positive time quantity chosen
- by the Measurer to be used for time-based decisions in the
- Controller.
-
- The test report MUST explain how the Measurer computes the
- returned Trial Effective Duration values, if they are not always
- equal to the Trial Duration.
-
- This feature can be beneficial for time-critical benchmarks
- designed to manage the overall search duration, rather than solely
- the traffic portion of it. An approach is to measure the duration
- of the whole trial (including all wait times) and use that as the
- Trial Effective Duration.
-
- This is also a way for the Measurer to inform the Controller about
- its surprising behavior, for example, when rounding the Trial
- Duration value.
-
-4.5.9. Trial Output
-
- Definition:
-
- Trial Output is a composite quantity consisting of several
- attributes. Required attributes are: Trial Loss Ratio, Trial
- Effective Duration and Trial Forwarding Rate.
-
- Discussion:
-
- When referring to more than one trial, plural term "Trial Outputs"
- is used to collectively describe multiple Trial Output instances.
-
- Measurer implementations may provide additional optional
- attributes. The Controller implementations SHOULD ignore values
- of any optional attribute they are not familiar with, except when
- passing Trial Output instances to the Manager.
-
- Example of an optional attribute: the aggregate number of frames
- expected to be forwarded during the trial, especially if it is not
- implied (as a rounded-down value) by Trial Load and Trial Duration.
-
- While Section 3.5.2 of [RFC2285] requires the Offered Load value
- to be reported for forwarding rate measurements, it is not
- required in MLRsearch Specification, as search results do not
- depend on it.
-
-4.5.10. Trial Result
-
- Definition:
-
- Trial Result is a composite quantity, consisting of the Trial
- Input and the Trial Output.
-
- Discussion:
-
- When referring to more than one trial, plural term "Trial Results"
- is used to collectively describe multiple Trial Result instances.
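-
- The composite quantities of this section can be illustrated as
- frozen dataclasses. This is an illustrative sketch, not a normative
- data model; attribute names and units are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrialInput:
    """Input of one Measurer call. Implementations MUST NOT add
    optional attributes to this composite quantity."""
    trial_duration: float  # seconds, MUST be positive
    trial_load: float      # per-interface load, e.g. frames per second

@dataclass(frozen=True)
class TrialOutput:
    """Required attributes only; a Measurer MAY add optional ones."""
    trial_loss_ratio: float
    trial_effective_duration: float
    trial_forwarding_rate: float

@dataclass(frozen=True)
class TrialResult:
    """Trial Input together with the corresponding Trial Output."""
    trial_input: TrialInput
    trial_output: TrialOutput

result = TrialResult(
    TrialInput(trial_duration=60.0, trial_load=1.0e6),
    TrialOutput(0.0, 60.0, 1.0e6),
)
print(result.trial_output.trial_loss_ratio)  # 0.0
```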
-
-4.6. Goal Terms
-
- This section defines new terms for quantities relevant (directly or
- indirectly) for inputs and outputs of the Controller component.
-
- Several goal attributes are defined before introducing the main
- composite quantity: the Search Goal.
-
- Contrary to other sections, definitions in subsections of this
- section are necessarily vague, as their fundamental meaning is to act
- as coefficients in formulas for Controller Output, which are not
- defined yet.
-
- The discussions in this section relate the attributes to concepts
- mentioned in Section Overview of RFC 2544 Problems (Section 2), but
- even these discussion paragraphs are short, informal, and mostly
- referencing later sections, where the impact on search results is
- discussed after introducing the complete set of auxiliary terms.
-
-4.6.1. Goal Final Trial Duration
-
- Definition:
-
- Minimal value for Trial Duration that must be reached. The value
- MUST be positive.
-
- Discussion:
-
- Certain trials must reach this minimum duration before a load can
- be classified as a lower bound.
-
- The Controller may choose shorter durations; the results of those
- may be enough for classification as an Upper Bound.
-
- It is RECOMMENDED for all search goals to share the same Goal
- Final Trial Duration value. Otherwise, Trial Duration values
- larger than the Goal Final Trial Duration may occur, weakening the
- assumptions the Load Classification Logic (Section 6.1) is based
- on.
-
-4.6.2. Goal Duration Sum
-
- Definition:
-
- A threshold value for a particular sum of Trial Effective Duration
- values. The value MUST be positive.
-
- Discussion:
-
- Informally, this prescribes the sufficient number of trials
- performed at a specific Trial Load and Goal Final Trial Duration
- during the search.
-
- If the Goal Duration Sum is larger than the Goal Final Trial
- Duration, multiple trials may need to be performed at the same
- load.
-
- Refer to Section MLRsearch Compliant with TST009 (Section 4.10.3)
- for an example where the possibility of multiple trials at the
- same load is intended.
-
- A Goal Duration Sum value shorter than the Goal Final Trial
- Duration (of the same goal) could save some search time, but is
- NOT RECOMMENDED, as the time savings come at the cost of decreased
- repeatability.
-
- In practice, the Search can spend less than Goal Duration Sum
- measuring a Load value when the results are particularly one-
- sided, but also, the Search can spend more than Goal Duration Sum
- measuring a Load when the results are balanced and include trials
- shorter than Goal Final Trial Duration.
-
-4.6.3. Goal Loss Ratio
-
- Definition:
-
- A threshold value for Trial Loss Ratio values. The value MUST be
- non-negative and smaller than one.
-
- Discussion:
-
- A trial with a Trial Loss Ratio larger than this value signals that
- the SUT may be unable to process this Trial Load well enough.
-
- See Throughput with Non-Zero Loss (Section 2.4) for reasons why
- users may want to set this value above zero.
-
- Since multiple trials may be needed for one Load value, the Load
- Classification may be more complicated than mere comparison of
- Trial Loss Ratio to Goal Loss Ratio.
-
-4.6.4. Goal Exceed Ratio
-
- Definition:
-
- A threshold value for a particular ratio of sums of Trial
- Effective Duration values. The value MUST be non-negative and
- smaller than one.
-
- Discussion:
-
- Informally, up to this proportion of Trial Results with Trial Loss
- Ratio above Goal Loss Ratio is tolerated at a Lower Bound. This
- description is exact only if every Trial was measured at the Goal
- Final Trial Duration; the actual logic is more complicated, as
- shorter Trials are also allowed.
-
- For explainability reasons, the RECOMMENDED value for exceed ratio
- is 0.5 (50%), as in practice that value leads to the smallest
- variation in overall Search Duration.
-
- Refer to Section Exceed Ratio and Multiple Trials (Section 5.4)
- for more details.
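
The simplified, full-length-trials-only reading of the Goal Exceed Ratio can be sketched as follows. The helper is an illustrative assumption of this sketch; the normative Load Classification logic is the Appendix A algorithm:

```python
def high_loss_within_exceed_ratio(trials, goal_loss_ratio,
                                  goal_exceed_ratio):
    """trials: list of (trial_loss_ratio, trial_effective_duration)."""
    total_duration = sum(duration for _, duration in trials)
    # Duration spent in trials whose Trial Loss Ratio exceeds the goal.
    high_loss_duration = sum(
        duration for loss_ratio, duration in trials
        if loss_ratio > goal_loss_ratio)
    return high_loss_duration <= goal_exceed_ratio * total_duration

# With a 0.5 exceed ratio, one lossy and one loss-free 60 s trial
# still uphold a zero Goal Loss Ratio at this load:
print(high_loss_within_exceed_ratio([(0.01, 60.0), (0.0, 60.0)], 0.0, 0.5))
```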
-
-4.6.5. Goal Width
-
- Definition:
-
- A threshold value for deciding whether two Trial Load values are
- close enough. This is an OPTIONAL attribute. If present, the
- value MUST be positive.
-
- Discussion:
-
- Informally, this acts as a stopping condition, controlling the
- precision of the search result. The search stops if every goal
- has reached its precision.
-
- Implementations without this attribute MUST provide the Controller
- with other means to control the search stopping conditions.
-
- Absolute load difference and relative load difference are two
- popular choices, but implementations may choose a different way to
- specify width.
-
- The test report MUST make it clear what specific quantity is used
- as Goal Width.
-
- It is RECOMMENDED to express Goal Width as a relative difference
- and to set it to a value not lower than the Goal Loss Ratio.
-
- Refer to Section Generalized Throughput (Section 5.6) for more
- elaboration on the reasoning.
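
When Goal Width is expressed as a relative difference (one of the popular choices mentioned above), the stopping check may look like this non-normative sketch:

```python
def precision_reached(relevant_lower_bound, relevant_upper_bound,
                      goal_width):
    # Relative difference of the two bounds, here measured against
    # the upper bound (an illustrative convention).
    relative_width = (relevant_upper_bound - relevant_lower_bound) \
        / relevant_upper_bound
    return relative_width <= goal_width

# Bounds of 9.9 and 10.0 Mfps differ by 1%, meeting a 1% Goal Width:
print(precision_reached(9.9e6, 10.0e6, 0.01))  # -> True
```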
-
-4.6.6. Goal Initial Trial Duration
-
- Definition:
-
- Minimal value for Trial Duration suggested for use with this goal.
- If present, this value MUST be positive.
-
- Discussion:
-
- This is an example of an optional Search Goal attribute.
-
- A typical default value is equal to the Goal Final Trial Duration
- value.
-
- Informally, this is the shortest Trial Duration the Controller
- should select when focusing on the goal.
-
- Note that shorter Trial Duration values can still be used, for
- example, selected while focusing on a different Search Goal. Such
- results MUST be still accepted by the Load Classification logic.
-
- Goal Initial Trial Duration is a mechanism for a user to
- discourage trials with Trial Duration values deemed as too
- unreliable for a particular SUT and a given Search Goal.
-
-
-4.6.7. Search Goal
-
- Definition:
-
- The Search Goal is a composite quantity consisting of several
- attributes, some of which are required.
-
- Required attributes: Goal Final Trial Duration, Goal Duration Sum,
- Goal Loss Ratio and Goal Exceed Ratio.
-
- Optional attributes: Goal Initial Trial Duration and Goal Width.
-
- Discussion:
-
- Implementations MAY add their own attributes. Those additional
- attributes may be required by an implementation even if they are
- not required by MLRsearch Specification. However, it is
- RECOMMENDED for those implementations to support missing
- attributes by providing typical default values.
-
- For example, implementations with Goal Initial Trial Durations may
- also require users to specify "how quickly" Trial Durations should
- increase.
-
- Refer to Section 4.10 for important Search Goal settings.
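
The attribute structure above can be modeled, for example, as the following Python dataclass. Field names, types, and the choice of seconds as units are illustrative assumptions, not normative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SearchGoal:
    # Required attributes.
    final_trial_duration: float   # seconds, must be positive
    duration_sum: float           # seconds, must be positive
    loss_ratio: float             # must satisfy 0 <= value < 1
    exceed_ratio: float           # must satisfy 0 <= value < 1
    # Optional attributes; defaults are left to implementations.
    initial_trial_duration: Optional[float] = None
    width: Optional[float] = None

    def __post_init__(self):
        if self.initial_trial_duration is None:
            # A typical default: start directly at the final duration.
            self.initial_trial_duration = self.final_trial_duration

goal = SearchGoal(60.0, 120.0, 0.005, 0.5, width=0.005)
```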
-
-4.6.8. Controller Input
-
- Definition:
-
- Controller Input is a composite quantity required as an input for
- the Controller. The only REQUIRED attribute is a list of Search
- Goal instances.
-
- Discussion:
-
- MLRsearch implementations MAY use additional attributes. Those
- additional attributes may be required by an implementation even if
- they are not required by MLRsearch Specification.
-
- Formally, the Manager does not apply any Controller configuration
- apart from one Controller Input instance.
-
- For example, Traffic Profile is configured on the Measurer by the
- Manager, without explicit assistance of the Controller.
-
- The order of Search Goal instances in a list SHOULD NOT have a big
- impact on Controller Output, but MLRsearch implementations MAY
- base their behavior on the order of Search Goal instances in a
- list.
-
-4.6.8.1. Max Load
-
- Definition:
-
- Max Load is an optional attribute of Controller Input. It is the
- maximal value the Controller is allowed to use for Trial Load
- values.
-
- Discussion:
-
- Max Load is an example of an optional attribute (outside the list
- of Search Goals) required by some implementations of MLRsearch.
-
- If the Max Load value is provided, Controller MUST NOT select
- Trial Load values larger than that value.
-
- In theory, each search goal could have its own Max Load value, but
- as all Trial Results are possibly affecting all Search Goals, it
- makes more sense for a single Max Load value to apply to all
- Search Goal instances.
-
- While Max Load is a frequently used configuration parameter,
- already governed (as maximum frame rate) by [RFC2544] (Section 20)
- and (as maximum offered load) by [RFC2285] (Section 3.5.3), some
- implementations may detect or discover it (instead of requiring a
- user-supplied value).
-
- In MLRsearch Specification, one reason for listing the Relevant
- Upper Bound (Section 4.8.1) as a required attribute is that it
- makes the search result independent of Max Load value.
-
- Given that Max Load is a quantity based on Load, Test Report MAY
- express this quantity using multi-interface values, as sum of per-
- interface maximal loads.
-
-4.6.8.2. Min Load
-
- Definition:
-
- Min Load is an optional attribute of Controller Input. It is the
- minimal value the Controller is allowed to use for Trial Load
- values.
-
- Discussion:
-
- Min Load is another example of an optional attribute required by
- some implementations of MLRsearch. Similarly to Max Load, it
- makes more sense to prescribe one common value, as opposed to
- using a different value for each Search Goal.
-
- If the Min Load value is provided, Controller MUST NOT select
- Trial Load values smaller than that value.
-
- Min Load is mainly useful for saving time by failing early,
- arriving at an Irregular Goal Result when Min Load gets classified
- as an Upper Bound.
-
- For implementations, it is RECOMMENDED to require Min Load to be
- non-zero and large enough to result in at least one frame being
- forwarded even at shortest allowed Trial Duration, so that Trial
- Loss Ratio is always well-defined, and the implementation can
- apply relative Goal Width safely.
-
- Given that Min Load is a quantity based on Load, Test Report MAY
- express this quantity using multi-interface values, as sum of per-
- interface minimal loads.
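
Under the recommendation above, a lower limit for Min Load can be derived from the shortest allowed Trial Duration. This is an illustrative computation, not a normative formula:

```python
def smallest_safe_min_load(shortest_trial_duration):
    # At least one frame must be offered within the shortest trial,
    # so that Trial Loss Ratio is always well-defined.
    return 1.0 / shortest_trial_duration  # frames per second

# With 1 s short trials, any Min Load of at least 1 fps qualifies:
print(smallest_safe_min_load(1.0))  # -> 1.0
```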
-
-4.7. Auxiliary Terms
-
- While the terms defined in this section are not strictly needed when
- formulating MLRsearch requirements, they simplify the language used
- in discussion paragraphs and explanation sections.
-
-4.7.1. Trial Classification
-
- When one Trial Result instance is compared to one Search Goal
- instance, several relations can be named using short adjectives.
-
- As trial results do not affect each other, this *Trial
- Classification* does not change during a Search.
-
-4.7.1.1. High-Loss Trial
-
- A trial with Trial Loss Ratio larger than a Goal Loss Ratio value is
- called a *high-loss trial*, with respect to given Search Goal (or
- lossy trial, if Goal Loss Ratio is zero).
-
-4.7.1.2. Low-Loss Trial
-
- If a trial is not high-loss, it is called a *low-loss trial* (or
- zero-loss trial, if Goal Loss Ratio is zero).
-
-4.7.1.3. Short Trial
-
- A trial with Trial Duration shorter than the Goal Final Trial
- Duration is called a *short trial* (with respect to the given Search
- Goal).
-
-4.7.1.4. Full-Length Trial
-
- A trial that is not short is called a *full-length* trial.
-
- Note that this includes Trial Durations larger than Goal Final Trial
- Duration.
-
-4.7.1.5. Long Trial
-
- A trial with Trial Duration longer than the Goal Final Trial Duration
- is called a *long trial*.
-
-4.7.2. Load Classification
-
- When a set of all Trial Result instances, performed so far at one
- Trial Load, is compared to one Search Goal instance, their relation
- can be named using the concept of a bound.
-
- In general, such bounds are a current quantity, even though it is
- rare in practice for a Load to change its classification more than
- once during the Search.
-
-4.7.2.1. Upper Bound
-
- Definition:
-
- A Load value is called an Upper Bound if and only if it is
- classified as such by the algorithm in Appendix A for the given
- Search Goal at the current moment of the Search.
-
- Discussion:
-
- In more detail, the set of all Trial Result instances performed so
- far at the Trial Load (and any Trial Duration) is certain to fail
- to uphold all the requirements of the given Search Goal, mainly
- the Goal Loss Ratio in combination with the Goal Exceed Ratio. In
- this context, "certain to fail" relates to any possible results
- within the time remaining till Goal Duration Sum.
-
- One search goal can have multiple different Trial Load values
- classified as its Upper Bounds. As the search progresses and more
- trials are measured, any load value can become an Upper Bound in
- principle.
-
- Moreover, a Load can stop being an Upper Bound, but that can only
- happen when more than Goal Duration Sum of trials are measured
- (e.g., because another Search Goal needs more trials at this
- load). Informally, the previous Upper Bound got invalidated. In
- practice, the Load frequently becomes a Lower Bound
- (Section 4.7.2.2) instead.
-
-4.7.2.2. Lower Bound
-
- Definition:
-
- A Load value is called a Lower Bound if and only if it is
- classified as such by the algorithm in Appendix A for the given
- Search Goal at the current moment of the Search.
-
- Discussion:
-
- In more detail, the set of all Trial Result instances performed so
- far at the Trial Load (and any Trial Duration) is certain to
- uphold all the requirements of the given Search Goal, mainly the
- Goal Loss Ratio in combination with the Goal Exceed Ratio. Here
- "certain to uphold" relates to any possible results within the
- time remaining till Goal Duration Sum.
-
- One search goal can have multiple different Trial Load values
- classified as its Lower Bounds. As search progresses and more
- trials are measured, any load value can become a Lower Bound in
- principle.
-
- No load can be both an Upper Bound and a Lower Bound for the same
- Search goal at the same time, but it is possible for a larger load
- to be a Lower Bound while a smaller load is an Upper Bound.
-
- Moreover, a Load can stop being a Lower Bound, but that can only
- happen when more than Goal Duration Sum of trials are measured
- (e.g., because another Search Goal needs more trials at this
- load). Informally, the previous Lower Bound got invalidated. In
- practice, the Load frequently becomes an Upper Bound
- (Section 4.7.2.1) instead.
-
-4.7.2.3. Undecided
-
- Definition:
-
- A Load value is called Undecided if it is currently neither an
- Upper Bound nor a Lower Bound.
-
- Discussion:
-
- A Load value that has not been measured so far is Undecided.
-
- It is possible for a Load to transition from an Upper Bound to
- Undecided by adding Short Trials with Low-Loss results. That is
- yet another reason for users to avoid using Search Goal instances
- with different Goal Final Trial Duration values.
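
The "certain to fail" / "certain to uphold" intuition behind the three classes can be sketched as follows. This sketch assumes full-length trials only and uses illustrative names; the normative logic is the Appendix A algorithm:

```python
def classify_load(trials, goal):
    """trials: (trial_loss_ratio, trial_effective_duration) pairs
    measured so far at one load; goal: dict with illustrative keys."""
    measured = sum(duration for _, duration in trials)
    high_loss = sum(duration for loss_ratio, duration in trials
                    if loss_ratio > goal["loss_ratio"])
    remaining = max(0.0, goal["duration_sum"] - measured)
    tolerated = goal["exceed_ratio"] * max(measured, goal["duration_sum"])
    if high_loss > tolerated:
        return "upper bound"   # certain to fail the goal
    if high_loss + remaining <= tolerated:
        return "lower bound"   # certain to uphold the goal
    return "undecided"

goal = {"loss_ratio": 0.0, "exceed_ratio": 0.5, "duration_sum": 120.0}
print(classify_load([(0.0, 60.0)], goal))                  # -> lower bound
print(classify_load([(0.01, 60.0)], goal))                 # -> undecided
print(classify_load([(0.01, 60.0), (0.01, 60.0)], goal))   # -> upper bound
```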
-
-4.8. Result Terms
-
- Before defining the full structure of a Controller Output, it is
- useful to define the composite quantity called Goal Result. The
- following subsections define its attributes first, before
- describing the Goal Result quantity itself.
-
- There is a correspondence between Search Goals and Goal Results.
- Most of the following subsections refer to a given Search Goal, when
- defining their terms. Conversely, at the end of the search, each
- Search Goal instance has its corresponding Goal Result instance.
-
-4.8.1. Relevant Upper Bound
-
- Definition:
-
- The Relevant Upper Bound is the smallest Trial Load value
- classified as an Upper Bound for a given Search Goal at the end of
- the Search.
-
- Discussion:
-
- If no measured load had enough High-Loss Trials, the Relevant
- Upper Bound MAY be non-existent. For example, when Max Load is
- classified as a Lower Bound.
-
- Conversely, when Relevant Upper Bound does exist, it is not
- affected by Max Load value.
-
- Given that Relevant Upper Bound is a quantity based on Load, Test
- Report MAY express this quantity using multi-interface values, as
- sum of per-interface loads.
-
-4.8.2. Relevant Lower Bound
-
- Definition:
-
- The Relevant Lower Bound is the largest Trial Load value among
- those smaller than the Relevant Upper Bound, that got classified
- as a Lower Bound for a given Search Goal at the end of the search.
-
- Discussion:
-
- If no load had enough Low-Loss Trials, the Relevant Lower Bound
- MAY be non-existent.
-
- Strictly speaking, if the Relevant Upper Bound does not exist, the
- Relevant Lower Bound also does not exist. In a typical case, Max
- Load is classified as a Lower Bound, making it impossible to
- increase the Load to continue the search for an Upper Bound.
- Thus, it is not clear whether a larger value would be found for a
- Relevant Lower Bound if larger Loads were possible.
-
- Given that Relevant Lower Bound is a quantity based on Load, Test
- Report MAY express this quantity using multi-interface values, as
- sum of per-interface loads.
-
-4.8.3. Conditional Throughput
-
- Definition:
-
- Conditional Throughput is a value computed at the Relevant Lower
- Bound according to the algorithm defined in Appendix B.
-
- Discussion:
-
- The Relevant Lower Bound is defined only at the end of the Search,
- and so is the Conditional Throughput. But the algorithm can be
- applied at any time on any Lower Bound load, so the final
- Conditional Throughput value may appear sooner than at the end of
- a Search.
-
- Informally, the Conditional Throughput should be a typical Trial
- Forwarding Rate, expected to be seen at the Relevant Lower Bound
- of a given Search Goal.
-
- But frequently it is only a conservative estimate thereof, as
- MLRsearch implementations tend to stop measuring more Trials as
- soon as they confirm the value cannot get worse than this estimate
- within the Goal Duration Sum.
-
- This value is RECOMMENDED to be used when evaluating repeatability
- and comparability of different MLRsearch implementations.
-
- Refer to Section Generalized Throughput (Section 5.6) for more
- details.
-
- Given that Conditional Throughput is a quantity based on Load,
- Test Report MAY express this quantity using multi-interface
- values, as sum of per-interface forwarding rates.
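
The duration-weighted character of Conditional Throughput can be approximated by the following sketch, which returns the forwarding rate at the exceed-ratio quantile of trial durations. The normative computation is the Appendix B algorithm; edge cases such as short trials are ignored here:

```python
def conditional_throughput_sketch(trials, goal_exceed_ratio):
    """trials: (trial_forwarding_rate, trial_effective_duration) pairs
    measured at the Relevant Lower Bound."""
    ordered = sorted(trials)  # worst (lowest) forwarding rate first
    total_duration = sum(duration for _, duration in ordered)
    # Duration that is allowed to perform worse than the result.
    tolerated = goal_exceed_ratio * total_duration
    accumulated = 0.0
    for rate, duration in ordered:
        accumulated += duration
        if accumulated > tolerated:
            return rate
    return ordered[-1][0]

# With a 0.5 exceed ratio, the worse of two equal-length trials is
# tolerated, so the better forwarding rate is reported:
print(conditional_throughput_sketch([(9.0e6, 60.0), (10.0e6, 60.0)], 0.5))
```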
-
-4.8.4. Goal Results
-
- MLRsearch Specification is based on a set of requirements for a
- "regular" result. But in practice, it is not always possible for
- such result instance to exist, so also "irregular" results need to be
- supported.
-
-4.8.4.1. Regular Goal Result
-
- Definition:
-
- Regular Goal Result is a composite quantity consisting of several
- attributes. Relevant Upper Bound and Relevant Lower Bound are
- REQUIRED attributes. Conditional Throughput is a RECOMMENDED
- attribute.
-
- Discussion:
-
- Implementations MAY add their own attributes.
-
- Test report MUST display Relevant Lower Bound. Displaying
- Relevant Upper Bound is RECOMMENDED, especially if the
- implementation does not use Goal Width.
-
- In general, stopping conditions for the corresponding Search Goal
- MUST be satisfied to produce a Regular Goal Result. Specifically,
- if an implementation offers Goal Width as a Search Goal attribute,
- the distance between the Relevant Lower Bound and the Relevant
- Upper Bound MUST NOT be larger than the Goal Width.
-
- For stopping conditions refer to Sections Goal Width
- (Section 4.6.5) and Stopping Conditions and Precision
- (Section 5.2).
-
-4.8.4.2. Irregular Goal Result
-
- Definition:
-
- Irregular Goal Result is a composite quantity. No attributes are
- required.
-
- Discussion:
-
- It is RECOMMENDED to report any useful quantity even if it does
- not satisfy all the requirements. For example, if Max Load is
- classified as a Lower Bound, it is fine to report it as an
- "effective" Relevant Lower Bound (although not a real one, as that
- requires Relevant Upper Bound which does not exist in this case),
- and compute Conditional Throughput for it. In this case, only the
- missing Relevant Upper Bound signals this result instance is
- irregular.
-
- Similarly, if both relevant bounds exist, it is RECOMMENDED to
- include them as Irregular Goal Result attributes, and let the
- Manager decide if their distance is too far for Test Report
- purposes.
-
- If Test Report displays some Irregular Goal Result attribute
- values, they MUST be clearly marked as coming from irregular
- results.
-
- The implementation MAY define additional attributes, for example
- explicit flags for expected situations, so the Manager logic can
- be simpler.
-
-4.8.4.3. Goal Result
-
- Definition:
-
- Goal Result is a composite quantity. Each instance is either a
- Regular Goal Result or an Irregular Goal Result.
-
- Discussion:
-
- The Manager MUST be able to distinguish whether the instance is
- regular or not.
-
-4.8.5. Search Result
-
- Definition:
-
- The Search Result is a single composite object that maps each
- Search Goal instance to a corresponding Goal Result instance.
-
- Discussion:
-
- As an alternative to mapping, the Search Result may be represented
- as an ordered list of Goal Result instances, appearing in the
- exact sequence of their corresponding Search Goal instances.
-
- When the Search Result is expressed as a mapping, it MUST contain
- an entry for every Search Goal instance supplied in the Controller
- Input.
-
- Identical Goal Result instances MAY be listed for different Search
- Goals, but their status as regular or irregular may be different.
- For example, if two goals differ only in Goal Width value, and the
- relevant bound values are close enough according to only one of
- them.
-
-4.8.6. Controller Output
-
- Definition:
-
- The Controller Output is a composite quantity returned from the
- Controller to the Manager at the end of the search. The Search
- Result instance is its only required attribute.
-
- Discussion:
-
- MLRsearch implementation MAY return additional data in the
- Controller Output, e.g., number of trials performed and the total
- Search Duration.
-
-4.9. Architecture Terms
-
- MLRsearch architecture consists of three main system components: the
- Manager, the Controller, and the Measurer. The components were
- introduced in Architecture Overview (Section 4.2), and the following
- subsections finalize their definitions using terms from previous
- sections.
-
- Note that the architecture also implies the presence of other
- components, such as the SUT and the tester (as a sub-component of the
- Measurer).
-
- Communication protocols and interfaces between components are left
- unspecified. For example, when MLRsearch Specification mentions
- "Controller calls Measurer", it is possible that the Controller
- notifies the Manager to call the Measurer indirectly instead. In
- doing so, the Measurer implementations can be fully independent from
- the Controller implementations, e.g., developed in different
- programming languages.
-
-4.9.1. Measurer
-
- Definition:
-
- The Measurer is a functional element that when called with a Trial
- Input (Section 4.5.3) instance, performs one Trial (Section 4.4.3)
- and returns a Trial Output (Section 4.5.9) instance.
-
- Discussion:
-
- This definition assumes the Measurer is already initialized. In
- practice, there may be additional steps before the Search, e.g.,
- when the Manager configures the traffic profile (either on the
- Measurer or on its tester sub-component directly) and performs a
- warm-up (if the tester or the test procedure requires one).
-
- It is the responsibility of the Measurer implementation to uphold
- any requirements and assumptions present in MLRsearch
- Specification, e.g., Trial Forwarding Ratio not being larger than
- one.
-
- Implementers have some freedom. For example, Section 10 of
- [RFC2544] gives some suggestions (but not requirements) related to
- duplicated or reordered frames. Implementations are RECOMMENDED
- to document their behavior related to such freedoms in as detailed
- a way as possible.
-
- It is RECOMMENDED to benchmark the test equipment first, e.g.,
- connect sender and receiver directly (without any SUT in the
- path), find a load value that guarantees the Offered Load is not
- too far from the Intended Load and use that value as the Max Load
- value. When testing the real SUT, it is RECOMMENDED to turn any
- severe deviation between the Intended Load and the Offered Load
- into increased Trial Loss Ratio.
-
- Neither of the two recommendations are made into mandatory
- requirements, because it is not easy to provide guidance about
- when the difference is severe enough, in a way that would be
- disentangled from other Measurer freedoms.
-
- For a sample situation where the Offered Load cannot keep up with
- the Intended Load, and the consequences on MLRsearch result, refer
- to Section Hard Performance Limit (Section 5.6.1).
-
-4.9.2. Controller
-
- Definition:
-
- The Controller is a functional element that, upon receiving a
- Controller Input instance, repeatedly generates Trial Input
- instances for the Measurer and collects the corresponding Trial
- Output instances. This cycle continues until the stopping
- conditions are met, at which point the Controller produces a final
- Controller Output instance and terminates.
-
- Discussion:
-
- Informally, the Controller has great freedom in selecting Trial
- Inputs, and implementations aim to achieve all the Search Goals in
- the shortest average time.
-
- The Controller's role in optimizing the overall Search Duration
- distinguishes MLRsearch algorithms from simpler search procedures.
-
- Informally, each implementation can have different stopping
- conditions. Goal Width is only one example. In practice,
- implementation details do not matter, as long as Goal Result
- instances are regular.
-
-4.9.3. Manager
-
- Definition:
-
- The Manager is a functional element that is responsible for
- provisioning other components, calling a Controller component
- once, and for creating the test report following the reporting
- format as defined in Section 26 of [RFC2544].
-
- Discussion:
-
- The Manager initializes the SUT, the Measurer (and the tester if
- independent from Measurer) with their intended configurations
- before calling the Controller.
-
- Note that Section 7 of [RFC2544] already puts requirements on SUT
- setups:
-
- "It is expected that all of the tests will be run without changing
- the configuration or setup of the DUT in any way other than that
- required to do the specific test. For example, it is not
- acceptable to change the size of frame handling buffers between
- tests of frame handling rates or to disable all but one transport
- protocol when testing the throughput of that protocol."
-
- It is REQUIRED for the test report to encompass all the SUT
- configuration details, including description of a "default"
- configuration common for most tests and configuration changes if
- required by a specific test.
-
- For example, Section 5.1.1 of [RFC5180] recommends testing jumbo
- frames if SUT can forward them, even though they are outside the
- scope of the 802.3 IEEE standard. In this case, it is acceptable
- for the SUT default configuration to not support jumbo frames, and
- only enable this support when testing jumbo traffic profiles, as
- the handling of jumbo frames typically has different packet buffer
- requirements and potentially higher processing overhead. Non-
- jumbo frame sizes should also be tested on the jumbo-enabled
- setup.
-
- The Manager does not need to be able to tweak any Search Goal
- attributes, but it MUST report all applied attribute values even
- if not tweaked.
-
- A "user" - human or automated - invokes the Manager once to launch
- a single Search and receive its report. Every new invocation is
- treated as a fresh, independent Search; how the system behaves
- across multiple calls (for example, combining or comparing their
- results) is explicitly out of scope for this document.
-
-4.10. Compliance
-
- This section discusses compliance relations between MLRsearch and
- other test procedures.
-
-4.10.1. Test Procedure Compliant with MLRsearch
-
- Any networking measurement setup that could be understood as
- consisting of functional elements satisfying requirements for the
- Measurer, the Controller and the Manager, is compliant with MLRsearch
- Specification.
-
- These components can be seen as abstractions present in any testing
- procedure. For example, there can be a single component acting both
- as the Manager and the Controller, but if values of required
- attributes of Search Goals and Goal Results are visible in the test
- report, the Controller Input instance and Controller Output instance
- are implied.
-
- For example, any setup for conditionally (or unconditionally)
- compliant [RFC2544] throughput testing can be understood as an
- MLRsearch architecture, if there is enough data to reconstruct the
- Relevant Upper Bound.
-
- Refer to Section MLRsearch Compliant with RFC 2544
- (Section 4.10.2) for an equivalent Search Goal.
-
- Any test procedure that can be understood as one call to the Manager
- of MLRsearch architecture is said to be compliant with MLRsearch
- Specification.
-
-4.10.2. MLRsearch Compliant with RFC 2544
-
- The following Search Goal instance makes the corresponding Search
- Result unconditionally compliant with Section 24 of [RFC2544].
-
- * Goal Final Trial Duration = 60 seconds
-
- * Goal Duration Sum = 60 seconds
-
- * Goal Loss Ratio = 0%
-
- * Goal Exceed Ratio = 0%
-
- The Goal Loss Ratio and Goal Exceed Ratio attributes are enough to
- make the Search Goal conditionally compliant. Adding Goal Final
- Trial Duration makes the Search Goal unconditionally compliant.
-
- Goal Duration Sum prevents MLRsearch from repeating zero-loss Full-
- Length Trials.
-
- The presence of other Search Goals does not affect the compliance of
- this Goal Result. The Relevant Lower Bound and the Conditional
- Throughput are in this case equal to each other, and the value is the
- [RFC2544] throughput.
-
- Non-zero exceed ratio is not strictly disallowed, but it could
- needlessly prolong the search when Low-Loss short trials are present.
-
-4.10.3. MLRsearch Compliant with TST009
-
- One of the alternatives to [RFC2544] is Binary search with loss
- verification as described in Section 12.3.3 of [TST009].
-
- The rationale of such a search is to repeat high-loss trials,
- hoping for zero loss on the second try, so the results are closer
- to the noiseless end of the performance spectrum, thus more
- repeatable and comparable.
-
- Only the variant with "z = infinity" is achievable with MLRsearch.
-
- For example, for "max(r) = 2" variant, the following Search Goal
- instance should be used to get compatible Search Result:
-
- * Goal Final Trial Duration = 60 seconds
-
- * Goal Duration Sum = 120 seconds
-
- * Goal Loss Ratio = 0%
-
- * Goal Exceed Ratio = 50%
-
- If the first 60 seconds trial has zero loss, it is enough for
- MLRsearch to stop measuring at that load, as even a second high-loss
- trial would still fit within the exceed ratio.
-
- But if the first trial is high-loss, MLRsearch also needs to
- perform the second trial to classify that load. The Goal Duration
- Sum is twice the Goal Final Trial Duration, so a third full-length
- trial is never needed.
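
The arithmetic behind this two-trial behavior follows directly from the attribute values. This is a non-normative sketch with illustrative attribute names:

```python
tst009_goal = {  # Equivalent of TST009 "max(r) = 2", "z = infinity"
    "final_trial_duration": 60.0,
    "duration_sum": 120.0,
    "loss_ratio": 0.0,
    "exceed_ratio": 0.5,
}
# Duration that may come from high-loss trials at a Lower Bound:
tolerated = tst009_goal["exceed_ratio"] * tst009_goal["duration_sum"]
print(tolerated)  # -> 60.0: exactly one of two 60 s trials may be lossy
```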
-
-5. Methodology Rationale and Design Considerations
-
- This section explains the Why behind MLRsearch. Building on the
- normative specification in Section MLRsearch Specification
- (Section 4), it contrasts MLRsearch with the classic [RFC2544]
- single-ratio binary-search procedure and walks through the key design
- choices: binary-search mechanics, stopping-rule precision, loss-
- inversion for multiple goals, exceed-ratio handling, short-trial
- strategies, and the generalized throughput concept. Together, these
- considerations show how the methodology reduces test time, supports
- multiple loss ratios, and improves repeatability.
-
-5.1. Binary Search
-
- A typical binary search implementation for [RFC2544] tracks only the
- two tightest bounds. To start, the search needs both Max Load and
- Min Load values. Then, one trial is used to confirm Max Load is an
- Upper Bound, and one trial to confirm Min Load is a Lower Bound.
-
- Then, next Trial Load is chosen as the mean of the current tightest
- upper bound and the current tightest lower bound, and becomes a new
- tightest bound depending on the Trial Loss Ratio.
-
- After some number of trials, the tightest lower bound becomes the
- throughput, but [RFC2544] does not specify when, if ever, the search
- should stop. In practice, the search stops either at some distance
- between the tightest upper bound and the tightest lower bound, or
- after some number of Trials.
-
- For a given pair of Max Load and Min Load values, there is a
- one-to-one correspondence between the number of Trials and the
- final distance between the tightest bounds. Thus, the search
- always takes the same time, assuming the initial bounds are
- confirmed.
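
The procedure described above can be condensed into a short sketch. Here `measure` stands for one full-length trial returning its Trial Loss Ratio, both initial bounds are assumed already confirmed, and the SUT model is a deterministic assumption of this example:

```python
def rfc2544_binary_search(measure, min_load, max_load, final_width):
    lower, upper = min_load, max_load  # confirmed tightest bounds
    while upper - lower > final_width:
        mid = (lower + upper) / 2.0
        if measure(mid) > 0.0:
            upper = mid   # any loss: new tightest upper bound
        else:
            lower = mid   # zero loss: new tightest lower bound
    return lower          # reported as the throughput

def model_sut(load):
    # Deterministic model: loss-free up to 7.0 Mfps, lossy above.
    return 0.0 if load <= 7.0e6 else 0.01

print(rfc2544_binary_search(model_sut, 0.0, 10.0e6, 1.0e5))
```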
-
-5.2. Stopping Conditions and Precision
-
- MLRsearch Specification requires listing both Relevant Bounds for
- each Search Goal, and the difference between the bounds implies
- whether the result precision is achieved. Therefore, it is not
- necessary to report the specific stopping condition used.
-
- MLRsearch implementations may use Goal Width to allow direct control
- of result precision and indirect control of the Search Duration.
-
- Other MLRsearch implementations may use different stopping
- conditions: for example based on the Search Duration, trading off
- precision control for duration control.
-
- Due to various possible time optimizations, there is no strict
- correspondence between the Search Duration and Goal Width values. In
- practice, noisy SUT performance increases both average search time
- and its variance.
-
-5.3. Loss Ratios and Loss Inversion
-
- The biggest difference between MLRsearch and [RFC2544] binary search
- is in the goals of the search. [RFC2544] has a single goal, based on
- classifying a single full-length trial as either zero-loss or non-
- zero-loss. MLRsearch supports searching for multiple Search Goals at
- once, usually differing in their Goal Loss Ratio values.
-
-5.3.1. Single Goal and Hard Bounds
-
- Each bound in [RFC2544] simple binary search is "hard", in the sense
- that all further Trial Load values are smaller than any current upper
- bound and larger than any current lower bound.
-
- This is also possible for MLRsearch implementations, when the search
- is started with only one Search Goal instance.
-
-5.3.2. Multiple Goals and Loss Inversion
-
- MLRsearch Specification supports multiple Search Goals, making the
- search procedure more complicated compared to binary search with
- single goal, but most of the complications do not affect the final
- results much. Except for one phenomenon: Loss Inversion.
-
- Depending on Search Goal attributes, Load Classification results may
- be resistant to small amounts of Inconsistent Trial Results
- (Section 2.5). However, for larger amounts, a Load that is
- classified as an Upper Bound for one Search Goal may still be a
- Lower Bound for another Search Goal. Due to this other goal,
- MLRsearch will probably perform subsequent Trials at Trial Loads
- even larger than the original value.
-
- This introduces questions that any multi-goal search algorithm has
- to address. For example: What to do when all such larger-load
- trials happen to have zero loss? Does it mean the earlier upper
- bound was not real? Does it mean the later Low-Loss trials should
- not be considered a lower bound?
-
- The situation where a smaller Load is classified as an Upper Bound,
- while a larger Load is classified as a Lower Bound (for the same
- search goal), is called Loss Inversion.
-
- Conversely, only single-goal search algorithms can have hard bounds
- that shield them from Loss Inversion.
-
-5.3.3. Conservativeness and Relevant Bounds
-
- MLRsearch is conservative when dealing with Loss Inversion: the Upper
- Bound is considered real, and the Lower Bound is considered to be a
- fluke, at least when computing the final result.
-
- This is formalized using definitions of Relevant Upper Bound
- (Section 4.8.1) and Relevant Lower Bound (Section 4.8.2).
-
- The Relevant Upper Bound (for specific goal) is the smallest Load
- classified as an Upper Bound. But the Relevant Lower Bound is not
- simply the largest among Lower Bounds. It is the largest Load among
- Loads that are Lower Bounds while also being smaller than the
- Relevant Upper Bound.
-
- With these definitions, the Relevant Lower Bound is always smaller
- than the Relevant Upper Bound (if both exist), and the two relevant
- bounds are used analogously as the two tightest bounds in the binary
- search. When they meet the stopping conditions, the Relevant Bounds
- are used in the output.
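-
- As a hypothetical illustration (the dictionary encoding of per-Load
- classifications is not part of the specification), the two
- definitions can be expressed as:

```python
def relevant_bounds(classified):
    """Compute the Relevant Bounds from per-Load classifications.

    `classified` maps Load values to "upper" or "lower"; undecided
    Loads are simply omitted (an illustrative encoding).
    """
    uppers = [load for load, cls in classified.items() if cls == "upper"]
    # The Relevant Upper Bound is the smallest Upper Bound.
    relevant_upper = min(uppers) if uppers else None
    # The Relevant Lower Bound must additionally be smaller than the
    # Relevant Upper Bound, resolving Loss Inversion conservatively.
    lowers = [
        load for load, cls in classified.items()
        if cls == "lower" and (relevant_upper is None or load < relevant_upper)
    ]
    relevant_lower = max(lowers) if lowers else None
    return relevant_lower, relevant_upper
```

- Under Loss Inversion, e.g. an Upper Bound at load 1.0 with Lower
- Bounds at 0.5 and 2.0, the larger Lower Bound is treated as a fluke
- and (0.5, 1.0) is returned.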
-
-5.3.4. Consequences
-
- The consequence of the way the Relevant Bounds are defined is that
- every Trial Result can have an impact on any current Relevant Bound
- larger than that Trial Load, namely by becoming a new Upper Bound.
-
- This also applies when that Load is measured before another Load gets
- enough measurements to become a current Relevant Bound.
-
- This also implies that if the SUT tested (or the Traffic Generator
- used) needs a warm-up, it should be warmed up before starting the
- Search; otherwise, the first few measurements could become unjustly
- limiting.
-
- For MLRsearch implementations, it means it is better to measure at
- smaller Loads first, so bounds found earlier are less likely to get
- invalidated later.
-
-5.4. Exceed Ratio and Multiple Trials
-
- The idea of performing multiple Trials at the same Trial Load comes
- from a model where some Trial Results (those with high Trial Loss
- Ratio) are affected by infrequent effects, causing unsatisfactory
- repeatability of [RFC2544] Throughput results.
-
- Refer to Section DUT in SUT
- (Section 2.2) for a discussion about noiseful and noiseless ends of
- the SUT performance spectrum. Stable results are closer to the
- noiseless end of the SUT performance spectrum, so MLRsearch may need
- to allow some frequency of high-loss trials to ignore the rare but
- big effects near the noiseful end.
-
- For MLRsearch to perform such Trial Result filtering, it needs a
- configuration option to tell how frequent the "infrequent" big loss
- can be. This option is called the Goal Exceed Ratio (Section 4.6.4).
- It tells MLRsearch what ratio of trials (more specifically, what
- ratio of Trial Effective Duration seconds) can have a Trial Loss
- Ratio (Section 4.5.6) larger than the Goal Loss Ratio (Section 4.6.3)
- and still be classified as a Lower Bound (Section 4.7.2.2).
-
- Zero exceed ratio means all Trials must have a Trial Loss Ratio equal
- to or lower than the Goal Loss Ratio.
-
- When more than one Trial is intended to classify a Load, MLRsearch
- also needs something that controls the number of trials needed.
- Therefore, each goal also has an attribute called Goal Duration Sum.
-
- The meaning of a Goal Duration Sum (Section 4.6.2) is that when a
- Load has (Full-Length) Trials whose Trial Effective Durations when
- summed up give a value at least as big as the Goal Duration Sum
- value, the Load is guaranteed to be classified either as an Upper
- Bound or a Lower Bound for that Search Goal instance.
-
-5.5. Short Trials and Duration Selection
-
- MLRsearch requires each Search Goal to specify its Goal Final Trial
- Duration.
-
- Section 24 of [RFC2544] already anticipates possible time savings
- when Short Trials are used.
-
- An MLRsearch implementation MAY expose configuration parameters that
- decide whether, when, and how short trial durations are used. The
- exact heuristics and controls are left to the discretion of the
- implementer.
-
- While MLRsearch implementations are free to use any logic to select
- Trial Input values, comparability between MLRsearch implementations
- is only assured when the Load Classification logic handles any
- possible set of Trial Results in the same way.
-
- The presence of Short Trial Results complicates the Load
- Classification logic, see more details in Section Load Classification
- Logic (Section 6.1).
-
- While the Load Classification algorithm is designed to avoid any
- unneeded Trials, for explainability reasons it is recommended that
- users choose Controller Input instances that lead the Controller to
- select the same Trial Duration value for all Trials, e.g., by
- setting any Goal Initial Trial Duration to the single value also
- used in all Goal Final Trial Duration attributes.
-
-5.6. Generalized Throughput
-
- Because testing equipment takes the Intended Load as an input
- parameter for a Trial measurement, any load search algorithm needs to
- deal with Intended Load values internally.
-
- But in the presence of Search Goals with a non-zero Goal Loss Ratio
- (Section 4.6.3), the Load usually does not match the user's intuition
- of what a throughput is. The forwarding rate as defined in
- Section 3.6.1 of [RFC2285] is better, but it is not obvious
- how to generalize it for Loads with multiple Trials and a non-zero
- Goal Loss Ratio.
-
- The clearest illustration - and the chief reason for adopting a
- generalized throughput definition - is the presence of a hard
- performance limit.
-
-5.6.1. Hard Performance Limit
-
- Even if the bandwidth of a medium allows higher traffic forwarding
- performance, the SUT interfaces may have their own additional
- limitations, e.g., a specific frames-per-second limit on the NIC (a
- common occurrence).
-
- Those limitations should be known and provided as Max Load
- (Section 4.6.8.1).
-
- But if Max Load is set larger than what the interface can receive or
- transmit, there will be a "hard limit" behavior observed in Trial
- Results.
-
- Consider that the hard limit is at one hundred million frames per
- second (100 Mfps), Max Load is larger, and the Goal Loss Ratio is
- 0.5%. If the DUT has no additional losses, a 0.5% Trial Loss Ratio
- will be achieved at a Relevant Lower Bound of 100.5025 Mfps.
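-
- The 100.5025 Mfps value follows from solving the loss-ratio equation
- for the Intended Load; a quick check:

```python
# Every frame offered above the hard limit is lost, so at load L:
#   loss_ratio = (L - hard_limit) / L
# Setting loss_ratio equal to the Goal Loss Ratio and solving for L
# gives the Relevant Lower Bound.
hard_limit_mfps = 100.0
goal_loss_ratio = 0.005  # 0.5%
relevant_lower_bound = hard_limit_mfps / (1.0 - goal_loss_ratio)
print(round(relevant_lower_bound, 4))  # 100.5025
```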
-
- Reporting a throughput that exceeds the SUT's verified hard limit is
- counter-intuitive. Accordingly, the [RFC2544] Throughput metric
- should be generalized - rather than relying solely on the Relevant
- Lower Bound - to reflect realistic, limit-aware performance.
-
- MLRsearch defines one such generalization, the Conditional Throughput
- (Section 4.8.3). It is the Trial Forwarding Rate from one of the
- Full-Length Trials performed at the Relevant Lower Bound. The
- algorithm determining exactly which trial is given in Appendix B.
-
- In the hard limit example, the 100.5025 Mfps Load will still have
- only a 100.0 Mfps forwarding rate, nicely confirming the known
- limitation.
-
-5.6.2. Performance Variability
-
- With non-zero Goal Loss Ratio, and without hard performance limits,
- Low-Loss trials at the same Load may achieve different Trial
- Forwarding Rate values just due to DUT performance variability.
-
- By comparing the best case (all Relevant Lower Bound trials have zero
- loss) and the worst case (all Trial Loss Ratios at Relevant Lower
- Bound are equal to the Goal Loss Ratio), one can prove that
- Conditional Throughput values may have up to the Goal Loss Ratio
- relative difference.
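-
- A quick numeric check of this bound, with an assumed Relevant Lower
- Bound of one million frames per second:

```python
relevant_lower_bound = 1.0e6  # fps; an assumed value for illustration
goal_loss_ratio = 0.005
# Best case: every trial at the bound has zero loss.
best_ct = relevant_lower_bound
# Worst case: every trial loses exactly the Goal Loss Ratio.
worst_ct = relevant_lower_bound * (1.0 - goal_loss_ratio)
relative_difference = (best_ct - worst_ct) / best_ct
print(relative_difference)  # Equals the Goal Loss Ratio.
```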
-
- Setting the Goal Width below the Goal Loss Ratio may cause the
- Conditional Throughput for a larger Goal Loss Ratio to become smaller
- than a Conditional Throughput for a goal with a lower Goal Loss
- Ratio, which is counter-intuitive, considering they come from the
- same Search. Therefore, it is RECOMMENDED to set the Goal Width to a
- value no lower than the Goal Loss Ratio of the higher-loss Search
- Goal.
-
- Although Conditional Throughput can fluctuate from one run to the
- next, it still offers a more discriminating basis for comparison than
- the Relevant Lower Bound - particularly when deterministic load
- selection yields the same Lower Bound value across multiple runs.
-
-6. MLRsearch Logic and Example
-
- This section uses informal language to describe two aspects of
- MLRsearch logic: Load Classification and Conditional Throughput,
- reflecting the formal pseudocode representation provided in
- Appendix A and Appendix B. This is followed by an example search.
-
- The logic is equivalent but not identical to the pseudocode in the
- appendices. The pseudocode is designed to be short and frequently
- combines multiple operations into one expression. The logic as
- described in this section lists each operation separately and uses
- more intuitive names for the intermediate values.
-
-6.1. Load Classification Logic
-
- Note: For clarity of explanation, variables are tagged as (I)nput,
- (T)emporary, (O)utput.
-
- * Collect Trial Results:
-
- - Take all Trial Result instances (I) measured at a given load.
-
- * Aggregate Trial Durations:
-
- - Full-length high-loss sum (T) is the sum of Trial Effective
- Duration values of all full-length high-loss trials (I).
-
- - Full-length low-loss sum (T) is the sum of Trial Effective
- Duration values of all full-length low-loss trials (I).
-
- - Short high-loss sum (T) is the sum of Trial Effective Duration
- values of all short high-loss trials (I).
-
- - Short low-loss sum (T) is the sum of Trial Effective Duration
- values of all short low-loss trials (I).
-
- * Derive goal-based ratios:
-
- - Subceed ratio (T) is One minus the Goal Exceed Ratio (I).
-
- - Exceed coefficient (T) is the Goal Exceed Ratio divided by the
- subceed ratio.
-
- * Balance short-trial effects:
-
- - Balancing sum (T) is the short low-loss sum multiplied by the
- exceed coefficient.
-
- - Excess sum (T) is the short high-loss sum minus the balancing
- sum.
-
- - Positive excess sum (T) is the maximum of zero and excess sum.
-
- * Compute effective duration totals:
-
- - Effective high-loss sum (T) is the full-length high-loss sum
- plus the positive excess sum.
-
- - Effective full sum (T) is the effective high-loss sum plus the
- full-length low-loss sum.
-
- - Effective whole sum (T) is the larger of the effective full sum
- and the Goal Duration Sum.
-
- - Missing sum (T) is the effective whole sum minus the effective
- full sum.
-
- * Estimate exceed ratios:
-
- - Pessimistic high-loss sum (T) is the effective high-loss sum
- plus the missing sum.
-
- - Optimistic exceed ratio (T) is the effective high-loss sum
- divided by the effective whole sum.
-
- - Pessimistic exceed ratio (T) is the pessimistic high-loss sum
- divided by the effective whole sum.
-
- * Classify the Load:
-
- - The load is classified as an Upper Bound (O) if the optimistic
- exceed ratio is larger than the Goal Exceed Ratio.
-
- - The load is classified as a Lower Bound (O) if the pessimistic
- exceed ratio is not larger than the Goal Exceed Ratio.
-
- - The load is classified as undecided (O) otherwise.
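-
- The steps above can be condensed into one function. This sketch
- mirrors the pseudocode in Appendix A, taking the four duration sums
- as inputs and assuming a Goal Exceed Ratio below 100% and a non-zero
- effective whole sum:

```python
def classify_load(full_low_s, full_high_s, short_low_s, short_high_s,
                  goal_duration_s, goal_exceed_ratio):
    """Classify a Load as "lower", "upper" or "undecided"."""
    # Derive goal-based ratios.
    subceed_ratio = 1.0 - goal_exceed_ratio
    exceed_coefficient = goal_exceed_ratio / subceed_ratio
    # Balance short-trial effects.
    balancing_s = short_low_s * exceed_coefficient
    positive_excess_s = max(0.0, short_high_s - balancing_s)
    # Compute effective duration totals.
    effective_high_loss_s = full_high_s + positive_excess_s
    effective_full_s = effective_high_loss_s + full_low_s
    effective_whole_s = max(effective_full_s, goal_duration_s)
    missing_s = effective_whole_s - effective_full_s
    # Estimate exceed ratios.
    pessimistic_high_loss_s = effective_high_loss_s + missing_s
    optimistic_exceed = effective_high_loss_s / effective_whole_s
    pessimistic_exceed = pessimistic_high_loss_s / effective_whole_s
    # Classify the Load.
    if optimistic_exceed > goal_exceed_ratio:
        return "upper"
    if pessimistic_exceed <= goal_exceed_ratio:
        return "lower"
    return "undecided"
```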
-
-6.2. Conditional Throughput Logic
-
- * Collect Trial Results
-
- - Take all Trial Result instances (I) measured at a given Load.
-
- * Sum Full-Length Durations:
-
- - Full-length high-loss sum (T) is the sum of Trial Effective
- Duration values of all full-length high-loss trials (I).
-
- - Full-length low-loss sum (T) is the sum of Trial Effective
- Duration values of all full-length low-loss trials (I).
-
- - Full-length sum (T) is the full-length high-loss sum (I) plus
- the full-length low-loss sum (I).
-
- * Derive initial thresholds:
-
- - Subceed ratio (T) is One minus the Goal Exceed Ratio (I).
-
- - Remaining sum (T) initially is the full-length sum multiplied by
- the subceed ratio.
-
- - Current loss ratio (T) initially is 100%.
-
- * Iterate through ordered trials
-
- - For each full-length trial result, sorted in increasing order
- by Trial Loss Ratio:
-
- o If remaining sum is not larger than zero, exit the loop.
-
- o Set current loss ratio to this trial's Trial Loss Ratio (I).
-
- o Decrease the remaining sum by this trial's Trial Effective
- Duration (I).
-
- * Compute Conditional Throughput
-
- - Current forwarding ratio (T) is One minus the current loss
- ratio.
-
- - Conditional Throughput (T) is the current forwarding ratio
- multiplied by the Load value.
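-
- The steps above can likewise be condensed into one function,
- mirroring the pseudocode in Appendix B; here trials are represented
- as (Trial Loss Ratio, Trial Effective Duration) pairs from
- full-length trials:

```python
def conditional_throughput(load, full_length_trials,
                           goal_duration_s, goal_exceed_ratio):
    """Compute the Conditional Throughput for one Load value."""
    full_length_s = sum(duration for _, duration in full_length_trials)
    whole_s = max(goal_duration_s, full_length_s)
    remaining_s = whole_s * (1.0 - goal_exceed_ratio)
    current_loss_ratio = 1.0  # Initially 100%.
    # Iterate trials in increasing order of Trial Loss Ratio.
    for loss_ratio, duration in sorted(full_length_trials):
        if remaining_s <= 0.0:
            break
        current_loss_ratio = loss_ratio
        remaining_s -= duration
    if remaining_s > 0.0:
        # Not enough full-length duration was measured; stay at 100%
        # loss, matching the corner case handling in Appendix B.
        current_loss_ratio = 1.0
    return load * (1.0 - current_loss_ratio)
```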
-
-6.2.1. Conditional Throughput and Load Classification
-
- Conditional Throughput and results of Load Classification overlap but
- are not identical.
-
- * When a load is marked as a Relevant Lower Bound, its Conditional
- Throughput is taken from a trial whose loss ratio never exceeds
- the Goal Loss Ratio.
-
- * The reverse is not guaranteed: if the Goal Width is narrower than
- the Goal Loss Ratio, Conditional Throughput can still end up
- higher than the Relevant Upper Bound.
-
-6.3. SUT Behaviors
-
- In Section DUT in SUT (Section 2.2), the notion of noise has been
- introduced. This section uses new terms to describe possible SUT
- behaviors more precisely.
-
- From a measurement point of view, noise is visible as inconsistent
- trial results. See Inconsistent Trial Results (Section 2.5) for
- general points and Loss Ratios and Loss Inversion (Section 5.3) for
- specifics when comparing different Load values.
-
- Load Classification and Conditional Throughput apply to a single Load
- value, but even the set of Trial Results measured at that Trial Load
- value may appear inconsistent.
-
- As MLRsearch aims to save time, it executes only a small number of
- Trials, getting only a limited amount of information about SUT
- behavior. It is useful to introduce an "SUT expert" point of view to
- contrast with that limited information.
-
-6.3.1. Expert Predictions
-
- Imagine that before the Search starts, a human expert had unlimited
- time to measure SUT and obtain all reliable information about it.
- The information is not perfect, as there is still random noise
- influencing SUT. But the expert is familiar with possible noise
- events, even the rare ones, and thus the expert can do probabilistic
- predictions about future Trial Outputs.
-
- When several outcomes are possible, the expert can assess probability
- of each outcome.
-
-6.3.2. Exceed Probability
-
- When the Controller selects new Trial Duration and Trial Load, and
- just before the Measurer starts performing the Trial, the SUT expert
- can envision possible Trial Results.
-
- With respect to a particular Search Goal instance, the possibilities
- can be summarized into a single number: Exceed Probability. It is
- the probability (according to the expert) that the measured Trial
- Loss Ratio will be higher than the Goal Loss Ratio.
-
-6.3.3. Trial Duration Dependence
-
- When comparing Exceed Probability values for the same Trial Load
- value but different Trial Duration values, there are several patterns
- that commonly occur in practice.
-
-6.3.3.1. Strong Increase
-
- Exceed Probability is very low at short durations but very high at
- full-length. This SUT behavior is undesirable, and may hint at a
- faulty SUT, e.g., the SUT leaks resources and is unable to sustain
- the desired performance.
-
- But this behavior is also seen when the SUT uses a large amount of
- buffers. This is the main reason users may want to set a large Goal
- Final Trial Duration.
-
-6.3.3.2. Mild Increase
-
- Short trials are slightly less likely to exceed the loss-ratio limit,
- but the improvement is modest. This mild benefit is typical when
- noise is dominated by rare, large loss spikes: during a full-length
- trial, the good-performing periods cannot fully offset the heavy
- frame loss that occurs in the brief low-performing bursts.
-
-6.3.3.3. Independence
-
- Short trials have basically the same Exceed Probability as full-
- length trials. This is possible only if loss spikes are small (so
- other parts can compensate) and if Goal Loss Ratio is more than zero
- (otherwise, other parts cannot compensate at all).
-
-6.3.3.4. Decrease
-
- Short trials have a larger Exceed Probability than full-length
- trials. This is possible only for a non-zero Goal Loss Ratio, for
- example if the SUT needs to "warm up" to its best performance within
- each trial. This pattern is not commonly seen in practice.
-
-7. IANA Considerations
-
- This document does not make any request to IANA.
-
-8. Security Considerations
-
- Benchmarking activities as described in this memo are limited to
- technology characterization of a DUT/SUT using controlled stimuli in
- a laboratory environment, with dedicated address space and the
- constraints specified in the sections above.
-
- The benchmarking network topology will be an independent test setup
- and MUST NOT be connected to devices that may forward the test
- traffic into a production network or misroute traffic to the test
- management network.
-
- Further, benchmarking is performed on an "opaque" basis, relying
- solely on measurements observable external to the DUT/SUT.
-
- The DUT/SUT SHOULD NOT include features that serve only to boost
- benchmark scores - such as a dedicated "fast-track" test mode that is
- never used in normal operation.
-
- Any implications for network security arising from the DUT/SUT SHOULD
- be identical in the lab and in production networks.
-
-9. Acknowledgements
-
- Special wholehearted gratitude and thanks to the late Al Morton for
- his thorough reviews filled with very specific feedback and
- constructive guidelines. Thank You Al for the close collaboration
- over the years, Your Mentorship, Your continuous unwavering
- encouragement full of empathy and energizing positive attitude. Al,
- You are dearly missed.
-
- Thanks to Gabor Lencse, Giuseppe Fioccola, Carsten Rossenhoevel and
- BMWG contributors for good discussions and thorough reviews, guiding
- and helping us to improve the clarity and formality of this document.
-
- Many thanks to Alec Hothan of the OPNFV NFVbench project for a
- thorough review and numerous useful comments and suggestions in the
- earlier versions of this document.
-
- We are equally indebted to Mohamed Boucadair for a very thorough and
- detailed AD review and providing many good comments and suggestions,
- helping us make this document complete.
-
- Our appreciation is also extended to Shawn Emery, Yoshifumi Nishida,
- David Dong, Nabeel Cocker and Lars Eggert for their reviews and
- valuable comments.
-
-10. References
-
-10.1. Normative References
-
- [RFC1242] Bradner, S., "Benchmarking Terminology for Network
- Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242,
- July 1991, <https://www.rfc-editor.org/info/rfc1242>.
-
- [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
- Requirement Levels", BCP 14, RFC 2119,
- DOI 10.17487/RFC2119, March 1997,
- <https://www.rfc-editor.org/info/rfc2119>.
-
- [RFC2285] Mandeville, R., "Benchmarking Terminology for LAN
- Switching Devices", RFC 2285, DOI 10.17487/RFC2285,
- February 1998, <https://www.rfc-editor.org/info/rfc2285>.
-
- [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
- Network Interconnect Devices", RFC 2544,
- DOI 10.17487/RFC2544, March 1999,
- <https://www.rfc-editor.org/info/rfc2544>.
-
- [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
- 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
- May 2017, <https://www.rfc-editor.org/info/rfc8174>.
-
-10.2. Informative References
-
- [FDio-CSIT-MLRsearch]
- "FD.io CSIT Test Methodology - MLRsearch", October 2023,
- <https://csit.fd.io/cdocs/methodology/measurements/
- data_plane_throughput/mlr_search/>.
-
- [Lencze-Kovacs-Shima]
- "Gaming with the Throughput and the Latency Benchmarking
- Measurement Procedures of RFC 2544", n.d.,
- <http://dx.doi.org/10.11601/ijates.v9i2.288>.
-
- [Lencze-Shima]
- "An Upgrade to Benchmarking Methodology for Network
- Interconnect Devices - expired", n.d.,
- <https://datatracker.ietf.org/doc/html/draft-lencse-bmwg-
- rfc2544-bis-00>.
-
- [Ott-Mathis-Semke-Mahdavi]
- "The Macroscopic Behavior of the TCP Congestion Avoidance
- Algorithm", n.d.,
- <https://www.cs.cornell.edu/people/egs/cornellonly/
- syslunch/fall02/ott.pdf>.
-
- [PyPI-MLRsearch]
- "MLRsearch 1.2.1, Python Package Index", October 2023,
- <https://pypi.org/project/MLRsearch/1.2.1/>.
-
- [RFC5180] Popoviciu, C., Hamza, A., Van de Velde, G., and D.
- Dugatkin, "IPv6 Benchmarking Methodology for Network
- Interconnect Devices", RFC 5180, DOI 10.17487/RFC5180, May
- 2008, <https://www.rfc-editor.org/info/rfc5180>.
-
- [RFC6349] Constantine, B., Forget, G., Geib, R., and R. Schrage,
- "Framework for TCP Throughput Testing", RFC 6349,
- DOI 10.17487/RFC6349, August 2011,
- <https://www.rfc-editor.org/info/rfc6349>.
-
- [RFC6985] Morton, A., "IMIX Genome: Specification of Variable Packet
- Sizes for Additional Testing", RFC 6985,
- DOI 10.17487/RFC6985, July 2013,
- <https://www.rfc-editor.org/info/rfc6985>.
-
- [RFC8219] Georgescu, M., Pislaru, L., and G. Lencse, "Benchmarking
- Methodology for IPv6 Transition Technologies", RFC 8219,
- DOI 10.17487/RFC8219, August 2017,
- <https://www.rfc-editor.org/info/rfc8219>.
-
- [TST009] "TST 009", n.d., <https://www.etsi.org/deliver/etsi_gs/
- NFV-TST/001_099/009/03.04.01_60/gs_NFV-
- TST009v030401p.pdf>.
-
- [Vassilev] "A YANG Data Model for Network Tester Management", n.d.,
- <https://datatracker.ietf.org/doc/draft-ietf-bmwg-network-
- tester-cfg/06>.
-
- [Y.1564] "Y.1564", n.d., <https://www.itu.int/rec/
- dologin_pub.asp?lang=e&id=T-REC-Y.1564-201602-I!!PDF-
- E&type=items>.
-
-Appendix A. Load Classification Code
-
- This appendix specifies how to perform the Load Classification.
-
- Any Trial Load value can be classified, according to a given Search
- Goal (Section 4.6.7) instance.
-
- The algorithm uses (some subsets of) the set of all available Trial
- Results from Trials measured at a given Load at the end of the
- Search.
-
- The block at the end of this appendix holds pseudocode which computes
- two values, stored in variables named optimistic_is_lower and
- pessimistic_is_lower.
-
- Although presented as pseudocode, the listing is syntactically valid
- Python and can be executed without modification.
-
- If values of both variables are computed to be true, the Load in
- question is classified as a Lower Bound according to the given Search
- Goal instance. If values of both variables are false, the Load is
- classified as an Upper Bound. Otherwise, the load is classified as
- Undecided.
-
- Some variable names are shortened to fit expressions in one line.
- Namely, variables holding sum quantities end in _s instead of _sum,
- and variables holding effective quantities start in effect_ instead
- of effective_.
-
- The pseudocode expects the following variables to hold the following
- values:
-
- * goal_duration_s: The Goal Duration Sum value of the given Search
- Goal.
-
- * goal_exceed_ratio: The Goal Exceed Ratio value of the given Search
- Goal.
-
- * full_length_low_loss_s: Sum of Trial Effective Durations across
- Trials with Trial Duration at least equal to the Goal Final Trial
- Duration and with Trial Loss Ratio not higher than the Goal Loss
- Ratio (across Full-Length Low-Loss Trials).
-
- * full_length_high_loss_s: Sum of Trial Effective Durations across
- Trials with Trial Duration at least equal to the Goal Final Trial
- Duration and with Trial Loss Ratio higher than the Goal Loss Ratio
- (across Full-Length High-Loss Trials).
-
- * short_low_loss_s: Sum of Trial Effective Durations across Trials
- with Trial Duration shorter than the Goal Final Trial Duration and
- with Trial Loss Ratio not higher than the Goal Loss Ratio (across
- Short Low-Loss Trials).
-
- * short_high_loss_s: Sum of Trial Effective Durations across Trials
- with Trial Duration shorter than the Goal Final Trial Duration and
- with Trial Loss Ratio higher than the Goal Loss Ratio (across
- Short High-Loss Trials).
-
- The code works correctly also when there are no Trial Results at a
- given Load.
-
- <CODE BEGINS>
- exceed_coefficient = goal_exceed_ratio / (1.0 - goal_exceed_ratio)
- balancing_s = short_low_loss_s * exceed_coefficient
- positive_excess_s = max(0.0, short_high_loss_s - balancing_s)
- effect_high_loss_s = full_length_high_loss_s + positive_excess_s
- effect_full_length_s = full_length_low_loss_s + effect_high_loss_s
- effect_whole_s = max(effect_full_length_s, goal_duration_s)
- quantile_duration_s = effect_whole_s * goal_exceed_ratio
- pessimistic_high_loss_s = effect_whole_s - full_length_low_loss_s
- pessimistic_is_lower = pessimistic_high_loss_s <= quantile_duration_s
- optimistic_is_lower = effect_high_loss_s <= quantile_duration_s
- <CODE ENDS>
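-
- As a usage sketch (not part of the specification), the block above
- can be wrapped in a function and exercised with illustrative
- duration sums:

```python
def classify(goal_duration_s, goal_exceed_ratio, full_length_low_loss_s,
             full_length_high_loss_s, short_low_loss_s, short_high_loss_s):
    # The body is the pseudocode from this appendix, verbatim.
    exceed_coefficient = goal_exceed_ratio / (1.0 - goal_exceed_ratio)
    balancing_s = short_low_loss_s * exceed_coefficient
    positive_excess_s = max(0.0, short_high_loss_s - balancing_s)
    effect_high_loss_s = full_length_high_loss_s + positive_excess_s
    effect_full_length_s = full_length_low_loss_s + effect_high_loss_s
    effect_whole_s = max(effect_full_length_s, goal_duration_s)
    quantile_duration_s = effect_whole_s * goal_exceed_ratio
    pessimistic_high_loss_s = effect_whole_s - full_length_low_loss_s
    pessimistic_is_lower = pessimistic_high_loss_s <= quantile_duration_s
    optimistic_is_lower = effect_high_loss_s <= quantile_duration_s
    if optimistic_is_lower and pessimistic_is_lower:
        return "Lower Bound"
    if not optimistic_is_lower and not pessimistic_is_lower:
        return "Upper Bound"
    return "Undecided"

# 60s of full-length low-loss trials against a 120s Goal Duration Sum
# and 50% Goal Exceed Ratio: already decidable as a Lower Bound.
print(classify(120.0, 0.5, 60.0, 0.0, 0.0, 0.0))  # Lower Bound
# A single 60s full-length high-loss trial: not yet decidable.
print(classify(120.0, 0.5, 0.0, 60.0, 0.0, 0.0))  # Undecided
```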
-
-Appendix B. Conditional Throughput Code
-
- This section specifies an example of how to compute Conditional
- Throughput, as referred to in Section Conditional Throughput
- (Section 4.8.3).
-
- Any Load value can be used as the basis for the following
- computation, but only the Relevant Lower Bound (at the end of the
- Search) leads to the value called the Conditional Throughput for a
- given Search Goal.
-
- The algorithm uses (some subsets of) the set of all available Trial
- Results from Trials measured at a given Load at the end of the
- Search.
-
- The block at the end of this appendix holds pseudocode which computes
- a value stored as variable conditional_throughput.
-
- Although presented as pseudocode, the listing is syntactically valid
- Python and can be executed without modification.
-
- Some variable names are shortened in order to fit expressions in one
- line. Namely, variables holding sum quantities end in _s instead of
- _sum, and variables holding effective quantities start in effect_
- instead of effective_.
-
- The pseudocode expects the following variables to hold the following
- values:
-
- * goal_duration_s: The Goal Duration Sum value of the given Search
- Goal.
-
- * goal_exceed_ratio: The Goal Exceed Ratio value of the given Search
- Goal.
-
- * full_length_low_loss_s: Sum of Trial Effective Durations across
- Trials with Trial Duration at least equal to the Goal Final Trial
- Duration and with Trial Loss Ratio not higher than the Goal Loss
- Ratio (across Full-Length Low-Loss Trials).
-
- * full_length_high_loss_s: Sum of Trial Effective Durations across
- Trials with Trial Duration at least equal to the Goal Final Trial
- Duration and with Trial Loss Ratio higher than the Goal Loss Ratio
- (across Full-Length High-Loss Trials).
-
- * full_length_trials: An iterable of all Trial Results from Trials
- with Trial Duration at least equal to the Goal Final Trial
- Duration (all Full-Length Trials), sorted by increasing Trial Loss
- Ratio. One item trial is a composite with the following two
- attributes available:
-
- - trial.loss_ratio: The Trial Loss Ratio as measured for this
- Trial.
-
- - trial.effect_duration: The Trial Effective Duration of this
- Trial.
-
- * intended_load: The Intended Load of the Trials measured at the
- given Load.
-
- The code works correctly only when there is at least one Trial Result
- measured at a given Load.
-
- <CODE BEGINS>
- full_length_s = full_length_low_loss_s + full_length_high_loss_s
- whole_s = max(goal_duration_s, full_length_s)
- remaining = whole_s * (1.0 - goal_exceed_ratio)
- quantile_loss_ratio = None
- for trial in full_length_trials:
- if quantile_loss_ratio is None or remaining > 0.0:
- quantile_loss_ratio = trial.loss_ratio
- remaining -= trial.effect_duration
- else:
- break
- else:
- if remaining > 0.0:
- quantile_loss_ratio = 1.0
- conditional_throughput = intended_load * (1.0 - quantile_loss_ratio)
- <CODE ENDS>
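-
- As a usage sketch (not part of the specification), the block above
- can be wrapped in a function, with a namedtuple standing in for the
- trial composite:

```python
from collections import namedtuple

Trial = namedtuple("Trial", ["loss_ratio", "effect_duration"])

def compute_conditional_throughput(intended_load, goal_duration_s,
                                   goal_exceed_ratio, full_length_trials):
    # The duration sum is recomputed from the trial list; the rest is
    # the pseudocode from this appendix, verbatim.
    full_length_s = sum(t.effect_duration for t in full_length_trials)
    whole_s = max(goal_duration_s, full_length_s)
    remaining = whole_s * (1.0 - goal_exceed_ratio)
    quantile_loss_ratio = None
    for trial in sorted(full_length_trials, key=lambda t: t.loss_ratio):
        if quantile_loss_ratio is None or remaining > 0.0:
            quantile_loss_ratio = trial.loss_ratio
            remaining -= trial.effect_duration
        else:
            break
    else:
        if remaining > 0.0:
            quantile_loss_ratio = 1.0
    return intended_load * (1.0 - quantile_loss_ratio)

# Two 60s full-length trials at a one million fps Load, against a
# 120s Goal Duration Sum and 50% Goal Exceed Ratio:
trials = [Trial(0.0, 60.0), Trial(0.01, 60.0)]
print(compute_conditional_throughput(1.0e6, 120.0, 0.5, trials))  # 1000000.0
```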
-
-Appendix C. Example Search
-
- The following example Search relates to one hypothetical run of a
- Search test procedure started with multiple Search Goals. Several
- points in time are chosen to show how the logic works, with specific
- sets of Trial Results available. The trial results themselves are
- not very realistic, as the intention is to show several corner cases
- of the logic.
-
- In all Trials, the Effective Trial Duration is equal to Trial
- Duration.
-
-   Only one Trial Load is in focus; its value is one million frames per
-   second.  Trial Results at other Trial Loads are not mentioned, as
-   the parts of the logic presented here do not depend on them.  In
-   practice, Trial Results at other Load values would be present, e.g.,
-   MLRsearch will look for a Lower Bound smaller than any Upper Bound
-   found.
-
- At any given moment, exactly one Search Goal is designated as in
- focus. This designation affects only the Trial Duration chosen for
- new trials; it does not alter the rest of the decision logic.
-
- An MLRsearch implementation is free to evaluate several goals
- simultaneously - the "focus" mechanism is optional and appears here
- only to show that a load can still be classified against goals that
- are not currently in focus.
-
-
-C.1. Example Goals
-
-   The following four Search Goal instances are selected for the
-   example Search.  Each goal has a readable name and a dense code; the
-   code is useful to show Search Goal attribute values.
-
- As the variable "exceed coefficient" does not depend on trial
- results, it is also precomputed here.
-
- Goal 1:
-
- name: RFC2544
- Goal Final Trial Duration: 60s
- Goal Duration Sum: 60s
- Goal Loss Ratio: 0%
- Goal Exceed Ratio: 0%
-   exceed coefficient: 0% / (100% - 0%) = 0.0
- code: 60f60d0l0e
-
- Goal 2:
-
- name: TST009
- Goal Final Trial Duration: 60s
- Goal Duration Sum: 120s
- Goal Loss Ratio: 0%
- Goal Exceed Ratio: 50%
- exceed coefficient: 50% / (100% - 50%) = 1.0
- code: 60f120d0l50e
-
- Goal 3:
-
- name: 1s final
- Goal Final Trial Duration: 1s
- Goal Duration Sum: 120s
- Goal Loss Ratio: 0.5%
- Goal Exceed Ratio: 50%
- exceed coefficient: 50% / (100% - 50%) = 1.0
- code: 1f120d.5l50e
-
- Goal 4:
-
- name: 20% exceed
- Goal Final Trial Duration: 60s
- Goal Duration Sum: 60s
- Goal Loss Ratio: 0.5%
- Goal Exceed Ratio: 20%
- exceed coefficient: 20% / (100% - 20%) = 0.25
- code: 60f60d0.5l20e
-
-   The first two goals are important for compliance reasons; the other
-   two cover less frequent cases.
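The exceed coefficient shown for each goal above is a one-line computation. The following sketch is illustrative, not part of the Specification; the helper name and the goal table are assumptions.

```python
# Sketch: precompute the "exceed coefficient" for each example goal.
def exceed_coefficient(goal_exceed_ratio):
    """Seconds of Short High-Loss Trials balanced by one second of
    Short Low-Loss Trials."""
    return goal_exceed_ratio / (1.0 - goal_exceed_ratio)

goals = {"RFC2544": 0.0, "TST009": 0.5, "1s final": 0.5, "20% exceed": 0.2}
for name, ratio in goals.items():
    print(name, exceed_coefficient(ratio))  # 0.0, 1.0, 1.0, 0.25
```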
-
-C.2. Example Trial Results
-
- The following six sets of trial results are selected for the example
- Search. The sets are defined as points in time, describing which
- Trial Results were added since the previous point.
-
-   Each point has a readable name and a dense code; the code is useful
-   to show Trial Output attribute values and the number of times
-   identical results were added.
-
- Point 1:
-
- name: first short good
- goal in focus: 1s final (1f120d.5l50e)
- added Trial Results: 59 trials, each 1 second and 0% loss
- code: 59x1s0l
-
- Point 2:
-
- name: first short bad
- goal in focus: 1s final (1f120d.5l50e)
- added Trial Result: one trial, 1 second, 1% loss
- code: 59x1s0l+1x1s1l
-
- Point 3:
-
- name: last short bad
- goal in focus: 1s final (1f120d.5l50e)
- added Trial Results: 59 trials, 1 second each, 1% loss each
- code: 59x1s0l+60x1s1l
-
- Point 4:
-
- name: last short good
- goal in focus: 1s final (1f120d.5l50e)
-   added Trial Result: one trial, 1 second, 0% loss
- code: 60x1s0l+60x1s1l
-
- Point 5:
-
- name: first long bad
- goal in focus: TST009 (60f120d0l50e)
- added Trial Results: one trial, 60 seconds, 0.1% loss
- code: 60x1s0l+60x1s1l+1x60s.1l
-
- Point 6:
-
- name: first long good
- goal in focus: TST009 (60f120d0l50e)
- added Trial Results: one trial, 60 seconds, 0% loss
- code: 60x1s0l+60x1s1l+1x60s.1l+1x60s0l
-
- Comments on point in time naming:
-
-   * When a name contains "short", it means the added trial had a
-     Trial Duration of 1 second, which is a Short Trial for 3 of the
-     Search Goals, but a Full-Length Trial for the "1s final" goal.
-
-   * Similarly, "long" in a name means the added trial had a Trial
-     Duration of 60 seconds, which is a Full-Length Trial for 3 goals
-     but a Long Trial for the "1s final" goal.
-
-   * When a name contains "good", it means the added trial is a
-     Low-Loss Trial for all the goals.
-
-   * When a name contains "short bad", it means the added trial is a
-     High-Loss Trial for all the goals.
-
-   * When a name contains "long bad", it means the added trial is a
-     High-Loss Trial for the "RFC2544" and "TST009" goals, but a
-     Low-Loss Trial for the two other goals.
-
-C.3. Load Classification Computations
-
-   This section shows how the Load Classification logic is applied, by
-   listing all temporary values at each selected point in time.
-
-C.3.1. Point 1
-
- This is the "first short good" point. Code for available results is:
- 59x1s0l
-
- +==============+==========+============+============+=============+
- |Goal name |RFC2544 |TST009 |1s final |20% exceed |
- +==============+==========+============+============+=============+
- |Goal code |60f60d0l0e|60f120d0l50e|1f120d.5l50e|60f60d0.5l20e|
- +--------------+----------+------------+------------+-------------+
- |Full-length |0s |0s |0s |0s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Full-length |0s |0s |59s |0s |
- |low-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Short high- |0s |0s |0s |0s |
- |loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Short low-loss|59s |59s |0s |59s |
- |sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Balancing sum |0s |59s |0s |14.75s |
- +--------------+----------+------------+------------+-------------+
- |Excess sum |0s |-59s |0s |-14.75s |
- +--------------+----------+------------+------------+-------------+
- |Positive |0s |0s |0s |0s |
- |excess sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective |0s |0s |0s |0s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective full|0s |0s |59s |0s |
- |sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective |60s |120s |120s |60s |
- |whole sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Missing sum |60s |120s |61s |60s |
- +--------------+----------+------------+------------+-------------+
- |Pessimistic |60s |120s |61s |60s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Optimistic |0% |0% |0% |0% |
- |exceed ratio | | | | |
- +--------------+----------+------------+------------+-------------+
- |Pessimistic |100% |100% |50.833% |100% |
- |exceed ratio | | | | |
- +--------------+----------+------------+------------+-------------+
- |Classification|Undecided |Undecided |Undecided |Undecided |
- |Result | | | | |
- +--------------+----------+------------+------------+-------------+
-
- Table 1
-
- This is the last point in time where all goals have this load as
- Undecided.
-
-C.3.2. Point 2
-
- This is the "first short bad" point. Code for available results is:
- 59x1s0l+1x1s1l
-
- +==============+==========+============+============+=============+
- |Goal name |RFC2544 |TST009 |1s final |20% exceed |
- +==============+==========+============+============+=============+
- |Goal code |60f60d0l0e|60f120d0l50e|1f120d.5l50e|60f60d0.5l20e|
- +--------------+----------+------------+------------+-------------+
- |Full-length |0s |0s |1s |0s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Full-length |0s |0s |59s |0s |
- |low-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Short high- |1s |1s |0s |1s |
- |loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Short low-loss|59s |59s |0s |59s |
- |sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Balancing sum |0s |59s |0s |14.75s |
- +--------------+----------+------------+------------+-------------+
- |Excess sum |1s |-58s |0s |-13.75s |
- +--------------+----------+------------+------------+-------------+
- |Positive |1s |0s |0s |0s |
- |excess sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective |1s |0s |1s |0s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective full|1s |0s |60s |0s |
- |sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective |60s |120s |120s |60s |
- |whole sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Missing sum |59s |120s |60s |60s |
- +--------------+----------+------------+------------+-------------+
- |Pessimistic |60s |120s |61s |60s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Optimistic |1.667% |0% |0.833% |0% |
- |exceed ratio | | | | |
- +--------------+----------+------------+------------+-------------+
- |Pessimistic |100% |100% |50.833% |100% |
- |exceed ratio | | | | |
- +--------------+----------+------------+------------+-------------+
- |Classification|Upper |Undecided |Undecided |Undecided |
- |Result |Bound | | | |
- +--------------+----------+------------+------------+-------------+
-
- Table 2
-
-   Due to the zero Goal Loss Ratio, the RFC2544 goal assumes the exceed
-   probability can only increase (mildly or strongly) with Trial
-   Duration, so the one lossy trial would remain lossy even if measured
-   at the 60-second duration.  Due to the zero Goal Exceed Ratio, one
-   High-Loss Trial is enough to preclude this Load from becoming a
-   Lower Bound for RFC2544.  That is why this Load is classified as an
-   Upper Bound for RFC2544 this early.
-
-   This is an example of how significant time can be saved, compared to
-   running 60-second trials only.
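The temporary values tabulated above can be reproduced mechanically. The following sketch is not part of the Specification; the helper name and argument layout are assumptions, but the arithmetic mirrors the balancing, excess, effective, and missing sums listed in the tables.

```python
# Sketch: reproduce the optimistic and pessimistic exceed ratios of
# the RFC2544 column in the table above.
def classification_sums(full_high_s, full_low_s, short_high_s,
                        short_low_s, duration_sum_s, exceed_ratio):
    coeff = exceed_ratio / (1.0 - exceed_ratio)
    balancing_s = short_low_s * coeff
    positive_excess_s = max(0.0, short_high_s - balancing_s)
    effective_high_s = full_high_s + positive_excess_s
    effective_full_s = effective_high_s + full_low_s
    effective_whole_s = max(effective_full_s, duration_sum_s)
    missing_s = effective_whole_s - effective_full_s
    pessimistic_high_s = effective_high_s + missing_s
    optimistic = effective_high_s / effective_whole_s
    pessimistic = pessimistic_high_s / effective_whole_s
    return optimistic, pessimistic

# RFC2544 goal at this point: one 1-second High-Loss Trial, 59 seconds
# of Short Low-Loss Trials, a 60s Goal Duration Sum, 0% exceed ratio.
opt, pess = classification_sums(0.0, 0.0, 1.0, 59.0, 60.0, 0.0)
print(f"{opt:.3%} {pess:.3%}")  # 1.667% 100.000%
```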
-
-C.3.3. Point 3
-
- This is the "last short bad" point. Code for available trial results
- is: 59x1s0l+60x1s1l
-
- +==============+==========+============+============+=============+
- |Goal name |RFC2544 |TST009 |1s final |20% exceed |
- +==============+==========+============+============+=============+
- |Goal code |60f60d0l0e|60f120d0l50e|1f120d.5l50e|60f60d0.5l20e|
- +--------------+----------+------------+------------+-------------+
- |Full-length |0s |0s |60s |0s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Full-length |0s |0s |59s |0s |
- |low-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Short high- |60s |60s |0s |60s |
- |loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Short low-loss|59s |59s |0s |59s |
- |sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Balancing sum |0s |59s |0s |14.75s |
- +--------------+----------+------------+------------+-------------+
- |Excess sum |60s |1s |0s |45.25s |
- +--------------+----------+------------+------------+-------------+
- |Positive |60s |1s |0s |45.25s |
- |excess sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective |60s |1s |60s |45.25s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective full|60s |1s |119s |45.25s |
- |sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective |60s |120s |120s |60s |
- |whole sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Missing sum |0s |119s |1s |14.75s |
- +--------------+----------+------------+------------+-------------+
- |Pessimistic |60s |120s |61s |60s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Optimistic |100% |0.833% |50% |75.417% |
- |exceed ratio | | | | |
- +--------------+----------+------------+------------+-------------+
- |Pessimistic |100% |100% |50.833% |100% |
- |exceed ratio | | | | |
- +--------------+----------+------------+------------+-------------+
- |Classification|Upper |Undecided |Undecided |Upper Bound |
- |Result |Bound | | | |
- +--------------+----------+------------+------------+-------------+
-
- Table 3
-
-   This is the last point at which the "1s final" goal has this Load
-   still Undecided.  Only one 1-second trial is missing within the
-   120-second Goal Duration Sum, but its result will decide the
-   classification.
-
-   The "20% exceed" goal started to classify this Load as an Upper
-   Bound somewhere between points 2 and 3.
-
-C.3.4. Point 4
-
- This is the "last short good" point. Code for available trial
- results is: 60x1s0l+60x1s1l
-
- +==============+==========+============+============+=============+
- |Goal name |RFC2544 |TST009 |1s final |20% exceed |
- +==============+==========+============+============+=============+
- |Goal code |60f60d0l0e|60f120d0l50e|1f120d.5l50e|60f60d0.5l20e|
- +--------------+----------+------------+------------+-------------+
- |Full-length |0s |0s |60s |0s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Full-length |0s |0s |60s |0s |
- |low-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Short high- |60s |60s |0s |60s |
- |loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Short low-loss|60s |60s |0s |60s |
- |sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Balancing sum |0s |60s |0s |15s |
- +--------------+----------+------------+------------+-------------+
- |Excess sum |60s |0s |0s |45s |
- +--------------+----------+------------+------------+-------------+
- |Positive |60s |0s |0s |45s |
- |excess sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective |60s |0s |60s |45s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective full|60s |0s |120s |45s |
- |sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective |60s |120s |120s |60s |
- |whole sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Missing sum |0s |120s |0s |15s |
- +--------------+----------+------------+------------+-------------+
- |Pessimistic |60s |120s |60s |60s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Optimistic |100% |0% |50% |75% |
- |exceed ratio | | | | |
- +--------------+----------+------------+------------+-------------+
- |Pessimistic |100% |100% |50% |100% |
- |exceed ratio | | | | |
- +--------------+----------+------------+------------+-------------+
- |Classification|Upper |Undecided |Lower Bound |Upper Bound |
- |Result |Bound | | | |
- +--------------+----------+------------+------------+-------------+
-
- Table 4
-
-   The one missing trial for "1s final" was Low-Loss; half of the trial
-   results are Low-Loss, which exactly matches the 50% exceed ratio.
-   This shows that time savings are not guaranteed.
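The "1s final" column above is a boundary case worth spelling out in executable form. This sketch is illustrative (variable names are ours); it relies on what Table 4 shows, namely that an exceed ratio exactly equal to the Goal Exceed Ratio still qualifies the Load as a Lower Bound.

```python
# Sketch of the "1s final" column above: the pessimistic exceed ratio
# equals the 50% Goal Exceed Ratio exactly. Durations are in seconds.
full_high_s, full_low_s = 60.0, 60.0   # 60 high-loss + 60 low-loss 1s trials
duration_sum_s, exceed_ratio = 120.0, 0.5

effective_full_s = full_high_s + full_low_s       # no Short Trials here
effective_whole_s = max(effective_full_s, duration_sum_s)
missing_s = effective_whole_s - effective_full_s  # 0s, nothing missing
pessimistic_high_s = full_high_s + missing_s
pessimistic = pessimistic_high_s / effective_whole_s

# The comparison is "not higher than", so exactly 50% still qualifies
# this Load as a Lower Bound for the "1s final" goal.
print(pessimistic <= exceed_ratio)  # True
```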
-
-C.3.5. Point 5
-
- This is the "first long bad" point. Code for available trial results
- is: 60x1s0l+60x1s1l+1x60s.1l
-
- +==============+==========+============+============+=============+
- |Goal name |RFC2544 |TST009 |1s final |20% exceed |
- +==============+==========+============+============+=============+
- |Goal code |60f60d0l0e|60f120d0l50e|1f120d.5l50e|60f60d0.5l20e|
- +--------------+----------+------------+------------+-------------+
- |Full-length |60s |60s |60s |0s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Full-length |0s |0s |120s |60s |
- |low-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Short high- |60s |60s |0s |60s |
- |loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Short low-loss|60s |60s |0s |60s |
- |sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Balancing sum |0s |60s |0s |15s |
- +--------------+----------+------------+------------+-------------+
- |Excess sum |60s |0s |0s |45s |
- +--------------+----------+------------+------------+-------------+
- |Positive |60s |0s |0s |45s |
- |excess sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective |120s |60s |60s |45s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective full|120s |60s |180s |105s |
- |sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective |120s |120s |180s |105s |
- |whole sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Missing sum |0s |60s |0s |0s |
- +--------------+----------+------------+------------+-------------+
- |Pessimistic |120s |120s |60s |45s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Optimistic |100% |50% |33.333% |42.857% |
- |exceed ratio | | | | |
- +--------------+----------+------------+------------+-------------+
- |Pessimistic |100% |100% |33.333% |42.857% |
- |exceed ratio | | | | |
- +--------------+----------+------------+------------+-------------+
- |Classification|Upper |Undecided |Lower Bound |Lower Bound |
- |Result |Bound | | | |
- +--------------+----------+------------+------------+-------------+
-
- Table 5
-
-   As designed for the TST009 goal, one Full-Length High-Loss Trial can
-   be tolerated.  The 120 seconds' worth of 1-second trials is not
-   useful here, as counting Short Trials this way is only valid when
-   the Exceed Probability does not depend on Trial Duration.  As the
-   Goal Loss Ratio is zero, it is not possible for 60-second trials to
-   compensate for the losses seen in the 1-second results.  But the
-   Load Classification logic does not have that knowledge hardcoded, so
-   the optimistic exceed ratio is still only 50%.
-
- But the 0.1% Trial Loss Ratio is lower than "20% exceed" Goal Loss
- Ratio, so this unexpected Full-Length Low-Loss trial changed the
- classification result of this Load to Lower Bound.
-
-C.3.6. Point 6
-
- This is the "first long good" point. Code for available trial
- results is: 60x1s0l+60x1s1l+1x60s.1l+1x60s0l
-
- +==============+==========+============+============+=============+
- |Goal name |RFC2544 |TST009 |1s final |20% exceed |
- +==============+==========+============+============+=============+
- |Goal code |60f60d0l0e|60f120d0l50e|1f120d.5l50e|60f60d0.5l20e|
- +--------------+----------+------------+------------+-------------+
- |Full-length |60s |60s |60s |0s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Full-length |60s |60s |180s |120s |
- |low-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Short high- |60s |60s |0s |60s |
- |loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Short low-loss|60s |60s |0s |60s |
- |sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Balancing sum |0s |60s |0s |15s |
- +--------------+----------+------------+------------+-------------+
- |Excess sum |60s |0s |0s |45s |
- +--------------+----------+------------+------------+-------------+
- |Positive |60s |0s |0s |45s |
- |excess sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective |120s |60s |60s |45s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective full|180s |120s |240s |165s |
- |sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Effective |180s |120s |240s |165s |
- |whole sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Missing sum |0s |0s |0s |0s |
- +--------------+----------+------------+------------+-------------+
- |Pessimistic |120s |60s |60s |45s |
- |high-loss sum | | | | |
- +--------------+----------+------------+------------+-------------+
- |Optimistic |66.667% |50% |25% |27.273% |
- |exceed ratio | | | | |
- +--------------+----------+------------+------------+-------------+
- |Pessimistic |66.667% |50% |25% |27.273% |
- |exceed ratio | | | | |
- +--------------+----------+------------+------------+-------------+
- |Classification|Upper |Lower Bound |Lower Bound |Lower Bound |
- |Result |Bound | | | |
- +--------------+----------+------------+------------+-------------+
-
- Table 6
-
-   This is the Low-Loss Trial the "TST009" goal was waiting for.  This
-   Load is now classified for all goals; the search may end.  Or, more
-   realistically, it can focus on larger loads only, as the three goals
-   will want an Upper Bound (unless this Load is the Max Load).
-
-C.4. Conditional Throughput Computations
-
-   At the end of this hypothetical search, the "RFC2544" goal labels the
-   load as an Upper Bound, making it ineligible for Conditional
-   Throughput calculations.  By contrast, the other three goals treat
-   the same load as a Lower Bound; if it is also accepted as their
-   Relevant Lower Bound, we can compute Conditional Throughput values
-   for each of them.
-
- (The load under discussion is 1 000 000 frames per second.)
-
-C.4.1. Goal 2
-
-   The Conditional Throughput is computed from a sorted list of Full-
-   Length Trial results.  As the TST009 Goal Final Trial Duration is 60
-   seconds, only two of the 122 Trials are considered Full-Length
-   Trials.  One has a Trial Loss Ratio of 0%, the other 0.1%.
-
- * Full-length high-loss sum is 60 seconds.
-
- * Full-length low-loss sum is 60 seconds.
-
-   * Full-length sum is 120 seconds.
-
- * Subceed ratio is 50%.
-
-   * Remaining sum initially is 0.5x120s = 60 seconds.
-
- * Current loss ratio initially is 100%.
-
- * For first result (duration 60s, loss 0%):
-
- - Remaining sum is larger than zero, not exiting the loop.
-
- - Set current loss ratio to this trial's Trial Loss Ratio which
- is 0%.
-
- - Decrease the remaining sum by this trial's Trial Effective
- Duration.
-
- - New remaining sum is 60s - 60s = 0s.
-
-   * For second result (duration 60s, loss 0.1%):
-
-     - Remaining sum is not larger than zero, exiting the loop.
-
- * Current loss ratio was most recently set to 0%.
-
- * Current forwarding ratio is one minus the current loss ratio, so
- 100%.
-
- * Conditional Throughput is the current forwarding ratio multiplied
- by the Load value.
-
- * Conditional Throughput is one million frames per second.
-
-C.4.2. Goal 3
-
-   The "1s final" goal has a Goal Final Trial Duration of 1 second, so
-   all 122 Trial Results are considered Full-Length Trials.  They are
-   ordered like this:
-
- 60 1-second 0% loss trials,
- 1 60-second 0% loss trial,
- 1 60-second 0.1% loss trial,
- 60 1-second 1% loss trials.
-
- The result does not depend on the order of 0% loss trials.
-
- * Full-length high-loss sum is 60 seconds.
-
- * Full-length low-loss sum is 180 seconds.
-
-   * Full-length sum is 240 seconds.
-
- * Subceed ratio is 50%.
-
- * Remaining sum initially is 0.5x240s = 120 seconds.
-
- * Current loss ratio initially is 100%.
-
- * For first 61 results (duration varies, loss 0%):
-
- - Remaining sum is larger than zero, not exiting the loop.
-
- - Set current loss ratio to this trial's Trial Loss Ratio which
- is 0%.
-
- - Decrease the remaining sum by this trial's Trial Effective
- Duration.
-
- - New remaining sum varies.
-
- * After 61 trials, duration of 60x1s + 1x60s has been subtracted
- from 120s, leaving 0s.
-
-   * For the 62nd result (duration 60s, loss 0.1%):
-
- - Remaining sum is not larger than zero, exiting the loop.
-
- * Current loss ratio was most recently set to 0%.
-
- * Current forwarding ratio is one minus the current loss ratio, so
- 100%.
-
- * Conditional Throughput is the current forwarding ratio multiplied
- by the Load value.
-
- * Conditional Throughput is one million frames per second.
-
-C.4.3. Goal 4
-
-   The Conditional Throughput is computed from a sorted list of Full-
-   Length Trial results.  As the "20% exceed" Goal Final Trial Duration
-   is 60 seconds, only two of the 122 Trials are considered Full-Length
-   Trials.  One has a Trial Loss Ratio of 0%, the other 0.1%.
-
- * Full-length high-loss sum is 60 seconds.
-
- * Full-length low-loss sum is 60 seconds.
-
-   * Full-length sum is 120 seconds.
-
- * Subceed ratio is 80%.
-
- * Remaining sum initially is 0.8x120s = 96 seconds.
-
- * Current loss ratio initially is 100%.
-
- * For first result (duration 60s, loss 0%):
-
- - Remaining sum is larger than zero, not exiting the loop.
-
- - Set current loss ratio to this trial's Trial Loss Ratio which
- is 0%.
-
- - Decrease the remaining sum by this trial's Trial Effective
- Duration.
-
- - New remaining sum is 96s - 60s = 36s.
-
- * For second result (duration 60s, loss 0.1%):
-
- - Remaining sum is larger than zero, not exiting the loop.
-
- - Set current loss ratio to this trial's Trial Loss Ratio which
- is 0.1%.
-
- - Decrease the remaining sum by this trial's Trial Effective
- Duration.
-
- - New remaining sum is 36s - 60s = -24s.
-
-   * No more trials (and the remaining sum is not larger than zero),
-     exiting the loop.
-
- * Current loss ratio was most recently set to 0.1%.
-
- * Current forwarding ratio is one minus the current loss ratio, so
- 99.9%.
-
- * Conditional Throughput is the current forwarding ratio multiplied
- by the Load value.
-
- * Conditional Throughput is 999 thousand frames per second.
-
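The walkthrough above can be condensed into a few lines of executable arithmetic. This is an illustrative sketch, not part of the Specification; the variable names and the flat trial tuples are assumptions.

```python
# Sketch of the Goal 4 walkthrough above: with an 80% subceed ratio,
# the remaining sum survives the 0% loss trial, so the 0.1% loss
# trial also contributes and becomes the quantile loss ratio.
intended_load = 1_000_000                # frames per second
trials = [(0.0, 60.0), (0.001, 60.0)]    # (loss ratio, duration), sorted
remaining = 0.8 * 120.0                  # subceed ratio times whole sum: 96s

quantile_loss_ratio = None
for loss_ratio, duration_s in trials:
    if quantile_loss_ratio is None or remaining > 0.0:
        quantile_loss_ratio = loss_ratio
        remaining -= duration_s
    else:
        break

conditional_throughput = intended_load * (1.0 - quantile_loss_ratio)
print(conditional_throughput)  # 999000.0 frames per second
```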
-   Due to the stricter Goal Exceed Ratio, this Conditional Throughput
-   is smaller than the Conditional Throughput of the other two goals.
-
-Authors' Addresses
-
- Maciek Konstantynowicz
- Cisco Systems
-
-
- Vratko Polak
- Cisco Systems
-
+++ /dev/null
-<?xml version="1.0" encoding="us-ascii"?>
- <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
- <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 3.1.2) -->
-
-
-<!DOCTYPE rfc [
- <!ENTITY nbsp " ">
- <!ENTITY zwsp "​">
- <!ENTITY nbhy "‑">
- <!ENTITY wj "⁠">
-
-<!ENTITY RFC1242 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.1242.xml">
-<!ENTITY RFC2119 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2119.xml">
-<!ENTITY RFC2285 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2285.xml">
-<!ENTITY RFC2544 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2544.xml">
-<!ENTITY RFC8174 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8174.xml">
-<!ENTITY RFC5180 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.5180.xml">
-<!ENTITY RFC6349 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6349.xml">
-<!ENTITY RFC6985 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6985.xml">
-<!ENTITY RFC8219 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8219.xml">
-]>
-
-
-<rfc ipr="trust200902" docName="draft-ietf-bmwg-mlrsearch-12" category="info" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true">
- <front>
- <title abbrev="MLRsearch">Multiple Loss Ratio Search</title>
-
- <author initials="M." surname="Konstantynowicz" fullname="Maciek Konstantynowicz">
- <organization>Cisco Systems</organization>
- <address>
- </address>
- </author>
- <author initials="V." surname="Polak" fullname="Vratko Polak">
- <organization>Cisco Systems</organization>
- <address>
- </address>
- </author>
-
- <date year="2025" month="September" day="02"/>
-
- <area>ops</area>
- <workgroup>Benchmarking Working Group</workgroup>
- <keyword>Internet-Draft</keyword>
-
- <abstract>
-
-
-<?line 72?>
-
-<t>This document specifies extensions to "Benchmarking Methodology for
-Network Interconnect Devices" (RFC 2544) throughput search by
-defining a new methodology called Multiple Loss Ratio search
-(MLRsearch). MLRsearch aims to minimize search duration,
-support multiple loss ratio searches, and improve result repeatability
-and comparability.</t>
-
-<t>MLRsearch is motivated by the pressing need to address the challenges of
-evaluating and testing the various data plane solutions, especially in
-software-based networking systems based on Commercial Off-the-Shelf
-(COTS) CPU hardware vs purpose-built ASIC / NPU / FPGA hardware.</t>
-
-
-
- </abstract>
-
-
-
- </front>
-
- <middle>
-
-
-<?line 86?>
-
-
-<section anchor="introduction"><name>Introduction</name>
-
-<t>This document describes the Multiple Loss Ratio search
-(MLRsearch) methodology, optimized for determining data plane
-throughput in software-based networking functions running on commodity systems with
-x86/ARM CPUs (vs purpose-built ASIC / NPU / FPGA). Such network
-functions can be deployed on a dedicated physical appliance (e.g., a
-standalone hardware device) or as a virtual appliance (e.g., a Virtual
-Network Function running on shared servers in the compute cloud).</t>
-
-<section anchor="purpose"><name>Purpose</name>
-
-<t>The purpose of this document is to describe the Multiple Loss Ratio search
-(MLRsearch) methodology, optimized for determining
-data plane throughput in software-based networking devices and functions.</t>
-
-<t>Applying the vanilla throughput binary search,
-as specified for example in <xref target="TST009"></xref> and <xref target="RFC2544"></xref>,
-to software devices under test (DUTs) results in several problems:</t>
-
-<t><list style="symbols">
-  <t>Binary search takes a long time, as most trials are done far from
-the eventually found throughput.</t>
-  <t>The required final trial duration and pauses between trials
-prolong the overall search duration.</t>
-  <t>Software DUTs show noisy trial results,
-leading to a big spread of possible discovered throughput values.</t>
-  <t>Throughput requires a loss of exactly zero frames, but industry
-best practices frequently allow for a low but non-zero loss tolerance
-(<xref target="Y.1564"></xref>, test-equipment manuals).</t>
-  <t>The definition of throughput is not clear when trial results are
-inconsistent, e.g., when successive trials at the same - or even a
-higher - offered load yield different loss ratios, the classical
-<xref target="RFC1242"></xref> / <xref target="RFC2544"></xref>
-throughput metric can no longer be pinned to a single, unambiguous
-value.</t>
-</list></t>
-
-<t>To address these problems,
-early MLRsearch implementations employed the following enhancements:</t>
-
-<t><list style="numbers" type="1">
- <t>Allow multiple short trials instead of one big trial per load.
- <list style="symbols">
- <t>Optionally, tolerate a percentage of trial results with higher loss.</t>
- </list></t>
- <t>Allow searching for multiple Search Goals, with differing loss ratios.
- <list style="symbols">
- <t>Any trial result can affect each Search Goal in principle.</t>
- </list></t>
- <t>Insert multiple coarse targets for each Search Goal; earlier ones need
-to spend less time on trials.
- <list style="symbols">
- <t>Earlier targets also aim for lesser precision.</t>
- <t>Use Forwarding Rate (FR) at Maximum Offered Load (FRMOL), as defined
-in Section 3.6.2 of <xref target="RFC2285"></xref>, to initialize bounds.</t>
- </list></t>
- <t>Be careful when dealing with inconsistent trial results.
- <list style="symbols">
- <t>Reported throughput is smaller than the smallest load with high loss.</t>
- <t>Smaller load candidates are measured first.</t>
- </list></t>
- <t>Apply several time-saving load selection heuristics that deliberately
-prevent the bounds from narrowing unnecessarily.</t>
-</list></t>
-
-<t>Enhancements 1, 2 and partly 4 are formalized as the MLRsearch Specification
-within this document; other implementation details are out of scope.</t>
-
-<t>The remaining enhancements are treated as implementation details,
-thus achieving high comparability without limiting future improvements.</t>
-
-<t>MLRsearch configuration
-supports both conservative settings and aggressive settings.
-Conservative enough settings lead to results
-unconditionally compliant with <xref target="RFC2544"></xref>,
-but without much improvement on search duration and repeatability - see
-<xref target="mlrsearch-compliant-with-rfc-2544">MLRsearch Compliant with RFC 2544</xref>.
-Conversely, aggressive settings lead to shorter search durations
-and better repeatability, but the results are not compliant with <xref target="RFC2544"></xref>.
-Exact settings are not specified, but see the discussion in
-<xref target="overview-of-rfc-2544-problems">Overview of RFC 2544 Problems</xref>
-for the impact of different settings on result quality.</t>
-
-<t>This document does not change or obsolete any part of <xref target="RFC2544"></xref>.</t>
-
-</section>
-<section anchor="positioning-within-bmwg-methodologies"><name>Positioning within BMWG Methodologies</name>
-
-<t>The Benchmarking Methodology Working Group (BMWG) produces recommendations (RFCs)
-that describe various benchmarking methodologies for use in a controlled laboratory environment.
-A large number of these benchmarks are based on the terminology from <xref target="RFC1242"></xref>
-and the foundational methodology from <xref target="RFC2544"></xref>.
-A common pattern has emerged where BMWG documents reference the methodology of <xref target="RFC2544"></xref>
-and augment it with specific requirements for testing particular network systems or protocols,
-without modifying the core benchmark definitions.</t>
-
-<t>While BMWG documents are formally recommendations,
-they are widely treated as industry norms to ensure the comparability of results between different labs.
-The set of benchmarks defined in <xref target="RFC2544"></xref>, in particular,
-became a de facto standard for performance testing.
-In this context, the MLRsearch Specification formally defines a new
-class of benchmarks that fits within the wider <xref target="RFC2544"></xref> framework
-(see <xref target="scope">Scope </xref>).</t>
-
-<t>A primary consideration in the design of MLRsearch is the trade-off
-between configurability and comparability. The methodology's flexibility,
-especially the ability to define various sets of Search Goals,
-supporting both single-goal and multiple-goal benchmarks in a unified way,
-is powerful for detailed characterization and internal testing.
-However, this same flexibility is detrimental to inter-lab comparability
-unless a specific, common set of Search Goals is agreed upon.</t>
-
-<t>Therefore, MLRsearch should be seen as neither a direct extension of
-nor a replacement for the <xref target="RFC2544"></xref> Throughput benchmark.
-Instead, this document provides a foundational methodology
-that future BMWG documents can use to define new, specific, and comparable benchmarks
-by mandating particular Search Goal configurations.
-For operators of existing test procedures, it is worth noting
-that many test setups measuring <xref target="RFC2544"></xref> Throughput
-can be adapted to produce results compliant with the MLRsearch Specification,
-often without affecting Trials,
-merely by augmenting the content of the final test report.</t>
-
-</section>
-</section>
-<section anchor="overview-of-rfc-2544-problems"><name>Overview of RFC 2544 Problems</name>
-
-<t>This section describes the problems affecting the usability
-of various performance testing methodologies,
-mainly binary search for throughput unconditionally compliant with <xref target="RFC2544"></xref>.</t>
-
-<section anchor="long-search-duration"><name>Long Search Duration</name>
-
-<t>The proliferation of software DUTs, with frequent software updates
-and a number of different frame processing modes and configurations,
-has increased both the number of performance tests
-required to verify the DUT update and the frequency of running those tests.
-This makes the overall test execution time even more important than before.</t>
-
-<t>The throughput definition per <xref target="RFC2544"></xref> restricts the potential
-for time-efficiency improvements.
-The bisection method, when used in a manner unconditionally compliant
-with <xref target="RFC2544"></xref>, is excessively slow due to two main factors.</t>
-
-<t>Firstly, a significant amount of time is spent on trials
-with loads that, in retrospect, are far from the final determined throughput.</t>
-
-<t>Secondly, <xref target="RFC2544"></xref> does not specify any stopping condition for
-throughput search, so users of testing equipment implementing the
-procedure already have access to a limited trade-off
-between search duration and achieved precision.
-However, each of the full 60-second trials doubles the precision.</t>
-
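-<t>As a back-of-the-envelope illustration of the scale involved: since each
-bisection step halves the uncertainty interval, the number of full-length
-trials grows logarithmically with the ratio of search range to desired
-precision. The concrete numbers below are illustrative assumptions, not
-requirements.</t>

```python
import math

def trials_needed(search_range, precision):
    """Each bisection trial halves the uncertainty interval, so the
    trial count grows with log2(range / precision)."""
    return math.ceil(math.log2(search_range / precision))

# Example: searching a 10 Gbps range down to 1 Mbps precision
# takes 14 trials, i.e. 14 minutes of 60-second trial time alone.
count = trials_needed(10e9, 1e6)
```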
-<t>As such, not many trials can be removed without a substantial loss of precision.</t>
-
-<t>For reference, here is a brief reminder of the <xref target="RFC2544"></xref> throughput
-binary search (bisection), based on Sections 24 and 26 of <xref target="RFC2544"></xref>:</t>
-
-<t><list style="symbols">
- <t>Set Max = line-rate and Min = a proven loss-free load.</t>
- <t>Run a single 60-s trial at the midpoint.</t>
- <t>Zero loss -> midpoint becomes the new Min; any loss -> the new Max.</t>
- <t>Repeat until the Max-Min gap meets the desired precision, then report
-the highest zero-loss rate for every mandatory frame size.</t>
-</list></t>
-
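-<t>The steps above can be sketched in code as follows. This is an informal
-illustration only; the names and the simulated loss-free threshold are
-assumptions of this sketch, not part of any specification.</t>

```python
def rfc2544_bisection(run_trial, line_rate, min_rate, precision):
    """Plain bisection per the reminder above.

    run_trial(load) -> frames lost during one (conceptual) 60 s trial.
    Returns the highest load observed to be loss-free."""
    low, high = min_rate, line_rate  # Min = proven loss-free, Max = line rate
    while high - low > precision:
        mid = (low + high) / 2.0
        if run_trial(mid) == 0:
            low = mid    # zero loss: midpoint becomes the new Min
        else:
            high = mid   # any loss: midpoint becomes the new Max
    return low

# Illustrative noiseless DUT that starts dropping frames above 7.3 Mpps:
throughput = rfc2544_bisection(
    run_trial=lambda load: 0 if load <= 7.3e6 else 1,
    line_rate=14.88e6, min_rate=0.0, precision=1e3)
```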
-</section>
-<section anchor="dut-in-sut"><name>DUT in SUT</name>
-
-<t><xref target="RFC2285"></xref> defines:</t>
-
-<t>DUT as:</t>
-
-<t><list style="symbols">
- <t>The network frame forwarding device to which stimulus is offered and
-response measured (Section 3.1.1 of <xref target="RFC2285"></xref>).</t>
-</list></t>
-
-<t>SUT as:</t>
-
-<t><list style="symbols">
- <t>The collective set of network devices as a single entity to which
-stimulus is offered and response measured (Section 3.1.2 of <xref target="RFC2285"></xref>).</t>
-</list></t>
-
-<t>Section 19 of <xref target="RFC2544"></xref> specifies a test setup with an external tester
-stimulating the networking system, treating it either as a single
-Device Under Test (DUT), or as a system of devices, a System Under
-Test (SUT).</t>
-
-<t>For software-based data-plane forwarding running on commodity x86/ARM
-CPUs, the SUT comprises not only the forwarding application itself, the
-DUT, but the entire execution environment: host hardware, firmware and
-kernel/hypervisor services, as well as any other software workloads
-that share the same CPUs, memory and I/O resources.</t>
-
-<t>Given that a SUT is a shared multi-tenant environment,
-the DUT might inadvertently
-experience interference from the operating system
-or from other software operating on the same server.</t>
-
-<t>Some of this interference can be mitigated.
-For instance, in multi-core CPU systems, pinning DUT program threads to
-specific CPU cores
-and isolating those cores can prevent context switching.</t>
-
-<t>Despite taking all feasible precautions, some adverse effects may still impact
-the DUT's network performance.
-In this document, these effects are collectively
-referred to as SUT noise, even if the effects are not as unpredictable
-as what other engineering disciplines call noise.</t>
-
-<t>A DUT can also exhibit fluctuating performance itself,
-for reasons not related to the rest of SUT. For example, this can be
-due to pauses in execution as needed for internal stateful processing.
-In many cases this may be an expected per-design behavior,
-as it would be observable even in a hypothetical scenario
-where all sources of SUT noise are eliminated.
-Such behavior affects trial results in a way similar to SUT noise.
-As the two phenomena are hard to distinguish,
-in this document the term 'noise' is used to encompass
-both the internal performance fluctuations of the DUT
-and the genuine noise of the SUT.</t>
-
-<t>A simple model of SUT performance consists of an idealized noiseless performance,
-and additional noise effects.
-For a specific SUT, the noiseless performance is assumed to be constant,
-with all observed performance variations being attributed to noise.
-The impact of the noise can vary in time, sometimes wildly,
-even within a single trial.
-The noise can sometimes be negligible, but frequently
-it lowers the SUT performance as observed in trial results.</t>
-
-<t>In this simple model, a SUT does not have a single performance value, it has a spectrum.
-One end of the spectrum is the idealized noiseless performance value,
-the other end can be called a noiseful performance.
-In practice, trial results close to the noiseful end of the spectrum
-happen only rarely.
-The worse a possible performance value is, the more rarely it is seen in a trial.
-Therefore, the extreme noiseful end of the SUT spectrum is not observable
-among trial results.</t>
-
-<t>Furthermore, the extreme noiseless end of the SUT spectrum is unlikely
-to be observable, this time because minor noise events almost always
-occur during each trial, nudging the measured performance slightly
-below the theoretical maximum.</t>
-
-<t>Unless specified otherwise, this document's focus is
-on the potentially observable ends of the SUT performance spectrum,
-as opposed to the extreme ones.</t>
-
-<t>When focusing on the DUT, the benchmarking effort should ideally aim
-to eliminate only the SUT noise from SUT measurements.
-However, this is currently not feasible in practice,
-as there are no realistic enough models that would be capable
-to distinguish SUT noise from DUT fluctuations
-(based on the available literature at the time of writing).</t>
-
-<t>Provided the SUT execution environment and any co-resident workloads place
-only negligible demands on SUT shared resources, so that
-the DUT remains the principal performance limiter,
-the DUT's ideal noiseless performance is defined
-as the noiseless end of the SUT performance spectrum.</t>
-
-<t>Note that by this definition, DUT noiseless performance
-also minimizes the impact of DUT fluctuations, as much as realistically possible
-for a given trial duration.</t>
-
-<t>The MLRsearch methodology aims to solve the DUT in SUT problem
-by estimating the noiseless end of the SUT performance spectrum
-using a limited number of trial results.</t>
-
-<t>Improvements to the throughput search algorithm, aimed at better dealing
-with software networking SUT and DUT setups, should adopt methods that
-explicitly model SUT-generated noise, enabling the derivation of surrogate
-metrics (proxies for DUT noiseless performance)
-across a range of SUT noise-tolerance levels.</t>
-
-</section>
-<section anchor="repeatability-and-comparability"><name>Repeatability and Comparability</name>
-
-<t><xref target="RFC2544"></xref> does not suggest repeating the throughput search. Also, note that
-from a single discovered throughput value,
-it cannot be determined how repeatable that value is.
-Unsatisfactory repeatability then leads to unacceptable comparability,
-as different benchmarking teams may obtain varying throughput values
-for the same SUT, exceeding the expected differences from search precision.
-Repeatability is important also when the test procedure is kept the same,
-but SUT is varied in small ways. For example, during development
-of software-based DUTs, repeatability is needed to detect small regressions.</t>
-
-<t><xref target="RFC2544"></xref> throughput requirements (60-second trials and
-no tolerance of even a single frame loss) affect the throughput result as follows:</t>
-
-<t>The SUT behavior close to the noiseful end of its performance spectrum
-consists of rare occasions of significantly low performance,
-but the long trial duration makes those occasions not so rare on the trial level.
-Therefore, the binary search results tend to spread away from the noiseless end
-of SUT performance spectrum, more frequently and more widely than shorter
-trials would, thus causing unacceptable throughput repeatability.</t>
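-<t>A toy probability model illustrates this effect; the per-second event
-probability below is an assumption chosen only for illustration. If a rare
-noise event independently strikes any given second with probability 0.01,
-a 1-second trial is affected 1% of the time, while a 60-second trial is
-affected roughly 45% of the time.</t>

```python
def p_trial_affected(duration_s, p_event_per_s=0.01):
    """Probability that at least one noise event occurs during a trial,
    assuming independent events with a fixed per-second probability."""
    return 1.0 - (1.0 - p_event_per_s) ** duration_s

p_short = p_trial_affected(1)    # 0.01
p_long = p_trial_affected(60)    # roughly 0.45
```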
-
-<t>The repeatability problem can be better addressed by defining a search procedure
-that identifies a consistent level of performance,
-even if it does not meet the strict definition of throughput in <xref target="RFC2544"></xref>.</t>
-
-<t>According to the SUT performance spectrum model, better repeatability
-will be at the noiseless end of the spectrum.
-Therefore, solutions to the DUT in SUT problem
-will help also with the repeatability problem.</t>
-
-<t>Conversely, any alteration to <xref target="RFC2544"></xref> throughput search
-that improves repeatability should be considered
-as less dependent on the SUT noise.</t>
-
-<t>An alternative option is to simply run a search multiple times, and
-report some statistics (e.g., average and standard deviation, and/or
-percentiles like p95).</t>
-
-<t>This can be used for a subset of tests deemed more important,
-but it makes the search duration problem even more pronounced.</t>
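-<t>Such reporting can be sketched as follows; the nearest-rank percentile
-convention below is one of several possible choices and is an assumption of
-this example.</t>

```python
import statistics

def summarize(results):
    """Summarize repeated throughput search results."""
    ordered = sorted(results)
    # Nearest-rank style p95; other percentile conventions exist.
    p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]
    return {"mean": statistics.mean(ordered),
            "stdev": statistics.stdev(ordered),
            "p95": p95}

summary = summarize([9.9e6, 10.1e6, 10.0e6, 9.8e6, 10.2e6])
```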
-
-</section>
-<section anchor="throughput-with-non-zero-loss"><name>Throughput with Non-Zero Loss</name>
-
-<dl>
- <dt>Section 3.17 of <xref target="RFC1242"></xref> defines throughput as:</dt>
- <dd>
- <t>The maximum rate at which none of the offered frames
-are dropped by the device.</t>
- </dd>
- <dt>Then, it says:</dt>
- <dd>
- <t>Since even the loss of one frame in a
-data stream can cause significant delays while
-waiting for the higher-level protocols to time out,
-it is useful to know the actual maximum data
-rate that the device can support.</t>
- </dd>
-</dl>
-
-<t>However, many benchmarking teams accept a low,
-non-zero loss ratio as the goal for their load search.</t>
-
-<t>Motivations are many:</t>
-
-<t><list style="symbols">
- <t>Networking protocols tolerate frame loss better,
-compared to the time when <xref target="RFC1242"></xref> and <xref target="RFC2544"></xref> were specified.</t>
- <t>Increased link speeds require trials sending more frames within the same duration,
-increasing the chance of a small SUT performance fluctuation
-being enough to cause frame loss.</t>
- <t>Because noise-related drops usually arrive in small bursts, their
-impact on the trial's overall frame loss ratio is diluted by the
-longer intervals in which the SUT operates close to its noiseless
-performance; consequently, the averaged Trial Loss Ratio can still
-end up below the specified Goal Loss Ratio value.</t>
- <t>If an approximation of the SUT noise impact on the Trial Loss Ratio is known,
-it can be set as the Goal Loss Ratio (see definitions of
-Trial and Goal terms in <xref target="trial-terms">Trial Terms</xref> and <xref target="goal-terms">Goal Terms</xref>).</t>
- <t>For more information, see an earlier draft <xref target="Lencze-Shima"></xref> (Section 5)
-and references there.</t>
-</list></t>
-
-<t>Regardless of the validity of all similar motivations,
-support for non-zero loss goals makes a
-search algorithm more user-friendly.
-<xref target="RFC2544"></xref> throughput is not user-friendly in this regard.</t>
-
-<t>Furthermore, allowing users to specify multiple loss ratio values,
-and enabling a single search to find all relevant bounds,
-significantly enhances the usefulness of the search algorithm.</t>
-
-<t>Searching for multiple Search Goals also helps to describe the SUT performance
-spectrum better than the result of a single Search Goal.
-For example, the repeated wide gap between zero and non-zero loss loads
-indicates the noise has a large impact on the observed performance,
-which is not evident from a single goal load search procedure result.</t>
-
-<t>It is easy to modify the vanilla bisection to find a lower bound
-for the load that satisfies a non-zero Goal Loss Ratio.
-But it is not that obvious how to search for multiple goals at once,
-hence the support for multiple Search Goals remains a problem.</t>
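-<t>The single-goal modification is straightforward: the pass criterion of the
-bisection compares the Trial Loss Ratio against the Goal Loss Ratio instead
-of requiring zero loss. The sketch below is informal; the names and the
-simulated loss model are assumptions of this example.</t>

```python
def bisect_with_loss_goal(run_trial, line_rate, min_rate, precision,
                          goal_loss_ratio):
    """Vanilla bisection generalized to a non-zero Goal Loss Ratio:
    a load passes when its Trial Loss Ratio is at or below the goal."""
    low, high = min_rate, line_rate
    while high - low > precision:
        mid = (low + high) / 2.0
        if run_trial(mid) <= goal_loss_ratio:
            low = mid
        else:
            high = mid
    return low  # a lower bound for the load satisfying the goal

# Illustrative DUT whose loss ratio rises once load exceeds 8 Mpps:
def model_trial(load):
    return 0.0 if load <= 8e6 else (load - 8e6) / load

bound = bisect_with_loss_goal(model_trial, line_rate=14.88e6,
                              min_rate=0.0, precision=1e3,
                              goal_loss_ratio=0.005)
```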
-
-<t>At the time of writing there does not seem to be a consensus in the industry
-on which ratio value is the best.
-For users, performance of higher protocol layers is important, for
-example, goodput of TCP connection (TCP throughput, <xref target="RFC6349"></xref>), but relationship
-between goodput and loss ratio is not simple. Refer to
-<xref target="Lencze-Kovacs-Shima"></xref> for examples of various corner cases,
-Section 3 of <xref target="RFC6349"></xref> for loss ratios acceptable for an accurate
-measurement of TCP throughput, and <xref target="Ott-Mathis-Semke-Mahdavi"></xref> for
-models and calculations of TCP performance in presence of packet loss.</t>
-
-</section>
-<section anchor="inconsistent-trial-results"><name>Inconsistent Trial Results</name>
-
-<t>While performing throughput search by executing a sequence of
-measurement trials, there is a risk of encountering inconsistencies
-between trial results.</t>
-
-<t>Examples include, but are not limited to:</t>
-
-<t><list style="symbols">
- <t>A trial at the same load (same or different trial duration) results
-in a different Trial Loss Ratio.</t>
- <t>A trial at a larger load (same or different trial duration) results
-in a lower Trial Loss Ratio.</t>
-</list></t>
-
-<t>The plain bisection never encounters inconsistent trials.
-But <xref target="RFC2544"></xref> hints at the possibility of inconsistent trial results
-in two places in its text.
-The first place is Section 24 of <xref target="RFC2544"></xref>,
-where full trial durations are required,
-presumably because they can be inconsistent with the results
-from short trial durations.
-The second place is Section 26.3 of <xref target="RFC2544"></xref>,
-where two successive zero-loss trials
-are recommended, presumably because after one zero-loss trial
-there can be a subsequent inconsistent non-zero-loss trial.</t>
-
-<t>A robust throughput search algorithm needs to decide how to continue
-the search in the presence of such inconsistencies.
-Definitions of throughput in <xref target="RFC1242"></xref> and <xref target="RFC2544"></xref> are not specific enough
-to imply a unique way of handling such inconsistencies.</t>
-
-<t>Ideally, there would be a definition of a new quantity that both generalizes
-throughput for non-zero Goal Loss Ratio values
-(and other possible repeatability enhancements) and is precise enough
-to force a specific way to resolve trial result inconsistencies.
-But until such a definition is agreed upon, the correct way to handle
-inconsistent trial results remains an open problem.</t>
-
-<t>Relevant Lower Bound is the MLRsearch term that addresses this problem.</t>
-
-</section>
-</section>
-<section anchor="requirements-language"><name>Requirements Language</name>
-
-<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
-"SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL"
-in this document are to be interpreted as described in BCP 14, <xref target="RFC2119"></xref>
-and <xref target="RFC8174"></xref> when, and only when, they appear in all capitals, as shown here.</t>
-
-<t>This document is categorized as an Informational RFC.
-While it does not mandate the adoption of the MLRsearch methodology,
-it uses the normative language of BCP 14 to provide an unambiguous specification.
-This ensures that if a test procedure or test report claims compliance with the MLRsearch Specification,
-it MUST adhere to all the absolute requirements defined herein.
-The use of normative language is intended to promote repeatable and comparable results
-among those who choose to implement this methodology.</t>
-
-</section>
-<section anchor="mlrsearch-specification"><name>MLRsearch Specification</name>
-
-<t>This chapter provides all technical definitions
-needed for evaluating whether a particular test procedure
-complies with MLRsearch Specification.</t>
-
-<t>Some terms used in the specification are capitalized.
-It is just a stylistic choice for this document,
-reminding the reader that the term is introduced, defined, or explained
-elsewhere in the document. Lower-case variants are equally valid.</t>
-
-<t>This document does not separate terminology from methodology. Terms are
-fully specified and discussed in their own subsections, under sections
-titled "Terms". This way, the list of terms is visible in the table of
-contents.</t>
-
-<t>Each per term subsection contains a short <em>Definition</em> paragraph
-containing a minimal definition and all strict requirements, followed
-by <em>Discussion</em> paragraphs focusing on important consequences and
-recommendations. Requirements about how other components can use the
-defined quantity are also included in the discussion.</t>
-
-<section anchor="scope"><name>Scope</name>
-
-<t>This document specifies the Multiple Loss Ratio search (MLRsearch) methodology.
-The MLRsearch Specification details a new class of benchmarks
-by listing all terminology definitions and methodology requirements.
-The definitions support "multi-goal" benchmarks, with "single-goal" as a subset.</t>
-
-<t>The normative scope of this specification includes:</t>
-
-<t><list style="symbols">
- <t>The terminology for all required quantities and their attributes.</t>
- <t>An abstract architecture consisting of functional components
-(Manager, Controller, Measurer) and the requirements for their inputs and outputs.</t>
- <t>The required structure and attributes of the Controller Input,
-including one or more Search Goal instances.</t>
- <t>The required logic for Load Classification, which determines whether a given Trial Load
-qualifies as a Lower Bound or an Upper Bound for a Search Goal.</t>
- <t>The required structure and attributes of the Controller Output,
-including a Goal Result for each Search Goal.</t>
-</list></t>
-
-<section anchor="relationship-to-rfc-2544"><name>Relationship to RFC 2544</name>
-
-<t>MLRsearch Specification is an independent methodology
-and does not change or obsolete any part of <xref target="RFC2544"></xref>.</t>
-
-<t>This specification permits deviations from the Trial procedure
-as described in <xref target="RFC2544"></xref>. Any deviation from the <xref target="RFC2544"></xref> procedure
-must be documented explicitly in the Test Report,
-and such variations remain outside the scope of the original <xref target="RFC2544"></xref> benchmarks.</t>
-
-<t>A specific single-goal MLRsearch benchmark can be configured
-to be compliant with <xref target="RFC2544"></xref> Throughput,
-and most procedures reporting <xref target="RFC2544"></xref> Throughput
-can be adapted to also satisfy the MLRsearch requirements for a specific Search Goal.</t>
-
-</section>
-<section anchor="applicability-of-other-specifications"><name>Applicability of Other Specifications</name>
-
-<t>Methodology extensions from other BMWG documents that specify details
-for testing particular DUTs, configurations, or protocols
-(e.g., by defining a particular Traffic Profile) are considered orthogonal
-to MLRsearch and are applicable to a benchmark conducted using MLRsearch methodology.</t>
-
-</section>
-<section anchor="out-of-scope"><name>Out of Scope</name>
-
-<t>The following aspects are explicitly out of the normative scope of this document:</t>
-
-<t><list style="symbols">
- <t>This specification does not mandate or recommend any single,
-universal Search Goal configuration for all use cases.
-The selection of Search Goal parameters is left
-to the operator of the test procedure or may be defined by future specifications.</t>
- <t>The internal heuristics or algorithms used by the Controller to select Trial Input values
-(e.g., the load selection strategy) are considered implementation details.</t>
- <t>The potential for, and the effects of, interference between different Search Goal instances
-within a multiple-goal search are considered outside the normative scope of this specification.</t>
-</list></t>
-
-</section>
-</section>
-<section anchor="architecture-overview"><name>Architecture Overview</name>
-
-<t>Although the normative text references only terminology that has already
-been introduced, explanatory passages beside it sometimes benefit from
-terms that are defined later in the document. To keep the initial
-read-through clear, this informative section offers a concise, top-down
-sketch of the complete MLRsearch architecture.</t>
-
-<t>The architecture is modelled as a set of abstract, interacting
-components. Information exchange between components is expressed in an
-imperative-programming style: one component "calls" another, supplying
-inputs (arguments) and receiving outputs (return values). This notation
-is purely conceptual; actual implementations need not exchange explicit
-messages. When the text contrasts alternative behaviors, it refers to
-different implementations of the same component.</t>
-
-<t>A test procedure is considered compliant with the MLRsearch
-Specification if it can be conceptually decomposed into the abstract
-components defined herein, and each component satisfies the
-requirements defined for its corresponding MLRsearch element.</t>
-
-<t>The Measurer component is tasked to perform Trials,
-the Controller component is tasked to select Trial Durations and Loads,
-the Manager component is tasked to pre-configure involved entities
-and to produce the Test Report.
-The Test Report explicitly states Search Goals (as Controller Input)
-and corresponding Goal Results (Controller Output).</t>
-
-<t>This constitutes one benchmark (single-goal or multi-goal).
-Repeated or slightly differing benchmarks are realized
-by calling Controller once for each benchmark.</t>
-
-<t>The Manager calls a Controller once,
-and the Controller then invokes the Measurer repeatedly,
-until the Controller decides it has enough information to return its outputs.</t>
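-<t>This calling pattern can be sketched as follows. The sketch is purely
-conceptual; the class and method names are assumptions of this example,
-since the specification is abstract and mandates no particular API.</t>

```python
class Measurer:
    """Performs Trials against the SUT (here: a loss-free stand-in)."""
    def measure(self, load, duration):
        return 0.0  # Trial Loss Ratio

class Controller:
    """Selects Trial Loads and Durations until it has enough information."""
    def __init__(self, measurer, search_goals):
        self.measurer = measurer
        self.search_goals = search_goals

    def search(self):
        results = {}
        for goal in self.search_goals:
            # A real Controller selects loads adaptively; one fixed
            # Trial per goal stands in for the whole Search here.
            ratio = self.measurer.measure(load=1e6,
                                          duration=goal["final_duration"])
            results[goal["name"]] = {"trial_loss_ratio": ratio}
        return results

class Manager:
    """Pre-configures entities, calls the Controller once, and reports."""
    def run(self, search_goals):
        controller = Controller(Measurer(), search_goals)
        return {"search_goals": search_goals,
                "goal_results": controller.search()}

report = Manager().run([{"name": "zero-loss", "final_duration": 60}])
```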
-
-<t>The part during which the Controller invokes the Measurer is termed the
-Search. Any work the Manager performs, either before invoking the
-Controller or after the Controller returns, falls outside the scope of the
-Search.</t>
-
-<t>MLRsearch Specification prescribes Regular Search Results and recommends
-corresponding search completion conditions.</t>
-
-<t>Irregular Search Results are also allowed;
-they have different requirements, and their corresponding stopping conditions are out of scope.</t>
-
-<t>Search Results are based on Load Classification. Once measured
-sufficiently, a chosen Load either achieves or fails each Search Goal
-(separately), thus becoming a Lower Bound or an Upper Bound for that
-Search Goal.</t>
-
-<t>When the Relevant Lower Bound is close enough to the Relevant Upper Bound
-according to the Goal Width, the Regular Goal Result is found.
-Search stops when all Regular Goal Results are found,
-or when some Search Goals are proven to have only Irregular Goal Results.</t>
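-<t>A deliberately reduced sketch of Load Classification follows; the real
-classification logic uses more attributes (for example, duration sums), so
-the function below is an illustrative assumption only.</t>

```python
def classify_load(trial_loss_ratios, goal_loss_ratio, goal_exceed_ratio):
    """Classify one load for one goal from its trial results: the load
    achieves the goal when the fraction of trials whose loss ratio
    exceeds the Goal Loss Ratio stays within the Goal Exceed Ratio."""
    bad = sum(1 for ratio in trial_loss_ratios if ratio > goal_loss_ratio)
    achieved = bad / len(trial_loss_ratios) <= goal_exceed_ratio
    return "lower bound" if achieved else "upper bound"
```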
-
-<section anchor="test-report"><name>Test Report</name>
-
-<t>A primary responsibility of the Manager is to produce a Test Report,
-which serves as the final and formal output of the test procedure.</t>
-
-<t>This document does not provide a single, complete, normative definition
-for the structure of the Test Report. For example, a Test Report may contain
-results for a single benchmark, or it could aggregate results of many benchmarks.</t>
-
-<t>Instead, normative requirements for the content of the Test Report
-are specified throughout this document in conjunction
-with the definitions of the quantities and procedures to which they apply.
-Readers should note that any clause requiring a value to be "reported"
-or "stated in the test report" constitutes a normative requirement
-on the content of this final artifact.</t>
-
-<t>Even where not stated explicitly, the "Reporting format"
-paragraphs in <xref target="RFC2544"></xref> sections are still requirements on the Test Report
-if they apply to an MLRsearch benchmark.</t>
-
-</section>
-<section anchor="behavior-correctness"><name>Behavior Correctness</name>
-
-<t>MLRsearch Specification by itself does not guarantee that
-the Search ends in finite time, as the freedom the Controller has
-for Load selection also allows for clearly deficient choices.</t>
-
-<t>For deeper insights on these matters, refer to <xref target="FDio-CSIT-MLRsearch"></xref>.</t>
-
-<t>The primary MLRsearch implementation, used as the prototype
-for this specification, is <xref target="PyPI-MLRsearch"></xref>.</t>
-
-</section>
-</section>
-<section anchor="quantities"><name>Quantities</name>
-
-<t>MLRsearch Specification
-uses a number of specific quantities;
-some of them can be expressed in several different units.</t>
-
-<t>In general, MLRsearch Specification does not require particular units to be used,
-but it is REQUIRED for the test report to state all the units.
-For example, ratio quantities can be dimensionless numbers between zero and one,
-but may be expressed as percentages instead.</t>
-
-<t>For convenience, a group of quantities can be treated as a composite quantity.
-One constituent of a composite quantity is called an attribute.
-A group of attribute values is called an instance of that composite quantity.</t>
-
-<t>Some attributes may depend on others and can be calculated from other
-attributes. Such quantities are called derived quantities.</t>
-
-<section anchor="current-and-final-values"><name>Current and Final Values</name>
-
-<t>Some quantities are defined in a way that makes it possible to compute their
-values in the middle of a Search. Other quantities are specified so
-that their values can be computed only after a Search ends. Some
-quantities are important only after a Search ended, but their values
-are computable also before a Search ends.</t>
-
-<t>For a quantity that is computable before a Search ends,
-the adjective <strong>current</strong> is used to mark a value of that quantity
-available before the Search ends.
-When such a value is relevant for the search result, the adjective <strong>final</strong>
-is used to denote the value of that quantity at the end of the Search.</t>
-
-<t>If a time evolution of such a dynamic quantity is guided by
-configuration quantities, those adjectives can be used to distinguish
-quantities. For example, if the current value of "duration"
-(dynamic quantity) increases from "initial duration" to "final
-duration" (configuration quantities), all the quoted names denote
-separate but related quantities. As the naming suggests, the final
-value of "duration" is expected to be equal to "final duration" value.</t>
-
-</section>
-</section>
-<section anchor="existing-terms"><name>Existing Terms</name>
-
-<t>This specification relies on the following three documents that should
-be consulted before attempting to make use of this document:</t>
-
-<t><list style="symbols">
- <t>"Benchmarking Terminology for Network Interconnect Devices" <xref target="RFC1242"></xref>
-contains basic term definitions.</t>
- <t>"Benchmarking Terminology for LAN Switching Devices" <xref target="RFC2285"></xref> adds
-more terms and discussions, describing some known network
-benchmarking situations in a more precise way.</t>
- <t>"Benchmarking Methodology for Network Interconnect Devices"
- <xref target="RFC2544"></xref> contains discussions about terms and additional
- methodology requirements.</t>
-</list></t>
-
-<t>Definitions of some central terms from above documents are copied and
-discussed in the following subsections.</t>
-
-<section anchor="sut"><name>SUT</name>
-
-<t>Defined in Section 3.1.2 of <xref target="RFC2285"></xref> as follows.</t>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The collective set of network devices to which stimulus is offered
-as a single entity and response measured.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>An SUT consisting of a single network device is allowed by this definition.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>In software-based networking, an SUT may comprise a multitude of
-networking applications and the entire host hardware and software
-execution environment.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>The SUT is the only entity that can be benchmarked directly,
-even though only the performance of some of its sub-components is of interest.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="dut"><name>DUT</name>
-
-<t>Defined in Section 3.1.1 of <xref target="RFC2285"></xref> as follows.</t>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The network forwarding device
-to which stimulus is offered and response measured.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
-    <t>Contrary to the SUT, the DUT stimulus and response are frequently
-initiated and observed only indirectly, on different parts of the SUT.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>The DUT, as a sub-component of the SUT, is only indirectly mentioned in
-the MLRsearch Specification, but is of key relevance for its motivation.
-The device can represent software-based networking functions running
-on commodity x86/ARM CPUs (vs purpose-built ASIC / NPU / FPGA hardware).</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>A well-designed SUT should have the primary DUT as its performance bottleneck.
-The ways to achieve that are outside the scope of the MLRsearch Specification.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="trial"><name>Trial</name>
-
-<t>A trial is the part of the test described in Section 23 of <xref target="RFC2544"></xref>.</t>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>A particular test consists of multiple trials. Each trial returns
-one piece of information, for example the loss rate at a particular
-input frame rate. Each trial consists of a number of phases:</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>a) If the DUT is a router, send the routing update to the "input"
-port and pause two seconds to be sure that the routing has settled.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>b) Send the "learning frames" to the "output" port and wait 2
-seconds to be sure that the learning has settled. Bridge learning
-frames are frames with source addresses that are the same as the
-destination addresses used by the test frames. Learning frames for
-other protocols are used to prime the address resolution tables in
-the DUT. The formats of the learning frame that should be used are
-shown in the Test Frame Formats document.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>c) Run the test trial.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>d) Wait for two seconds for any residual frames to be received.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>e) Wait for at least five seconds for the DUT to restabilize.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The traffic is sent only in phase c) and received in phases c) and d).</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Trials are the only stimuli the SUT is expected to experience during the Search.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>In some discussion paragraphs, it is useful to consider the traffic
-as sent and received by a tester, as implicitly defined
-in Section 6 of <xref target="RFC2544"></xref>.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>The definition describes some traits without using capitalized key words
-to signify the strength of the requirements.
-For the purposes of the MLRsearch Specification,
-the test procedure MAY deviate from the <xref target="RFC2544"></xref> description,
-but any such deviation MUST be described explicitly in the Test Report.
-It is still RECOMMENDED to not deviate from the description,
-as any deviation weakens comparability.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>An example of deviation from <xref target="RFC2544"></xref> is using shorter wait times,
-compared to those described in phases a), b), d) and e).</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>The <xref target="RFC2544"></xref> document itself appears to treat phase b)
-as covering any type of configuration that cannot be applied only once (by the Manager,
-before the Search starts), because some crucial SUT state could time out during the Search.
-It is RECOMMENDED to interpret the "learning frames" as
-any such time-sensitive per-trial configuration method,
-with bridge MAC learning being only one possible example.
-Appendix C.2.4.1 of <xref target="RFC2544"></xref> lists another example: ARP with a wait time of 5 seconds.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Some methodologies describe recurring tests.
-If those are based on Trials, they are treated as multiple independent Trials.</t>
- </dd>
-</dl>
-
-</section>
-</section>
-<section anchor="trial-terms"><name>Trial Terms</name>
-
-<t>This section defines new terms, and redefines some existing terms, for quantities
-relevant as inputs or outputs of a Trial, as used by the Measurer component.
-This also includes any derived quantities related to the results of one Trial.</t>
-
-<section anchor="trial-duration"><name>Trial Duration</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Trial Duration is the intended duration of the phase c) of a Trial.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The value MUST be positive.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>While any positive real value may be provided, some Measurer
-implementations MAY limit the possible values, e.g., by rounding down to
-the nearest integer number of seconds. In that case, it is RECOMMENDED to expose
-such limits to the Controller so that the Controller
-only uses the accepted values.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="trial-load"><name>Trial Load</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Trial Load is the per-interface Intended Load for a Trial.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Trial Load is equivalent to the quantities defined
-as constant load (Section 3.4 of <xref target="RFC1242"></xref>),
-data rate (Section 14 of <xref target="RFC2544"></xref>),
-and Intended Load (Section 3.5.1 of <xref target="RFC2285"></xref>),
-in the sense that all three definitions specify that this value
-applies to one (input or output) interface.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>For specification purposes, it is assumed that this is a constant load by default,
-as specified in Section 3.4 of <xref target="RFC1242"></xref>.
-Informally, Trial Load is a single number that can "scale" any traffic pattern,
-as long as the intuition of load intended against a single interface can be applied.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>It MAY be possible to use a Trial Load value to describe non-constant traffic
-(using the average load when the traffic consists of repeated bursts of frames,
-e.g., as suggested in Section 21 of <xref target="RFC2544"></xref>).
-In the case of a non-constant load, the Test Report
-MUST explicitly describe how exactly the traffic deviates from a constant load
-and how it scales with the Trial Load value.
-But the rest of the MLRsearch Specification assumes that is not the case,
-to avoid discussing corner cases (e.g., which values are possible within medium limitations).</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>Similarly, traffic patterns where different interfaces are subject to different loads
-MAY be described by a single Trial Load value (e.g., using the largest load among interfaces),
-but again the Test Report MUST explicitly describe how the traffic pattern
-scales with the Trial Load value,
-and this specification does not discuss all the implications of that approach.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>In the common case of bidirectional traffic, as described in
-Section 14 (Bidirectional Traffic) of <xref target="RFC2544"></xref>,
-Trial Load is the data rate per direction, i.e., half of the aggregate data rate.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Traffic patterns where a single Trial Load does not describe their scaling
-cannot be used for MLRsearch benchmarks.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Similarly to Trial Duration, some Measurers MAY limit the possible values
-of Trial Load. Contrary to Trial Duration,
-documenting such behavior in the test report is OPTIONAL.
-This is because the load differences are negligible (and frequently
-undocumented) in practice.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>The Controller MAY select Trial Load and Trial Duration values in a way
-that would not be possible to achieve using any integer number of data frames.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>If a particular Trial Load value is not tied to a single Trial,
-e.g., if there are no Trials yet or if there are multiple Trials,
-this document uses the shorthand <strong>Load</strong>.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>The test report MAY present the aggregate load across multiple
-interfaces, treating it as the same quantity expressed using different
-units. Each reported Trial Load value MUST state unambiguously whether
-it refers to (i) a single interface, (ii) a specified subset of
-interfaces (such as all logical interfaces mapped to one physical
-port), or (iii) the total across every interface. For any aggregate
-load value, the report MUST also give the fixed conversion factor that
-links the per-interface and multi-interface load values.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>The per-interface value remains the primary unit, consistent
-with prevailing practice in <xref target="RFC1242"></xref>, <xref target="RFC2544"></xref>, and <xref target="RFC2285"></xref>.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>The last paragraph also applies to other terms related to Load.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>For example, tests with symmetric bidirectional traffic
-can report load-related values as "bidirectional load"
-(double of "unidirectional load").</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="trial-input"><name>Trial Input</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Trial Input is a composite quantity, consisting of two attributes:
-Trial Duration and Trial Load.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>When talking about multiple Trials, it is common to say "Trial Inputs"
-to denote all corresponding Trial Input instances.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>A Trial Input instance acts as the input for one call of the Measurer component.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Contrary to other composite quantities, MLRsearch implementations
-MUST NOT add optional attributes into Trial Input.
-This improves interoperability between various implementations of
-a Controller and a Measurer.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>Note that both attributes are <strong>intended</strong> quantities,
-as only those can be fully controlled by the Controller.
-The actual offered quantities, as realized by the Measurer, can be different
-(and must differ whenever the intended values do not correspond to an integer
-number of frames), but questions around those offered quantities are generally
-outside of the scope of this document.</t>
- </dd>
-</dl>
-
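-<t>As an illustration (not part of the specification), a Trial Input instance
-can be modeled as an immutable composite with exactly the two required attributes.
-All names are illustrative, assuming a Python implementation:</t>
-

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TrialInput:
    """Composite quantity passed as the input of one Measurer call.

    Both attributes are intended quantities.  Per the specification,
    implementations MUST NOT add optional attributes to this composite.
    """

    trial_duration: float  # intended duration of phase c), in seconds
    trial_load: float      # intended per-interface load, in frames per second

    def __post_init__(self) -> None:
        # Trial Duration MUST be positive; a positive Load is assumed here.
        if self.trial_duration <= 0:
            raise ValueError("Trial Duration MUST be positive")
        if self.trial_load <= 0:
            raise ValueError("Trial Load is assumed to be positive")
```

-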
-</section>
-<section anchor="traffic-profile"><name>Traffic Profile</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
-    <t>Traffic Profile is a composite quantity containing
-all attributes other than Trial Load and Trial Duration
-that are needed to uniquely determine the Trial to be performed.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>All the attributes are assumed to be constant during the Search,
-and the composite is configured on the Measurer by the Manager
-before the Search starts.
-This is why the Traffic Profile is not part of the Trial Input.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Specification of traffic properties included in the Traffic Profile is
-the responsibility of the Manager, but the specific configuration mechanisms
-are outside of the scope of this document.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>Informally, implementations of the Manager and the Measurer
-must be aware of their common set of capabilities,
-so that a Traffic Profile instance uniquely defines the traffic during the Search.
-Typically, Manager and Measurer implementations are tightly integrated.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Integration efforts between independent Manager and Measurer implementations
-are outside of the scope of this document.
-An example standardization effort is <xref target="Vassilev"></xref>.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>Examples of traffic properties include:</t>
-    <t><list style="symbols">
-      <t>data link frame size: fixed sizes as listed in Section 3.5 of <xref target="RFC1242"></xref>
-and in Section 9 of <xref target="RFC2544"></xref>,
-or IMIX mixed sizes as defined in <xref target="RFC6985"></xref>,</t>
-      <t>frame formats and protocol addresses: Sections 8 and 12 and Appendix C of <xref target="RFC2544"></xref>,</t>
-      <t>symmetric bidirectional traffic: Section 14 of <xref target="RFC2544"></xref>.</t>
-    </list></t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Other traffic properties that need to be somehow specified
-in Traffic Profile, and MUST be mentioned in Test Report
-if they apply to the benchmark, include:</t>
- </dd>
- <dt> </dt>
- <dd>
- <t><list style="symbols">
- <t>bidirectional traffic from Section 14 of <xref target="RFC2544"></xref>,</t>
- <t>fully meshed traffic from Section 3.3.3 of <xref target="RFC2285"></xref>,</t>
-    <t>modifiers from Section 11 of <xref target="RFC2544"></xref>,</t>
-    <t>IP version mixing from Section 5.3 of <xref target="RFC8219"></xref>.</t>
- </list></t>
- </dd>
-</dl>
-
-</section>
-<section anchor="trial-forwarding-ratio"><name>Trial Forwarding Ratio</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The Trial Forwarding Ratio is a dimensionless floating point value.
-It MUST range between 0.0 and 1.0, both inclusive.
-It is calculated by dividing the number of frames
-successfully forwarded by the SUT
-by the total number of frames expected to be forwarded during the trial.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
-    <t>For most Traffic Profiles, "expected to be forwarded" means
-"intended to get received by the SUT from the tester".
-This SHOULD be the default interpretation.
-If this is not the case, the test report MUST describe the Traffic Profile
-in detail sufficient to imply how the Trial Forwarding Ratio should be calculated.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Trial Forwarding Ratio MAY be expressed in other units
-(e.g., as a percentage) in the test report.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>Note that, contrary to Load terms, frame counts used to compute
-Trial Forwarding Ratio are generally aggregates over all SUT output interfaces,
-as most test procedures verify all outgoing frames.
-The procedure for <xref target="RFC2544"></xref> Throughput counts received frames,
-so it implicitly uses bidirectional counts for bidirectional traffic,
-even though the final value is a "rate" that is still per-interface.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>For example, in a test with symmetric bidirectional traffic,
-if one direction is forwarded without losses, but the opposite direction
-does not forward at all, the Trial Forwarding Ratio would be 0.5 (50%).</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>In future extensions, more general ways to compute Trial Forwarding Ratio
-may be allowed, but the current MLRsearch Specification relies on this specific
-averaged counters approach.</t>
- </dd>
-</dl>
-
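-<t>The bidirectional example above can be sketched in code
-(illustrative Python, not part of the specification);
-note that the frame counts are aggregated over all SUT output interfaces:</t>
-

```python
def trial_forwarding_ratio(frames_forwarded: int, frames_expected: int) -> float:
    """Number of frames successfully forwarded by the SUT, divided by the
    total number of frames expected to be forwarded during the trial.

    Both counts are aggregates over all SUT output interfaces.
    """
    ratio = frames_forwarded / frames_expected
    if not 0.0 <= ratio <= 1.0:
        raise ValueError("Trial Forwarding Ratio MUST be between 0.0 and 1.0")
    return ratio


# Symmetric bidirectional traffic: one direction is forwarded without
# losses (1 million frames), the opposite direction forwards nothing.
ratio = trial_forwarding_ratio(frames_forwarded=1_000_000,
                               frames_expected=2_000_000)
# ratio is 0.5, i.e. 50%
```

-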
-</section>
-<section anchor="trial-loss-ratio"><name>Trial Loss Ratio</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The Trial Loss Ratio is equal to one minus the Trial Forwarding Ratio.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>100% minus the Trial Forwarding Ratio, when expressed as a percentage.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>This is almost identical to Frame Loss Rate of Section 3.6 of <xref target="RFC1242"></xref>.
-The only minor differences are that Trial Loss Ratio does not need to
-be expressed as a percentage, and Trial Loss Ratio is explicitly
-based on averaged frame counts when more than one data stream is present.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="trial-forwarding-rate"><name>Trial Forwarding Rate</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The Trial Forwarding Rate is a derived quantity, calculated by
-multiplying the Trial Load by the Trial Forwarding Ratio.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>This quantity differs from the Forwarding Rate described in Section
-3.6.1 of <xref target="RFC2285"></xref>. Under the RFC 2285 method, each output interface is
-measured separately, so every interface may report a distinct rate. The
-Trial Forwarding Rate, by contrast, uses a single set of frame counts
-and therefore yields one value that represents the whole system,
-while still preserving the direct link to the per-interface load.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>When the Traffic Profile is symmetric and bidirectional, as defined in
-Section 14 of <xref target="RFC2544"></xref>, the Trial Forwarding Rate is numerically equal
-to the arithmetic average of the individual per-interface forwarding rates
-that would be produced by the RFC 2285 procedure.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>For more complex traffic patterns, such as many-to-one as mentioned
-in Section 3.3.2 Partially Meshed Traffic of <xref target="RFC2285"></xref>,
-the meaning of Trial Forwarding Rate is less straightforward.
-For example, if two input interfaces receive one million frames per second each,
-and a single interface outputs 1.4 million frames per second (fps),
-Trial Load is 1 million fps, Trial Loss Ratio is 30%,
-and Trial Forwarding Rate is 0.7 million fps.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Because this rate is anchored to the Load defined for one interface,
-a test report MAY show it either as the single averaged figure just described,
-or as the sum of the separate per-interface forwarding rates.
-For the example above, the aggregate trial forwarding rate is 1.4 million fps.</t>
- </dd>
-</dl>
-
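-<t>The many-to-one example above can be reproduced with a short sketch
-(illustrative Python, not part of the specification):</t>
-

```python
def trial_forwarding_rate(trial_load: float, forwarding_ratio: float) -> float:
    """Derived quantity: Trial Load multiplied by Trial Forwarding Ratio."""
    return trial_load * forwarding_ratio


# Two input interfaces receive 1.0 million fps each; a single output
# interface forwards 1.4 million fps.
trial_load = 1_000_000.0                  # per-interface Intended Load
forwarding_ratio = 1_400_000 / 2_000_000  # 0.7, from aggregated frame counts
loss_ratio = 1.0 - forwarding_ratio       # 0.3, i.e. 30%
rate = trial_forwarding_rate(trial_load, forwarding_ratio)
# rate is 0.7 million fps; summed over interfaces it is 1.4 million fps
aggregate_rate = 2 * rate
```

-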
-</section>
-<section anchor="trial-effective-duration"><name>Trial Effective Duration</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Trial Effective Duration is a time quantity related to a Trial,
-by default equal to the Trial Duration.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>This is an optional feature.
-If the Measurer does not return any Trial Effective Duration value,
-the Controller MUST use the Trial Duration value instead.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Trial Effective Duration may be any positive time quantity
-chosen by the Measurer to be used for time-based decisions in the Controller.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>The test report MUST explain how the Measurer computes the returned
-Trial Effective Duration values, if they are not always
-equal to the Trial Duration.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>This feature can be beneficial for time-critical benchmarks
-designed to manage the overall search duration,
-rather than solely the traffic portion of it.
-An approach is to measure the duration of the whole trial (including all wait times)
-and use that as the Trial Effective Duration.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>This is also a way for the Measurer to inform the Controller about
-its surprising behavior, for example, when rounding the Trial Duration value.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="trial-output"><name>Trial Output</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Trial Output is a composite quantity consisting of several attributes.
-Required attributes are: Trial Loss Ratio, Trial Effective Duration and
-Trial Forwarding Rate.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
-    <t>When referring to more than one trial, the plural term "Trial Outputs" is
-used to collectively describe multiple Trial Output instances.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>Measurer implementations may provide additional optional attributes.
-Controller implementations SHOULD
-ignore the values of any optional attributes
-they are not familiar with,
-except when passing Trial Output instances to the Manager.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Example of an optional attribute:
-The aggregate number of frames expected to be forwarded during the trial,
-especially if it is not (a rounded-down value)
-implied by Trial Load and Trial Duration.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>While Section 3.5.2 of <xref target="RFC2285"></xref> requires the Offered Load value
-to be reported for forwarding rate measurements,
-it is not required in MLRsearch Specification,
-as search results do not depend on it.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="trial-result"><name>Trial Result</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Trial Result is a composite quantity,
-consisting of the Trial Input and the Trial Output.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
-    <t>When referring to more than one trial, the plural term "Trial Results" is
-used to collectively describe multiple Trial Result instances.</t>
- </dd>
-</dl>
-
-</section>
-</section>
-<section anchor="goal-terms"><name>Goal Terms</name>
-
-<t>This section defines new terms for quantities relevant (directly or indirectly)
-for inputs and outputs of the Controller component.</t>
-
-<t>Several goal attributes are defined before introducing
-the main composite quantity: the Search Goal.</t>
-
-<t>Contrary to other sections, the definitions in the subsections of this section
-are necessarily vague, as their fundamental role is to act as
-coefficients in formulas for the Controller Output, which are defined only later.</t>
-
-<t>The discussions in this section relate the attributes to concepts mentioned in Section
-<xref target="overview-of-rfc-2544-problems">Overview of RFC 2544 Problems</xref>, but even these discussion
-paragraphs are short, informal, and mostly referencing later sections,
-where the impact on search results is discussed after introducing
-the complete set of auxiliary terms.</t>
-
-<section anchor="goal-final-trial-duration"><name>Goal Final Trial Duration</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Minimal value for Trial Duration that must be reached.
-The value MUST be positive.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
-    <t>Certain trials must reach this minimum duration before a load can be
-classified as a Lower Bound.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>The Controller may choose shorter durations; results of those
-may be enough for classification as an Upper Bound.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>It is RECOMMENDED for all search goals to share the same
-Goal Final Trial Duration value. Otherwise, Trial Duration values larger than
-the Goal Final Trial Duration may occur, weakening the assumptions
-the <xref target="load-classification-logic">Load Classification Logic</xref> is based on.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="goal-duration-sum"><name>Goal Duration Sum</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>A threshold value for a particular sum of Trial Effective Duration values.
-The value MUST be positive.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Informally, this prescribes the sufficient number of trials performed
-at a specific Trial Load and Goal Final Trial Duration during the search.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>If the Goal Duration Sum is larger than the Goal Final Trial Duration,
-multiple trials may need to be performed at the same load.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Refer to Section <xref target="mlrsearch-compliant-with-tst009">MLRsearch Compliant with TST009</xref>
-for an example where the possibility of multiple trials
-at the same load is intended.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>A Goal Duration Sum value shorter than the Goal Final Trial Duration
-(of the same goal) could save some search time, but is NOT RECOMMENDED,
-as the time savings come at the cost of decreased repeatability.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>In practice, the Search can spend less than Goal Duration Sum measuring
-a Load value when the results are particularly one-sided,
-but also, the Search can spend more than Goal Duration Sum measuring a Load
-when the results are balanced and include
-trials shorter than Goal Final Trial Duration.</t>
- </dd>
-</dl>
-
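-<t>The "sufficient number of trials" intuition can be sketched as follows
-(illustrative Python, not part of the specification; it assumes every
-Trial Effective Duration equals the Trial Duration):</t>
-

```python
import math


def min_full_length_trials(goal_duration_sum: float,
                           goal_final_trial_duration: float) -> int:
    """How many trials at Goal Final Trial Duration are needed for their
    Trial Effective Durations to add up to at least Goal Duration Sum.

    Assumes each Trial Effective Duration equals the Trial Duration.
    """
    return math.ceil(goal_duration_sum / goal_final_trial_duration)


# A goal with a 120 s duration sum and 60 s final trials needs two trials
# at the same load; a 21 s sum with 10 s trials needs three.
```

-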
-</section>
-<section anchor="goal-loss-ratio"><name>Goal Loss Ratio</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>A threshold value for Trial Loss Ratio values.
-The value MUST be non-negative and smaller than one.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>A trial with Trial Loss Ratio larger than this value
-signals the SUT may be unable to process this Trial Load well enough.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>See <xref target="throughput-with-non-zero-loss">Throughput with Non-Zero Loss</xref>
-for reasons why users may want to set this value above zero.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Since multiple trials may be needed for one Load value,
-the Load Classification may be more complicated than mere comparison
-of Trial Loss Ratio to Goal Loss Ratio.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="goal-exceed-ratio"><name>Goal Exceed Ratio</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>A threshold value for a particular ratio of sums
-of Trial Effective Duration values.
-The value MUST be non-negative and smaller than one.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
-    <t>Informally, up to this proportion of Trial Results
-with Trial Loss Ratio above Goal Loss Ratio is tolerated at a Lower Bound.
-This would be the whole story if every Trial were measured at Goal Final Trial Duration.
-The actual logic is more complicated, as shorter Trials are also allowed.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>For explainability reasons, the RECOMMENDED value for the exceed ratio is 0.5 (50%),
-as in practice that value leads to
-the smallest variation in overall Search Duration.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Refer to Section <xref target="exceed-ratio-and-multiple-trials">Exceed Ratio and Multiple Trials</xref>
-for more details.</t>
- </dd>
-</dl>
-
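-<t>The simplified case, where every trial already ran at the Goal Final Trial
-Duration, can be sketched as follows (illustrative Python, not part of the
-specification; the full classification logic also has to handle shorter trials):</t>
-

```python
def exceed_proportion(trial_results, goal_loss_ratio):
    """Proportion of trial time spent in trials whose Trial Loss Ratio
    is above the Goal Loss Ratio.

    trial_results: iterable of (trial_loss_ratio, trial_effective_duration)
    pairs, all assumed to be measured at Goal Final Trial Duration.
    """
    bad_time = sum(dur for loss, dur in trial_results if loss > goal_loss_ratio)
    total_time = sum(dur for _, dur in trial_results)
    return bad_time / total_time


# With Goal Exceed Ratio 0.5, one lossy and one clean 60 s trial are
# still tolerated at a Lower Bound (proportion 0.5 does not exceed 0.5).
results = [(0.02, 60.0), (0.0, 60.0)]
tolerated = exceed_proportion(results, goal_loss_ratio=0.005) <= 0.5
```

-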
-</section>
-<section anchor="goal-width"><name>Goal Width</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>A threshold value for deciding whether two Trial Load values are close enough.
-This is an OPTIONAL attribute. If present, the value MUST be positive.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Informally, this acts as a stopping condition,
-controlling the precision of the search result.
-The search stops if every goal has reached its precision.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Implementations without this attribute
-MUST provide the Controller with other means to control the search stopping conditions.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Absolute load difference and relative load difference are two popular choices,
-but implementations may choose a different way to specify width.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>The test report MUST make it clear what specific quantity is used as Goal Width.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>It is RECOMMENDED to express Goal Width as a relative difference and
-to set it to a value not lower than the Goal Loss Ratio.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Refer to Section
-<xref target="generalized-throughput">Generalized Throughput</xref> for more elaboration on the reasoning.</t>
- </dd>
-</dl>
-
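-<t>A stopping-condition sketch with Goal Width expressed as a relative
-difference (illustrative Python, not part of the specification):</t>
-

```python
def width_goal_reached(lower_bound_load: float,
                       upper_bound_load: float,
                       goal_width: float) -> bool:
    """Are the two Load values close enough?

    Goal Width is expressed here as a relative difference,
    which is the RECOMMENDED choice.
    """
    relative_width = (upper_bound_load - lower_bound_load) / upper_bound_load
    return relative_width <= goal_width


# Bounds of 9.9 and 10.0 million fps satisfy a 1% Goal Width.
done = width_goal_reached(9_900_000.0, 10_000_000.0, goal_width=0.01)
```

-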
-</section>
-<section anchor="goal-initial-trial-duration"><name>Goal Initial Trial Duration</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
-    <t>Minimal value for Trial Duration suggested for use with this goal.
-If present, this value MUST be positive.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
-    <t>This is an example of an optional Search Goal attribute.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>A typical default value is equal to the Goal Final Trial Duration value.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Informally, this is the shortest Trial Duration the Controller should select
-when focusing on the goal.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Note that shorter Trial Duration values can still be used,
-for example, selected while focusing on a different Search Goal.
-Such results MUST be still accepted by the Load Classification logic.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Goal Initial Trial Duration is a mechanism for a user to discourage
-trials with Trial Duration values deemed as too unreliable
-for a particular SUT and a given Search Goal.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="search-goal"><name>Search Goal</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
-    <t>The Search Goal is a composite quantity consisting of several attributes,
-some of which are required.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Required attributes: Goal Final Trial Duration, Goal Duration Sum, Goal
-Loss Ratio and Goal Exceed Ratio.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Optional attributes: Goal Initial Trial Duration and Goal Width.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Implementations MAY add their own attributes.
-Those additional attributes may be required by an implementation
-even if they are not required by MLRsearch Specification.
-However, it is RECOMMENDED for those implementations
-to support missing attributes by providing typical default values.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>For example, implementations with Goal Initial Trial Duration
-may also require users to specify how quickly Trial Durations should increase.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Refer to Section <xref target="compliance"></xref> for important Search Goal settings.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="controller-input"><name>Controller Input</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Controller Input is a composite quantity
-required as an input for the Controller.
-The only REQUIRED attribute is a list of Search Goal instances.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>MLRsearch implementations MAY use additional attributes.
-Those additional attributes may be required by an implementation
-even if they are not required by MLRsearch Specification.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Formally, the Manager does not apply any Controller configuration
-apart from one Controller Input instance.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>For example, Traffic Profile is configured on the Measurer by the Manager,
-without explicit assistance of the Controller.</t>
- </dd>
- <dt> </dt>
- <dd>
-    <t>The order of Search Goal instances in the list SHOULD NOT
-have a significant impact on the Controller Output,
-but MLRsearch implementations MAY base their behavior on the order
-of Search Goal instances in the list.</t>
- </dd>
-</dl>
-
-<section anchor="max-load"><name>Max Load</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Max Load is an optional attribute of Controller Input.
-It is the maximal value the Controller is allowed to use for Trial Load values.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Max Load is an example of an optional attribute (outside the list of Search Goals)
-required by some implementations of MLRsearch.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>If the Max Load value is provided, the Controller MUST NOT select
-Trial Load values larger than that value.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>In theory, each search goal could have its own Max Load value,
-but as all Trial Results are possibly affecting all Search Goals,
-it makes more sense for a single Max Load value to apply
-to all Search Goal instances.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>While Max Load is a frequently used configuration parameter, already governed
-(as maximum frame rate) by <xref target="RFC2544"></xref> (Section 20)
-and (as maximum offered load) by <xref target="RFC2285"></xref> (Section 3.5.3),
-some implementations may detect or discover it
-(instead of requiring a user-supplied value).</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>In MLRsearch Specification, one reason for listing
-the <xref target="relevant-upper-bound">Relevant Upper Bound</xref> as a required attribute
-is that it makes the search result independent of Max Load value.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Given that Max Load is a quantity based on Load,
-Test Report MAY express this quantity using multi-interface values,
-as the sum of per-interface maximal loads.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="min-load"><name>Min Load</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Min Load is an optional attribute of Controller Input.
-It is the minimal value the Controller is allowed to use for Trial Load values.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Min Load is another example of an optional attribute
-required by some implementations of MLRsearch.
-Similarly to Max Load, it makes more sense to prescribe one common value,
-as opposed to using a different value for each Search Goal.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>If the Min Load value is provided, the Controller MUST NOT select
-Trial Load values smaller than that value.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Min Load is mainly useful for saving time by failing early,
-arriving at an Irregular Goal Result when Min Load gets classified
-as an Upper Bound.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>For implementations, it is RECOMMENDED to require Min Load to be non-zero
-and large enough to result in at least one frame being forwarded
-even at shortest allowed Trial Duration,
-so that Trial Loss Ratio is always well-defined,
-and the implementation can apply relative Goal Width safely.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Given that Min Load is a quantity based on Load,
-Test Report MAY express this quantity using multi-interface values,
-as the sum of per-interface minimal loads.</t>
- </dd>
-</dl>
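<t>Taken together, the Controller Input attributes described above can be
pictured as a plain data structure. The following non-normative Python sketch
uses illustrative attribute names and values; none of them are mandated by
MLRsearch Specification.</t>

```python
# Non-normative sketch of a Controller Input instance.
# All attribute names and values are illustrative only.
controller_input = {
    "search_goals": [
        {   # an RFC 2544 style Search Goal
            "final_trial_duration": 60.0,  # seconds
            "duration_sum": 60.0,          # seconds
            "loss_ratio": 0.0,
            "exceed_ratio": 0.0,
        },
        {   # a second Search Goal tolerating 0.5% loss
            "final_trial_duration": 60.0,
            "duration_sum": 120.0,
            "loss_ratio": 0.005,
            "exceed_ratio": 0.5,
        },
    ],
    # Optional attributes, shared by all Search Goals:
    "max_load": 14_880_952,  # frames per second (illustrative value)
    "min_load": 9_000,       # frames per second (illustrative value)
}
```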
-
-</section>
-</section>
-</section>
-<section anchor="auxiliary-terms"><name>Auxiliary Terms</name>
-
-<t>While the terms defined in this section are not strictly needed
-when formulating MLRsearch requirements, they simplify the language used
-in discussion paragraphs and explanation sections.</t>
-
-<section anchor="trial-classification"><name>Trial Classification</name>
-
-<t>When one Trial Result instance is compared to one Search Goal instance,
-several relations can be named using short adjectives.</t>
-
-<t>As trial results do not affect each other, this <strong>Trial Classification</strong>
-does not change during a Search.</t>
-
-<section anchor="high-loss-trial"><name>High-Loss Trial</name>
-
-<t>A trial with Trial Loss Ratio larger than a Goal Loss Ratio value
-is called a <strong>high-loss trial</strong>, with respect to given Search Goal
-(or lossy trial, if Goal Loss Ratio is zero).</t>
-
-</section>
-<section anchor="low-loss-trial"><name>Low-Loss Trial</name>
-
-<t>If a trial is not high-loss, it is called a <strong>low-loss trial</strong>
-(or zero-loss trial, if Goal Loss Ratio is zero).</t>
-
-</section>
-<section anchor="short-trial"><name>Short Trial</name>
-
-<t>A trial with Trial Duration shorter than the Goal Final Trial Duration
-is called a <strong>short trial</strong> (with respect to the given Search Goal).</t>
-
-</section>
-<section anchor="full-length-trial"><name>Full-Length Trial</name>
-
-<t>A trial that is not short is called a <strong>full-length</strong> trial.</t>
-
-<t>Note that this includes Trial Durations longer than the Goal Final Trial Duration.</t>
-
-</section>
-<section anchor="long-trial"><name>Long Trial</name>
-
-<t>A trial with Trial Duration longer than the Goal Final Trial Duration
-is called a <strong>long trial</strong>.</t>
-
-</section>
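<t>The adjectives above can be summarized in a short non-normative sketch.
Function and variable names are illustrative only.</t>

```python
# Non-normative sketch of Trial Classification with respect to one Search Goal.

def classify_trial(trial_loss_ratio, trial_duration,
                   goal_loss_ratio, goal_final_trial_duration):
    """Return (loss adjective, duration adjective) for one trial."""
    loss = "high-loss" if trial_loss_ratio > goal_loss_ratio else "low-loss"
    if trial_duration < goal_final_trial_duration:
        length = "short"
    elif trial_duration > goal_final_trial_duration:
        length = "long"  # note: long trials are also full-length trials
    else:
        length = "full-length"
    return loss, length
```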
-</section>
-<section anchor="load-classification"><name>Load Classification</name>
-
-<t>When a set of all Trial Result instances, performed so far
-at one Trial Load, is compared to one Search Goal instance,
-their relation can be named using the concept of a bound.</t>
-
-<t>In general, such bounds are a current quantity,
-even though cases of a Load changing its classification more than once
-during the Search are rare in practice.</t>
-
-<section anchor="upper-bound"><name>Upper Bound</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>A Load value is called an Upper Bound if and only if it is classified
-as such by <xref target="load-classification-code">Appendix A</xref>
-algorithm for the given Search Goal at the current moment of the Search.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>In more detail, the set of all Trial Result instances
-performed so far at the Trial Load (and any Trial Duration)
-is certain to fail to uphold all the requirements of the given Search Goal,
-mainly the Goal Loss Ratio in combination with the Goal Exceed Ratio.
-In this context, "certain to fail" relates to any possible results within the time
-remaining till Goal Duration Sum.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>One Search Goal can have multiple different Trial Load values
-classified as its Upper Bounds.
-As the search progresses and more trials are measured,
-any load value can become an Upper Bound in principle.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Moreover, a Load can stop being an Upper Bound, but that
-can only happen when more than Goal Duration Sum of trials are measured
-(e.g., because another Search Goal needs more trials at this load).
-Informally, the previous Upper Bound got invalidated.
-In practice, the Load frequently becomes a <xref target="lower-bound">Lower Bound</xref> instead.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="lower-bound"><name>Lower Bound</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>A Load value is called a Lower Bound if and only if it is classified
-as such by <xref target="load-classification-code">Appendix A</xref>
-algorithm for the given Search Goal at the current moment of the search.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>In more detail, the set of all Trial Result instances
-performed so far at the Trial Load (and any Trial Duration)
-is certain to uphold all the requirements of the given Search Goal,
-mainly the Goal Loss Ratio in combination with the Goal Exceed Ratio.
-Here "certain to uphold" relates to any possible results within the time
-remaining till Goal Duration Sum.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>One Search Goal can have multiple different Trial Load values
-classified as its Lower Bounds.
-As search progresses and more trials are measured,
-any load value can become a Lower Bound in principle.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>No load can be both an Upper Bound and a Lower Bound for the same Search Goal
-at the same time, but it is possible for a larger load to be a Lower Bound
-while a smaller load is an Upper Bound.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Moreover, a Load can stop being a Lower Bound, but that
-can only happen when more than Goal Duration Sum of trials are measured
-(e.g., because another Search Goal needs more trials at this load).
-Informally, the previous Lower Bound got invalidated.
-In practice, the Load frequently becomes an <xref target="upper-bound">Upper Bound</xref> instead.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="undecided"><name>Undecided</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>A Load value is called Undecided if it is currently
-neither an Upper Bound nor a Lower Bound.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>A Load value that has not been measured so far is Undecided.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>It is possible for a Load to transition from an Upper Bound to Undecided
-by adding Short Trials with Low-Loss results.
-That is yet another reason for users to avoid using Search Goal instances
-with different Goal Final Trial Duration values.</t>
- </dd>
-</dl>
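<t>The three classifications can be illustrated with a simplified
non-normative sketch. It considers full-length trials only and ignores
the exact rules of the Appendix A algorithm; all names are illustrative.</t>

```python
# Simplified, non-normative sketch of Load Classification
# (the normative algorithm is in Appendix A).

def classify_load(trials, goal):
    """trials: list of (duration, trial_loss_ratio) measured at one Load."""
    high = sum(d for d, lr in trials if lr > goal["loss_ratio"])
    low = sum(d for d, lr in trials if lr <= goal["loss_ratio"])
    remaining = max(0.0, goal["duration_sum"] - high - low)
    allowed_high = goal["duration_sum"] * goal["exceed_ratio"]
    if high > allowed_high:
        return "upper bound"  # certain to fail the goal's requirements
    if high + remaining <= allowed_high:
        return "lower bound"  # certain to uphold the goal's requirements
    return "undecided"

# Example goal mirroring the TST009 discussion elsewhere in this document:
example_goal = {"duration_sum": 120.0, "loss_ratio": 0.0, "exceed_ratio": 0.5}
```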
-
-</section>
-</section>
-</section>
-<section anchor="result-terms"><name>Result Terms</name>
-
-<t>Before defining the full structure of a Controller Output,
-it is useful to define a composite quantity called Goal Result.
-The following subsections define its attributes first,
-before describing the Goal Result quantity itself.</t>
-
-<t>There is a correspondence between Search Goals and Goal Results.
-Most of the following subsections refer to a given Search Goal,
-when defining their terms.
-Conversely, at the end of the search, each Search Goal instance
-has its corresponding Goal Result instance.</t>
-
-<section anchor="relevant-upper-bound"><name>Relevant Upper Bound</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The Relevant Upper Bound is the smallest Trial Load value
-classified as an Upper Bound for a given Search Goal at the end of the Search.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>If no measured load had enough High-Loss Trials,
-the Relevant Upper Bound MAY be non-existent.
-For example, when Max Load is classified as a Lower Bound.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Conversely, when Relevant Upper Bound does exist,
-it is not affected by Max Load value.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Given that Relevant Upper Bound is a quantity based on Load,
-Test Report MAY express this quantity using multi-interface values,
-as the sum of per-interface loads.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="relevant-lower-bound"><name>Relevant Lower Bound</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The Relevant Lower Bound is the largest Trial Load value
-among those smaller than the Relevant Upper Bound, that got classified
-as a Lower Bound for a given Search Goal at the end of the search.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>If no load had enough Low-Loss Trials, the Relevant Lower Bound
-MAY be non-existent.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Strictly speaking, if the Relevant Upper Bound does not exist,
-the Relevant Lower Bound also does not exist.
-In a typical case, Max Load is classified as a Lower Bound,
-making it impossible to increase the Load to continue the search
-for an Upper Bound.
-Thus, it is not clear whether a larger value would be found
-for a Relevant Lower Bound if larger Loads were possible.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Given that Relevant Lower Bound is a quantity based on Load,
-Test Report MAY express this quantity using multi-interface values,
-as the sum of per-interface loads.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="conditional-throughput"><name>Conditional Throughput</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Conditional Throughput is a value computed at the Relevant Lower Bound
-according to algorithm defined in
-<xref target="conditional-throughput-code">Appendix B</xref>.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The Relevant Lower Bound is defined only at the end of the Search,
-and so is the Conditional Throughput.
-But the algorithm can be applied at any time on any Lower Bound load,
-so the final Conditional Throughput value may appear sooner
-than at the end of a Search.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Informally, the Conditional Throughput should be
-a typical Trial Forwarding Rate, expected to be seen
-at the Relevant Lower Bound of a given Search Goal.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>But frequently it is only a conservative estimate thereof,
-as MLRsearch implementations tend to stop measuring more Trials
-as soon as they confirm the value cannot get worse than this estimate
-within the Goal Duration Sum.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>This value is RECOMMENDED to be used when evaluating repeatability
-and comparability of different MLRsearch implementations.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Refer to Section <xref target="generalized-throughput">Generalized Throughput</xref> for more details.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Given that Conditional Throughput is a quantity based on Load,
-Test Report MAY express this quantity using multi-interface values,
-as the sum of per-interface forwarding rates.</t>
- </dd>
-</dl>
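<t>The flavor of the computation can be conveyed by the following
non-normative sketch. The normative algorithm is in Appendix B; this sketch
ignores several details (e.g., Short Trials), and all names are
illustrative.</t>

```python
# Non-normative sketch: a duration-weighted quantile of forwarding rates
# among full-length trials at the Relevant Lower Bound.

def conditional_throughput(trials, goal_exceed_ratio):
    """trials: list of (duration, trial_loss_ratio, forwarding_rate)."""
    trials = sorted(trials, key=lambda t: t[1])  # ascending loss ratio
    total_duration = sum(t[0] for t in trials)
    quantile = total_duration * (1.0 - goal_exceed_ratio)
    accumulated = 0.0
    for duration, _, rate in trials:
        accumulated += duration
        if accumulated >= quantile:
            return rate
    return trials[-1][2]  # fallback: the worst trial's forwarding rate
```

With a zero Goal Exceed Ratio and zero losses, this returns the forwarding
rate of the (worst) trial, consistent with the Conditional Throughput
equaling the Relevant Lower Bound in the RFC 2544 compliant case.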
-
-</section>
-<section anchor="goal-results"><name>Goal Results</name>
-
-<t>MLRsearch Specification is based on a set of requirements
-for a "regular" result. But in practice, it is not always possible
-for such a result instance to exist, so "irregular" results
-also need to be supported.</t>
-
-<section anchor="regular-goal-result"><name>Regular Goal Result</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Regular Goal Result is a composite quantity consisting of several attributes.
-Relevant Upper Bound and Relevant Lower Bound are REQUIRED attributes.
-Conditional Throughput is a RECOMMENDED attribute.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Implementations MAY add their own attributes.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Test report MUST display Relevant Lower Bound.
-Displaying Relevant Upper Bound is RECOMMENDED,
-especially if the implementation does not use Goal Width.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>In general, stopping conditions for the corresponding Search Goal MUST
-be satisfied to produce a Regular Goal Result.
-Specifically, if an implementation offers Goal Width as a Search Goal attribute,
-the distance between the Relevant Lower Bound
-and the Relevant Upper Bound MUST NOT be larger than the Goal Width.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>For stopping conditions refer to Sections <xref target="goal-width">Goal Width</xref> and
-<xref target="stopping-conditions-and-precision">Stopping Conditions and Precision</xref>.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="irregular-goal-result"><name>Irregular Goal Result</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Irregular Goal Result is a composite quantity. No attributes are required.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>It is RECOMMENDED to report any useful quantity even if it does not
-satisfy all the requirements. For example, if Max Load is classified
-as a Lower Bound, it is fine to report it as an "effective" Relevant Lower Bound
-(although not a real one, as that requires
-Relevant Upper Bound which does not exist in this case),
-and compute Conditional Throughput for it. In this case,
-only the missing Relevant Upper Bound signals this result instance is irregular.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Similarly, if both relevant bounds exist, it is RECOMMENDED
-to include them as Irregular Goal Result attributes,
-and let the Manager decide if their distance is too far for Test Report purposes.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>If Test Report displays some Irregular Goal Result attribute values,
-they MUST be clearly marked as coming from irregular results.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>The implementation MAY define additional attributes,
-for example explicit flags for expected situations, so the Manager logic can be simpler.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="goal-result"><name>Goal Result</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Goal Result is a composite quantity.
-Each instance is either a Regular Goal Result or an Irregular Goal Result.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The Manager MUST be able to distinguish whether the instance is regular or not.</t>
- </dd>
-</dl>
-
-</section>
-</section>
-<section anchor="search-result"><name>Search Result</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The Search Result is a single composite object
-that maps each Search Goal instance to a corresponding Goal Result instance.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>As an alternative to a mapping, the Search Result may be represented
-as an ordered list of Goal Result instances, appearing in the exact
-order of their corresponding Search Goal instances.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>When the Search Result is expressed as a mapping, it MUST contain an
-entry for every Search Goal instance supplied in the Controller Input.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Identical Goal Result instances MAY be listed for different Search Goals,
-but their status as regular or irregular may be different.
-For example, if two goals differ only in Goal Width value,
-and the relevant bound values are close enough according to only one of them.</t>
- </dd>
-</dl>
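<t>A possible (non-normative) shape of these composite quantities, using
illustrative attribute names and hypothetical values:</t>

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GoalResult:
    """Illustrative shape only; attribute names are not normative."""
    regular: bool
    relevant_lower_bound: Optional[float]           # REQUIRED when regular
    relevant_upper_bound: Optional[float]           # REQUIRED when regular
    conditional_throughput: Optional[float] = None  # RECOMMENDED

# A Search Result maps each Search Goal instance to its Goal Result;
# here the goals are keyed by hypothetical names.
search_result = {
    "zero-loss-goal": GoalResult(True, 4.5e6, 4.6e6, 4.5e6),
    "half-percent-goal": GoalResult(False, 5.0e6, None),  # irregular
}
```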
-
-</section>
-<section anchor="controller-output"><name>Controller Output</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The Controller Output is a composite quantity returned from the Controller
-to the Manager at the end of the search.
-The Search Result instance is its only required attribute.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>MLRsearch implementation MAY return additional data in the Controller Output,
-e.g., number of trials performed and the total Search Duration.</t>
- </dd>
-</dl>
-
-</section>
-</section>
-<section anchor="architecture-terms"><name>Architecture Terms</name>
-
-<t>MLRsearch architecture consists of three main system components:
-the Manager, the Controller, and the Measurer.
-The components were introduced in <xref target="architecture-overview">Architecture Overview</xref>,
-and the following subsections finalize their definitions
-using terms from previous sections.</t>
-
-<t>Note that the architecture also implies the presence of other components,
-such as the SUT and the tester (as a sub-component of the Measurer).</t>
-
-<t>Communication protocols and interfaces between components are left
-unspecified. For example, when MLRsearch Specification mentions
-"Controller calls Measurer",
-it is possible that the Controller notifies the Manager
-to call the Measurer indirectly instead. In doing so, the Measurer implementations
-can be fully independent from the Controller implementations,
-e.g., developed in different programming languages.</t>
-
-<section anchor="measurer"><name>Measurer</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The Measurer is a functional element that when called
-with a <xref target="trial-input">Trial Input</xref> instance, performs one <xref target="trial">Trial</xref>
-and returns a <xref target="trial-output">Trial Output</xref> instance.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>This definition assumes the Measurer is already initialized.
-In practice, there may be additional steps before the Search,
-e.g., when the Manager configures the traffic profile
-(either on the Measurer or on its tester sub-component directly)
-and performs a warm-up (if the tester or the test procedure requires one).</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>It is the responsibility of the Measurer implementation to uphold
-any requirements and assumptions present in MLRsearch Specification,
-e.g., Trial Forwarding Ratio not being larger than one.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Implementers have some freedom.
-For example, Section 10 of <xref target="RFC2544"></xref>
-gives some suggestions (but not requirements) related to
-duplicated or reordered frames.
-Implementations are RECOMMENDED to document their behavior
-related to such freedoms in as detailed a way as possible.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>It is RECOMMENDED to benchmark the test equipment first,
-e.g., connect sender and receiver directly (without any SUT in the path),
-find a load value that guarantees the Offered Load is not too far
-from the Intended Load and use that value as the Max Load value.
-When testing the real SUT, it is RECOMMENDED to turn any severe deviation
-between the Intended Load and the Offered Load into increased Trial Loss Ratio.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Neither of the two recommendations is made into a mandatory requirement,
-because it is not easy to provide guidance about when the difference is severe enough,
-in a way that would be disentangled from other Measurer freedoms.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>For a sample situation where the Offered Load cannot keep up
-with the Intended Load, and the consequences on MLRsearch result,
-refer to Section <xref target="hard-performance-limit">Hard Performance Limit</xref>.</t>
- </dd>
-</dl>
-
-</section>
-<section anchor="controller"><name>Controller</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The Controller is a functional element that, upon receiving a Controller
-Input instance, repeatedly generates Trial Input instances for the
-Measurer and collects the corresponding Trial Output instances. This
-cycle continues until the stopping conditions are met, at which point
-the Controller produces a final Controller Output instance and
-terminates.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>Informally, the Controller has great freedom in selecting Trial Inputs,
-and implementations aim to achieve all the Search Goals
-in the shortest average time.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>The Controller's role in optimizing the overall Search Duration
-distinguishes MLRsearch algorithms from simpler search procedures.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Informally, each implementation can have different stopping conditions.
-Goal Width is only one example.
-In practice, implementation details do not matter,
-as long as Goal Result instances are regular.</t>
- </dd>
-</dl>
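<t>The Controller's call pattern can be sketched as follows. The load
selection strategy and the stopping condition stand in for
implementation-specific logic; all names are illustrative.</t>

```python
# Non-normative sketch of the Controller loop.

def run_search(measurer, select_load, done):
    """Repeatedly measure trials until the stopping condition is met."""
    results = []
    while not done(results):
        load = select_load(results)      # implementation-specific heuristic
        results.append((load, measurer(load)))
    return results  # Goal Results would be derived from these
```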
-
-</section>
-<section anchor="manager"><name>Manager</name>
-
-<t>Definition:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The Manager is a functional element that is responsible for
-provisioning other components, calling a Controller component once,
-and for creating the test report following the reporting format as
-defined in Section 26 of <xref target="RFC2544"></xref>.</t>
- </dd>
-</dl>
-
-<t>Discussion:</t>
-
-<dl>
- <dt> </dt>
- <dd>
- <t>The Manager initializes the SUT, the Measurer
-(and the tester if independent from Measurer)
-with their intended configurations before calling the Controller.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>Note that Section 7 of <xref target="RFC2544"></xref> already puts requirements on SUT setups:</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>"It is expected that all of the tests will be run without changing the
-configuration or setup of the DUT in any way other than that required
-to do the specific test. For example, it is not acceptable to change
-the size of frame handling buffers between tests of frame handling
-rates or to disable all but one transport protocol when testing the
-throughput of that protocol."</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>It is REQUIRED for the test report to encompass all the SUT configuration
-details, including description of a "default" configuration common for most tests
-and configuration changes if required by a specific test.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>For example, Section 5.1.1 of <xref target="RFC5180"></xref> recommends testing jumbo frames
-if SUT can forward them, even though they are outside the scope
-of the 802.3 IEEE standard. In this case, it is acceptable
-for the SUT default configuration to not support jumbo frames,
-and only enable this support when testing jumbo traffic profiles,
-as the handling of jumbo frames typically has different packet buffer
-requirements and potentially higher processing overhead.
-Non-jumbo frame sizes should also be tested on the jumbo-enabled setup.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>The Manager does not need to be able to tweak any Search Goal attributes,
-but it MUST report all applied attribute values even if not tweaked.</t>
- </dd>
- <dt> </dt>
- <dd>
- <t>A "user" - human or automated - invokes the Manager once to launch a
-single Search and receive its report. Every new invocation is treated
-as a fresh, independent Search; how the system behaves across multiple
-calls (for example, combining or comparing their results) is explicitly
-out of scope for this document.</t>
- </dd>
-</dl>
-
-</section>
-</section>
-<section anchor="compliance"><name>Compliance</name>
-
-<t>This section discusses compliance relations between MLRsearch
-and other test procedures.</t>
-
-<section anchor="test-procedure-compliant-with-mlrsearch"><name>Test Procedure Compliant with MLRsearch</name>
-
-<t>Any networking measurement setup that can be understood as consisting of
-functional elements satisfying the requirements
-for the Measurer, the Controller, and the Manager
-is compliant with MLRsearch Specification.</t>
-
-<t>These components can be seen as abstractions present in any testing procedure.
-For example, there can be a single component acting both
-as the Manager and the Controller, but if values of required attributes
-of Search Goals and Goal Results are visible in the test report,
-the Controller Input instance and Controller Output instance are implied.</t>
-
-<t>For example, any setup for conditionally (or unconditionally)
-compliant <xref target="RFC2544"></xref> throughput testing
-can be understood as an MLRsearch architecture,
-if there is enough data to reconstruct the Relevant Upper Bound.</t>
-
-<t>Refer to section
-<xref target="mlrsearch-compliant-with-rfc-2544">MLRsearch Compliant with RFC 2544</xref>
-for an equivalent Search Goal.</t>
-
-<t>Any test procedure that can be understood as one call to the Manager of
-MLRsearch architecture is said to be compliant with MLRsearch Specification.</t>
-
-</section>
-<section anchor="mlrsearch-compliant-with-rfc-2544"><name>MLRsearch Compliant with RFC 2544</name>
-
-<t>The following Search Goal instance makes the corresponding Search Result
-unconditionally compliant with Section 24 of <xref target="RFC2544"></xref>.</t>
-
-<t><list style="symbols">
- <t>Goal Final Trial Duration = 60 seconds</t>
- <t>Goal Duration Sum = 60 seconds</t>
- <t>Goal Loss Ratio = 0%</t>
- <t>Goal Exceed Ratio = 0%</t>
-</list></t>
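<t>Expressed with illustrative (non-normative) attribute names,
this Search Goal could be written as:</t>

```python
# The unconditionally RFC 2544 compliant Search Goal, as a plain structure.
# Attribute names are illustrative, not normative.
rfc2544_goal = {
    "final_trial_duration": 60.0,  # seconds
    "duration_sum": 60.0,          # seconds
    "loss_ratio": 0.0,
    "exceed_ratio": 0.0,
}
```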
-
-<t>The Goal Loss Ratio and Goal Exceed Ratio attributes
-are enough to make the Search Goal conditionally compliant.
-Adding Goal Final Trial Duration
-makes the Search Goal unconditionally compliant.</t>
-
-<t>Goal Duration Sum prevents MLRsearch
-from repeating zero-loss Full-Length Trials.</t>
-
-<t>The presence of other Search Goals does not affect the compliance
-of this Goal Result.
-The Relevant Lower Bound and the Conditional Throughput are in this case
-equal to each other, and the value is the <xref target="RFC2544"></xref> throughput.</t>
-
-<t>Non-zero exceed ratio is not strictly disallowed, but it could
-needlessly prolong the search when Low-Loss short trials are present.</t>
-
-</section>
-<section anchor="mlrsearch-compliant-with-tst009"><name>MLRsearch Compliant with TST009</name>
-
-<t>One of the alternatives to <xref target="RFC2544"></xref> is Binary search with loss verification
-as described in Section 12.3.3 of <xref target="TST009"></xref>.</t>
-
-<t>The rationale of such a search is to repeat high-loss trials, hoping for zero loss on the second try,
-so the results are closer to the noiseless end of the performance spectrum,
-and thus more repeatable and comparable.</t>
-
-<t>Only the variant with "z = infinity" is achievable with MLRsearch.</t>
-
-<t>For example, for "max(r) = 2" variant, the following Search Goal instance
-should be used to get compatible Search Result:</t>
-
-<t><list style="symbols">
- <t>Goal Final Trial Duration = 60 seconds</t>
- <t>Goal Duration Sum = 120 seconds</t>
- <t>Goal Loss Ratio = 0%</t>
- <t>Goal Exceed Ratio = 50%</t>
-</list></t>
-
-<t>If the first 60 seconds trial has zero loss, it is enough for MLRsearch to stop
-measuring at that load, as even a second high-loss trial
-would still fit within the exceed ratio.</t>
-
-<t>But if the first trial is high-loss, MLRsearch also needs to perform
-the second trial to classify that load.
-The Goal Duration Sum is twice as long as the Goal Final Trial Duration,
-so a third full-length trial is never needed.</t>
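<t>The arithmetic behind this mapping can be checked directly
(a non-normative sketch with illustrative names):</t>

```python
# Non-normative check of the TST009 "max(r) = 2" mapping.
goal = {"final_trial_duration": 60.0, "duration_sum": 120.0,
        "loss_ratio": 0.0, "exceed_ratio": 0.5}

# After one zero-loss 60 s trial, even a worst-case second trial leaves
# at most 60 s of high-loss duration out of the 120 s Goal Duration Sum:
worst_case_fraction = (goal["duration_sum"]
                       - goal["final_trial_duration"]) / goal["duration_sum"]
assert worst_case_fraction <= goal["exceed_ratio"]
# ... so the Load can already be classified as a Lower Bound.

# With two high-loss 60 s trials, the whole Goal Duration Sum is high-loss:
assert 120.0 / goal["duration_sum"] > goal["exceed_ratio"]
# ... so the Load is an Upper Bound, and a third trial is never needed.
```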
-
-</section>
-</section>
-</section>
-<section anchor="methodology-rationale-and-design-considerations"><name>Methodology Rationale and Design Considerations</name>
-
-<t>This section explains the Why behind MLRsearch. Building on the
-normative specification in Section
-<xref target="mlrsearch-specification">MLRsearch Specification</xref>,
-it contrasts MLRsearch with the classic
-<xref target="RFC2544"></xref> single-ratio binary-search procedure and walks through the
-key design choices: binary-search mechanics, stopping-rule precision,
-loss-inversion for multiple goals, exceed-ratio handling, short-trial
-strategies, and the generalized throughput concept. Together, these
-considerations show how the methodology reduces test time, supports
-multiple loss ratios, and improves repeatability.</t>
-
-<section anchor="binary-search"><name>Binary Search</name>
-
-<t>A typical binary search implementation for <xref target="RFC2544"></xref>
-tracks only the two tightest bounds.
-To start, the search needs both Max Load and Min Load values.
-Then, one trial is used to confirm Max Load is an Upper Bound,
-and one trial to confirm Min Load is a Lower Bound.</t>
-
-<t>Then, the next Trial Load is chosen as the mean of the current tightest upper bound
-and the current tightest lower bound, and becomes the new tightest bound
-of one kind or the other, depending on the Trial Loss Ratio.</t>
-
-<t>After some number of trials, the tightest lower bound becomes the throughput,
-but <xref target="RFC2544"></xref> does not specify when, if ever, the search should stop.
-In practice, the search stops either at some distance
-between the tightest upper bound and the tightest lower bound,
-or after some number of Trials.</t>
-
-<t>For a given pair of Max Load and Min Load values,
-there is one-to-one correspondence between number of Trials
-and final distance between the tightest bounds.
-Thus, the search always takes the same time,
-assuming initial bounds are confirmed.</t>
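<t>The procedure described above can be sketched in a few lines of
non-normative Python; the names are illustrative, and confirmation of the
initial bounds is assumed to have already happened.</t>

```python
# Non-normative sketch of an RFC 2544 style single-goal binary search.

def binary_search(measure, min_load, max_load, width):
    """measure(load) returns the Trial Loss Ratio of one full-length trial.

    Assumes min_load is already confirmed as a lower bound
    and max_load as an upper bound.
    """
    lower, upper = min_load, max_load
    while upper - lower > width:
        mid = (lower + upper) / 2.0
        if measure(mid) > 0.0:  # any loss: mid is the new tightest upper bound
            upper = mid
        else:                   # zero loss: mid is the new tightest lower bound
            lower = mid
    return lower  # reported as the throughput
```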
-
-</section>
-<section anchor="stopping-conditions-and-precision"><name>Stopping Conditions and Precision</name>
-
-<t>MLRsearch Specification requires listing both Relevant Bounds for each
-Search Goal, and the difference between the bounds implies
-whether the result precision is achieved.
-Therefore, it is not necessary to report the specific stopping condition used.</t>
-
-<t>MLRsearch implementations may use Goal Width
-to allow direct control of result precision
-and indirect control of the Search Duration.</t>
-
-<t>Other MLRsearch implementations may use different stopping conditions:
-for example based on the Search Duration, trading off precision control
-for duration control.</t>
-
-<t>Due to various possible time optimizations, there is no strict
-correspondence between the Search Duration and Goal Width values.
-In practice, noisy SUT performance increases both average search time
-and its variance.</t>
-
-</section>
-<section anchor="loss-ratios-and-loss-inversion"><name>Loss Ratios and Loss Inversion</name>
-
-<t>The biggest difference between MLRsearch and <xref target="RFC2544"></xref> binary search
-is in the goals of the search.
-<xref target="RFC2544"></xref> has a single goal, based on classifying a single full-length trial
-as either zero-loss or non-zero-loss.
-MLRsearch supports searching for multiple Search Goals at once,
-usually differing in their Goal Loss Ratio values.</t>
-
-<section anchor="single-goal-and-hard-bounds"><name>Single Goal and Hard Bounds</name>
-
-<t>Each bound in <xref target="RFC2544"></xref> simple binary search is "hard",
-in the sense that all further Trial Load values
-are smaller than any current upper bound and larger than any current lower bound.</t>
-
-<t>This is also possible for MLRsearch implementations,
-when the search is started with only one Search Goal instance.</t>
-
-</section>
-<section anchor="multiple-goals-and-loss-inversion"><name>Multiple Goals and Loss Inversion</name>
-
-<t>MLRsearch Specification supports multiple Search Goals, making the search procedure
-more complicated compared to a binary search with a single goal,
-but most of the complications do not affect the final results much,
-except for one phenomenon: Loss Inversion.</t>
-
-<t>Depending on Search Goal attributes, Load Classification results may be resistant
-to small amounts of the inconsistency described in Section <xref target="inconsistent-trial-results">Inconsistent Trial Results</xref>.
-However, for larger amounts, a Load that is classified
-as an Upper Bound for one Search Goal
-may still be a Lower Bound for another Search Goal.
-Due to this other goal, MLRsearch will probably perform subsequent Trials
-at Trial Loads even larger than the original value.</t>
-
-<t>This introduces questions that any multi-goal search algorithm has to address.
-For example: What to do when all such larger-load trials happen to have zero loss?
-Does it mean the earlier upper bound was not real?
-Does it mean the later Low-Loss trials are not considered a lower bound?</t>
-
-<t>The situation where a smaller Load is classified as an Upper Bound,
-while a larger Load is classified as a Lower Bound (for the same search goal),
-is called Loss Inversion.</t>
-
-<t>Conversely, only single-goal search algorithms can have hard bounds
-that shield them from Loss Inversion.</t>
-
-</section>
-<section anchor="conservativeness-and-relevant-bounds"><name>Conservativeness and Relevant Bounds</name>
-
-<t>MLRsearch is conservative when dealing with Loss Inversion:
-the Upper Bound is considered real, and the Lower Bound
-is considered to be a fluke, at least when computing the final result.</t>
-
-<t>This is formalized using definitions of
-<xref target="relevant-upper-bound">Relevant Upper Bound</xref> and
-<xref target="relevant-lower-bound">Relevant Lower Bound</xref>.</t>
-
-<t>The Relevant Upper Bound (for specific goal) is the smallest Load classified
-as an Upper Bound. But the Relevant Lower Bound is not simply
-the largest among Lower Bounds. It is the largest Load among Loads
-that are Lower Bounds while also being smaller than the Relevant Upper Bound.</t>
-
-<t>With these definitions, the Relevant Lower Bound is always smaller
-than the Relevant Upper Bound (if both exist), and the two relevant bounds
-are used analogously as the two tightest bounds in the binary search.
-When they meet the stopping conditions, the Relevant Bounds are used in the output.</t>
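-
-<t>As an informative sketch (with invented classification results),
-the two definitions above can be expressed directly in Python:</t>
-
```python
# Hypothetical per-Load classifications for one Search Goal,
# exhibiting Loss Inversion: 12.0 is an Upper Bound, yet 14.0 is a Lower Bound.
classifications = {10.0: "lower", 12.0: "upper", 14.0: "lower", 16.0: "upper"}

# Relevant Upper Bound: the smallest Load classified as an Upper Bound.
uppers = sorted(load for load, c in classifications.items() if c == "upper")
relevant_upper_bound = uppers[0]  # 12.0

# Relevant Lower Bound: the largest Lower Bound that is also smaller
# than the Relevant Upper Bound; the inverted 14.0 result is ignored.
lowers = [load for load, c in classifications.items()
          if c == "lower" and load < relevant_upper_bound]
relevant_lower_bound = max(lowers)  # 10.0
```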
-
-</section>
-<section anchor="consequences"><name>Consequences</name>
-
-<t>The consequence of the way the Relevant Bounds are defined is that
-every Trial Result can have an impact
-on any current Relevant Bound larger than that Trial Load,
-namely by becoming a new Upper Bound.</t>
-
-<t>This also applies when that Load is measured
-before another Load gets enough measurements to become a current Relevant Bound.</t>
-
-<t>This also implies that if the SUT tested (or the Traffic Generator used)
-needs a warm-up, it should be warmed up before starting the Search;
-otherwise, the first few measurements could become unjustly limiting.</t>
-
-<t>For MLRsearch implementations, it means it is better to measure
-at smaller Loads first, so bounds found earlier are less likely
-to get invalidated later.</t>
-
-</section>
-</section>
-<section anchor="exceed-ratio-and-multiple-trials"><name>Exceed Ratio and Multiple Trials</name>
-
-<t>The idea of performing multiple Trials at the same Trial Load comes from
-a model where some Trial Results (those with high Trial Loss Ratio) are affected
-by infrequent effects, causing unsatisfactory repeatability
-of <xref target="RFC2544"></xref> Throughput results.</t>
-
-<t>Refer to Section <xref target="dut-in-sut">DUT in SUT</xref>
-for a discussion about noiseful and noiseless ends
-of the SUT performance spectrum.
-Stable results are closer to the noiseless end of the SUT performance spectrum,
-so MLRsearch may need to allow some frequency of high-loss trials
-to ignore the rare but big effects near the noiseful end.</t>
-
-<t>For MLRsearch to perform such Trial Result filtering, it needs
-a configuration option to tell how frequent the "infrequent" big loss can be.
-This option is called the <xref target="goal-exceed-ratio">Goal Exceed Ratio</xref>.
-It tells MLRsearch what ratio of trials (more specifically,
-what ratio of Trial Effective Duration seconds)
-can have a <xref target="trial-loss-ratio">Trial Loss Ratio</xref>
-larger than the <xref target="goal-loss-ratio">Goal Loss Ratio</xref>
-and still be classified as a <xref target="lower-bound">Lower Bound</xref>.</t>
-
-<t>Zero exceed ratio means all Trials must have a Trial Loss Ratio
-equal to or lower than the Goal Loss Ratio.</t>
-
-<t>When more than one Trial is intended to classify a Load,
-MLRsearch also needs something that controls the number of trials needed.
-Therefore, each goal also has an attribute called Goal Duration Sum.</t>
-
-<t>The meaning of a <xref target="goal-duration-sum">Goal Duration Sum</xref> is that
-when a Load has (Full-Length) Trials
-whose Trial Effective Durations when summed up give a value at least as big
-as the Goal Duration Sum value,
-the Load is guaranteed to be classified either as an Upper Bound
-or a Lower Bound for that Search Goal instance.</t>
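-
-<t>As an informative sketch with invented numbers, the duration-weighted
-exceed ratio for one Load (full-length trials only, for simplicity)
-can be computed as follows:</t>
-
```python
goal_loss_ratio = 0.005   # invented example values
goal_exceed_ratio = 0.5
# (Trial Loss Ratio, Trial Effective Duration in seconds) per trial:
trials = [(0.0, 60.0), (0.02, 60.0), (0.001, 60.0), (0.004, 60.0)]

# Duration spent in high-loss trials, relative to all trial duration:
high_loss_s = sum(d for lr, d in trials if lr > goal_loss_ratio)
total_s = sum(d for _, d in trials)
exceed_ratio = high_loss_s / total_s  # 0.25 here

# One high-loss trial out of four (by duration) does not exceed
# the Goal Exceed Ratio, so this Load can still be a Lower Bound:
meets_goal = exceed_ratio <= goal_exceed_ratio
```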
-
-</section>
-<section anchor="short-trials-and-duration-selection"><name>Short Trials and Duration Selection</name>
-
-<t>MLRsearch requires each Search Goal to specify its Goal Final Trial Duration.</t>
-
-<t>Section 24 of <xref target="RFC2544"></xref> already anticipates possible time savings
-when Short Trials are used.</t>
-
-<t>An MLRsearch implementation MAY expose configuration parameters that
-decide whether, when, and how short trial durations are used. The exact
-heuristics and controls are left to the discretion of the implementer.</t>
-
-<t>While MLRsearch implementations are free to use any logic to select
-Trial Input values, comparability between MLRsearch implementations
-is only assured when the Load Classification logic
-handles any possible set of Trial Results in the same way.</t>
-
-<t>The presence of Short Trial Results complicates
-the Load Classification logic, see more details in Section
-<xref target="load-classification-logic">Load Classification Logic</xref>.</t>
-
-<t>While the Load Classification algorithm is designed to avoid any unneeded Trials,
-for explainability reasons it is recommended for users to use
-such Controller Input instances that lead to all Trial Duration values
-selected by Controller to be the same,
-e.g., by setting any Goal Initial Trial Duration to be a single value
-also used in all Goal Final Trial Duration attributes.</t>
-
-</section>
-<section anchor="generalized-throughput"><name>Generalized Throughput</name>
-
-<t>Because testing equipment takes the Intended Load
-as an input parameter for a Trial measurement,
-any load search algorithm needs to deal with Intended Load values internally.</t>
-
-<t>But in the presence of Search Goals with a non-zero
-<xref target="goal-loss-ratio">Goal Loss Ratio</xref>, the Load usually does not match
-the user's intuition of what a throughput is.
-The forwarding rate as defined in Section 3.6.1 of <xref target="RFC2285"></xref> is better,
-but it is not obvious how to generalize it
-for Loads with multiple Trials and a non-zero Goal Loss Ratio.</t>
-
-<t>The clearest illustration - and the chief reason for adopting a
-generalized throughput definition - is the presence of a hard
-performance limit.</t>
-
-<section anchor="hard-performance-limit"><name>Hard Performance Limit</name>
-
-<t>Even if bandwidth of a medium allows higher traffic forwarding performance,
-the SUT interfaces may have their own additional limitations,
-e.g., a specific frames-per-second limit on the NIC (a common occurrence).</t>
-
-<t>Those limitations should be known and provided as Max Load, Section
-<xref target="max-load">Max Load</xref>.</t>
-
-<t>But if Max Load is set larger than what the interface can receive or transmit,
-there will be a "hard limit" behavior observed in Trial Results.</t>
-
-<t>Consider that the hard limit is at hundred million frames per second (100 Mfps),
-Max Load is larger, and the Goal Loss Ratio is 0.5%.
-If DUT has no additional losses, 0.5% Trial Loss Ratio will be achieved
-at Relevant Lower Bound of 100.5025 Mfps.</t>
-
-<t>Reporting a throughput that exceeds the SUT's verified hard limit is
-counter-intuitive. Accordingly, the <xref target="RFC2544"></xref> Throughput metric should
-be generalized - rather than relying solely on the Relevant Lower
-Bound - to reflect realistic, limit-aware performance.</t>
-
-<t>MLRsearch defines one such generalization,
-the <xref target="conditional-throughput">Conditional Throughput</xref>.
-It is the Trial Forwarding Rate from one of the Full-Length Trials
-performed at the Relevant Lower Bound.
-The algorithm to determine which trial exactly is in
-<xref target="conditional-throughput-code">Appendix B</xref>.</t>
-
-<t>In the hard limit example, 100.5025 Mfps Load will still have
-only 100.0 Mfps forwarding rate, nicely confirming the known limitation.</t>
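-
-<t>The arithmetic of this example can be checked with a short informative
-computation:</t>
-
```python
hard_limit = 100.0       # Mfps: the interface hard limit from the example.
goal_loss_ratio = 0.005  # 0.5%

# With no other losses, the Trial Loss Ratio caused purely by the hard
# limit reaches the Goal Loss Ratio when (load - hard_limit) / load
# equals goal_loss_ratio, i.e. at the Relevant Lower Bound:
relevant_lower_bound = hard_limit / (1.0 - goal_loss_ratio)  # ~100.5025

# The Trial Forwarding Rate at that Load stays capped at the hard limit,
# which is what Conditional Throughput reports:
trial_loss_ratio = (relevant_lower_bound - hard_limit) / relevant_lower_bound
conditional_throughput = relevant_lower_bound * (1.0 - trial_loss_ratio)
```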
-
-</section>
-<section anchor="performance-variability"><name>Performance Variability</name>
-
-<t>With non-zero Goal Loss Ratio, and without hard performance limits,
-Low-Loss trials at the same Load may achieve different Trial Forwarding Rate
-values just due to DUT performance variability.</t>
-
-<t>By comparing the best case (all Relevant Lower Bound trials have zero loss)
-and the worst case (all Trial Loss Ratios at Relevant Lower Bound
-are equal to the Goal Loss Ratio),
-one can prove that Conditional Throughput
-values may have up to the Goal Loss Ratio relative difference.</t>
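-
-<t>A short informative computation, using invented values, illustrates
-the bound:</t>
-
```python
load = 100.0             # a hypothetical Relevant Lower Bound value
goal_loss_ratio = 0.005

# Best case: every trial at the bound had zero loss.
best_case = load * (1.0 - 0.0)
# Worst case: every trial's loss ratio equaled the Goal Loss Ratio.
worst_case = load * (1.0 - goal_loss_ratio)

# The relative difference equals the Goal Loss Ratio:
relative_difference = (best_case - worst_case) / best_case
```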
-
-<t>Setting the Goal Width below the Goal Loss Ratio
-may cause the Conditional Throughput for a larger Goal Loss Ratio to become smaller
-than a Conditional Throughput for a goal with a lower Goal Loss Ratio,
-which is counter-intuitive, considering they come from the same Search.
-Therefore, it is RECOMMENDED to set the Goal Width to a value no lower
-than the Goal Loss Ratio of the higher-loss Search Goal.</t>
-
-<t>Although Conditional Throughput can fluctuate from one run to the next,
-it still offers a more discriminating basis for comparison than the
-Relevant Lower Bound - particularly when deterministic load selection
-yields the same Lower Bound value across multiple runs.</t>
-
-</section>
-</section>
-</section>
-<section anchor="mlrsearch-logic-and-example"><name>MLRsearch Logic and Example</name>
-
-<t>This section uses informal language to describe two aspects of MLRsearch logic:
-Load Classification and Conditional Throughput,
-reflecting formal pseudocode representation provided in
-<xref target="load-classification-code">Appendix A</xref>
-and <xref target="conditional-throughput-code">Appendix B</xref>.
-This is followed by example search.</t>
-
-<t>The logic is equivalent but not identical to the pseudocode
-in the appendices. The pseudocode is designed to be short and frequently
-combines multiple operations into one expression.
-The logic as described in this section lists each operation separately
-and uses more intuitive names for the intermediate values.</t>
-
-<section anchor="load-classification-logic"><name>Load Classification Logic</name>
-
-<t>Note: For clarity of explanation, variables are tagged as (I)nput,
-(T)emporary, (O)utput.</t>
-
-<t><list style="symbols">
- <t>Collect Trial Results: <list style="symbols">
- <t>Take all Trial Result instances (I) measured at a given load.</t>
- </list></t>
- <t>Aggregate Trial Durations: <list style="symbols">
- <t>Full-length high-loss sum (T) is the sum of Trial Effective Duration
-values of all full-length high-loss trials (I).</t>
- <t>Full-length low-loss sum (T) is the sum of Trial Effective Duration
-values of all full-length low-loss trials (I).</t>
- <t>Short high-loss sum (T) is the sum of Trial Effective Duration values
-of all short high-loss trials (I).</t>
- <t>Short low-loss sum (T) is the sum of Trial Effective Duration values
-of all short low-loss trials (I).</t>
- </list></t>
- <t>Derive goal-based ratios: <list style="symbols">
- <t>Subceed ratio (T) is One minus the Goal Exceed Ratio (I).</t>
- <t>Exceed coefficient (T) is the Goal Exceed Ratio divided by the subceed
-ratio.</t>
- </list></t>
- <t>Balance short-trial effects: <list style="symbols">
- <t>Balancing sum (T) is the short low-loss sum
-multiplied by the exceed coefficient.</t>
- <t>Excess sum (T) is the short high-loss sum minus the balancing sum.</t>
- <t>Positive excess sum (T) is the maximum of zero and excess sum.</t>
- </list></t>
- <t>Compute effective duration totals: <list style="symbols">
- <t>Effective high-loss sum (T) is the full-length high-loss sum
-plus the positive excess sum.</t>
- <t>Effective full sum (T) is the effective high-loss sum
-plus the full-length low-loss sum.</t>
- <t>Effective whole sum (T) is the larger of the effective full sum
-and the Goal Duration Sum.</t>
- <t>Missing sum (T) is the effective whole sum minus the effective full sum.</t>
- </list></t>
- <t>Estimate exceed ratios: <list style="symbols">
- <t>Pessimistic high-loss sum (T) is the effective high-loss sum
-plus the missing sum.</t>
- <t>Optimistic exceed ratio (T) is the effective high-loss sum
-divided by the effective whole sum.</t>
- <t>Pessimistic exceed ratio (T) is the pessimistic high-loss sum
-divided by the effective whole sum.</t>
- </list></t>
- <t>Classify the Load: <list style="symbols">
- <t>The load is classified as an Upper Bound (O) if the optimistic exceed
-ratio is larger than the Goal Exceed Ratio.</t>
- <t>The load is classified as a Lower Bound (O) if the pessimistic exceed
-ratio is not larger than the Goal Exceed Ratio.</t>
- <t>The load is classified as undecided (O) otherwise.</t>
- </list></t>
-</list></t>
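-
-<t>As an informative trace of the steps above (all numbers invented),
-consider four trials at one Load:</t>
-
```python
# Invented Trial Results at one Load:
# (Trial Effective Duration in seconds, is_full_length, is_high_loss)
trials = [
    (60.0, True, False),   # full-length low-loss
    (60.0, True, True),    # full-length high-loss
    (10.0, False, True),   # short high-loss
    (10.0, False, False),  # short low-loss
]
goal_exceed_ratio = 0.5
goal_duration_sum = 120.0

# Aggregate trial durations:
full_high = sum(d for d, full, high in trials if full and high)
full_low = sum(d for d, full, high in trials if full and not high)
short_high = sum(d for d, full, high in trials if not full and high)
short_low = sum(d for d, full, high in trials if not full and not high)

# Derive goal-based ratios and balance short-trial effects:
subceed_ratio = 1.0 - goal_exceed_ratio
exceed_coefficient = goal_exceed_ratio / subceed_ratio
balancing_sum = short_low * exceed_coefficient
positive_excess_sum = max(0.0, short_high - balancing_sum)

# Compute effective duration totals:
effective_high_loss_sum = full_high + positive_excess_sum
effective_full_sum = effective_high_loss_sum + full_low
effective_whole_sum = max(effective_full_sum, goal_duration_sum)
missing_sum = effective_whole_sum - effective_full_sum

# Estimate exceed ratios and classify:
pessimistic_high_loss_sum = effective_high_loss_sum + missing_sum
optimistic_exceed_ratio = effective_high_loss_sum / effective_whole_sum
pessimistic_exceed_ratio = pessimistic_high_loss_sum / effective_whole_sum
if optimistic_exceed_ratio > goal_exceed_ratio:
    classification = "Upper Bound"
elif pessimistic_exceed_ratio <= goal_exceed_ratio:
    classification = "Lower Bound"
else:
    classification = "Undecided"
```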
-
-</section>
-<section anchor="conditional-throughput-logic"><name>Conditional Throughput Logic</name>
-
-<t><list style="symbols">
- <t>Collect Trial Results: <list style="symbols">
- <t>Take all Trial Result instances (I) measured at a given Load.</t>
- </list></t>
- <t>Sum Full-Length Durations: <list style="symbols">
- <t>Full-length high-loss sum (T) is the sum of Trial Effective Duration
-values of all full-length high-loss trials (I).</t>
- <t>Full-length low-loss sum (T) is the sum of Trial Effective Duration
-values of all full-length low-loss trials (I).</t>
- <t>Full-length sum (T) is the full-length high-loss sum (I) plus the
-full-length low-loss sum (I).</t>
- </list></t>
- <t>Derive initial thresholds: <list style="symbols">
- <t>Subceed ratio (T) is One minus the Goal Exceed Ratio (I).</t>
- <t>Remaining sum (T) initially is the full-length sum multiplied by the subceed
-ratio.</t>
- <t>Current loss ratio (T) initially is 100%.</t>
- </list></t>
- <t>Iterate through ordered trials: <list style="symbols">
- <t>For each full-length trial result, sorted in increasing order by Trial
-Loss Ratio: <list style="symbols">
- <t>If remaining sum is not larger than zero, exit the loop.</t>
- <t>Set current loss ratio to this trial's Trial Loss Ratio (I).</t>
- <t>Decrease the remaining sum by this trial's Trial Effective Duration (I).</t>
- </list></t>
- </list></t>
- <t>Compute Conditional Throughput: <list style="symbols">
- <t>Current forwarding ratio (T) is One minus the current loss ratio.</t>
- <t>Conditional Throughput (T) is the current forwarding ratio multiplied
-by the Load value.</t>
- </list></t>
-</list></t>
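-
-<t>As an informative trace of the steps above (all numbers invented),
-consider three full-length trials at a Load of 100.0 (in some rate unit):</t>
-
```python
goal_exceed_ratio = 0.5   # invented example values
goal_duration_sum = 180.0
load = 100.0
# Full-length trials as (Trial Loss Ratio, Trial Effective Duration),
# sorted in increasing order by Trial Loss Ratio:
full_length_trials = [(0.0, 60.0), (0.002, 60.0), (0.01, 60.0)]

# Derive initial thresholds:
full_length_sum = sum(d for _, d in full_length_trials)
whole_sum = max(goal_duration_sum, full_length_sum)
remaining_sum = whole_sum * (1.0 - goal_exceed_ratio)  # 90.0 here
current_loss_ratio = 1.0  # 100% until any trial is counted

# Iterate through ordered trials:
for loss_ratio, duration in full_length_trials:
    if remaining_sum <= 0.0:
        break  # enough duration counted; keep the current loss ratio
    current_loss_ratio = loss_ratio
    remaining_sum -= duration

# The 0.01-loss trial is never reached, so the result reflects 0.002:
conditional_throughput = load * (1.0 - current_loss_ratio)  # 99.8 here
```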
-
-<section anchor="conditional-throughput-and-load-classification"><name>Conditional Throughput and Load Classification</name>
-
-<t>Conditional Throughput and results of Load Classification overlap but
-are not identical.</t>
-
-<t><list style="symbols">
- <t>When a load is marked as a Relevant Lower Bound, its Conditional
-Throughput is taken from a trial whose loss ratio never exceeds the
-Goal Loss Ratio.</t>
- <t>The reverse is not guaranteed: if the Goal Width is narrower than the
-Goal Loss Ratio, Conditional Throughput can still end up higher than
-the Relevant Upper Bound.</t>
-</list></t>
-
-</section>
-</section>
-<section anchor="sut-behaviors"><name>SUT Behaviors</name>
-
-<t>In Section <xref target="dut-in-sut">DUT in SUT</xref>, the notion of noise has been introduced.
-This section uses new terms
-to describe possible SUT behaviors more precisely.</t>
-
-<t>From measurement point of view, noise is visible as inconsistent trial results.
-See <xref target="inconsistent-trial-results">Inconsistent Trial Results</xref> for general points
-and <xref target="loss-ratios-and-loss-inversion">Loss Ratios and Loss Inversion</xref>
-for specifics when comparing different Load values.</t>
-
-<t>Load Classification and Conditional Throughput apply to a single Load value,
-but even the set of Trial Results measured at that Trial Load value
-may appear inconsistent.</t>
-
-<t>As MLRsearch aims to save time, it executes only a small number of Trials,
-getting only a limited amount of information about SUT behavior.
-It is useful to introduce an "SUT expert" point of view to contrast
-with that limited information.</t>
-
-<section anchor="expert-predictions"><name>Expert Predictions</name>
-
-<t>Imagine that before the Search starts, a human expert had unlimited time
-to measure SUT and obtain all reliable information about it.
-The information is not perfect, as there is still random noise influencing SUT.
-But the expert is familiar with possible noise events, even the rare ones,
-and thus the expert can do probabilistic predictions about future Trial Outputs.</t>
-
-<t>When several outcomes are possible,
-the expert can assess probability of each outcome.</t>
-
-</section>
-<section anchor="exceed-probability"><name>Exceed Probability</name>
-
-<t>When the Controller selects new Trial Duration and Trial Load,
-and just before the Measurer starts performing the Trial,
-the SUT expert can envision possible Trial Results.</t>
-
-<t>With respect to a particular Search Goal instance, the possibilities
-can be summarized into a single number: Exceed Probability.
-It is the probability (according to the expert) that the measured
-Trial Loss Ratio will be higher than the Goal Loss Ratio.</t>
-
-</section>
-<section anchor="trial-duration-dependence"><name>Trial Duration Dependence</name>
-
-<t>When comparing Exceed Probability values for the same Trial Load value
-but different Trial Duration values,
-there are several patterns that commonly occur in practice.</t>
-
-<section anchor="strong-increase"><name>Strong Increase</name>
-
-<t>Exceed Probability is very low at short durations but very high at full length.
-This SUT behavior is undesirable, and may hint at a faulty SUT,
-e.g., the SUT leaks resources and is unable to sustain the desired performance.</t>
-
-<t>But this behavior is also seen when the SUT uses a large amount of buffers.
-This is the main reason users may want to set a large Goal Final Trial Duration.</t>
-
-</section>
-<section anchor="mild-increase"><name>Mild Increase</name>
-
-<t>Short trials are slightly less likely to exceed the loss-ratio limit,
-but the improvement is modest. This mild benefit is typical when noise
-is dominated by rare, large loss spikes: during a full-length trial,
-the good-performing periods cannot fully offset the heavy frame loss
-that occurs in the brief low-performing bursts.</t>
-
-</section>
-<section anchor="independence"><name>Independence</name>
-
-<t>Short trials have basically the same Exceed Probability as full-length trials.
-This is possible only if loss spikes are small (so other parts can compensate)
-and if Goal Loss Ratio is more than zero (otherwise, other parts
-cannot compensate at all).</t>
-
-</section>
-<section anchor="decrease"><name>Decrease</name>
-
-<t>Short trials have larger Exceed Probability than full-length trials.
-This can be possible only for non-zero Goal Loss Ratio,
-for example if SUT needs to "warm up" to best performance within each trial.
-Not commonly seen in practice.</t>
-
-</section>
-</section>
-</section>
-</section>
-<section anchor="iana-considerations"><name>IANA Considerations</name>
-
-<t>This document does not make any request to IANA.</t>
-
-</section>
-<section anchor="security-considerations"><name>Security Considerations</name>
-
-<t>Benchmarking activities as described in this memo are limited to
-technology characterization of a DUT/SUT using controlled stimuli in a
-laboratory environment, with dedicated address space and the constraints
-specified in the sections above.</t>
-
-<t>The benchmarking network topology will be an independent test setup and
-MUST NOT be connected to devices that may forward the test traffic into
-a production network or misroute traffic to the test management network.</t>
-
-<t>Further, benchmarking is performed on an "opaque" basis, relying
-solely on measurements observable external to the DUT/SUT.</t>
-
-<t>The DUT/SUT SHOULD NOT include features that serve only to boost
-benchmark scores - such as a dedicated "fast-track" test mode that is
-never used in normal operation.</t>
-
-<t>Any implications for network security arising from the DUT/SUT SHOULD be
-identical in the lab and in production networks.</t>
-
-</section>
-<section anchor="acknowledgements"><name>Acknowledgements</name>
-
-<t>Special wholehearted gratitude and thanks to the late Al Morton for his
-thorough reviews filled with very specific feedback and constructive
-guidelines. Thank You Al for the close collaboration over the years, Your Mentorship,
-Your continuous unwavering encouragement full of empathy and energizing
-positive attitude. Al, You are dearly missed.</t>
-
-<t>Thanks to Gabor Lencse, Giuseppe Fioccola, Carsten Rossenhoevel and BMWG
-contributors for good discussions and thorough reviews, guiding and
-helping us to improve the clarity and formality of this document.</t>
-
-<t>Many thanks to Alec Hothan of the OPNFV NFVbench project for a thorough
-review and numerous useful comments and suggestions in the earlier
-versions of this document.</t>
-
-<t>We are equally indebted to Mohamed Boucadair for a very thorough and
-detailed AD review and providing many good comments and suggestions,
-helping us make this document complete.</t>
-
-<t>Our appreciation is also extended to Shawn Emery, Yoshifumi Nishida,
-David Dong, Nabeel Cocker and Lars Eggert for their reviews and valuable comments.</t>
-
-</section>
-
-
- </middle>
-
- <back>
-
-
-<references title='References' anchor="sec-combined-references">
-
- <references title='Normative References' anchor="sec-normative-references">
-
-&RFC1242;
-&RFC2119;
-&RFC2285;
-&RFC2544;
-&RFC8174;
-
-
- </references>
-
- <references title='Informative References' anchor="sec-informative-references">
-
-&RFC5180;
-&RFC6349;
-&RFC6985;
-&RFC8219;
-<reference anchor="TST009" target="https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/009/03.04.01_60/gs_NFV-TST009v030401p.pdf">
- <front>
- <title>TST 009</title>
- <author >
- <organization></organization>
- </author>
- <date year="n.d."/>
- </front>
-</reference>
-<reference anchor="Y.1564" target="https://www.itu.int/rec/dologin_pub.asp?lang=e&id=T-REC-Y.1564-201602-I!!PDF-E&type=items">
- <front>
- <title>Y.1564</title>
- <author >
- <organization></organization>
- </author>
- <date year="n.d."/>
- </front>
-</reference>
-<reference anchor="FDio-CSIT-MLRsearch" target="https://csit.fd.io/cdocs/methodology/measurements/data_plane_throughput/mlr_search/">
- <front>
- <title>FD.io CSIT Test Methodology - MLRsearch</title>
- <author >
- <organization></organization>
- </author>
- <date year="2023" month="October"/>
- </front>
-</reference>
-<reference anchor="PyPI-MLRsearch" target="https://pypi.org/project/MLRsearch/1.2.1/">
- <front>
- <title>MLRsearch 1.2.1, Python Package Index</title>
- <author >
- <organization></organization>
- </author>
- <date year="2023" month="October"/>
- </front>
-</reference>
-<reference anchor="Lencze-Shima" target="https://datatracker.ietf.org/doc/html/draft-lencse-bmwg-rfc2544-bis-00">
- <front>
- <title>An Upgrade to Benchmarking Methodology for Network Interconnect Devices - expired</title>
- <author >
- <organization></organization>
- </author>
- <date year="n.d."/>
- </front>
-</reference>
-<reference anchor="Lencze-Kovacs-Shima" target="http://dx.doi.org/10.11601/ijates.v9i2.288">
- <front>
- <title>Gaming with the Throughput and the Latency Benchmarking Measurement Procedures of RFC 2544</title>
- <author >
- <organization></organization>
- </author>
- <date year="n.d."/>
- </front>
-</reference>
-<reference anchor="Ott-Mathis-Semke-Mahdavi" target="https://www.cs.cornell.edu/people/egs/cornellonly/syslunch/fall02/ott.pdf">
- <front>
- <title>The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm</title>
- <author >
- <organization></organization>
- </author>
- <date year="n.d."/>
- </front>
-</reference>
-<reference anchor="Vassilev" target="https://datatracker.ietf.org/doc/draft-ietf-bmwg-network-tester-cfg/06">
- <front>
- <title>A YANG Data Model for Network Tester Management</title>
- <author >
- <organization></organization>
- </author>
- <date year="n.d."/>
- </front>
-</reference>
-
-
- </references>
-
-</references>
-
-
-<?line 2709?>
-
-<section anchor="load-classification-code"><name>Load Classification Code</name>
-
-<t>This appendix specifies how to perform the Load Classification.</t>
-
-<t>Any Trial Load value can be classified,
-according to a given <xref target="search-goal">Search Goal</xref> instance.</t>
-
-<t>The algorithm uses (some subsets of) the set of all available Trial Results
-from Trials measured at a given Load at the end of the Search.</t>
-
-<t>The block at the end of this appendix holds pseudocode
-which computes two values, stored in variables named
-<spanx style="verb">optimistic_is_lower</spanx> and <spanx style="verb">pessimistic_is_lower</spanx>.</t>
-
-<t>Although presented as pseudocode, the listing is syntactically valid
-Python and can be executed without modification.</t>
-
-<t>If values of both variables are computed to be true, the Load in question
-is classified as a Lower Bound according to the given Search Goal instance.
-If values of both variables are false, the Load is classified as an Upper Bound.
-Otherwise, the load is classified as Undecided.</t>
-
-<t>Some variable names are shortened to fit expressions in one line.
-Namely, variables holding sum quantities end in <spanx style="verb">_s</spanx> instead of <spanx style="verb">_sum</spanx>,
-and variables holding effective quantities start in <spanx style="verb">effect_</spanx>
-instead of <spanx style="verb">effective_</spanx>.</t>
-
-<t>The pseudocode expects the following variables to hold the following values:</t>
-
-<t><list style="symbols">
- <t><spanx style="verb">goal_duration_s</spanx>: The Goal Duration Sum value of the given Search Goal.</t>
- <t><spanx style="verb">goal_exceed_ratio</spanx>: The Goal Exceed Ratio value of the given Search Goal.</t>
- <t><spanx style="verb">full_length_low_loss_s</spanx>: Sum of Trial Effective Durations across Trials
-with Trial Duration at least equal to the Goal Final Trial Duration
-and with Trial Loss Ratio not higher than the Goal Loss Ratio
-(across Full-Length Low-Loss Trials).</t>
- <t><spanx style="verb">full_length_high_loss_s</spanx>: Sum of Trial Effective Durations across Trials
-with Trial Duration at least equal to the Goal Final Trial Duration
-and with Trial Loss Ratio higher than the Goal Loss Ratio
-(across Full-Length High-Loss Trials).</t>
- <t><spanx style="verb">short_low_loss_s</spanx>: Sum of Trial Effective Durations across Trials
-with Trial Duration shorter than the Goal Final Trial Duration
-and with Trial Loss Ratio not higher than the Goal Loss Ratio
-(across Short Low-Loss Trials).</t>
- <t><spanx style="verb">short_high_loss_s</spanx>: Sum of Trial Effective Durations across Trials
-with Trial Duration shorter than the Goal Final Trial Duration
-and with Trial Loss Ratio higher than the Goal Loss Ratio
-(across Short High-Loss Trials).</t>
-</list></t>
-
-<t>The code works correctly also when there are no Trial Results at a given Load.</t>
-
-<figure><sourcecode type="python"><![CDATA[
-<CODE BEGINS>
-exceed_coefficient = goal_exceed_ratio / (1.0 - goal_exceed_ratio)
-balancing_s = short_low_loss_s * exceed_coefficient
-positive_excess_s = max(0.0, short_high_loss_s - balancing_s)
-effect_high_loss_s = full_length_high_loss_s + positive_excess_s
-effect_full_length_s = full_length_low_loss_s + effect_high_loss_s
-effect_whole_s = max(effect_full_length_s, goal_duration_s)
-quantile_duration_s = effect_whole_s * goal_exceed_ratio
-pessimistic_high_loss_s = effect_whole_s - full_length_low_loss_s
-pessimistic_is_lower = pessimistic_high_loss_s <= quantile_duration_s
-optimistic_is_lower = effect_high_loss_s <= quantile_duration_s
-<CODE ENDS>
-]]></sourcecode></figure>
-
-</section>
-<section anchor="conditional-throughput-code"><name>Conditional Throughput Code</name>
-
-<t>This section specifies an example of how to compute Conditional Throughput,
-as referred to in Section <xref target="conditional-throughput">Conditional Throughput</xref>.</t>
-
-<t>Any Load value can be used as the basis for the following computation,
-but only the Relevant Lower Bound (at the end of the Search)
-leads to the value called the Conditional Throughput for a given Search Goal.</t>
-
-<t>The algorithm uses (some subsets of) the set of all available Trial Results
-from Trials measured at a given Load at the end of the Search.</t>
-
-<t>The block at the end of this appendix holds pseudocode
-which computes a value stored as variable <spanx style="verb">conditional_throughput</spanx>.</t>
-
-<t>Although presented as pseudocode, the listing is syntactically valid
-Python and can be executed without modification.</t>
-
-<t>Some variable names are shortened in order to fit expressions in one line.
-Namely, variables holding sum quantities end in <spanx style="verb">_s</spanx> instead of <spanx style="verb">_sum</spanx>,
-and variables holding effective quantities start in <spanx style="verb">effect_</spanx>
-instead of <spanx style="verb">effective_</spanx>.</t>
-
-<t>The pseudocode expects the following variables to hold the following values:</t>
-
-<t><list style="symbols">
- <t><spanx style="verb">goal_duration_s</spanx>: The Goal Duration Sum value of the given Search Goal.</t>
- <t><spanx style="verb">goal_exceed_ratio</spanx>: The Goal Exceed Ratio value of the given Search Goal.</t>
- <t><spanx style="verb">intended_load</spanx>: The Intended Load of the Trials measured at the given Load.</t>
- <t><spanx style="verb">full_length_low_loss_s</spanx>: Sum of Trial Effective Durations across Trials
-with Trial Duration at least equal to the Goal Final Trial Duration
-and with Trial Loss Ratio not higher than the Goal Loss Ratio
-(across Full-Length Low-Loss Trials).</t>
- <t><spanx style="verb">full_length_high_loss_s</spanx>: Sum of Trial Effective Durations across Trials
-with Trial Duration at least equal to the Goal Final Trial Duration
-and with Trial Loss Ratio higher than the Goal Loss Ratio
-(across Full-Length High-Loss Trials).</t>
- <t><spanx style="verb">full_length_trials</spanx>: An iterable of all Trial Results from Trials
-with Trial Duration at least equal to the Goal Final Trial Duration
-(all Full-Length Trials), sorted by increasing Trial Loss Ratio.
-One item <spanx style="verb">trial</spanx> is a composite with the following two attributes available: <list style="symbols">
- <t><spanx style="verb">trial.loss_ratio</spanx>: The Trial Loss Ratio as measured for this Trial.</t>
- <t><spanx style="verb">trial.effect_duration</spanx>: The Trial Effective Duration of this Trial.</t>
- </list></t>
-</list></t>
-
-<t>The code works correctly only when there is at least one
-Trial Result measured at a given Load.</t>
-
-<figure><sourcecode type="python"><![CDATA[
-<CODE BEGINS>
-full_length_s = full_length_low_loss_s + full_length_high_loss_s
-whole_s = max(goal_duration_s, full_length_s)
-remaining = whole_s * (1.0 - goal_exceed_ratio)
-quantile_loss_ratio = None
-for trial in full_length_trials:
- if quantile_loss_ratio is None or remaining > 0.0:
- quantile_loss_ratio = trial.loss_ratio
- remaining -= trial.effect_duration
- else:
- break
-else:
- if remaining > 0.0:
- quantile_loss_ratio = 1.0
-conditional_throughput = intended_load * (1.0 - quantile_loss_ratio)
-<CODE ENDS>
-]]></sourcecode></figure>
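<t>As a quick sanity check, the listing above can be wrapped in a function and exercised
on a small hypothetical input (two 60-second Full-Length Trials at a one million frames
per second Load; the input values are illustrative only, not from any real Search):</t>

```python
from collections import namedtuple

# Minimal stand-in for a Trial Result composite (hypothetical names).
Trial = namedtuple("Trial", ["loss_ratio", "effect_duration"])

def conditional_throughput(intended_load, goal_duration_s,
                           goal_exceed_ratio, full_length_trials):
    """Wrap the pseudocode from this appendix.

    full_length_trials must be sorted by increasing Trial Loss Ratio.
    """
    full_length_s = sum(t.effect_duration for t in full_length_trials)
    whole_s = max(goal_duration_s, full_length_s)
    remaining = whole_s * (1.0 - goal_exceed_ratio)
    quantile_loss_ratio = None
    for trial in full_length_trials:
        if quantile_loss_ratio is None or remaining > 0.0:
            quantile_loss_ratio = trial.loss_ratio
            remaining -= trial.effect_duration
        else:
            break
    else:
        if remaining > 0.0:
            quantile_loss_ratio = 1.0
    return intended_load * (1.0 - quantile_loss_ratio)

# One low-loss and one high-loss 60-second trial, 50% exceed ratio:
# half of the duration sum may be high-loss, so the 0% quantile wins.
trials = [Trial(0.0, 60.0), Trial(0.001, 60.0)]
print(conditional_throughput(1e6, 120.0, 0.5, trials))  # 1000000.0
```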
-
-</section>
-<section anchor="example-search"><name>Example Search</name>
-
-<t>The following example Search is related to
-one hypothetical run of a Search test procedure
-that has been started with multiple Search Goals.
-Several points in time are chosen to show how the logic works,
-each with a specific set of Trial Results available.
-The trial results themselves are not very realistic, as
-the intention is to show several corner cases of the logic.</t>
-
-<t>In all Trials, the Trial Effective Duration is equal to the Trial Duration.</t>
-
-<t>Only one Trial Load is in focus; its value is one million frames per second.
-Trial Results at other Trial Loads are not mentioned,
-as the parts of the logic presented here do not depend on them.
-In practice, Trial Results at other Load values would be present,
-e.g., MLRsearch will look for a Lower Bound smaller than any Upper Bound found.</t>
-
-<t>At any given moment, exactly one Search Goal is designated as in focus.
-This designation affects only the Trial Duration chosen for new trials;
-it does not alter the rest of the decision logic.</t>
-
-<t>An MLRsearch implementation is free to evaluate several goals
-simultaneously; the "focus" mechanism is optional and appears here only
-to show that a Load can still be classified against goals that are not
-currently in focus.</t>
-
-<section anchor="example-goals"><name>Example Goals</name>
-
-<t>The following four Search Goal instances are selected for the example Search.
-Each goal has a readable name and a dense code;
-the code is useful for showing Search Goal attribute values.</t>
-
-<t>As the variable "exceed coefficient" does not depend on trial results,
-it is also precomputed here.</t>
-
-<t>Goal 1:</t>
-
-<figure><artwork><![CDATA[
-name: RFC2544
-Goal Final Trial Duration: 60s
-Goal Duration Sum: 60s
-Goal Loss Ratio: 0%
-Goal Exceed Ratio: 0%
-exceed coefficient: 0% / (100% - 0%) = 0.0
-code: 60f60d0l0e
-]]></artwork></figure>
-
-<t>Goal 2:</t>
-
-<figure><artwork><![CDATA[
-name: TST009
-Goal Final Trial Duration: 60s
-Goal Duration Sum: 120s
-Goal Loss Ratio: 0%
-Goal Exceed Ratio: 50%
-exceed coefficient: 50% / (100% - 50%) = 1.0
-code: 60f120d0l50e
-]]></artwork></figure>
-
-<t>Goal 3:</t>
-
-<figure><artwork><![CDATA[
-name: 1s final
-Goal Final Trial Duration: 1s
-Goal Duration Sum: 120s
-Goal Loss Ratio: 0.5%
-Goal Exceed Ratio: 50%
-exceed coefficient: 50% / (100% - 50%) = 1.0
-code: 1f120d.5l50e
-]]></artwork></figure>
-
-<t>Goal 4:</t>
-
-<figure><artwork><![CDATA[
-name: 20% exceed
-Goal Final Trial Duration: 60s
-Goal Duration Sum: 60s
-Goal Loss Ratio: 0.5%
-Goal Exceed Ratio: 20%
-exceed coefficient: 20% / (100% - 20%) = 0.25
-code: 60f60d0.5l20e
-]]></artwork></figure>
-
-<t>The first two goals are important for compliance reasons,
-the other two cover less frequent cases.</t>
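<t>The exceed coefficient values above all come from a single formula; a small sketch
(function name illustrative) recomputes them for the four goals:</t>

```python
def exceed_coefficient(goal_exceed_ratio):
    # exceed coefficient = exceed ratio / (1 - exceed ratio)
    return goal_exceed_ratio / (1.0 - goal_exceed_ratio)

# The four example goals and their Goal Exceed Ratio values.
for name, ratio in [("RFC2544", 0.0), ("TST009", 0.5),
                    ("1s final", 0.5), ("20% exceed", 0.2)]:
    print(name, exceed_coefficient(ratio))
# RFC2544 0.0, TST009 1.0, 1s final 1.0, 20% exceed 0.25
```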
-
-</section>
-<section anchor="example-trial-results"><name>Example Trial Results</name>
-
-<t>The following six sets of trial results are selected for the example Search.
-The sets are defined as points in time, describing which Trial Results
-were added since the previous point.</t>
-
-<t>Each point has a readable name and a dense code;
-the code is useful for showing Trial Output attribute values
-and the number of times identical results were added.</t>
-
-<t>Point 1:</t>
-
-<figure><artwork><![CDATA[
-name: first short good
-goal in focus: 1s final (1f120d.5l50e)
-added Trial Results: 59 trials, each 1 second and 0% loss
-code: 59x1s0l
-]]></artwork></figure>
-
-<t>Point 2:</t>
-
-<figure><artwork><![CDATA[
-name: first short bad
-goal in focus: 1s final (1f120d.5l50e)
-added Trial Result: one trial, 1 second, 1% loss
-code: 59x1s0l+1x1s1l
-]]></artwork></figure>
-
-<t>Point 3:</t>
-
-<figure><artwork><![CDATA[
-name: last short bad
-goal in focus: 1s final (1f120d.5l50e)
-added Trial Results: 59 trials, 1 second each, 1% loss each
-code: 59x1s0l+60x1s1l
-]]></artwork></figure>
-
-<t>Point 4:</t>
-
-<figure><artwork><![CDATA[
-name: last short good
-goal in focus: 1s final (1f120d.5l50e)
-added Trial Result: one trial, 1 second, 0% loss
-code: 60x1s0l+60x1s1l
-]]></artwork></figure>
-
-<t>Point 5:</t>
-
-<figure><artwork><![CDATA[
-name: first long bad
-goal in focus: TST009 (60f120d0l50e)
-added Trial Results: one trial, 60 seconds, 0.1% loss
-code: 60x1s0l+60x1s1l+1x60s.1l
-]]></artwork></figure>
-
-<t>Point 6:</t>
-
-<figure><artwork><![CDATA[
-name: first long good
-goal in focus: TST009 (60f120d0l50e)
-added Trial Results: one trial, 60 seconds, 0% loss
-code: 60x1s0l+60x1s1l+1x60s.1l+1x60s0l
-]]></artwork></figure>
-
-<t>Comments on point in time naming:</t>
-
-<t><list style="symbols">
- <t>When a name contains "short", it means the added trial
-had a Trial Duration of 1 second, which is a Short Trial for 3 of the Search Goals,
-but a Full-Length Trial for the "1s final" goal.</t>
- <t>Similarly, "long" in a name means the added trial
-had a Trial Duration of 60 seconds, which is a Full-Length Trial for 3 goals
-but a Long Trial for the "1s final" goal.</t>
- <t>When a name contains "good", it means the added trial is a Low-Loss Trial
-for all the goals.</t>
- <t>When a name contains "short bad", it means the added trial is a High-Loss Trial
-for all the goals.</t>
- <t>When a name contains "long bad", it means the added trial
-is a High-Loss Trial for goals "RFC2544" and "TST009",
-but a Low-Loss Trial for the two other goals.</t>
-</list></t>
-
-</section>
-<section anchor="load-classification-computations"><name>Load Classification Computations</name>
-
-<t>This section shows how Load Classification logic is applied
-by listing all temporary values at the specific time point.</t>
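<t>The temporary values in the tables can be recomputed from the four duration sums of one
goal at one Load. The following sketch is not normative; the formulas and names are
inferred from the table rows:</t>

```python
def classification_values(goal_duration_s, exceed_coefficient,
                          full_high_s, full_low_s,
                          short_high_s, short_low_s):
    """Return the optimistic and pessimistic exceed ratios
    for one goal at one Load, as shown in the tables."""
    balancing_s = short_low_s * exceed_coefficient
    excess_s = short_high_s - balancing_s
    positive_excess_s = max(0.0, excess_s)
    effect_high_s = full_high_s + positive_excess_s
    effect_full_s = full_low_s + effect_high_s
    effect_whole_s = max(goal_duration_s, effect_full_s)
    missing_s = effect_whole_s - effect_full_s
    pessimistic_high_s = effect_high_s + missing_s
    return (effect_high_s / effect_whole_s,
            pessimistic_high_s / effect_whole_s)

# Point 1, "1s final" goal: 59 seconds of full-length low-loss trials.
optimistic, pessimistic = classification_values(
    120.0, 1.0, full_high_s=0.0, full_low_s=59.0,
    short_high_s=0.0, short_low_s=0.0)
print(optimistic, pessimistic)  # 0.0 and 61/120, i.e. about 50.833%
```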
-
-<section anchor="point-1"><name>Point 1</name>
-
-<t>This is the "first short good" point.
-Code for available results is: 59x1s0l</t>
-
-<texttable>
- <ttcol align='left'>Goal name</ttcol>
- <ttcol align='left'>RFC2544</ttcol>
- <ttcol align='left'>TST009</ttcol>
- <ttcol align='left'>1s final</ttcol>
- <ttcol align='left'>20% exceed</ttcol>
- <c>Goal code</c>
- <c>60f60d0l0e</c>
- <c>60f120d0l50e</c>
- <c>1f120d.5l50e</c>
- <c>60f60d0.5l20e</c>
- <c>Full-length high-loss sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>0s</c>
- <c>0s</c>
- <c>Full-length low-loss sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>59s</c>
- <c>0s</c>
- <c>Short high-loss sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>0s</c>
- <c>0s</c>
- <c>Short low-loss sum</c>
- <c>59s</c>
- <c>59s</c>
- <c>0s</c>
- <c>59s</c>
- <c>Balancing sum</c>
- <c>0s</c>
- <c>59s</c>
- <c>0s</c>
- <c>14.75s</c>
- <c>Excess sum</c>
- <c>0s</c>
- <c>-59s</c>
- <c>0s</c>
- <c>-14.75s</c>
- <c>Positive excess sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>0s</c>
- <c>0s</c>
- <c>Effective high-loss sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>0s</c>
- <c>0s</c>
- <c>Effective full sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>59s</c>
- <c>0s</c>
- <c>Effective whole sum</c>
- <c>60s</c>
- <c>120s</c>
- <c>120s</c>
- <c>60s</c>
- <c>Missing sum</c>
- <c>60s</c>
- <c>120s</c>
- <c>61s</c>
- <c>60s</c>
- <c>Pessimistic high-loss sum</c>
- <c>60s</c>
- <c>120s</c>
- <c>61s</c>
- <c>60s</c>
- <c>Optimistic exceed ratio</c>
- <c>0%</c>
- <c>0%</c>
- <c>0%</c>
- <c>0%</c>
- <c>Pessimistic exceed ratio</c>
- <c>100%</c>
- <c>100%</c>
- <c>50.833%</c>
- <c>100%</c>
- <c>Classification Result</c>
- <c>Undecided</c>
- <c>Undecided</c>
- <c>Undecided</c>
- <c>Undecided</c>
-</texttable>
-
-<t>This is the last point in time where all goals have this Load as Undecided.</t>
-
-</section>
-<section anchor="point-2"><name>Point 2</name>
-
-<t>This is the "first short bad" point.
-Code for available results is: 59x1s0l+1x1s1l</t>
-
-<texttable>
- <ttcol align='left'>Goal name</ttcol>
- <ttcol align='left'>RFC2544</ttcol>
- <ttcol align='left'>TST009</ttcol>
- <ttcol align='left'>1s final</ttcol>
- <ttcol align='left'>20% exceed</ttcol>
- <c>Goal code</c>
- <c>60f60d0l0e</c>
- <c>60f120d0l50e</c>
- <c>1f120d.5l50e</c>
- <c>60f60d0.5l20e</c>
- <c>Full-length high-loss sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>1s</c>
- <c>0s</c>
- <c>Full-length low-loss sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>59s</c>
- <c>0s</c>
- <c>Short high-loss sum</c>
- <c>1s</c>
- <c>1s</c>
- <c>0s</c>
- <c>1s</c>
- <c>Short low-loss sum</c>
- <c>59s</c>
- <c>59s</c>
- <c>0s</c>
- <c>59s</c>
- <c>Balancing sum</c>
- <c>0s</c>
- <c>59s</c>
- <c>0s</c>
- <c>14.75s</c>
- <c>Excess sum</c>
- <c>1s</c>
- <c>-58s</c>
- <c>0s</c>
- <c>-13.75s</c>
- <c>Positive excess sum</c>
- <c>1s</c>
- <c>0s</c>
- <c>0s</c>
- <c>0s</c>
- <c>Effective high-loss sum</c>
- <c>1s</c>
- <c>0s</c>
- <c>1s</c>
- <c>0s</c>
- <c>Effective full sum</c>
- <c>1s</c>
- <c>0s</c>
- <c>60s</c>
- <c>0s</c>
- <c>Effective whole sum</c>
- <c>60s</c>
- <c>120s</c>
- <c>120s</c>
- <c>60s</c>
- <c>Missing sum</c>
- <c>59s</c>
- <c>120s</c>
- <c>60s</c>
- <c>60s</c>
- <c>Pessimistic high-loss sum</c>
- <c>60s</c>
- <c>120s</c>
- <c>61s</c>
- <c>60s</c>
- <c>Optimistic exceed ratio</c>
- <c>1.667%</c>
- <c>0%</c>
- <c>0.833%</c>
- <c>0%</c>
- <c>Pessimistic exceed ratio</c>
- <c>100%</c>
- <c>100%</c>
- <c>50.833%</c>
- <c>100%</c>
- <c>Classification Result</c>
- <c>Upper Bound</c>
- <c>Undecided</c>
- <c>Undecided</c>
- <c>Undecided</c>
-</texttable>
-
-<t>Due to the zero Goal Loss Ratio, the RFC2544 goal must assume either a mild or a strong
-increase of exceed probability with duration, so the one lossy trial would be lossy
-even if measured at the 60-second duration.
-Due to the zero Goal Exceed Ratio, one High-Loss Trial is enough to preclude this Load
-from becoming a Lower Bound for RFC2544. That is why this Load
-is classified as an Upper Bound for RFC2544 this early.</t>
-
-<t>This is an example of how significant time can be saved, compared to using only 60-second trials.</t>
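<t>The early Upper Bound for RFC2544 can be verified with a few lines of arithmetic
(a sketch, not normative; names are illustrative):</t>

```python
# Point 2, RFC2544 goal: exceed coefficient 0.0, Goal Duration Sum 60 s.
short_low_s, short_high_s = 59.0, 1.0
balancing_s = short_low_s * 0.0          # zero exceed coefficient
positive_excess_s = max(0.0, short_high_s - balancing_s)
effect_high_s = 0.0 + positive_excess_s  # no full-length trials yet
effect_whole_s = max(60.0, 0.0 + effect_high_s)
optimistic_ratio = effect_high_s / effect_whole_s
print(optimistic_ratio)  # 1/60, about 1.667%
# Any positive value already exceeds the 0% Goal Exceed Ratio,
# so this Load can no longer become a Lower Bound for this goal.
```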
-
-</section>
-<section anchor="point-3"><name>Point 3</name>
-
-<t>This is the "last short bad" point.
-Code for available trial results is: 59x1s0l+60x1s1l</t>
-
-<texttable>
- <ttcol align='left'>Goal name</ttcol>
- <ttcol align='left'>RFC2544</ttcol>
- <ttcol align='left'>TST009</ttcol>
- <ttcol align='left'>1s final</ttcol>
- <ttcol align='left'>20% exceed</ttcol>
- <c>Goal code</c>
- <c>60f60d0l0e</c>
- <c>60f120d0l50e</c>
- <c>1f120d.5l50e</c>
- <c>60f60d0.5l20e</c>
- <c>Full-length high-loss sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>60s</c>
- <c>0s</c>
- <c>Full-length low-loss sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>59s</c>
- <c>0s</c>
- <c>Short high-loss sum</c>
- <c>60s</c>
- <c>60s</c>
- <c>0s</c>
- <c>60s</c>
- <c>Short low-loss sum</c>
- <c>59s</c>
- <c>59s</c>
- <c>0s</c>
- <c>59s</c>
- <c>Balancing sum</c>
- <c>0s</c>
- <c>59s</c>
- <c>0s</c>
- <c>14.75s</c>
- <c>Excess sum</c>
- <c>60s</c>
- <c>1s</c>
- <c>0s</c>
- <c>45.25s</c>
- <c>Positive excess sum</c>
- <c>60s</c>
- <c>1s</c>
- <c>0s</c>
- <c>45.25s</c>
- <c>Effective high-loss sum</c>
- <c>60s</c>
- <c>1s</c>
- <c>60s</c>
- <c>45.25s</c>
- <c>Effective full sum</c>
- <c>60s</c>
- <c>1s</c>
- <c>119s</c>
- <c>45.25s</c>
- <c>Effective whole sum</c>
- <c>60s</c>
- <c>120s</c>
- <c>120s</c>
- <c>60s</c>
- <c>Missing sum</c>
- <c>0s</c>
- <c>119s</c>
- <c>1s</c>
- <c>14.75s</c>
- <c>Pessimistic high-loss sum</c>
- <c>60s</c>
- <c>120s</c>
- <c>61s</c>
- <c>60s</c>
- <c>Optimistic exceed ratio</c>
- <c>100%</c>
- <c>0.833%</c>
- <c>50%</c>
- <c>75.417%</c>
- <c>Pessimistic exceed ratio</c>
- <c>100%</c>
- <c>100%</c>
- <c>50.833%</c>
- <c>100%</c>
- <c>Classification Result</c>
- <c>Upper Bound</c>
- <c>Undecided</c>
- <c>Undecided</c>
- <c>Upper Bound</c>
-</texttable>
-
-<t>This is the last point at which the "1s final" goal has this Load still Undecided.
-Only one 1-second trial is missing within the 120-second Goal Duration Sum,
-but its result will decide the classification result.</t>
-
-<t>The "20% exceed" goal started to classify this Load as an Upper Bound
-somewhere between points 2 and 3.</t>
-
-</section>
-<section anchor="point-4"><name>Point 4</name>
-
-<t>This is the "last short good" point.
-Code for available trial results is: 60x1s0l+60x1s1l</t>
-
-<texttable>
- <ttcol align='left'>Goal name</ttcol>
- <ttcol align='left'>RFC2544</ttcol>
- <ttcol align='left'>TST009</ttcol>
- <ttcol align='left'>1s final</ttcol>
- <ttcol align='left'>20% exceed</ttcol>
- <c>Goal code</c>
- <c>60f60d0l0e</c>
- <c>60f120d0l50e</c>
- <c>1f120d.5l50e</c>
- <c>60f60d0.5l20e</c>
- <c>Full-length high-loss sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>60s</c>
- <c>0s</c>
- <c>Full-length low-loss sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>60s</c>
- <c>0s</c>
- <c>Short high-loss sum</c>
- <c>60s</c>
- <c>60s</c>
- <c>0s</c>
- <c>60s</c>
- <c>Short low-loss sum</c>
- <c>60s</c>
- <c>60s</c>
- <c>0s</c>
- <c>60s</c>
- <c>Balancing sum</c>
- <c>0s</c>
- <c>60s</c>
- <c>0s</c>
- <c>15s</c>
- <c>Excess sum</c>
- <c>60s</c>
- <c>0s</c>
- <c>0s</c>
- <c>45s</c>
- <c>Positive excess sum</c>
- <c>60s</c>
- <c>0s</c>
- <c>0s</c>
- <c>45s</c>
- <c>Effective high-loss sum</c>
- <c>60s</c>
- <c>0s</c>
- <c>60s</c>
- <c>45s</c>
- <c>Effective full sum</c>
- <c>60s</c>
- <c>0s</c>
- <c>120s</c>
- <c>45s</c>
- <c>Effective whole sum</c>
- <c>60s</c>
- <c>120s</c>
- <c>120s</c>
- <c>60s</c>
- <c>Missing sum</c>
- <c>0s</c>
- <c>120s</c>
- <c>0s</c>
- <c>15s</c>
- <c>Pessimistic high-loss sum</c>
- <c>60s</c>
- <c>120s</c>
- <c>60s</c>
- <c>60s</c>
- <c>Optimistic exceed ratio</c>
- <c>100%</c>
- <c>0%</c>
- <c>50%</c>
- <c>75%</c>
- <c>Pessimistic exceed ratio</c>
- <c>100%</c>
- <c>100%</c>
- <c>50%</c>
- <c>100%</c>
- <c>Classification Result</c>
- <c>Upper Bound</c>
- <c>Undecided</c>
- <c>Lower Bound</c>
- <c>Upper Bound</c>
-</texttable>
-
-<t>The one missing trial for the "1s final" goal turned out to be a Low-Loss Trial,
-so exactly half of the Trial Results are High-Loss, which matches the 50% Goal Exceed Ratio.
-This shows time savings are not guaranteed.</t>
-
-</section>
-<section anchor="point-5"><name>Point 5</name>
-
-<t>This is the "first long bad" point.
-Code for available trial results is: 60x1s0l+60x1s1l+1x60s.1l</t>
-
-<texttable>
- <ttcol align='left'>Goal name</ttcol>
- <ttcol align='left'>RFC2544</ttcol>
- <ttcol align='left'>TST009</ttcol>
- <ttcol align='left'>1s final</ttcol>
- <ttcol align='left'>20% exceed</ttcol>
- <c>Goal code</c>
- <c>60f60d0l0e</c>
- <c>60f120d0l50e</c>
- <c>1f120d.5l50e</c>
- <c>60f60d0.5l20e</c>
- <c>Full-length high-loss sum</c>
- <c>60s</c>
- <c>60s</c>
- <c>60s</c>
- <c>0s</c>
- <c>Full-length low-loss sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>120s</c>
- <c>60s</c>
- <c>Short high-loss sum</c>
- <c>60s</c>
- <c>60s</c>
- <c>0s</c>
- <c>60s</c>
- <c>Short low-loss sum</c>
- <c>60s</c>
- <c>60s</c>
- <c>0s</c>
- <c>60s</c>
- <c>Balancing sum</c>
- <c>0s</c>
- <c>60s</c>
- <c>0s</c>
- <c>15s</c>
- <c>Excess sum</c>
- <c>60s</c>
- <c>0s</c>
- <c>0s</c>
- <c>45s</c>
- <c>Positive excess sum</c>
- <c>60s</c>
- <c>0s</c>
- <c>0s</c>
- <c>45s</c>
- <c>Effective high-loss sum</c>
- <c>120s</c>
- <c>60s</c>
- <c>60s</c>
- <c>45s</c>
- <c>Effective full sum</c>
- <c>120s</c>
- <c>60s</c>
- <c>180s</c>
- <c>105s</c>
- <c>Effective whole sum</c>
- <c>120s</c>
- <c>120s</c>
- <c>180s</c>
- <c>105s</c>
- <c>Missing sum</c>
- <c>0s</c>
- <c>60s</c>
- <c>0s</c>
- <c>0s</c>
- <c>Pessimistic high-loss sum</c>
- <c>120s</c>
- <c>120s</c>
- <c>60s</c>
- <c>45s</c>
- <c>Optimistic exceed ratio</c>
- <c>100%</c>
- <c>50%</c>
- <c>33.333%</c>
- <c>42.857%</c>
- <c>Pessimistic exceed ratio</c>
- <c>100%</c>
- <c>100%</c>
- <c>33.333%</c>
- <c>42.857%</c>
- <c>Classification Result</c>
- <c>Upper Bound</c>
- <c>Undecided</c>
- <c>Lower Bound</c>
- <c>Lower Bound</c>
-</texttable>
-
-<t>As designed for the TST009 goal, one Full-Length High-Loss Trial can be tolerated.
-The 120 seconds worth of 1-second trials is not useful here, as balancing them out
-is only allowed when the Exceed Probability does not depend on Trial Duration.
-As the Goal Loss Ratio is zero, it is not possible for 60-second trials
-to compensate for losses seen in 1-second results.
-But the Load Classification logic does not have that knowledge hardcoded,
-so the optimistic exceed ratio is still only 50%.</t>
-
-<t>But the 0.1% Trial Loss Ratio is lower than the "20% exceed" Goal Loss Ratio,
-so this unexpected Full-Length Low-Loss Trial changed the classification result
-of this Load to a Lower Bound for that goal.</t>
-
-</section>
-<section anchor="point-6"><name>Point 6</name>
-
-<t>This is the "first long good" point.
-Code for available trial results is: 60x1s0l+60x1s1l+1x60s.1l+1x60s0l</t>
-
-<texttable>
- <ttcol align='left'>Goal name</ttcol>
- <ttcol align='left'>RFC2544</ttcol>
- <ttcol align='left'>TST009</ttcol>
- <ttcol align='left'>1s final</ttcol>
- <ttcol align='left'>20% exceed</ttcol>
- <c>Goal code</c>
- <c>60f60d0l0e</c>
- <c>60f120d0l50e</c>
- <c>1f120d.5l50e</c>
- <c>60f60d0.5l20e</c>
- <c>Full-length high-loss sum</c>
- <c>60s</c>
- <c>60s</c>
- <c>60s</c>
- <c>0s</c>
- <c>Full-length low-loss sum</c>
- <c>60s</c>
- <c>60s</c>
- <c>180s</c>
- <c>120s</c>
- <c>Short high-loss sum</c>
- <c>60s</c>
- <c>60s</c>
- <c>0s</c>
- <c>60s</c>
- <c>Short low-loss sum</c>
- <c>60s</c>
- <c>60s</c>
- <c>0s</c>
- <c>60s</c>
- <c>Balancing sum</c>
- <c>0s</c>
- <c>60s</c>
- <c>0s</c>
- <c>15s</c>
- <c>Excess sum</c>
- <c>60s</c>
- <c>0s</c>
- <c>0s</c>
- <c>45s</c>
- <c>Positive excess sum</c>
- <c>60s</c>
- <c>0s</c>
- <c>0s</c>
- <c>45s</c>
- <c>Effective high-loss sum</c>
- <c>120s</c>
- <c>60s</c>
- <c>60s</c>
- <c>45s</c>
- <c>Effective full sum</c>
- <c>180s</c>
- <c>120s</c>
- <c>240s</c>
- <c>165s</c>
- <c>Effective whole sum</c>
- <c>180s</c>
- <c>120s</c>
- <c>240s</c>
- <c>165s</c>
- <c>Missing sum</c>
- <c>0s</c>
- <c>0s</c>
- <c>0s</c>
- <c>0s</c>
- <c>Pessimistic high-loss sum</c>
- <c>120s</c>
- <c>60s</c>
- <c>60s</c>
- <c>45s</c>
- <c>Optimistic exceed ratio</c>
- <c>66.667%</c>
- <c>50%</c>
- <c>25%</c>
- <c>27.273%</c>
- <c>Pessimistic exceed ratio</c>
- <c>66.667%</c>
- <c>50%</c>
- <c>25%</c>
- <c>27.273%</c>
- <c>Classification Result</c>
- <c>Upper Bound</c>
- <c>Lower Bound</c>
- <c>Lower Bound</c>
- <c>Lower Bound</c>
-</texttable>
-
-<t>This is the Low-Loss Trial the "TST009" goal was waiting for.
-This Load is now classified for all goals; the search may end.
-Or, more realistically, it can focus on larger Loads only,
-as the three goals treating this Load as a Lower Bound still want an Upper Bound
-(unless this Load is the Max Load).</t>
-
-</section>
-</section>
-<section anchor="conditional-throughput-computations"><name>Conditional Throughput Computations</name>
-
-<t>At the end of this hypothetical search, the "RFC2544" goal labels the
-Load as an Upper Bound, making it ineligible for Conditional Throughput
-calculations. By contrast, the other three goals treat the same Load as
-a Lower Bound; if it is also accepted as their Relevant Lower Bound, we
-can compute Conditional Throughput values for each of them.</t>
-
-<t>The Load under discussion is one million frames per second.</t>
-
-<section anchor="goal-2"><name>Goal 2</name>
-
-<t>The Conditional Throughput is computed from the sorted list
-of Full-Length Trial Results. As the TST009 Goal Final Trial Duration is 60 seconds,
-only two of the 122 Trials are considered Full-Length Trials.
-One has a Trial Loss Ratio of 0%, the other of 0.1%.</t>
-
-<t><list style="symbols">
- <t>Full-length high-loss sum is 60 seconds.</t>
- <t>Full-length low-loss sum is 60 seconds.</t>
- <t>Full-length sum is 120 seconds.</t>
- <t>Subceed ratio is 50%.</t>
- <t>Remaining sum initially is 0.5x120s = 60 seconds.</t>
- <t>Current loss ratio initially is 100%.</t>
- <t>For first result (duration 60s, loss 0%):
- <list style="symbols">
- <t>Remaining sum is larger than zero, not exiting the loop.</t>
- <t>Set current loss ratio to this trial's Trial Loss Ratio which is 0%.</t>
- <t>Decrease the remaining sum by this trial's Trial Effective Duration.</t>
- <t>New remaining sum is 60s - 60s = 0s.</t>
- </list></t>
- <t>For second result (duration 60s, loss 0.1%):
- <list style="symbols">
- <t>Remaining sum is not larger than zero, exiting the loop.</t>
- </list></t>
- <t>Current loss ratio was most recently set to 0%.</t>
- <t>Current forwarding ratio is one minus the current loss ratio, so 100%.</t>
- <t>Conditional Throughput is the current forwarding ratio multiplied by the Load value.</t>
- <t>Conditional Throughput is one million frames per second.</t>
-</list></t>
-
-</section>
-<section anchor="goal-3"><name>Goal 3</name>
-
-<t>The "1s final" has Goal Final Trial Duration of 1 second,
-so all 122 Trial Results are considered Full-Length Trials.
-They are ordered like this:</t>
-
-<figure><artwork><![CDATA[
-60 1-second 0% loss trials,
-1 60-second 0% loss trial,
-1 60-second 0.1% loss trial,
-60 1-second 1% loss trials.
-]]></artwork></figure>
-
-<t>The result does not depend on the order of 0% loss trials.</t>
-
-<t><list style="symbols">
- <t>Full-length high-loss sum is 60 seconds.</t>
- <t>Full-length low-loss sum is 180 seconds.</t>
- <t>Full-length sum is 240 seconds.</t>
- <t>Subceed ratio is 50%.</t>
- <t>Remaining sum initially is 0.5x240s = 120 seconds.</t>
- <t>Current loss ratio initially is 100%.</t>
- <t>For first 61 results (duration varies, loss 0%):
- <list style="symbols">
- <t>Remaining sum is larger than zero, not exiting the loop.</t>
- <t>Set current loss ratio to this trial's Trial Loss Ratio which is 0%.</t>
- <t>Decrease the remaining sum by this trial's Trial Effective Duration.</t>
- <t>New remaining sum varies.</t>
- </list></t>
- <t>After 61 trials, duration of 60x1s + 1x60s has been subtracted from 120s, leaving 0s.</t>
- <t>For the 62nd result (duration 60s, loss 0.1%):
- <list style="symbols">
- <t>Remaining sum is not larger than zero, exiting the loop.</t>
- </list></t>
- <t>Current loss ratio was most recently set to 0%.</t>
- <t>Current forwarding ratio is one minus the current loss ratio, so 100%.</t>
- <t>Conditional Throughput is the current forwarding ratio multiplied by the Load value.</t>
- <t>Conditional Throughput is one million frames per second.</t>
-</list></t>
-
-</section>
-<section anchor="goal-4"><name>Goal 4</name>
-
-<t>The Conditional Throughput is computed from the sorted list
-of Full-Length Trial Results. As the "20% exceed" Goal Final Trial Duration
-is 60 seconds, only two of the 122 Trials are considered Full-Length Trials.
-One has a Trial Loss Ratio of 0%, the other of 0.1%.</t>
-
-<t><list style="symbols">
- <t>Full-length high-loss sum is 60 seconds.</t>
- <t>Full-length low-loss sum is 60 seconds.</t>
- <t>Full-length sum is 120 seconds.</t>
- <t>Subceed ratio is 80%.</t>
- <t>Remaining sum initially is 0.8x120s = 96 seconds.</t>
- <t>Current loss ratio initially is 100%.</t>
- <t>For first result (duration 60s, loss 0%):
- <list style="symbols">
- <t>Remaining sum is larger than zero, not exiting the loop.</t>
- <t>Set current loss ratio to this trial's Trial Loss Ratio which is 0%.</t>
- <t>Decrease the remaining sum by this trial's Trial Effective Duration.</t>
- <t>New remaining sum is 96s - 60s = 36s.</t>
- </list></t>
- <t>For second result (duration 60s, loss 0.1%):
- <list style="symbols">
- <t>Remaining sum is larger than zero, not exiting the loop.</t>
- <t>Set current loss ratio to this trial's Trial Loss Ratio which is 0.1%.</t>
- <t>Decrease the remaining sum by this trial's Trial Effective Duration.</t>
- <t>New remaining sum is 36s - 60s = -24s.</t>
- </list></t>
- <t>No more trials (and remaining sum is not larger than zero), exiting loop.</t>
- <t>Current loss ratio was most recently set to 0.1%.</t>
- <t>Current forwarding ratio is one minus the current loss ratio, so 99.9%.</t>
- <t>Conditional Throughput is the current forwarding ratio multiplied by the Load value.</t>
- <t>Conditional Throughput is 999 thousand frames per second.</t>
-</list></t>
-
-<t>Due to the stricter Goal Exceed Ratio, this Conditional Throughput
-is smaller than the Conditional Throughput of the other two goals.</t>
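<t>All three walkthroughs above can be cross-checked by feeding the example Trial Results
into the algorithm from the Conditional Throughput Code appendix (a sketch; the function
and variable names are illustrative):</t>

```python
def cond_tput(intended_load, goal_duration_s, goal_exceed_ratio, trials):
    """Conditional Throughput from (loss_ratio, effect_duration) pairs,
    which must be sorted by increasing loss ratio."""
    whole_s = max(goal_duration_s, sum(d for _, d in trials))
    remaining = whole_s * (1.0 - goal_exceed_ratio)
    quantile = None
    for loss, duration in trials:
        if quantile is None or remaining > 0.0:
            quantile = loss
            remaining -= duration
        else:
            break
    else:
        if remaining > 0.0:
            quantile = 1.0
    return intended_load * (1.0 - quantile)

# Full-length trials for the 60-second goals: one 0% and one 0.1% trial.
full_60s = [(0.0, 60.0), (0.001, 60.0)]
# For the "1s final" goal every trial is full-length (122 in total).
all_122 = sorted([(0.0, 1.0)] * 60 + full_60s + [(0.01, 1.0)] * 60)

print(cond_tput(1e6, 120.0, 0.5, full_60s))  # TST009: 1000000.0
print(cond_tput(1e6, 120.0, 0.5, all_122))   # "1s final": 1000000.0
print(cond_tput(1e6, 60.0, 0.2, full_60s))   # "20% exceed": ~999000.0
```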
-
-</section>
-</section>
-</section>
-
-
- </back>
-
-<!-- ##markdown-source:
-sJ2XSVakXmMSeW3zgXdSNvsc/yn5CRenyIVrglj6t5Ke4TToZcxJ3RET4hw7
-9lD9mKTf8IqfJZHCbMqze7y50ahNaWc92HtOWewauDuF7jPwG+g298iZPJHE
-u2CNijQMDSU67zMnDlsezk4vlIn6FQmdjqbTiZ/8fpBZBL/eTm1hzitdr8uD
-DJPERsFwYxwlEQh9WezFFmque3bJs645WL+kIYeEG5j9UJbZv19u7GYEM8o+
-xRaP7JEQr/P1pmdUdS76Af4gOcG35SCAyE++ozWGeUiCljpKWNEWRRSZhGwn
-dQPTUjWEDZNLAwd85iJU6r+SOFU7nKIam6D0nuO/8pW2lCQsSemYdFInySXn
-X17m4mEPnuJ6k5EUjNTM1o0pxOI4ZR/thaW0DliDJjgVneb+su/zSplSMCdP
-KqecHjHJvry91cIHtCYL2kw7F6jZ3SNxPDG+zA23i8pjiEtYcAwlF2FM3MRG
-v6Tnj/dwvUcabRrJNDW850UhExdZ9gVby8HXlXDvbMLCDGqXaNg+RZ4TIxqX
-ecNROs2yyzT0AV2FJ3dkB4Yw/+ReJsfm6CskaHqaUyFklonyec0Sl9I2V8UX
-LpneFXFAQh8xB7PL08WODiKhakhWTBCRNXJoQe4bwosnkvrrJEnu6XrdtFb4
-cEpIml04lZRI0my3g5umn5e+USlxBBLq8uzzE9Idg9+Wu4MyHcsTmDwGWlKS
-e1lz1URvcW76yOTV17ogOQsR8d1gfJXB6lGoUCPHHBvL3JScUG4hT6+R8xbX
-MoVasl6bg/jDRa0RudRKdErVH0txhwsAWSCZxIXEp8muM82CtnItDl3fSCQs
-JHFxXEASJ/jvD5s20+z/iuOEbNblh6+/uLBUSyVfOJJEV+NAoMRiEDCnAUvW
-ZdIwuDCQqneB9GF00y5IDKLTX8iCIOraiC7gkm5wut839ZKjcCjdilGZz1aP
-/tHiMlLqYZll7ig5qPREMrM06V6ChL1GsGrWAZQkLVIkxTgZCplasuJiQaoE
-V+Te13BydCSu+AtvKr4d5XuVgBYpRR0UhAEb+VI21nGwRxiEGMo+dk6Xikbk
-yZGSfFdoRrYKcSnHWAobCFmMuCuwgo49VBZxALKooF/e1WogF2vU59k28FSy
-tjBzIK5OXE6SbEYrDbYiXLETaogwShQ23S2ypAJJa4TVVkFemi6lai2ZH1pa
-lr2S2l/cfdQF0NcQDvw+6sJ+xVrDEgWtciKu3RI9Klre2BwoJ5FUkgq5/I7t
-2uAk4Arc/GXIRCFV+R3/WG46Uwosat7RNZekllbnkqQsQumKddS5pbeERKRb
-pzlALxmyW2dW0eviiVRzm4t+QT5xDzDxb9T9Itq2+e+ZkJkspACvaKH1B6Xr
-+tjSlV7IyfA81bZz4pxMU0uJcZsuB8xmZLU7xsptFGyhngru5vdSq6RXzhii
-hKZK54OrUImswoKr+uJO/JOUFqhcX6hbAHxuIxlfvrATJMxxHa47rDlfMI+e
-ougMQl6ce03qunD6cFVH8ycITu9dSXdpNAlWe2utwhZDQhISe7sQw88jj9Ul
-G3Nde67jMr3iebZBOin2xA9X/N8/n/8Gh7TErxdC3XjcfubLp79eYIWsYQvv
-NjAZFg08Aw6uaMETir7ztx7b4+f83BjwFxc0PQmrB9MBLiL6wI/lDUkdSEPd
-N9rcaqN5zAivaDBkH29+hBrYwnPoWckNUkBFrBTZ0LKWpXDGznLLcdMNe2cn
-Zbs6SpNHcwuqtJj10E1aWHGeZARJwRhyiqYQEcTwkjBGsLSDZWDlsw2nNG1y
-sUWIoTPvlxol2oRERdeqHyEZ4eu129bhTiBt4aP1eaI7sRY1rpQe8KAsqHyq
-64XSL7VhvOHjviGhGhcfNBUN6UWbEikwlgKFc+YdSw9dgvK0UShRdy4vDS5I
-0UN6D6fCOYtMWI8efvlePH6wMMLcIZ+cVHIWrKyUHT4gIGLhSDeRqgGlbinc
-jil54YglKCOnG+xxfEeyDeArEF0/rH7AGlbZN6JW6QLwYnP9Hnmk7IzoG58b
-Go5cLg0/i024DfUW/p5NE4g5LwunSj+ddLmqWzi6ZUgx1GibWC9c0RCK+q3g
-gb3sciju2lgsiKuehX5w5RaJTKQva12qKQREB/fADXAuiQUy+wL53TTNhq8/
-vcyoNIq8wud0zv8dGYTkEDKI1s8XEmGDCGX2dFsdQs6ejcc0m0pCbAFiZ6v8
-R2aMnPPwdgJc6Gdfvo/7bJnBwOFpJe69iPquKbuYnNZ9h6ra3BmsUNhr/gvr
-HuzTi0hCugF+wZAWczhC+FCmvn2k7Ra79XEXY8U8WuKkRsJGV+pRHRi6pzf1
-5DcM0uEsWpFhP2qFn9bW6GiTDj1WMNSlrwY00nr5U8kyRUFbKG0iIaetunfI
-c6cJHFktQb5VnM2ai78SZAHn6f3Wjole2B03Gny1hI2Q29lAZ32aZhN2oi5x
-jS/+yRUXwbGX+lgCDAOURVQt2IND/WKVfkiZYftrPyRMavwRSQ/fsZcx8raa
-TYK4kd1EOXEnPCvKYBJIHAG8thpXCQWE0qb5gmTJeeDsCA7LgJFUcBl96CUC
-i9Jh+ZVP2u7L48+TrL2FpnYgYTbdDDE5LJl8kTH9Hvd0le5DMLPnEi5V4ZKp
-Ov+D7Kc4T2OVfvyK1WchUXc83S9Xn01OmJfusBNi+qcmU8vcteSMyzcnpk9q
-nNS9D1/P5IJYZYZY+ZLlnyzTJJN7FckfJBuOXT9xT6NqVsNygpaxZrmv8oqz
-uar6WGZOi1EZ4flHh/rd9JausueJljzh0Jqy8gaVrha45Nik+F0KrpKixRtM
-Eik6G6hv05PIXkoo1rhMcHsN3G0C1vXvx0LyVEXqISFHQjOc9dD5JPRE+500
-UrrsnNcmiRAh8yB1Ufly8YuFuBTUhBSnfuk2gD65Ln1iDW+BFF9LqM0jNYx2
-gm+65Dhjp5INSKu2FKejaVGMpR/BPpfZPAuI2kjNZmPtdJIfTXv+DgzsG0DL
-qBoRY4hIkJKQlnpiNaEsDsQxKOfx/66ob45kXQoDfEe3/65piY7PXv305ups
-If+bf/8a//7x2//208sfv33O/37zx6fffRf+YU+8+ePrn76j3zP9V3zz2etX
-r779/rm8TH/NB3969fRfz0RGn73+4erl6++ffnc2zgJDxmoj3IkWSwesBaam
-2iPE8g3J6kefa5nEo0dfS33tW0UW/RlOEvkWgt/yn2B9nPxStBAVROTr4lD1
-kK+FwOjUudp9V0McJlbcmRMoYAId4MtocLLsf/FspXI/cWAjl12UVQQ6nQU+
-GRqG5+vYBSNB8VHznZ4jvyzL17oztgGkMDJAuwTq10A0FiP1uBoQZWDcYXhL
-6441WZ8xaDhIbTU26/ITStlo6iCnYiMcv8EuY+3X8ImXaTjKKnr56aoWuXKU
-zLyJlWtqb70JNXf7po/cYlcOKwtNlGkKEQI+d7fEsm8b89RYAYyQoTsH3KQ5
-WAz1Ld9y/V/r6iVRZ7W+rZG94zwgmcsVdQh/RJdgfIWvdUxPJZP9V3/c3IQs
-A1qcKlY35ZxEmq2OvF6heSbkldqBf2XRxw7de82EoQ1iD6oYeT5POJP6FnP8
-cZCr1GfAmuSIpBiSJLidL+wDKF/lJiMFvBSNwOqXdfSV8j62FyRx0UrSy38X
-fx98L/MoCl3JJ99PQAL4gxVfEo+bsQ517xxpTD8KFBG2sCJ9464WhWKteSGC
-Omb/LUDbxNcw8BkXUAsSoAgJ3lMJN8Dj1SXQhKBT0vm1lhMqOmeQHYBqRjsa
-vwtNQ21Z0cseRPXhAZMQyaficJvpc2JYINkloUZJGmLPlcTU/I1caBiXjokM
-lAfPA2iGG79L0r1i7Dw4NhUdLhvgB6xSwSQaNCtRIv2Z0EmxSyqIb8vMSCio
-HSiTYK+PWjCB1CPChxhoKKSfRzsFI5sF08tnwPRWg6SeFBsgYOBAU5qABOBd
-3Wl5snCLSKjeY4p4r8sX8mckU/BPmxfkTCoZ2Fty5j6qZa9nrtL+TMttEAtT
-4yhyXCAPhHKKlIXorneoW7u6Hdw1NtjhCtR6Vj21Sktl5TqFFGSm9gc5BxIV
-mzaHw4/zG45tSLgGoW0D1iBKwI1UGJpNkG7bBUP0Cr4I/VtBh9uLUDk7BtnA
-ZKqaNFWZHdEj/3tlKwuroMkdZUq4OmH6Jsjjh0kpYFeEhEhon+SSQLbCs5ti
-gUkNysQHuW56jUkCUusZ4OCCmFXNO+TxdE6QSCaamb8Fl7ABfEYcc3zmXr8U
-/8pPh0P4i8RIE/fn37Ebr7Gj6XYUsnrxlUwCnOH6shob/VUsq6163WM4pdev
-gl7G9ZcWt/aQBWDuvwJw52p8CQ688dBf3lt2fcjykL2PAnyou8ahAR0Xhogj
-REsvjrJnEX0dZSWN5ZLklAOiNk4Az8RvDyvGlQCI+cGEzqF+URDiXeftqG5Q
-oRynENmIFEmYXeVhO+J5RLAZy2jX4ngSKFa4MI3n5KLlMndkWEf0BlVLfwEm
-gzik70VYxCmO+EBckjxwE0nwqVT6Re/Oa1yzhOo6okfHqR2ItqtjG0BkiMNc
-gy8qNLIZ4B9JCBuADCTwP5nmS6QpOW6Iq7bg8nuGfdiSgXKh9V2W75EzIEZz
-w6yVD8lBavP95nuu27ArFUc0nnJTM04xG8VQCCZNGt3M1+K2DmLZw0UWCM6o
-shfpujm6gpRp6WSbqvJodFVHlhgKw1QxkWJ6wcYkJnWsuVVERxQ9CzgSRNwR
-RS1dCXBA8YcZnl+K9wLFaV/26tjflVsUQktI34BLbJVjk0wrxUwNojNW1JVk
-mVGGhLoohyqIKasbS80DTQRxvBqhF16CcjBIMvPSBATUEPOJy2XRTbbx/Yis
-prH8wkxDXQLv6SLIaSsibLaLtJJzDBY1KU1prqEaKcUSMqoeEL9jhp+kAol2
-+dRrKgayQhxyx4ADN7eD4VDv6cLLUuPgNCdwBAQDBXchu5aKmmhLwYCqpS6e
-S+GKGxRSYe5V74qrDnzLJSiYic0hDqM2UhHnUrRj8+uqyd+V5UHjW8DizHg2
-S/XpCWaulVvE5i0BQAaZR5rst5bilOawZAj3rHtX9hHtAVKAha5jNm5DVSNN
-tEEA629KKZCC8ipZXKY7KrUUQKrJooq48o4azhoWyR/xsILZAdCQg+Y3Am0q
-IxJGkt37cqkVwAjnsKVcPoFmF97P/8d//3+4OKD7H//9/zW4+AVUc8BiZ6po
-nhftjUiBC805WJcVMC1V/czP25IWXOvVu1BzkgYU30MF3PQS8Cc1h8pIvfsn
-y4oaovOi4QBCxbZu467ZvhQaWuV/jvnOH6Q6uS06VDnF9D7N5T22AqMEUkYV
-tFhediOH37fgPkdxwk5BkRjnVrs7eQo+KRvofFuXlBJ3BPhs+KIcpvLb0ATD
-HXvqhBJGBJU0Hm2MbbNJOunCQhkwkJ9aQWfYpPKwlH2xqhA1T9w32IVS0C0R
-35aEDgMG1IBXz7yV8O/nMShUCzSvjqP20uyn23IZtDbauffsN98IOEWlNe4O
-8Wqgd4px6v7gxTmqo7s0Pn9Od3loP11ojw2/k85ooJdGNkZM+OTC2KoXg6T2
-8IfnXmW1fAH814XVBIiryqryHOzzAH+y1RJPNuf5zvMjbkpNrc4zkJEDXJOz
-t/1nZkF8bPDiIlQre+F8C2nwvrFs1EBAlouyYxQ6jly4tyRQ1VkRqybcuUQp
-iYyA20TjF7KZ7SCtV4jZbm7oybmoF1DAwDV9R6wcQBB44lMC7wwlRBCnZFgD
-I/Ib02roz/1NJs4uK2zknFFj85g3G5nlK+7Zj+WNB5wzelNGLUpjl6WUGeCE
-IdLUUbcJOJsv6dnpQc2RVYjHTfE1UZIcOWrCbKIPZTCFER5UxGHm2KPCME9M
-INQnTjgaVDKEylWhH6JP9g93pb7DrNegXgTsiU9rS6reyLBnhEvx0O7uL7RS
-AQBEYrF83DOByqfUUxCE11wATbIyY7JpeM4NnhW+XgCs5s/Vpr9d6Mhyft5v
-UXWCjbiy6fARdJKdy+bBxDuGrkpvLRjmRJoJsNs+zWiTTHJkYDdCDdAWIx35
-MdW2cgzXw4UqTpBLT/B3UHL1jZEXqftAgZE4Ba2zRE9BLyvkMNipLExj2niZ
-d9OHoFVoS2Dq4MKpzNHBGYu/gufJGnM5wZPWWnkJxCaU+sQzC8RqLYBkzAUO
-Dbua1QkpTGSEay4mDOFbtBzy6eMCNKAomnHuU77GIVyjP7LCp21bFoAkmCRh
-SLCWv6ojNAu60WaYRVAOHa/OjxJAryweyimmPyKQ0znQUc2nR3HxDrkXsii5
-qpLlJh6ds1Yh9s+Yqs8g5INT3kUUzxLZXExvlxWtJ5vFl01Ir+0rLkfkKAlg
-JxBGQuhHvhqVDbm6Zz8Gr5EIvbPMxTG8Ny5Ec3ABBUwnOUaamD8ygbzRDRS3
-yIQbTK9naCj3TBIGOPF1Xh5d3yuMTbwyN0eaNe2IFn8ivVX1Si7lYzRDpoBS
-UTrsvnK6gjoVneAkXSALvuVox0dZJDQLY09xjhm6sdewYKfAWxsyFaEJdKwu
-dZq3yoAJwM5GFaOkDeZvJxpU/myqhvKquRYjC3FYFAGUtG+4H2YWgpOJcQ7o
-x7dpR0lFTf9v4UrMbn2GwLtHAA3OwXihFlkX0KZigVxiOVrThyjGj3Q6ikqi
-yTKL+SiSnbpVaThXHsbRi4cOZFaoRMu27I3AcXw4X4Cw+zJE5HVGCdeU5E/H
-O6zRFmMgs1MTifCyOxG9OyQ9k74t81GnVdyTonNNWkK/FyUk9FCrKwFeLPIb
-INLT9o7n4QDHC7FfOiZ6iw0KNotxGWUfUw9KQoe4EuoYwmD89/D18Fc1xNN3
-zN0kZFD0k7ORoLyLkPC+SHCCbwtcBJaMavgvyElF8ZY5jzMXL5OGaJ65twE1
-RkrgfcxNuc8zQcXAh16Aj/5JvHoyv8FoDmVd0J4UIvmdGBIhSQupb9LYDCpp
-ZtskDFz6/skBmCUg3vPB96Lc65rMarhIw9XhgmWPT2lCj9gCheeBtDW0mGww
-eIxOz7xnbSL8NzPxEPIHJauEGaMaKOk3M4VxCnQlyTWdf3vqRTHEi81fFRzy
-wQNFLnnwwMNewWw1YWuEZt/KIpqIfmIgFVaiG2sMSLPSQ5FG0Kl8AbPWI7l5
-Qe4+eJC5WW1KVQ/KmZlZvrDHqDAb7CUSjwR8WItkQ3ZkkW/u62IfmS3u6c0R
-GCjX91nqiHcMWVN7wrzTss8UssVRyEBlVBA7PYm4tjNLfD3LzocTvAhIzxru
-OVO/aciWPUMvVWxjFv92PreWi0Vg0P9+bAB2gWo82fQsZLiEhP70wucKwlZL
-r15FllDYJpnExLrU8ykACiJbkHETp+6WY8VlxFi+NaR1JL5MRkppjlVpuoEL
-+DCOYjmKiEH7zLQYmeiRz12vD6kU+0OvFhpzI0sUG0aBloPOtVeD/ITTnWtj
-75A85tuQkUxnjnSctIfGx7713dPv8zeG8ph+RKBri82GAxbIDRBfvUtBkkif
-Bo9xnMyuUYsXemLmaSkrCSCDv5MAiNQlS5YsMfOJOf+SRtNomR1V5rBBbsKW
-GR8WEyHw+OX5pJZhLjQWyxpDG6oFpc7pmjvzpi1LuG2zQvYO07cczbkULpWM
-ABJ+HgXePIquQ7hIpkoE9w/1dXf4J63O/jji7ynQ4WwCDngS75fnEPbcz+Fp
-rSC2PnUmDJnOBfkS4nuagFFauVFfnmoIClyu4j4A54pjtT9ukNjmHnS4uV0M
-9wlObgKIKxX/+sFsEvnKz04RWBBPrVFpGGVxgK9QigdCDJtgAbpQ43UBdWxQ
-oiXYBMfrpYsYFOKBQLwJFV4gpucniOnRryKmgF89RK7OPoZc/QsoBoYhG2CC
-5imiAjBNNnQyXpEAk2Qi7nr9aqhZxG5yoqhuNTr0BmOI7RnDI/UHCdg4S0yL
-G65PwrQbDJwDvL6RHZ+z6kTDwwYhAV51IPXRsz0Va3ctuS4gCJD5hCoOJMh+
-ehvkbApp+hd0PvZXGpDPig0raHDBUQP3YO+saEFAV3XWk/J10/c7sjzX7xTu
-keEZ2G2hPtsQIzY/erOdNVDNnwzPI6pvMqvg0ntoeVTBCk1SoEKpUFooNHsX
-no7yoz02UAQWkXqtPP82YDBakCDjUBDJiLVeXFcq7ooXNbvBEOlRjRa/LBFc
-RQ3gB9IvJfCwvtvHLWuHfjnFBdfl2zWTmj7adYSLS8tWbKQyUNt/aPjyDFM4
-y2DRhz7CUlllqE5Q37T1liriNhgHgbg73Q7sIMzn+iKnI9EPn7HXBylEggNx
-Fr4tzt6zPHycYT3yx9mpL4fB/Kfz/Ju22tzEXzOFnChS9AlFLk5KXZRIQ1BZ
-aD3bIG9Kk9zD4z7TBWQjg9MEvksXidpQLUIK4BxFGw0Ivl5WxCF9clFQpL1U
-imupqDT4w5VkBAmRBZdsurFe4w2mCos7KUPxGX0v8PwLHS2ka7gTXF+gE0RY
-p9W1hQc2F/mf+bRg9TlqkQpbBAqqDWv8uh9ylpKbkBJL6UYqGMqp4H3VJJAw
-phG3VF5JKReaOEzLIN6tXpPUgCZrBjsX4fL94RXGbAnhInKx7JdNwjOvYg/v
-oBaIPKvEHpWL540eB1Wvgc/EcE11oX3SUDK6lK3HVMStsawGORxZI5qfm0cm
-LIkbPGmrhYW1QtXAuaFlOt755ZB1Jpvpsv1jUydMm6ZQsUEo4BGIGcaCEG4+
-dN2h0TqwG+4B8lPf9CFxJ9XZX+hRqzgLpD5bIDSR4/bq6b9qEmw5lQIr0z/I
-66hV5pS9I/KfLXMWNUdIkjMhczI31spexMnvytQEEbsfTyeZhPZeiJ+/K8kc
-rbtBG71UKTcJo50tXMZvXCroBoaKtk8FgxW4rCwFA2J3RyJS9TIUXOtP/7+R
-S1FeDAnDw09aVEliDQx3YBc/tPGQy3d9YYtmzzuvIXVfmKatAJMx51cuHtIh
-zom8LVM/U6M+hE5ZH5SO0WL1tUduMCjwtHBaS0AOnaTYupy4oC/VBZ4cZqgh
-nBdt12UWKEp6NLOnGwYcI/IH2e6Wq02qJP52LaLs1dNnkcFLcaquvYwuU8NI
-WGVPGYN7U33In60erz535gGOZgc9QnPI7C0iox9/ELkY6IJf+8IYb2IS8Tam
-rVgDNgoxnGMr+ycdw6CMNKrbh4SAq4g8cD/s0hxULp9rf6Xl8plphalbKPSY
-E2wzLpAR9qdNAl3bPtj6jfcUZ8FvieZqyJPjrFlNmYPKdSWg20Uq98eZVloP
-aYUsGvfCjR460H3bCBcD5kO9UhEbdGDXiW7GokueCnDxVtEYgOWUhwa5Fxd3
-Un6KX89YIWIRtBpPFFKdijIH/RWJTPqmxmw0OL/RTiO2fdkws4/ZNuAiInkr
-YlAestFbznWA1coqTd9kNd0Q5sO8aEHVCsSbvwx8pCsXIaKVXGcubsm0hh3n
-rqqpi20qdvXgz4KBHapqBWKExR1mnJwiimZOniDCpmbnEIuQJGVGQXhpZ4lH
-JMXgIweXjMiylaaEctRmGMZ3iNnW/EExMqKjISBFwIV5sRCgPhgz4alHKZzE
-heSbpVN3Q34x8F1caOsO1vfqzuxGuKzbAeyXFTjogQAkl/Y7gx9IlEy+Sedi
-V4XrfJGHHfXU+8KVamjelmodRi2hQ0b4XmXAo3G3pEii4GAH1LAQfUo8NoON
-5NYLW+37uwjlFHZs0b0mRl9wPJ11azrNM5Gc+pI2lgbGZoPKB2MDxwC0gIkG
-xlDcsI+1j5+JBGdFL9jQRE0nccg39LpMAnaA0PBEF/I4gnQQ8KawZ6axniuY
-uEJyYoYRJ1nXliD2WkKlYPKhgk/wKBXcs7PoxMAnkMrCC21WJMUWalv7CfJM
-FqOEGrBBjyIubiKUm5bcNx4tAvw6EwsEV4KfRapzsRZWk5y7hkG+iW3r+49o
-v0qfVoFvCFiysgUr3cX7poref6TzRQAlq8AQv59GR5EsZgesZQ90BarjXpiz
-MOtEC3wjeHXIkElpstN8GpfObZSmodrj9V+BAd34jt3ANlNiizoprBkl2BHB
-YSmq7ALup9PbKZX68bMXqvPzFRgecj485EDDt9ZLY3DnTh2lpd3OFxDpuYT4
-nNhnPtHdkOaLkcUoKU3oCG10fF2JE1NKWnWqiyHURezd9/kq/yZ5xdaQYu2M
-ZVSUAZyuE0ZY5LcFqf18o0KeW3g0taUnqWTqdONeOSC+ivj2WtoDRBshAPJO
-JE11k+Qqx+ZVqIGK4nUSyOZUL2HA7TjVVeL3HoybmW0UMHMClPk4rY232QBN
-TLXsPN6SULbHpZfeCKEhByBwnE+dlKZQ5Xnhm5cMjTmn9/DSk/R/nAcPPFA7
-Y5YGsjsy1+ZEj8YLDPMOK/ev74Pq5pokM9GoZ81T/XZYhDhgAsYBK9GvU3pa
-qJiQoHzs1qKOnfsSCkPyazBKYtGET57U1C7Y1gzUkz94wJN58GC4q/5seVvN
-/w/NMVwVYVfSH8K+nEXGlbbCVCEPj2XIbYjZUbK7gaVmkpolzmVLrhzvH9if
-mMcO+0XgbpA35Ot0SMu6mNAgFvR3+SFm4Rg+tltNft5pGxRmfqhOR8ld+H1f
-AF5aNbrD7X3HT8BPLd096TP0HVydpudMTtk5afga9T1oeUA3t43OdpFFq6SN
-zB+m202lMZBt9QG1Q8BJh38FnSskZZLRj6d0dpQaoxIk/i1+shsSR/qyHMSg
-Pw7iMHyEC4d5L84C7gBZVDuBrpI7nQB9ucbPiwj6pV1ck3ns2O0a/I5qwzq9
-Gp4DsaSdDQvON9CpI7Ip4NDF8X6/l/4p01Iq08AYnwPvVUBnNq2ky8/SF/kp
-Tp9BO2ckn9AGjR64SCwxlAKdNsWkRrWazgZcDKLg7PeOqXRPsgFjjLxSd2na
-YpN6g2InEW3kOgwZj5ojKu9RiX6fn7kZd2dZzKMCDlVSzZGszYFEuHjY1CM5
-lJtgTxgAG4oU0R9wO+sQmY4GO1wWv7HIuJrL1u0ywxPjSIl2BQCyY0iCRDGe
-W4AJTWtQgOuFwmgtW7A8U4MVHRcZZkkVFbJOwjr96lzbKIatc5NiAfLggdlc
-Dx4k+b6FFexKQ1cxugQ7aG2fnSiplkCrVmZacN5vorWV+tvYXbWIqbcmFc6F
-VSkORNTRt1LeLkR4L2Cg6vtMBbXIaFWpSdnoLN8dpTLaD2U0TWyN5iyTcuLC
-w/3tXDF+uMcJ7sD8XU4em7vPlm7EiiTTs0ceEXbHgM4ntZ9FFiKIDpFLwRIN
-TyXxwsn74hXXcPqp7BtDO0sJa7p159iLHUv/4uqlpDH609M7bFQjjvVsnAgq
-vvWol97d3qemUdxzlOa4uH1yRZ1CnlhH/GwcinYIJDOEZhqfcKZG83x5Umyl
-HXLwh254Lmmuur0k7H4qYTqzLLp0ZuqWrVLKjiV4Q+0aFndFqEZCVR5YvmZ+
-oWcgL02LBoTzjPbC2LeQYYj2dck5TUQ8ru4P0o1ukcwzVmMO1gQfvla2gjm0
-0vrX74j8FblWaNYYk/y9p/9TPvdLzsTFx6zPS/U3Pw0UdfyJaxN35ftEF/rW
-wU7PE+KTbJk/ZzMFPTAk/t4R283yfJm/gNrYoSEge+WqoUvqs9UXiTsQ646/
-c1pj/nVihmPcl69e/p/5Ph3cJdcDAftrUu24iQFmZPkCWrAlkOAhmwFj2pS+
-WuSPHuPBGElKZ0DPfkSJ8+MNnMJ+hyVpf2JvQc0AGNDMDzLG2fMSTAl2FA+o
-XbRaC1L45K3TxVVMPa5KLxxrnOZyepXaOnR6mQvaJRHjJBdveSFTL322+swh
-GrMyzu8BM79i4yr9wqPBRhId/JCbQULUIOFH98YXcfCvHj/6+udEB34RM/+A
-TncqW3D6DZGmaQHPlrRtCe82lWW6I4KKg2kTfIzL1SXO7NHqciFaEza/Q2zp
-pUKjhqoV9q5X76sAEDlUPjIFgJZd17zGqP1wPq7l68BMHL4/TFOPIzgG2Z8M
-uUjDkG7EiEkfO5sb/YzTKImnnXn80ZuyT5I3OFotGQPI4ThToas4udqUQmMP
-MTKtWYevke2yDWGLxDc8cjnhnJJmF0NVS0DXgbhDFv3WKvgCRDRf1BmCcS3B
-wsGOcmvGr6kHOKmDE70M7gwDqUJ6Z6wFu5jwqE1q7AsFJhHTBPodzNuFsnNA
-uMciFa0XymYmm+i00dkgvYFy62Ck5c3Oq8OmAIgnTWLp+IZXwBpDUfRNE7MM
-VlrlaPkurHFOIZjZCgJJyetQHFwqUCUgK8yAU4anr/Pw067lJN25t3qQ6Iw7
-Y33gLEQnJDsm8XfMug6k8zdvyae4DxbM36Vnqf4m5fR2lXkMtqs5E5OvpemB
-aETNOnF4LwsOZ307l2Dkwimwo7MPPZ0vSa6ff3H524uBs14BtiKSm/ZjVIIJ
-qbNW/jbDqjWgbugOYRlWYDQXJvIlMy4akYU2UaFbgQs2+AC2QZl+XFikXZ5C
-zQ+fDRlCx+7ENs7y10eXl7/96NsLCR4mBaKeLaTuLo3kStN1aSq5lpmK4mTL
-kA69QWx/mWhtcg9hx0tH96FHXrXzwb4EClNNJxvWtfppLxIvUrK3IUyVhQyb
-cKAJB8PGSEEQ27O4J67dHrDe4ZA+oSjMm9ozJ6JG9yADhl1oXrZn3snQ3yah
-HxXbv5BYcLbBwJcTcViaw0lOZZFndNCDHIVV/lNtKZdADaU/WtaWgJIMOTtb
-pAHnJIKUoBf6wEeNRBmVw4VWFjKKMhLCaX8nRU6JfBiD1lqEYIQ10+qDiqNk
-YJ4A6Sia31flbiOARhqyZ2INtQly1e5uGx7snrSPPQA8dgZkgMfa93Zswj/F
-GFLlOvVr7wZ+4gCyMuGqicyep5ww/EVq8mRzavgs5YhfgqzEVvuug0VlBuYF
-QMOSAcstL0ENTS4QeS8JzenCXCUNevb58JckPwFtz6g5EI9HNRlokq2h2X0Y
-xdMXuYVNGDVk2TdLPkD+TzN8ssTM/Gz1OP+BI2ZY6yuxSYZhXjFAeHask6pv
-e3bzoO8z5hkb/rr6QdF/Ja7xqk51HVNEVBrsdpKxCi38IOjnjSKlid9qIj/F
-8vMerT4/McT59sBeyTRy/Si+wL3ip3jqZ5e/lS/Prv5y9Y9+GH9434QAbaXl
-HkAPXt82ruOmBLUdwhvvRQyeZcUoWNhp1oghImngTzYmMnwBVwPufuBpAAWy
-F4774DSxkt/TlByTsc2VgkLJxSBsKfmsg5ex3/6IDmlO3LeA5GRa+LQcx/Hz
-Il6Qshq4vYtLWfJmFrOzojYSmYONdlqcVNpeRZXObVkIpOTLQfzDwWwAAY3D
-jrPz1xyRQbYh7DAL808F2h3Sxcf3yBRGn6CZbFmmuFvD1NYIByLFF5zGLFrG
-RpveB1gGH504Efa2zBpOu7FsmiRwdLQ2hrJ5xMlOb11nofz70L6o2LEinZ0+
-5/Ro9SxdVSewaYpdXPeam/mxduhA90PpHErH2Xcp9oT2gR10dl5wO+EQT2Bw
-8N3AZ86IQuL7rsR7aYq4QmqpJiGydpDWK0JaruG5Q0Xf7Vy6vwAgHkN6pVel
-xzs8rSxzQBj4HVaP48lFKuAGFCHhzIwLIulBruWVZHZJfUkq5VR/Dwm+c/Sf
-sBFBajzNOl6rYjYfAnIRXYPZ8Q0FfjSY+jQG82QkQRbzF5HryCdlyumoMJIt
-WoMoSDT4XnLTD7ujVbNbQFgW3DEGQxadF1ZE7rPa0jBz2KmpAPFsDIBZTIBf
-C4X5U2FaMZc84uNgKPFrZXSxeKUa+edsMmJf4/Gy5OpvCxI1VdHC1F9k5Qe0
-3gZJMbRxDIIPF2l8QiMQE5EAmcLEDJ5INDZIwl/vWqQJwyiHlib4s+qxOy/k
-SpQbgB7Lrlxk4rCBVnkyPDlO1vd52ANAAq3EEtbwWsO2MT0osxI+TR/iyzsU
-+67RZGcd1x3uFKys2VIu1LA57BgO6GjunwEcVamNKpCFp+9/hFecTOnIBikd
-aZgyxOk88fxPuLOKvfjL76ytzt1Z2p3Y4PpErcxUVUxE8zkPBfHIUbT/ugBK
-2rjFie3dFKQwcEKFrwIpdxDLDkD0BtgqCOUclIdNUlT1xME98TFpRe4cp5rE
-Bku+jKBKmi9FTHY1/yWYz0EFsgfRI+rmGCDwqpZL8zcF+NYuWEyVlr6zaCWS
-Ks0/Lkh6JBmPu0I2e9zMRBOwjZXZftyHPjoeDMX629mJis47zBGQMlHmgV0a
-EzMXx1sDmOfVWx8UNsK55x+3RW/092WzXbbb9ZJ/X2pPwE77/6r7lxH64hQ9
-HCJSvDk9cmEV8jtxaLHfbXcf8Osla7v3B2YtPm99F+sBe6gCTgxL560g0KfU
-E2DhDdv9+AFyQrDyzSjBfREgs0+rvHqlPbBEJedjHegqAnGmMX3OEr/lkAcf
-5mxV1QyYRtn2SFWXNFUMifGEDNCMi0y7oBQGcLCdQelel9laUXjNv+j6bp/I
-AAYEizS2s/rR0LF1kbnqNUn0MYg+wcUVvEcP/qtNDh1I7qC8ZFCeZX0xXBMV
-aTF/6+v1s9mzU31Rws136B0wnbms7YGZTYNm5kfkJTbcO3qhJbomxZGPc5A0
-Bf7vtxPgxyRKb6o1XS0kOKZ7s0QG7AXyvNWZ60kzTODNcT8PacH1UnRQu40j
-yyRhWn0AH7GrfhWZ+tQXbR8aqsTF/RAihlFNUpoOWVAZ8DFCcs5As5k/FqdO
-dePqehFMo32ELyuefH7y5BfZABPEqF2TvgbpXKP+1n5C1vo8qGJvo0b0LG1T
-cPXm6vLyayKZ/a6VJ5ahkcGSn1j2XU9PiFQuYtJLZJ6DZtKDZWSjRtxVbImZ
-JoiON1BIxFjDxzcxO/fNGwCTr2XYHYPOCC6SdqUF6KyC7Axav0JRhOrMngx6
-lc4eubGlbfu6kdKpTSlIepu0/+8gOGd50wuvUTDj7KB1wt+JxY13QNRdpA/6
-PPpQxtY6mO54D6WIe9mhKFaqkci2nvl81CBPfD6Xz2eTH74udqwabjTJCHku
-mVJxcnizB+dZ0ceDgdOMaORunWc2XEZXs0nFzAnAXcxYbJa0dfPZkuoGkcsz
-/GJ620PxKHtyIFokXcRu9rG29lRw1XfaFtkxJQZRUnGXpDOWxP9dFB6T+Z7W
-9G+MasvzoQsde1rLPU66iMt1ZtJtakmwJJugFZ5zV0jSRVf6CljFsOMR0lIn
-TgM8zbrMCe0L14KneiDB9M0YouAfpD6Wsyf1r6Q108R9dVQ4gn7UtdvT1rdk
-tdNov4K6EjEnmMPAAN27Iq1fJPB+PQ16SXg8iHsB8rBxXr7E6sumyVWOdNjk
-HIbGrmwFOKFPmxzErFxkYxx3O1Oeq61GHeUzdwgZaXSSRjlx9a9ixjkGlE6S
-rHoOqMBaXoOhXEW4HE1XGKd6wBlsKflK7dofwamB8ZBLIY/W9iFkW0AiuKo2
-0b7lxV1ZbEJbITnEro8dDJFTpF5b5b1TvpOxzPa0KimIac0G3XGZ7xKjLemR
-ZWgeJhdRrjm2MXYyC5cB7SJ+2S1Ad5bKtYHmKNywykvxLV0Pi0g1RNpWeOgw
-rFmF0rCwHM/fqxtaWUkx0WYEDhlYIKbSCdioc3gnVqAQaOdbZgRah7fhVmoi
-2AADMl8YLlEDBq5ISxiS6QaXI5Zs3s6BuwOXWLwOSO1TG5x/9rOeaKyS6FnW
-1XxQ4qnIJjthSKMfW4FsOzQHcECF1Vco9wmPrZp1hav6ADB3EyAW7pj8PhrR
-AWwut7dgpBoiu9CEcgC5bID7kbJP236CngVYtPiKkEzYhHRzMsaC0+JIxP+E
-SNmhItZuqp8mAmj+mmdv/yApWqioiUKdbvdN/GEZpflFHu40zfO6sVCNqWXM
-5GiW/qK/VHzn/yDPQ8RAUHiG0NZA2pCmlzkoEJ98mx2vKKcd5KlTzvMsqS8I
-AdmQKJhE7D5m0J/iLCr5RAp1w8ZlwyurealS5SzKs29Ezk/fDBYR670SSTfy
-KUCBR6ZMaKuQBLvkm5yaCK+8/6y/lMlWAqnfNHs7L/lIwJ/RQO6U8gbB7ddy
-gvrEXR7qYVTHYi1Uwc/JbuPEA7MjnAYz3IlNWe7l8vcNEWTNyYisWWcjvY2V
-b0n7kJ7T4wbOvvPTiXy0pIvnr4z7DXpySI82CWKkHGMUHHxywo0wNuLkT5nX
-+8zX4VWMpH5iHFs7fZhhROO8M3J6ApCJSy7F683RpzSYJwj5IeQ36Epx7Rp8
-M3RHPRBEkjw8DOH7V2YCRavsj8TQ33M91xjTSZgdz2xYPcSyTbva7yuJCLoZ
-X1sUE1rHFJ/qxgqspTtNKA+nzqNDIi/i6daPRew7J37POEeCflu/292fGaMa
-DBM6BZzWVM2ntC5Jaq3Df4ikiu0s/KVRWRrafQy6Kc7dveFzc5cvC6dcaH91
-Ky0eJpSEBNvQiSY2UcHgXFslOboTjXtnKX221hg0f5yj6/9Uok+IL8i8WFEY
-ko+kxImD5kkwzhU6ZgWqMqUhTF1OnJvu4CzJT2RtfnJNqcAcsnZtKczsPq98
-95tTWUVNuxHv8eSRS+EAqELLZL5/fZUB1LrIr6sbF0waB+KgLp+mDfbOK0MM
-CC66XEws+/jE5FL9hrbjw0lsOPt9mH4WbwB9a3h0VkElkdMPTlUc6D4Oqd/p
-iSNjcf4KpbObUQXjXM99N82JW0vWsL8GEL4TNbThdCZc/GFKQa+M0IPDFDv2
-KaveNzaQU0dh0U8pnvzFpr3X7G8Xo1KXNkiO7U2Wm+nM1Ocr0CeJJ8ijb3Fv
-ITisNJfL7xVSK6SBEkwNAc7b+g6Eg83oBczjHqBg6WjTCT+SLZKcskMUEoMu
-rZ4OXekX1nKc9oPENKfynSNj+QNClRHu/IJPOlYuBYzAx5eSrubfMhgBNn3j
-e8hbSbAFP7tQzW3K7OWC/DVwfqDCcmVW1WfnmlIpIHOxJSEL5SXabVeGRDIs
-7JntD8BsVaw9HMtOVE2JDU71CyXRbKkXyyP/eYno7IXZvEMtM6sM9c0IYeQZ
-SQqs+eokFJEYApUE8ot+cOBBW06aui6yBDONuKLZ6n1SfCEWzRAKR5M3ke0j
-8cg0Ddm4FlDgAq+s6tO8Un//9bwyMav/o3llMrsEg3eWYf5SbpigmtkpLvIp
-PiEtuTWfSDvO72NGMmOTHKTLOVYrlyHapM4hO+jGO8WTbel/L09O/O/TTNlv
-M2cNCZti1HSerAQKJWhIW7pV1CR0pKRFt231XmwCPo/J1rgS1wtfuSm5OXvI
-rMhOpje8aEb5jjOQtGYShO9IbNkCRGCMEFCuA3G48BFAn89VOK1gR4fkQ1FB
-gwOj6wNtD0PeEV9iXCkhqdbWRwQJSxFrJF0ofCGilAbfnXPqdcW23N3PsSN/
-c/5z2JEyhsiO8qchh0hT7ERWIiSNnDqHyZDkapm+33FxEwBDEYIzzxOSxCDv
-o1jx6PiKnN0h+XMrOvWuqG+OnH7OAplrfyabCAh0OwdcFItm0LVKTjj1F2kj
-7IBMPUw1tKaEBiHPD05pFURH6lyR02fWpdn23Ihu4xHqXcM9mhl3nsOXB+mg
-ohdp2R0zU3X+PXgwtY4HD2JNL7uzbkI3htBGUoXMH6ub2yXIHOPEFjQfDycX
-ozidxJar2OGTpnfLH0A3GIz74MFChm6R/dsbIHXi+crOWX2gd+4tfZSsx4mg
-IPOGC1vJd81dshDpkYgFaEJumErAFovTJG6QzBJTCMHpXzCNNzjT+c2MPutP
-zyFJpypUo/PMz4e7CQ/ucEfD9F4ciXl9J80oBpP0gLryjfS7HAld7vAqfbdX
-eIjoHO4TPPih38ZTzkeSLvgsLXf99B4y9vOv2UJgRusOKjuY8B4rNyhCHuXA
-dIlGxMKlQZEE2RbcbtaxEdVLPpV5iK1trGOKc0jGD5JdBc3Z8hp9Z2TBXOUf
-NCgdiuZjJrhHMxCAZAyH3QDnkPBSN8xr9Ene6zIbASuhSyp6xybAqzhdpy7M
-x3pTBSq2DHYv83WUhsm+giDVTmQPyHIKCD9PZ7IR182mJBNsd9OgIDa45kZ3
-KWRc6Wbum71rSB/Y61xc2Ae/F2rBfIS8siF12RScyghcu1h8Z4R/Acq3fNoG
-KiBU3APi6IbCnPZo304vfJGphjkRUcwlYf3agN9wW8NzqVf/paoI6E7/oV/k
-Z4MJnmmGt+SXSyGfgOmaVFSQbsuIywQ8VHRdWtIo4pCEEuoy9V0QVcFzEXKG
-ot4/0skHacV8MxxFolswCsZlfNL7b7SLVhHy2mKWiGWjLNA8JeKl6o2X9L4B
-yfN1IibL80zsABq6QXzALi+Ccc1BVeF0GIPRKHqAkOIC3TL4bD0EThgn4MUc
-Vr8EA6YxxGYz+fy9Yc2vSzdBpQYcHGl7AM2CeA+sSr8DNw3fDNqnaiOYOqN0
-RukbEf02spOsTL91eUNgA3fR6RBrTE2d+LVMyr/7vySP6v7X5lH/mbzpj5zQ
-dzaazP9eHMnRH3dJ6v4j2VFK3DPc6PvGl2IoTGzKyCTo7QczYkam9Ju4HUm+
-tkuSxk0K5yCOYNUyd9GHkHxD4TyK4FnZRf/ZjBPjo4zVj/+/G1/12/938NU6
-f5u6dRNv7oCxMqjMmp1hv5CthvccGxXutrvPakOKSImsBlEkyaKzWdQ+cMDm
-DOfQCZY/H1xAtRHeRh8P8xlndQ2I0jxafVugKZr1rBtMlp6Ie8OB1A0C886g
-1DB7sHSV93CIVkw3BvQ3CnGe+BBpl9YoYkJMRkMkLTdym4/kJalzSGWBeoa+
-kWosKT5UowBZtF3fHtdAHoCNMRGLHLZ91KZmVtQ2QiYXynDuSomeTzYrt8GY
-QUYX+bZquz600nNd4oN80MXZV6U8sQ34ygY5jpw8g3v0cauYifKjHderJra5
-mZ5raxkNE3lBC/Gd+f2tWivteya4/UBdUr6JMmIv+BcjJ3YggOxWRUiKpe43
-wsXJf4OzHwd2TuUpTT0fUtgsVXko5IbFfOnFkVs2qwO59X/MOGMQ8HjXIRxu
-6f/V6TzwlXVSMzC5IIVxZAc2OvKhIjjJJhC/uos8DesVU67l803CAWOMye/D
-+4cv+4p0cSJqzsWnRMbmDus/xy3tomNxZp+gpydkl6gvnfqTpXfSiOqkj5Kk
-Vg1iMdPnvpBtY1E6CJKMNJ1PI9iPaeog2CGdpp5QKy+Y2rBJMnWlNOaz7w5l
-wa0aDH7mBNExnSnhzX1W0sDSp6FuFCELTQBbP/F6sO7/TtOgObcrNt6xZLGo
-vmiCelVrxFM22EoJEx3w6vYYPMVwpGu+t5QZBF1Ty98M/GyLjZUTnia6rb3J
-E+Jwkus+9rF7OKDe//x7+Mwy+llDCBnhJ5LlJp6WlaipIahIoZp0kmyLNYmn
-jcJNRGvYYeRFU/obJACGz7q0dbGmT+R8z3MN+xIU/DkxI7HBrjE2M7342Pwu
-LiRtSCjh2XttVSsIW34+6N0ngUtDo53Z59iilG0SLo5umrpsMwnnJMtwcaKZ
-zPO5BUXk4yze56spHKDFECymI90pO3HwMrGpROkISMdJldFEkdsrx4T057J9
-L9FY7tuxVxQJsvC2IPf5PDiuDkauKlt+sQYVRteVlhV32E/FzLiXTCGFhwp2
-NDMShry+a9quzGNZpk0nc36Ekz6Dq1jGMI6nG46ZwMPyUxJjTUqCQZ5Jt23U
-Dwfdf3YzTqfe/vrikVgQNsUDT/GO/wwuOIYOjOUtVuSYzaETO8yDGF/yji6V
-IGeaknFmlV8g8Mpb5k69k/wEkyUYoov1EzGKjUojFtHMnSCMz6p28KUu800J
-JIm8DEb8j+NMkTmeP/Ho3wVMNqF4MCFPKxptOZFFLWbSLDH5uxQLAv9jCgjc
-/R0B0VfdYUeseWohK/44/wrWOaOUJ3ABKbbWRIJKUL/YqzRTouZjieMSvuCw
-S21Fr9DywhjquaNvdlttUajwsNCPRqSxysJFkZYy23FCueRFjqvkUlVaN13U
-0I2lWpuFPq9baD7PtFlnKVvX5TSOxmgP2eab2rt2wDc7YpxhAGaW9B9L1CNe
-oNLv7Rsb5FkchOf6g5V20kv2oWX8EGpwQ/lnSAOYzPWau8HTiWEzd3jFzt8B
-2pUrIpq5RNNJYYIOXYeUttj0USsKiPcZHWdCY/eTYYPVoHhlO2NajEw1Y6/i
-hgpzkmaU3Ju6tAL7s2lyOi92Gl0Hh5ZW8aR0KbRWEQohumnWJjBZqa0UcqzY
-StLO44aiP8PYUPbSa3t4fXORNRYwseKgySlErIiqG4kSTvkw8khRGEJ/ZNpt
-BAACzJqmJKgIGiUEZmK4cRpJLnVo3QwJ+so1pAiWfZ6Uh8CZqgywaiMXqKQw
-jz25yGp1aoK1Qx/kdfpHlFN3kqL6kakFPQI6oVUwwpZkDP2ifScWLR1gaGoT
-djR6eFO7ZMAPWeaod3OyTCepw4wVKNtdcdMpyoDq4XSVj5ap2SUgkYqBoKYJ
-8vFQp/KbgdIzx0Q+hXVk6NXqacvc+pNahNjsk/t/0qqzFdlhFNpNU1Dob45V
-d5sACtyWyZzsc/R5upJpleZHwBljeozfCa1eiBvSoEG4oKrvi0M3768VF/En
-OWtn4h5gY8SkyrYWwwjovhAjCTqPDhgKvrSeOuQAoxKInaZa5jI1C+V4YnwG
-MGV0ks862GzrpA3cnFoxU7yhYn20w4NGE2F1lWpe2hGRVpHRilrB+hU0hck9
-D+URIzTocZ+/l6HPxvSGqAdO26VtXUONpDpRYQ1kY7hb8bGTvpeBFCPP0CMK
-40wD1Qu0nDyk2Qm116iSdu7lgHnPoWrkiWcGo3KWmxYUjysrP4JlnO7tR0CN
-DUI79r2I71qvg9Byb9bROnFFvajr1ZkwLk75xXWXOHoDTY9cG11KxoRlETKJ
-Bs/DyQXwVun8NYZ34Vxu+lPFRUEcitOgXZxl4X9Vg0zzLtpSoUmlO0ZEO+2e
-5Jnb38Vg9otR60fZ6Pi+eEENyFKu1ttknobdSWqun+HSMDsvIq1Oh9TgGav+
-ZpWMDhs105xKwYZl6gkBcpcz7hNdy3SXYEJbPykNsHfGzFz/31oggq2dBZiV
-1v9LIj13PEMFWMFTX4aXhl2HLwD8ut8fa3MoWKfDThHPQv8JM3jcXvOt3ZVb
-7pAeugwONGSJTs04LxRctcvOfLEt6d1dmOCZhZ2iN952zr1D4pM/3vm7yVd1
-bTp8ROAOYLwhnYB12Q06hBmO3GwTzaTXsK8Qm2AVo4oVvXIbEgi75iCkGXk0
-UmqK/V7QXKUuwVxBoc/pCf4W54xiw2NtXb5KmYTsG45Dot0SoS/yt1cRsZmx
-1fi/ligqv4iZxMYXpOeNvmJPS72hcKAujiicJgwpWMcXH9ck4JCMt0rb9XaD
-k+lCnWQlOAHsDhznmrQBWtWxRjr3Q5ePuvPaCQUgQOPyoTo7bQKrzXqzc1Uu
-h5XbUt3MrF4vZHoVIyo0OozaBnOXgHa/PB7yc3W46NvqJElb3UXccRo0LbAM
-5XkzfX2niTymqiF5K8mZQ6pVBGs1KJyTuOSypVNu+6rRtBgh+egEEYC4sWOM
-XTW3AexyS0Jk0+wHSknoaXSZ9l9lX7/aWYrxgxWcszLkUASwzgvXCyXbHANQ
-H2AFTT21boJDt514ChP/g3XWVYFhpe+Z67gCTq4rkmr3Tr3YyANleKmimwzu
-TXo8QrONSDG8vgNmoZkqcjBE2jXXe3QlWnTJRUajoTYQqJSGNEdxobCgUbXi
-UPS3JC7ppm4MM9nlPRELa0mlKqcw8dXRrLZzFtjnS2voGYBrQ9cNhWo0Hp8m
-HYjOXorHV2i+QNPImTrB0GEGrmGOGLwXSLvM+/XGsxkvpHbR4XGvuSSX0diE
-3uo79gJx9Sh9wxHPvtiUMuyeey/3TZveQs4xkhS+6LCnj9+rRxToamR6bqBq
-ooVIZGgO9QuVdVi7aNwL6VEKILOkCRcZs0zdbFmqSiyKSOAfRrZDb2XBiZZo
-Im2+AIesm2yihrPeleWBeE8W0mqT/Y+6H8JvYuShL6Mv+GNNm+G1h8GkP3I3
-yh+Ex2Jnvqv2FQunW/pheYg/LHf8w8XIxPhE2+KU9GVIS4DO8/2SlE83fgoe
-stAQW7mh+yfO8z4UQ6WPBv95Fo5EHHnogNBNeNa9fHY2MCRvtr5fw4cguQ1d
-fqR/KAjfhPtZUkx75IiJixFNjIedmdRVj82xwPLQHDPziL3UrEJzlvWJuvCJ
-ILKNyLlnDFeilImWBSiNrjyAKHaxmyu87QJYbEEKevm+DP5gb01nygljObB2
-vuMY+zw6/O/I4m7Qmg3F6/vqb8a3ZuA0M+dSKn14OcT61eBQf5pL1NaWuHPx
-d3iEJmqOIWijejqJvOisfAuOs3qownigjA3DRhKjtbLUPXr0IUiKmjqDOxz5
-OSQMYG5igYIRff+UcqyK3EndWJzSoilJym0GfsqEh0Di0P6CIj28x7mztVCD
-V2ieFouIIJ48FmS0MkVwAelWis736DaVuZLogPLxZaLffJKbMirKwWRM7Z3s
-fGBBcmRkaOUE0zHw6aoNqOcpqknQsW2j0ns6DQ1oK/zHZIFB20e/lLSSo4ZO
-0pH9cej82s9emtdOc0NupR1xkMBlh3oLwRlsj3VALg0Fi8xVU6AWDsPxl2yQ
-56IOsS7BsrPpQ6syH5TZZNADhVcYxid/fxhQijF4gBIairbUXgsML7seQntS
-+vsGW3t9lFhm0F9K9bakz2UiRxqDIsQHeE9YEZY2OwUZ8whgqCdA1YeoXGUx
-6UJ2oYgPr84mdFONm2+9BaPEzxkENXJHui5y2J+uBphbyi0WeWzRJinWB+Pp
-RX6miHNnA2gdBejYWqN57IzGupLnsMdAwE1gyAYHNofqZXT7xepR7H/7xaOv
-Ln+Oal4XtvGvx/11o0YE99/Gkos6NM5mN+ciH7QIF9AzjwXVrZtDmSkpfnX5
-ePVZ/vLbb79lzy7pju1mEKhT+oq0ldmZ8PcNsS/dll4YtMH/+YkLcwPfLxXw
-HYgN+mhCN/LewHTuQjuCQMe0Fv8JS/xC/Uvn/SXF+l3ZK9lnIzP10HAOquQt
-cLm+6CCMRI+PkJS9RSkJQ8u77+F2dZZ5BnfctXLDgMyGx5ey4I0wg6GkH8HK
-uRQYu9I9N0ERi2oq3cDwhzW0YDFshisN2XxpZDAEs2FZocXKoAvFGddvnOXL
-/Pa4R6QlL459s4cRuuSaneZd6kODCOPJ7goSmqRtZBpi0ik7cxF+DpnlKv8W
-MQ/uk8WDxlQl7pltkR7WzrrbRSJgZNh/Cj0u1T0Mi5ll/7ply8qK2TJxFZ4n
-2LBSn4dDbg1QP9Q2aBT0QsWCtQBvhJHhMkXEXzPbxdcdgSCHXcG0iVKXR3hI
-h9Zh/DiobHJnREwkrpyAJ8J//SE4eAZdTeI42dOa95jsyBZJy659nIoocOa1
-GXLcCI8s/6bRMLHLkMrGOlGnuTb3ku43SCjzWsNI/Q4+eoMsrOLeDFcxAmq8
-Qkcs52S2QDFvItPNNbdPXo8cUEhsVV7j2kMnLFr8gZYUm4RLobAVAhjHSQZZ
-kV4EW5OPReB+bl2vxXEwpxtAGo6LeMDSWcm8FntgICBHPW5T2w/DnbKl2lLj
-CcwKks0Q7weTCbTTmO7BDh+u9aqTv11k8QijVuY0Ad1885Gn1Fbk03GhRSY+
-TimD0tgfIle9eEdqqfWazaSiVYXMUbuR2XxPIGvWdqorkDVsi32B6FTpkEeQ
-0riAA3esXLmpLQBWGDScNIRIt28mZsZMpqhMbnzyDYJN9LENyAb1bZPx6YiN
-NxlC1xyFAaEMJxosls+HFsvyRFXg7/MvL7UheWcPJlWuUw+4Su3f55e/tT8n
-3SbwQzZ8fBI2Ok0Laj1mGKD7B86AfGYbVtnTTcynmASViRvtx5vd2JUuINkQ
-DjWCYUYJAXtN/Ej8/YhCNALv6YTzTsQbE+YVYXIFRaq/LZ3Uy6wX5KiOcjqn
-NrLUqUwzxXwJymvsTO2xq2yQkMDO/zHFnxB4FRS4US+UBFaMjSLBdAvl4RCh
-SGPmflY7oF0L8k8I+IuyGwqmHLSS4pKKpPrY9ZSeZVn2OiQ7+JQalN7GxdHM
-vyFqau/DHHgIHDDpXxF9CLEE7WrvfQiPyFwgg4FvpfZKUyoQqiokmQmhCUs8
-6DR3sSwcCpauc0Fq20EdF6A1mYoApjV8Tu19KC/xPbaQ+NEaX6ybip11XWdZ
-Fc45C1OMBMKexeJRS9WtFmFX5r4QAWGS15aYiJ41tstnfyM2UNXwFt2fiUXE
-Pj6MkXLXodTktZ3tiw/n7QWN8fjMBl4MEgYma2JDQUtubWq5ggMT7iH/E8b6
-5O/mkI8e/0oW+QXzSAWhRLjIfUtRtNgaC4dshqVrXxlJXItdMtdwTT1uO3Hr
-q+VSGJ0MCCuTUIT0athWvUfI8DeZjuob0cjitANsm4NsizMTvAOOmwiJweAT
-J4tRbCUsRzN77+O8VxMsmK/HHbdQGnoxp7si4ipUZOw7QDQHNMfRGQU5ZLZB
-+nZ/22yI8dzcy0HhhjLJPy85tZYZKTsGDCc/NVO0Y5QwyD/fckj8luN2kdLz
-b47VbhPbd2Q1HJBs23VpzYnrhDujhiT6VfL2BRI60Nun6Ly0imgqstnrLHI6
-0dSlGVR+DZ63HHq4sRN3xe5dZ1wfi3hXov8z74829nkyGEEbday7WJywbI87
-1ztpkTHlLCsUTFfmSzJIFSTALXLfsCq4MxYiCaRnVcaWS1/eVIzzZoLLipnA
-EKL0U0y2VX7V3JS9wjSSCJGO2+GQefi7YC7vHYWQGYKAC/RTATtRv0wX23Hi
-kmEonRHZCm3zHq2sfc/HHCawCps3Zn6GorzrRAoNHP28VzEOz7bbO40VWBS0
-p7uJaV4r0MwVc4yi7RdewMplRQJ4iP6idVgCkiv96BRDOtyl2BVcKukG+Ou+
-8FpdWqW7+/ZSAqeaVtXLN+vyQ1L8zZoL13xbKR96W5lcN0SlsHpgncgehJjU
-6CHXfFhOLGJSoS15spWZeFZcR56J+PRTtH5GZsQwG1D2f+rb4bN4IJCt+Kzi
-vQ36YmiMhX3SHmPJ8Vo7IbqBE4gxSYsyy+XuZdaWjJ+E7qd2NebHTe1mxsbe
-1F4EBfmFq7Y/FFWb4IJPkCJsd7HjuGVp3ywFL3oS6WP4PYkbQW5M1hyNLw2q
-zN1maRVfHwHOA+5RhiweVLprkxWH7qj0rjV6+UfrheaLE0NukiK4y+0NxoDg
-SgUw7MxDk4SzcikLfvk6X82VzHyKvVaWxM53Qb/TLuItglI+2GIN6+9daU4S
-qhmHP8FTVn7tU5j5aU2c9g8gbi2pNaHFHXxH6awzSb8cP+isRJeN+1pyMj46
-mZMR3SdJeUeoK5344oI9+cJYtlu31TpPjLOJARj8kWOU0kyBdWZOio05nahI
-l0i4VY2Eu1M3apxlM1dnYn6Drk1BNiSMhS0NyWfyJoYl86issYC+KbMMyYaj
-6TvV/RXBxnFVuSD475emM4hldV0h92yKqp0TiF6OPDQRrlkVqh0k736QdR5f
-uy1cRcgNrlQ4UNNmJXatz4yUULYbldVGzwFqVVy33ZW7AKZf6GzMFAzaRuoH
-tfj4sTvCuyFbIjxJ3fWTsNDmJn8j05aoCe0YMnyEpWRS/3Nt0HJekRTSThWW
-Lj/jNKCzRUjnEKh/ixdvjy22YQyZx/wyAXRhr6oJ7aHoSTCv3XNOCq2y2E6U
-g08JANjs7V7E3tVxSdCguHQfrTUtN2PKLjWnhJ1TdFQPSXiO0YeTnzzrRa6w
-Km6CQW3PRv2QPbTx9di/4Uka6sbegWCFccD0UshzsQpZoJrzYX/kO8O276EP
-rZwPtJUMctnUTwbrZwbmFaqZuN1k/8DwSSt7kp5NPTqr7RHW2xMBSMg+ZK29
-rDVMEwEbNWxABlblfhQDY2nxLdflDQ1UhPD0CwGA0LJeTrVhCNviUdV5EaE/
-4wQi0RhacGWsH+48+VmYkrf/aECii+uC+/eYSY7KC6BwBMXI69jqOxjWUDdt
-dYOjNlAquVVWFsKYDZYCzDeROP/9UhjqMK8KjJTl9mbDtV9JRIlrxYpeUnzF
-AcgHCYdZgh8pbkDFb+wbya0KzpN/zp6zmszNRkqdPtd0VvS65yB3CiPIma0T
-r3AqcRsdkM71CMghtRpLSdQN/OafRSwNMzQjruUMatLAZjI0TIdH9BGoJYna
-Bq3UgZZeLBzG+ugCetQyMDV1DdxI571hTlzIZGP+rkpjph1Iq3IneRaS1zT6
-lOZ/BqSXmn2SCUKESRvHmLsUHEYB9gqkNijmov/ME3ibBvgL7rD4sKMu7OvB
-0+cMonS7O74rF7GRidR7oKQ7ICg6FujEjWQEAmxFqphcXRPHqH5h4yVGGpjy
-+vs3PGqyep4n68ZBKkEVB42MgP4kj/hUP5nc4JHmsJhgp7Jkvc/kQgmYm2C3
-JUi4rrbCnhIbUB8tjMj4+vk3tWGt5pWg5OhT0OBoc/6szrGu9Aczj8jm2szo
-J7KTn0CxCRReVNNfRKKTPPWk4h5qj3SlJlpqbkiX392bj2PCo2MqayLNLWmf
-05r2pRbcTxglgzV+Ew1VzECHluoif2s1QTzTAsF6UBesme7TI4eESyk0zqSG
-N8GrDpxFUEW48rhJ1bp04HFfPNfRIeOWDAyEq1i4opizQyelAlxWUI8kAXWW
-2l/0sXmTAf5q7qVJ49h2Sb30Ll+kEwaiwMzTC0g+H2sUi+BxZ0NKM6XOlbdb
-p0uBcuoFQXZzkYkvL1Q6wRSPwRH+K7Ohg6WPQpVN20IsMqzqrlJMPnH3b2nH
-kmVZ8gtWdqz/euw4xIcU/wqNxF+c1KtNwnbqKyBbrZdIlX6FlREvKjutrmHM
-g2tzb/Dpm0SXksmOXSLvSukoyDEgh5csklxMyjQazR4m069VFwJxkwwoXKAs
-YFDF56xOGbLWWTHiw2PxlxWkR2/KnSoAcIGl3RXPBb4SQoyjKSNf4oX0BlFo
-UAYermpDUMsF3gQ50yJejrXkF9HFkQoXDyiWJam/LihsMBYTcGGaiEtUSDJm
-c+yXVb3sjr1mcfjWTlIUgzgjg8EUwHd2QcfOEiqHrgGLPq6yNxJv/AVRzFPj
-IRQUqZDVa0sYFHeRFbyBg6GCbxh8BdbJTW0FjeiYwsYRl0Ho3tOYRRsnx0sv
-683oDrhwGDTZhOttK45GG+gBrnFWDBJGpR0fNqIkdZiDE4EM+OtnkSzOMD8s
--->
-
-</rfc>
-