--- /dev/null
+
+
+
+
+Benchmarking Working Group M. Konstantynowicz
+Internet-Draft V. Polak
+Intended status: Informational Cisco Systems
+Expires: 6 March 2026 2 September 2025
+
+
+ Multiple Loss Ratio Search
+ draft-ietf-bmwg-mlrsearch-12
+
+Abstract
+
+ This document specifies extensions to "Benchmarking Methodology for
+ Network Interconnect Devices" (RFC 2544) throughput search by
+ defining a new methodology called Multiple Loss Ratio search
+ (MLRsearch). MLRsearch aims to minimize search duration, support
+ multiple loss ratio searches, and improve result repeatability and
+ comparability.
+
+ MLRsearch is motivated by the pressing need to address the
+ challenges of evaluating and testing various data plane solutions,
+ especially in software-based networking systems based on Commercial
+ Off-the-Shelf (COTS) CPU hardware versus purpose-built ASIC / NPU /
+ FPGA hardware.
+
+Status of This Memo
+
+ This Internet-Draft is submitted in full conformance with the
+ provisions of BCP 78 and BCP 79.
+
+ Internet-Drafts are working documents of the Internet Engineering
+ Task Force (IETF). Note that other groups may also distribute
+ working documents as Internet-Drafts. The list of current Internet-
+ Drafts is at https://datatracker.ietf.org/drafts/current/.
+
+ Internet-Drafts are draft documents valid for a maximum of six months
+ and may be updated, replaced, or obsoleted by other documents at any
+ time. It is inappropriate to use Internet-Drafts as reference
+ material or to cite them other than as "work in progress."
+
+ This Internet-Draft will expire on 6 March 2026.
+
+Copyright Notice
+
+ Copyright (c) 2025 IETF Trust and the persons identified as the
+ document authors. All rights reserved.
+
+
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 1]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ This document is subject to BCP 78 and the IETF Trust's Legal
+ Provisions Relating to IETF Documents (https://trustee.ietf.org/
+ license-info) in effect on the date of publication of this document.
+ Please review these documents carefully, as they describe your rights
+ and restrictions with respect to this document. Code Components
+ extracted from this document must include Revised BSD License text as
+ described in Section 4.e of the Trust Legal Provisions and are
+ provided without warranty as described in the Revised BSD License.
+
+Table of Contents
+
+ 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4
+ 1.1. Purpose . . . . . . . . . . . . . . . . . . . . . . . . . 4
+ 1.2. Positioning within BMWG Methodologies . . . . . . . . . . 6
+ 2. Overview of RFC 2544 Problems . . . . . . . . . . . . . . . . 7
+ 2.1. Long Search Duration . . . . . . . . . . . . . . . . . . 7
+ 2.2. DUT in SUT . . . . . . . . . . . . . . . . . . . . . . . 8
+ 2.3. Repeatability and Comparability . . . . . . . . . . . . . 10
+ 2.4. Throughput with Non-Zero Loss . . . . . . . . . . . . . . 11
+ 2.5. Inconsistent Trial Results . . . . . . . . . . . . . . . 12
+ 3. Requirements Language . . . . . . . . . . . . . . . . . . . . 13
+ 4. MLRsearch Specification . . . . . . . . . . . . . . . . . . . 13
+ 4.1. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . 14
+ 4.1.1. Relationship to RFC 2544 . . . . . . . . . . . . . . 14
+ 4.1.2. Applicability of Other Specifications . . . . . . . . 15
+ 4.1.3. Out of Scope . . . . . . . . . . . . . . . . . . . . 15
+ 4.2. Architecture Overview . . . . . . . . . . . . . . . . . . 15
+ 4.2.1. Test Report . . . . . . . . . . . . . . . . . . . . . 17
+ 4.2.2. Behavior Correctness . . . . . . . . . . . . . . . . 17
+ 4.3. Quantities . . . . . . . . . . . . . . . . . . . . . . . 17
+ 4.3.1. Current and Final Values . . . . . . . . . . . . . . 18
+ 4.4. Existing Terms . . . . . . . . . . . . . . . . . . . . . 18
+ 4.4.1. SUT . . . . . . . . . . . . . . . . . . . . . . . . . 18
+ 4.4.2. DUT . . . . . . . . . . . . . . . . . . . . . . . . . 19
+ 4.4.3. Trial . . . . . . . . . . . . . . . . . . . . . . . . 19
+ 4.5. Trial Terms . . . . . . . . . . . . . . . . . . . . . . . 21
+ 4.5.1. Trial Duration . . . . . . . . . . . . . . . . . . . 21
+ 4.5.2. Trial Load . . . . . . . . . . . . . . . . . . . . . 21
+ 4.5.3. Trial Input . . . . . . . . . . . . . . . . . . . . . 23
+ 4.5.4. Traffic Profile . . . . . . . . . . . . . . . . . . . 23
+ 4.5.5. Trial Forwarding Ratio . . . . . . . . . . . . . . . 25
+ 4.5.6. Trial Loss Ratio . . . . . . . . . . . . . . . . . . 25
+ 4.5.7. Trial Forwarding Rate . . . . . . . . . . . . . . . . 26
+ 4.5.8. Trial Effective Duration . . . . . . . . . . . . . . 27
+ 4.5.9. Trial Output . . . . . . . . . . . . . . . . . . . . 27
+ 4.5.10. Trial Result . . . . . . . . . . . . . . . . . . . . 28
+ 4.6. Goal Terms . . . . . . . . . . . . . . . . . . . . . . . 28
+ 4.6.1. Goal Final Trial Duration . . . . . . . . . . . . . . 28
+
+
+
+
+
+ 4.6.2. Goal Duration Sum . . . . . . . . . . . . . . . . . . 29
+ 4.6.3. Goal Loss Ratio . . . . . . . . . . . . . . . . . . . 29
+ 4.6.4. Goal Exceed Ratio . . . . . . . . . . . . . . . . . . 30
+ 4.6.5. Goal Width . . . . . . . . . . . . . . . . . . . . . 30
+ 4.6.6. Goal Initial Trial Duration . . . . . . . . . . . . . 31
+ 4.6.7. Search Goal . . . . . . . . . . . . . . . . . . . . . 32
+ 4.6.8. Controller Input . . . . . . . . . . . . . . . . . . 32
+ 4.7. Auxiliary Terms . . . . . . . . . . . . . . . . . . . . . 34
+ 4.7.1. Trial Classification . . . . . . . . . . . . . . . . 34
+ 4.7.2. Load Classification . . . . . . . . . . . . . . . . . 35
+ 4.8. Result Terms . . . . . . . . . . . . . . . . . . . . . . 37
+ 4.8.1. Relevant Upper Bound . . . . . . . . . . . . . . . . 37
+ 4.8.2. Relevant Lower Bound . . . . . . . . . . . . . . . . 38
+ 4.8.3. Conditional Throughput . . . . . . . . . . . . . . . 38
+ 4.8.4. Goal Results . . . . . . . . . . . . . . . . . . . . 39
+ 4.8.5. Search Result . . . . . . . . . . . . . . . . . . . . 40
+ 4.8.6. Controller Output . . . . . . . . . . . . . . . . . . 41
+ 4.9. Architecture Terms . . . . . . . . . . . . . . . . . . . 41
+ 4.9.1. Measurer . . . . . . . . . . . . . . . . . . . . . . 42
+ 4.9.2. Controller . . . . . . . . . . . . . . . . . . . . . 42
+ 4.9.3. Manager . . . . . . . . . . . . . . . . . . . . . . . 43
+ 4.10. Compliance . . . . . . . . . . . . . . . . . . . . . . . 44
+ 4.10.1. Test Procedure Compliant with MLRsearch . . . . . . 44
+ 4.10.2. MLRsearch Compliant with RFC 2544 . . . . . . . . . 45
+ 4.10.3. MLRsearch Compliant with TST009 . . . . . . . . . . 45
+ 5. Methodology Rationale and Design Considerations . . . . . . . 46
+ 5.1. Binary Search . . . . . . . . . . . . . . . . . . . . . . 46
+ 5.2. Stopping Conditions and Precision . . . . . . . . . . . . 47
+ 5.3. Loss Ratios and Loss Inversion . . . . . . . . . . . . . 47
+ 5.3.1. Single Goal and Hard Bounds . . . . . . . . . . . . . 47
+ 5.3.2. Multiple Goals and Loss Inversion . . . . . . . . . . 47
+ 5.3.3. Conservativeness and Relevant Bounds . . . . . . . . 48
+ 5.3.4. Consequences . . . . . . . . . . . . . . . . . . . . 49
+ 5.4. Exceed Ratio and Multiple Trials . . . . . . . . . . . . 49
+ 5.5. Short Trials and Duration Selection . . . . . . . . . . . 50
+ 5.6. Generalized Throughput . . . . . . . . . . . . . . . . . 50
+ 5.6.1. Hard Performance Limit . . . . . . . . . . . . . . . 51
+ 5.6.2. Performance Variability . . . . . . . . . . . . . . . 51
+ 6. MLRsearch Logic and Example . . . . . . . . . . . . . . . . . 52
+ 6.1. Load Classification Logic . . . . . . . . . . . . . . . . 52
+ 6.2. Conditional Throughput Logic . . . . . . . . . . . . . . 54
+ 6.2.1. Conditional Throughput and Load Classification . . . 55
+ 6.3. SUT Behaviors . . . . . . . . . . . . . . . . . . . . . . 55
+ 6.3.1. Expert Predictions . . . . . . . . . . . . . . . . . 55
+ 6.3.2. Exceed Probability . . . . . . . . . . . . . . . . . 56
+ 6.3.3. Trial Duration Dependence . . . . . . . . . . . . . . 56
+ 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 57
+ 8. Security Considerations . . . . . . . . . . . . . . . . . . . 57
+
+
+
+
+
+ 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 57
+ 10. References . . . . . . . . . . . . . . . . . . . . . . . . . 58
+ 10.1. Normative References . . . . . . . . . . . . . . . . . . 58
+ 10.2. Informative References . . . . . . . . . . . . . . . . . 58
+ Appendix A. Load Classification Code . . . . . . . . . . . . . . 59
+ Appendix B. Conditional Throughput Code . . . . . . . . . . . . 61
+ Appendix C. Example Search . . . . . . . . . . . . . . . . . . . 63
+ C.1. Example Goals . . . . . . . . . . . . . . . . . . . . . . 64
+ C.2. Example Trial Results . . . . . . . . . . . . . . . . . . 65
+ C.3. Load Classification Computations . . . . . . . . . . . . 66
+ C.3.1. Point 1 . . . . . . . . . . . . . . . . . . . . . . . 66
+ C.3.2. Point 2 . . . . . . . . . . . . . . . . . . . . . . . 67
+ C.3.3. Point 3 . . . . . . . . . . . . . . . . . . . . . . . 69
+ C.3.4. Point 4 . . . . . . . . . . . . . . . . . . . . . . . 70
+ C.3.5. Point 5 . . . . . . . . . . . . . . . . . . . . . . . 71
+ C.3.6. Point 6 . . . . . . . . . . . . . . . . . . . . . . . 73
+ C.4. Conditional Throughput Computations . . . . . . . . . . . 74
+ C.4.1. Goal 2 . . . . . . . . . . . . . . . . . . . . . . . 74
+ C.4.2. Goal 3 . . . . . . . . . . . . . . . . . . . . . . . 75
+ C.4.3. Goal 4 . . . . . . . . . . . . . . . . . . . . . . . 76
+ Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 78
+
+1. Introduction
+
+ This document describes the Multiple Loss Ratio search (MLRsearch)
+ methodology, optimized for determining data plane throughput in
+ software-based networking functions running on commodity systems
+ with x86/ARM CPUs (versus purpose-built ASIC / NPU / FPGA). Such
+ network functions can be deployed on a dedicated physical appliance
+ (e.g., a standalone hardware device) or as a virtual appliance
+ (e.g., a Virtual Network Function running on shared servers in the
+ compute cloud).
+
+1.1. Purpose
+
+ The purpose of this document is to describe the Multiple Loss Ratio
+ search (MLRsearch) methodology, optimized for determining data plane
+ throughput in software-based networking devices and functions.
+
+ Applying the vanilla throughput binary search, as specified for
+ example in [TST009] and [RFC2544], to software devices under test
+ (DUTs) results in several problems:
+
+ * Binary search takes a long time, as most trials are performed far
+ from the eventually found throughput.
+
+ * The required final trial duration and pauses between trials
+ prolong the overall search duration.
+
+
+
+
+
+
+ * Software DUTs show noisy trial results, leading to a big spread of
+ possible discovered throughput values.
+
+ * Throughput requires a loss of exactly zero frames, but industry
+ best practices frequently allow a low but non-zero loss tolerance
+ ([Y.1564], test-equipment manuals).
+
+ * The definition of throughput is not clear when trial results are
+ inconsistent (e.g., when successive trials at the same, or even a
+ higher, offered load yield different loss ratios, the classical
+ [RFC1242] / [RFC2544] throughput metric can no longer be pinned
+ to a single, unambiguous value).
+
+ To address these problems, early MLRsearch implementations employed
+ the following enhancements:
+
+ 1. Allow multiple short trials instead of one big trial per load.
+
+ * Optionally, tolerate a percentage of trial results with higher
+ loss.
+
+ 2. Allow searching for multiple Search Goals, with differing loss
+ ratios.
+
+ * Any trial result can affect each Search Goal in principle.
+
+ 3. Insert multiple coarse targets for each Search Goal; earlier
+ ones need to spend less time on trials.
+
+ * Earlier targets also aim for lesser precision.
+
+ * Use Forwarding Rate (FR) at Maximum Offered Load (FRMOL), as
+ defined in Section 3.6.2 of [RFC2285], to initialize bounds.
+
+ 4. Be careful when dealing with inconsistent trial results.
+
+ * Reported throughput is smaller than the smallest load with
+ high loss.
+
+ * Smaller load candidates are measured first.
+
+ 5. Apply several time-saving load selection heuristics that
+ deliberately prevent the bounds from narrowing unnecessarily.
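+
+ As a non-normative illustration of enhancement 1, tolerating a
+ fraction of lossy trials at one load could look like the following
+ sketch (the function name and inputs are hypothetical; the
+ normative Load Classification logic of this document is more
+ involved, weighting trials by duration sums):
+
```python
def load_passes(trial_loss_ratios, goal_loss_ratio, goal_exceed_ratio):
    """Sketch only: a load passes a goal when the fraction of trials
    whose loss ratio exceeds the Goal Loss Ratio stays within the
    tolerated Goal Exceed Ratio."""
    lossy = sum(1 for ratio in trial_loss_ratios if ratio > goal_loss_ratio)
    return lossy / len(trial_loss_ratios) <= goal_exceed_ratio
```
+
+ For example, with a Goal Loss Ratio of 0.001 and a Goal Exceed
+ Ratio of 0.5, one lossy trial out of four still lets the load pass.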
+
+ Enhancements 1, 2 and partly 4 are formalized as the MLRsearch
+ Specification within this document; other implementation details
+ are out of its scope.
+
+
+
+
+
+
+ The remaining enhancements are treated as implementation details,
+ thus achieving high comparability without limiting future
+ improvements.
+
+ MLRsearch configuration supports both conservative settings and
+ aggressive settings. Conservative enough settings lead to results
+ unconditionally compliant with [RFC2544], but without much
+ improvement on search duration and repeatability - see MLRsearch
+ Compliant with RFC 2544 (Section 4.10.2). Conversely, aggressive
+ settings lead to shorter search durations and better repeatability,
+ but the results are not compliant with [RFC2544]. Exact settings are
+ not specified, but see the discussion in Overview of RFC 2544
+ Problems (Section 2) for the impact of different settings on result
+ quality.
+
+ This document does not change or obsolete any part of [RFC2544].
+
+1.2. Positioning within BMWG Methodologies
+
+ The Benchmarking Methodology Working Group (BMWG) produces
+ recommendations (RFCs) that describe various benchmarking
+ methodologies for use in a controlled laboratory environment. A
+ large number of these benchmarks are based on the terminology from
+ [RFC1242] and the foundational methodology from [RFC2544]. A common
+ pattern has emerged where BMWG documents reference the methodology of
+ [RFC2544] and augment it with specific requirements for testing
+ particular network systems or protocols, without modifying the core
+ benchmark definitions.
+
+ While BMWG documents are formally recommendations, they are widely
+ treated as industry norms to ensure the comparability of results
+ between different labs. The set of benchmarks defined in [RFC2544],
+ in particular, became a de facto standard for performance testing.
+ In this context, the MLRsearch Specification formally defines a new
+ class of benchmarks that fits within the wider [RFC2544] framework
+ (see Scope (Section 4.1)).
+
+ A primary consideration in the design of MLRsearch is the trade-off
+ between configurability and comparability. The methodology's
+ flexibility, especially the ability to define various sets of
+ Search Goals supporting both single-goal and multiple-goal
+ benchmarks in a unified way, is powerful for detailed
+ characterization and internal testing.
+ comparability unless a specific, common set of Search Goals is agreed
+ upon.
+
+
+
+
+
+
+
+
+ Therefore, MLRsearch should be seen neither as a direct extension
+ of, nor as a replacement for, the [RFC2544] Throughput benchmark.
+ Instead, this
+ document provides a foundational methodology that future BMWG
+ documents can use to define new, specific, and comparable benchmarks
+ by mandating particular Search Goal configurations. For operators of
+ existing test procedures, it is worth noting that many test setups
+ measuring [RFC2544] Throughput can be adapted to produce results
+ compliant with the MLRsearch Specification, often without affecting
+ Trials, merely by augmenting the content of the final test report.
+
+2. Overview of RFC 2544 Problems
+
+ This section describes the problems affecting usability of various
+ performance testing methodologies, mainly a binary search for
+ [RFC2544] unconditionally compliant throughput.
+
+2.1. Long Search Duration
+
+ The proliferation of software DUTs, with frequent software updates
+ and a number of different frame processing modes and
+ configurations, has increased both the number of performance tests
+ required to verify the DUT update and the frequency of running
+ those tests. This makes the overall test execution time even more
+ important than before.
+
+ The throughput definition per [RFC2544] restricts the potential for
+ time-efficiency improvements. The bisection method, when used in a
+ manner unconditionally compliant with [RFC2544], is excessively slow
+ due to two main factors.
+
+ Firstly, a significant amount of time is spent on trials with loads
+ that, in retrospect, are far from the final determined throughput.
+
+ Secondly, [RFC2544] does not specify any stopping condition for the
+ throughput search, so users of testing equipment implementing the
+ procedure already have access to a limited trade-off between search
+ duration and achieved precision. However, each of the full
+ 60-second trials merely doubles the achieved precision.
+
+ As such, not many trials can be removed without a substantial loss of
+ precision.
+
+ For reference, here is a brief reminder of the [RFC2544] throughput
+ binary search (bisection), based on Sections 24 and 26 of [RFC2544]:
+
+ * Set Max = line-rate and Min = a proven loss-free load.
+
+
+
+
+
+
+ * Run a single 60-second trial at the midpoint.
+
+ * Zero loss -> midpoint becomes the new Min; any loss -> the new Max.
+
+ * Repeat until the Max-Min gap meets the desired precision, then
+ report the highest zero-loss rate for every mandatory frame size.
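+
+ The reminder above can be sketched in code as follows (an
+ illustration only; the measure_trial function and its signature are
+ hypothetical stand-ins for a tester reporting frames lost at a
+ given load):
+
```python
def rfc2544_bisection(min_load, max_load, precision, measure_trial):
    """Sketch of the RFC 2544 throughput bisection: min_load is a
    proven zero-loss load, max_load is the line rate, and
    measure_trial returns the number of frames lost in one trial."""
    while max_load - min_load > precision:
        mid = (min_load + max_load) / 2.0
        if measure_trial(load=mid, duration=60.0) == 0:
            min_load = mid  # zero loss: midpoint becomes the new Min
        else:
            max_load = mid  # any loss: midpoint becomes the new Max
    return min_load  # highest zero-loss rate found
```
+
+ Each iteration halves the Max-Min gap, which is why precision gains
+ are tied directly to the number of full 60-second trials.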
+
+2.2. DUT in SUT
+
+ [RFC2285] defines:
+
+ DUT as:
+
+ * The network frame forwarding device to which stimulus is offered
+ and response measured (Section 3.1.1 of [RFC2285]).
+
+ SUT as:
+
+ * The collective set of network devices as a single entity to which
+ stimulus is offered and response measured (Section 3.1.2 of
+ [RFC2285]).
+
+ Section 19 of [RFC2544] specifies a test setup with an external
+ tester stimulating the networking system, treating it either as a
+ single Device Under Test (DUT), or as a system of devices, a System
+ Under Test (SUT).
+
+ For software-based data-plane forwarding running on commodity x86/ARM
+ CPUs, the SUT comprises not only the forwarding application itself,
+ the DUT, but the entire execution environment: host hardware,
+ firmware and kernel/hypervisor services, as well as any other
+ software workloads that share the same CPUs, memory and I/O
+ resources.
+
+ Given that a SUT is a shared multi-tenant environment, the DUT might
+ inadvertently experience interference from the operating system or
+ other software operating on the same server.
+
+ Some of this interference can be mitigated. For instance, in multi-
+ core CPU systems, pinning DUT program threads to specific CPU cores
+ and isolating those cores can prevent context switching.
+
+ Despite taking all feasible precautions, some adverse effects may
+ still impact the DUT's network performance. In this document, these
+ effects are collectively referred to as SUT noise, even if the
+ effects are not as unpredictable as what other engineering
+ disciplines call noise.
+
+
+
+
+
+
+ A DUT can also exhibit fluctuating performance itself, for reasons
+ not related to the rest of SUT. For example, this can be due to
+ pauses in execution as needed for internal stateful processing. In
+ many cases this may be an expected per-design behavior, as it would
+ be observable even in a hypothetical scenario where all sources of
+ SUT noise are eliminated. Such behavior affects trial results in a
+ way similar to SUT noise. As the two phenomena are hard to
+ distinguish, in this document the term 'noise' is used to encompass
+ both the internal performance fluctuations of the DUT and the genuine
+ noise of the SUT.
+
+ A simple model of SUT performance consists of an idealized noiseless
+ performance, and additional noise effects. For a specific SUT, the
+ noiseless performance is assumed to be constant, with all observed
+ performance variations being attributed to noise. The impact of the
+ noise can vary in time, sometimes wildly, even within a single trial.
+ The noise can sometimes be negligible, but frequently it lowers the
+ observed SUT performance as observed in trial results.
+
+ In this simple model, a SUT does not have a single performance value,
+ it has a spectrum. One end of the spectrum is the idealized
+ noiseless performance value, the other end can be called a noiseful
+ performance. In practice, trial results close to the noiseful end of
+ the spectrum happen only rarely. The worse a possible performance
+ value is, the more rarely it is seen in a trial. Therefore, the
+ extreme noiseful end of the SUT spectrum is not observable among
+ trial results.
+
+ Furthermore, the extreme noiseless end of the SUT spectrum is
+ unlikely to be observable, this time because minor noise events
+ almost always occur during each trial, nudging the measured
+ performance slightly below the theoretical maximum.
+
+ Unless specified otherwise, this document's focus is on the
+ potentially observable ends of the SUT performance spectrum, as
+ opposed to the extreme ones.
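+
+ The spectrum model described above can be illustrated with a toy
+ simulation (the numbers and distributions are invented for
+ illustration only, not derived from any real SUT):
+
```python
import random

def simulated_trial(noiseless_rate, rng):
    """Toy model: observed performance is the noiseless rate minus a
    small, near-certain noise penalty, plus a rare large penalty
    representing the noiseful end of the spectrum."""
    penalty = rng.expovariate(1.0) * 0.01 * noiseless_rate
    if rng.random() < 0.02:  # rare severe noise event
        penalty += 0.30 * noiseless_rate
    return noiseless_rate - penalty

rng = random.Random(42)
samples = [simulated_trial(10e6, rng) for _ in range(1000)]
# No sample reaches the noiseless end (10e6 here), and samples near
# the noiseful end remain rare.
```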
+
+ When focusing on the DUT, the benchmarking effort should ideally
+ aim to eliminate only the SUT noise from SUT measurements.
+ However, this is currently not feasible in practice, as there are
+ no sufficiently realistic models capable of distinguishing SUT
+ noise from DUT fluctuations (based on the literature available at
+ the time of writing).
+
+
+
+
+
+
+
+
+
+
+ Provided the SUT execution environment and any co-resident
+ workloads place only negligible demands on shared SUT resources, so
+ that the DUT remains the principal performance limiter, the DUT's
+ ideal noiseless performance is defined as the noiseless end of the
+ SUT performance spectrum.
+
+ Note that by this definition, DUT noiseless performance also
+ minimizes the impact of DUT fluctuations, as much as realistically
+ possible for a given trial duration.
+
+ The MLRsearch methodology aims to solve the DUT in SUT problem by
+ estimating the noiseless end of the SUT performance spectrum using a
+ limited number of trial results.
+
+ Improvements to the throughput search algorithm, aimed at better
+ dealing with software networking SUT and DUT setups, should adopt
+ methods that explicitly model SUT-generated noise, enabling the
+ derivation of surrogate metrics (proxies for the DUT noiseless
+ performance) across a range of SUT noise-tolerance levels.
+
+2.3. Repeatability and Comparability
+
+ [RFC2544] does not suggest repeating throughput search. Also, note
+ that from simply one discovered throughput value, it cannot be
+ determined how repeatable that value is. Unsatisfactory
+ repeatability then leads to unacceptable comparability, as different
+ benchmarking teams may obtain varying throughput values for the same
+ SUT, exceeding the expected differences from search precision.
+ Repeatability is important also when the test procedure is kept the
+ same, but SUT is varied in small ways. For example, during
+ development of software-based DUTs, repeatability is needed to detect
+ small regressions.
+
+ [RFC2544] throughput requirements (a 60-second trial duration and
+ no tolerance of even a single frame loss) affect the throughput
+ result as follows:
+
+ The SUT behavior close to the noiseful end of its performance
+ spectrum consists of rare occasions of significantly low performance,
+ but the long trial duration makes those occasions not so rare on the
+ trial level. Therefore, the binary search results tend to wander
+ away from the noiseless end of SUT performance spectrum, more
+ frequently and more widely than shorter trials would, thus causing
+ unacceptable throughput repeatability.
+
+ The repeatability problem can be better addressed by defining a
+ search procedure that identifies a consistent level of performance,
+ even if it does not meet the strict definition of throughput in
+ [RFC2544].
+
+
+
+
+
+ According to the SUT performance spectrum model, better repeatability
+ will be at the noiseless end of the spectrum. Therefore, solutions
+ to the DUT in SUT problem will help also with the repeatability
+ problem.
+
+ Conversely, any alteration to [RFC2544] throughput search that
+ improves repeatability should be considered as less dependent on the
+ SUT noise.
+
+ An alternative option is to simply run a search multiple times, and
+ report some statistics (e.g., average and standard deviation, and/or
+ percentiles like p95).
+
+ This can be used for a subset of tests deemed more important, but it
+ makes the search duration problem even more pronounced.
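+
+ For completeness, summarizing repeated search runs can be as simple
+ as the following sketch (the per-run throughput inputs are
+ hypothetical values in frames per second):
+
```python
import statistics

def summarize_runs(throughputs):
    """Report basic repeatability statistics over repeated searches;
    needs at least two runs for the standard deviation."""
    cut_points = statistics.quantiles(throughputs, n=20, method="inclusive")
    return {
        "mean": statistics.mean(throughputs),
        "stdev": statistics.stdev(throughputs),
        "p95": cut_points[-1],  # last of 19 cut points = 95th percentile
    }
```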
+
+2.4. Throughput with Non-Zero Loss
+
+ Section 3.17 of [RFC1242] defines throughput as: The maximum rate at
+ which none of the offered frames are dropped by the device.
+
+ Then, it says: Since even the loss of one frame in a data stream can
+ cause significant delays while waiting for the higher-level
+ protocols to time out, it is useful to know the actual maximum
+ data rate that the device can support.
+
+ However, many benchmarking teams accept a low, non-zero loss ratio as
+ the goal for their load search.
+
+ Motivations are many:
+
+ * Networking protocols tolerate frame loss better, compared to the
+ time when [RFC1242] and [RFC2544] were specified.
+
+ * Increased link speeds require trials sending way more frames
+ within the same duration, increasing the chance of a small SUT
+ performance fluctuation being enough to cause frame loss.
+
+ * Because noise-related drops usually arrive in small bursts, their
+ impact on the trial's overall frame loss ratio is diluted by the
+ longer intervals in which the SUT operates close to its noiseless
+ performance; consequently, the averaged Trial Loss Ratio can still
+ end up below the specified Goal Loss Ratio value.
+
+ * If an approximation of the SUT noise impact on the Trial Loss
+ Ratio is known, it can be set as the Goal Loss Ratio (see
+ definitions of Trial and Goal terms in Trial Terms (Section 4.5)
+ and Goal Terms (Section 4.6)).
+
+
+
+
+
+ * For more information, see an earlier draft [Lencze-Shima]
+ (Section 5) and references there.
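+
+ As a worked numeric example of the burst-dilution point above (all
+ numbers are hypothetical):
+
```python
# A 60-second trial at 10 million frames per second, with noise
# dropping a single burst of 5000 frames.
offered_frames = 10_000_000 * 60
lost_frames = 5_000
trial_loss_ratio = lost_frames / offered_frames
# The averaged ratio is roughly 8.3e-6, below a non-zero Goal Loss
# Ratio such as 1e-4, while any zero-loss goal would fail.
```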
+
+ Regardless of the validity of all similar motivations, support for
+ non-zero loss goals makes a search algorithm more user-friendly.
+ [RFC2544] throughput is not user-friendly in this regard.
+
+ Furthermore, allowing users to specify multiple loss ratio values,
+ and enabling a single search to find all relevant bounds,
+ significantly enhances the usefulness of the search algorithm.
+
+ Searching for multiple Search Goals also helps to describe the SUT
+ performance spectrum better than the result of a single Search Goal.
+ For example, the repeated wide gap between zero and non-zero loss
+ loads indicates the noise has a large impact on the observed
+ performance, which is not evident from a single goal load search
+ procedure result.
+
+ It is easy to modify the vanilla bisection to find a lower bound for
+ the load that satisfies a non-zero Goal Loss Ratio. But it is not
+ that obvious how to search for multiple goals at once, hence the
+ support for multiple Search Goals remains a problem.
+
+ At the time of writing there does not seem to be a consensus in the
+ industry on which ratio value is the best. For users, performance of
+ higher protocol layers is important, for example, goodput of TCP
+ connection (TCP throughput, [RFC6349]), but relationship between
+ goodput and loss ratio is not simple. Refer to [Lencze-Kovacs-Shima]
+ for examples of various corner cases, Section 3 of [RFC6349] for loss
+ ratios acceptable for an accurate measurement of TCP throughput, and
+ [Ott-Mathis-Semke-Mahdavi] for models and calculations of TCP
+ performance in presence of packet loss.
+
+2.5. Inconsistent Trial Results
+
+ While performing throughput search by executing a sequence of
+ measurement trials, there is a risk of encountering inconsistencies
+ between trial results.
+
+ Examples include, but are not limited to:
+
+ * A trial at the same load (same or different trial duration)
+ results in a different Trial Loss Ratio.
+
+ * A trial at a larger load (same or different trial duration)
+ results in a lower Trial Loss Ratio.
+
+
+
+
+
+
+
+ The plain bisection never encounters inconsistent trials. But
+ [RFC2544] hints about the possibility of inconsistent trial results,
+ in two places in its text. The first place is Section 24 of
+ [RFC2544], where full trial durations are required, presumably
+ because they can be inconsistent with the results from short trial
+ durations. The second place is Section 26.3 of [RFC2544], where two
+ successive zero-loss trials are recommended, presumably because after
+ one zero-loss trial there can be a subsequent inconsistent non-zero-
+ loss trial.
+
+ A robust throughput search algorithm needs to decide how to continue
+ the search in the presence of such inconsistencies. Definitions of
+ throughput in [RFC1242] and [RFC2544] are not specific enough to
+ imply a unique way of handling such inconsistencies.
+
+ Ideally, there will be a definition of a new quantity which both
+ generalizes throughput for non-zero Goal Loss Ratio values (and other
+ possible repeatability enhancements), while being precise enough to
+ force a specific way to resolve trial result inconsistencies. But
+ until such a definition is agreed upon, the correct way to handle
+ inconsistent trial results remains an open problem.
+
+ Relevant Lower Bound is the MLRsearch term that addresses this
+ problem.
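+
+ A minimal sketch of the conservative resolution hinted at here
+ (loosely mirroring the Relevant Bound terms defined later in this
+ document; the normative definitions differ in detail):
+
```python
def relevant_bounds(classified_loads):
    """Sketch: given (load, passes_goal) pairs, take the smallest
    failing load as the upper bound and the largest passing load
    below it as the lower bound, so that a passing trial above a
    failing load (a loss inversion) is conservatively ignored."""
    failing = [load for load, passes in classified_loads if not passes]
    upper = min(failing) if failing else None
    passing = [load for load, passes in classified_loads
               if passes and (upper is None or load < upper)]
    lower = max(passing) if passing else None
    return lower, upper
```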
+
+3. Requirements Language
+
+ The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+ "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
+ "OPTIONAL" in this document are to be interpreted as described in BCP
+ 14, [RFC2119] and [RFC8174] when, and only when, they appear in all
+ capitals, as shown here.
+
+ This document is categorized as an Informational RFC. While it does
+ not mandate the adoption of the MLRsearch methodology, it uses the
+ normative language of BCP 14 to provide an unambiguous specification.
+ This ensures that if a test procedure or test report claims
+ compliance with the MLRsearch Specification, it MUST adhere to all
+ the absolute requirements defined herein. The use of normative
+ language is intended to promote repeatable and comparable results
+ among those who choose to implement this methodology.
+
+4. MLRsearch Specification
+
+ This chapter provides all technical definitions needed for evaluating
+ whether a particular test procedure complies with MLRsearch
+ Specification.
+
+
+
+
+
+
+ Some terms used in the specification are capitalized. This is just
+ a stylistic choice for this document, reminding the reader that the
+ term is introduced, defined, or explained elsewhere in the
+ document. Lowercase variants are equally valid.
+
+ This document does not separate terminology from methodology. Terms
+ are fully specified and discussed in their own subsections, under
+ sections titled "Terms". This way, the list of terms is visible in
+ the table of contents.
+
+ Each per-term subsection contains a short _Definition_ paragraph
+ containing a minimal definition and all strict requirements, followed
+ by _Discussion_ paragraphs focusing on important consequences and
+ recommendations. Requirements about how other components can use the
+ defined quantity are also included in the discussion.
+
+4.1. Scope
+
+ This document specifies the Multiple Loss Ratio search (MLRsearch)
+ methodology. The MLRsearch Specification details a new class of
+ benchmarks by listing all terminology definitions and methodology
+ requirements. The definitions support "multi-goal" benchmarks, with
+ "single-goal" as a subset.
+
+ The normative scope of this specification includes:
+
+ * The terminology for all required quantities and their attributes.
+
+ * An abstract architecture consisting of functional components
+ (Manager, Controller, Measurer) and the requirements for their
+ inputs and outputs.
+
+ * The required structure and attributes of the Controller Input,
+ including one or more Search Goal instances.
+
+ * The required logic for Load Classification, which determines
+ whether a given Trial Load qualifies as a Lower Bound or an Upper
+ Bound for a Search Goal.
+
+ * The required structure and attributes of the Controller Output,
+ including a Goal Result for each Search Goal.
+
+4.1.1. Relationship to RFC 2544
+
+ MLRsearch Specification is an independent methodology and does not
+ change or obsolete any part of [RFC2544].
+
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 14]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ This specification permits deviations from the Trial procedure as
+ described in [RFC2544]. Any deviation from the [RFC2544] procedure
+ MUST be documented explicitly in the Test Report, and such variations
+ remain outside the scope of the original [RFC2544] benchmarks.
+
+ A specific single-goal MLRsearch benchmark can be configured to be
+ compliant with [RFC2544] Throughput, and most procedures reporting
+ [RFC2544] Throughput can be adapted to also satisfy the MLRsearch
+ requirements for a specific Search Goal.
+
+4.1.2. Applicability of Other Specifications
+
+ Methodology extensions from other BMWG documents that specify details
+ for testing particular DUTs, configurations, or protocols (e.g., by
+ defining a particular Traffic Profile) are considered orthogonal to
+ MLRsearch and are applicable to a benchmark conducted using MLRsearch
+ methodology.
+
+4.1.3. Out of Scope
+
+ The following aspects are explicitly out of the normative scope of
+ this document:
+
+ * This specification does not mandate or recommend any single,
+ universal Search Goal configuration for all use cases. The
+ selection of Search Goal parameters is left to the operator of the
+ test procedure or may be defined by future specifications.
+
+ * The internal heuristics or algorithms used by the Controller to
+ select Trial Input values (e.g., the load selection strategy) are
+ considered implementation details.
+
+ * The potential for, and the effects of, interference between
+ different Search Goal instances within a multiple-goal search are
+ considered outside the normative scope of this specification.
+
+4.2. Architecture Overview
+
+ Although the normative text references only terminology that has
+ already been introduced, explanatory passages beside it sometimes
+ benefit from terms that are defined later in the document. To keep
+ the initial read-through clear, this informative section offers a
+ concise, top-down sketch of the complete MLRsearch architecture.
+
+ The architecture is modelled as a set of abstract, interacting
+ components. Information exchange between components is expressed in
+ an imperative-programming style: one component "calls" another,
+ supplying inputs (arguments) and receiving outputs (return values).
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 15]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ This notation is purely conceptual; actual implementations need not
+ exchange explicit messages. When the text contrasts alternative
+ behaviors, it refers to different implementations of the same
+ component.
+
+ A test procedure is considered compliant with the MLRsearch
+ Specification if it can be conceptually decomposed into the abstract
+ components defined herein, and each component satisfies the
+ requirements defined for its corresponding MLRsearch element.
+
+ The Measurer component is tasked to perform Trials, the Controller
+ component is tasked to select Trial Durations and Loads, and the
+ Manager component is tasked to pre-configure the involved entities
+ and to produce the Test Report. The Test Report explicitly states
+ Search Goals (as Controller Input) and corresponding Goal Results
+ (as Controller Output).
+
+ This constitutes one benchmark (single-goal or multi-goal). Repeated
+ or slightly differing benchmarks are realized by calling the
+ Controller once for each benchmark.
+
+ The Manager calls the Controller once, and the Controller then
+ invokes the Measurer repeatedly until the Controller decides it has
+ enough information to return its outputs.
+
+ The part during which the Controller invokes the Measurer is termed
+ the Search. Any work the Manager performs either before invoking the
+ Controller or after the Controller returns falls outside the scope
+ of the Search.
+
+ MLRsearch Specification prescribes Regular Search Results and
+ recommends corresponding search completion conditions.
+
+ Irregular Search Results are also allowed; they have different
+ requirements, and their corresponding stopping conditions are out of
+ scope.
+
+ Search Results are based on Load Classification. Once measured for
+ long enough, a chosen Load either achieves or fails each Search Goal
+ (separately), thus becoming a Lower Bound or an Upper Bound for that
+ Search Goal.
+
+ When the Relevant Lower Bound is close enough to the Relevant Upper
+ Bound according to the Goal Width, the Regular Goal Result is found.
+ The Search stops when all Regular Goal Results are found, or when
+ some Search Goals are proven to have only Irregular Goal Results.
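 The division of labor described above can be illustrated with a toy
 single-goal Search. All names, dictionary shapes, and the simplistic
 bisection logic below are hypothetical sketches; the MLRsearch
 Specification prescribes behavior and quantities, not any concrete
 API.

```python
# Toy sketch of the Manager/Controller/Measurer interaction.
# All names and the bisection logic are illustrative only.

def measurer(trial_duration, trial_load):
    """Fake Measurer: pretend the SUT forwards losslessly up to
    5.0e6 fps and drops everything above that."""
    loss_ratio = 0.0 if trial_load <= 5.0e6 else 1.0
    return {"loss_ratio": loss_ratio,
            "effective_duration": trial_duration}

def controller(measure, goal_loss_ratio, goal_width, max_load):
    """Toy single-goal Search: bisect between a Lower Bound and an
    Upper Bound until they differ by less than an absolute width."""
    lower, upper = 0.0, max_load
    while upper - lower > goal_width:
        load = (lower + upper) / 2
        output = measure(60.0, load)
        if output["loss_ratio"] <= goal_loss_ratio:
            lower = load  # Load achieved the goal: new Lower Bound.
        else:
            upper = load  # Load failed the goal: new Upper Bound.
    return {"relevant_lower_bound": lower,
            "relevant_upper_bound": upper}

def manager():
    """Pre-configure entities (not shown), run the Search once,
    then state Search Goals and Goal Results in the Test Report."""
    goal = {"loss_ratio": 0.0, "width": 1.0e4, "max_load": 10.0e6}
    result = controller(measurer, goal["loss_ratio"],
                        goal["width"], goal["max_load"])
    return {"search_goal": goal, "goal_result": result}

report = manager()
```

 Note that the real Controller is free to use any load selection
 strategy; plain bisection is shown only because it is the simplest
 strategy that terminates.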
+
+
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 16]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+4.2.1. Test Report
+
+ A primary responsibility of the Manager is to produce a Test Report,
+ which serves as the final and formal output of the test procedure.
+
+ This document does not provide a single, complete, normative
+ definition for the structure of the Test Report. For example, a Test
+ Report may contain results for a single benchmark, or it may
+ aggregate results of many benchmarks.
+
+ Instead, normative requirements for the content of the Test Report
+ are specified throughout this document in conjunction with the
+ definitions of the quantities and procedures to which they apply.
+ Readers should note that any clause requiring a value to be
+ "reported" or "stated in the test report" constitutes a normative
+ requirement on the content of this final artifact.
+
+ Even where not stated explicitly, the "Reporting format" paragraphs
+ in [RFC2544] sections are still requirements on the Test Report if
+ they apply to an MLRsearch benchmark.
+
+4.2.2. Behavior Correctness
+
+ MLRsearch Specification by itself does not guarantee that the Search
+ ends in finite time, as the freedom the Controller has for Load
+ selection also allows for clearly deficient choices.
+
+ For deeper insights on these matters, refer to [FDio-CSIT-MLRsearch].
+
+ The primary MLRsearch implementation, used as the prototype for this
+ specification, is [PyPI-MLRsearch].
+
+4.3. Quantities
+
+ MLRsearch Specification uses a number of specific quantities, some
+ of which can be expressed in several different units.
+
+ In general, MLRsearch Specification does not require particular units
+ to be used, but it is REQUIRED for the test report to state all the
+ units. For example, ratio quantities can be dimensionless numbers
+ between zero and one, but may be expressed as percentages instead.
+
+ For convenience, a group of quantities can be treated as a composite
+ quantity. One constituent of a composite quantity is called an
+ attribute. A group of attribute values is called an instance of that
+ composite quantity.
+
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 17]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ Some attributes may depend on others and can be calculated from other
+ attributes. Such quantities are called derived quantities.
+
+4.3.1. Current and Final Values
+
+ Some quantities are defined in a way that makes it possible to
+ compute their values in the middle of a Search. Other quantities are
+ specified so that their values can be computed only after a Search
+ ends. Some quantities are important only after a Search has ended,
+ but their values are also computable before the Search ends.
+
+ For a quantity that is computable before a Search ends, the adjective
+ *current* is used to mark a value of that quantity available before
+ the Search ends. When such a value is relevant for the search
+ result, the adjective *final* is used to denote the value of that
+ quantity at the end of the Search.
+
+ If a time evolution of such a dynamic quantity is guided by
+ configuration quantities, those adjectives can be used to distinguish
+ quantities. For example, if the current value of "duration" (dynamic
+ quantity) increases from "initial duration" to "final duration"
+ (configuration quantities), all the quoted names denote separate but
+ related quantities. As the naming suggests, the final value of
+ "duration" is expected to be equal to "final duration" value.
+
+4.4. Existing Terms
+
+ This specification relies on the following three documents that
+ should be consulted before attempting to make use of this document:
+
+ * "Benchmarking Terminology for Network Interconnect Devices"
+ [RFC1242] contains basic term definitions.
+
+ * "Benchmarking Terminology for LAN Switching Devices" [RFC2285]
+ adds more terms and discussions, describing some known network
+ benchmarking situations in a more precise way.
+
+ * "Benchmarking Methodology for Network Interconnect Devices"
+ [RFC2544] contains discussions about terms and additional
+ methodology requirements.
+
+ Definitions of some central terms from above documents are copied and
+ discussed in the following subsections.
+
+4.4.1. SUT
+
+ Defined in Section 3.1.2 of [RFC2285] as follows.
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 18]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ Definition:
+
+ The collective set of network devices to which stimulus is offered
+ as a single entity and response measured.
+
+ Discussion:
+
+ An SUT consisting of a single network device is allowed by this
+ definition.
+
+ In software-based networking, an SUT may comprise a multitude of
+ networking applications and the entire host hardware and software
+ execution environment.
+
+ SUT is the only entity that can be benchmarked directly, even
+ though only the performance of some of its sub-components is of
+ interest.
+
+4.4.2. DUT
+
+ Defined in Section 3.1.1 of [RFC2285] as follows.
+
+ Definition:
+
+ The network forwarding device to which stimulus is offered and
+ response measured.
+
+ Discussion:
+
+ Contrary to SUT, the DUT stimulus and response are frequently
+ initiated and observed only indirectly, on different parts of SUT.
+
+ DUT, as a sub-component of SUT, is only indirectly mentioned in
+ MLRsearch Specification, but is of key relevance for its
+ motivation. The device can represent software-based networking
+ functions running on commodity x86/ARM CPUs (as opposed to
+ purpose-built ASIC / NPU / FPGA hardware).
+
+ A well-designed SUT should have the primary DUT as its performance
+ bottleneck. The ways to achieve that are outside of the MLRsearch
+ Specification scope.
+
+4.4.3. Trial
+
+ A trial is the part of the test described in Section 23 of [RFC2544].
+
+ Definition:
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 19]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ A particular test consists of multiple trials. Each trial returns
+ one piece of information, for example the loss rate at a
+ particular input frame rate. Each trial consists of a number of
+ phases:
+
+ a) If the DUT is a router, send the routing update to the "input"
+ port and pause two seconds to be sure that the routing has
+ settled.
+
+ b) Send the "learning frames" to the "output" port and wait 2
+ seconds to be sure that the learning has settled. Bridge learning
+ frames are frames with source addresses that are the same as the
+ destination addresses used by the test frames. Learning frames
+ for other protocols are used to prime the address resolution
+ tables in the DUT. The formats of the learning frame that should
+ be used are shown in the Test Frame Formats document.
+
+ c) Run the test trial.
+
+ d) Wait for two seconds for any residual frames to be received.
+
+ e) Wait for at least five seconds for the DUT to restabilize.
+
+ Discussion:
+
+ The traffic is sent only in phase c) and received in phases c) and
+ d).
+
+ Trials are the only stimuli the SUT is expected to experience
+ during the Search.
+
+ In some discussion paragraphs, it is useful to consider the
+ traffic as sent and received by a tester, as implicitly defined in
+ Section 6 of [RFC2544].
+
+ The definition describes some traits, not using capitalized verbs
+ to signify strength of the requirements. For the purposes of the
+ MLRsearch Specification, the test procedure MAY deviate from the
+ [RFC2544] description, but any such deviation MUST be described
+ explicitly in the Test Report. It is still RECOMMENDED to not
+ deviate from the description, as any deviation weakens
+ comparability.
+
+ An example of deviation from [RFC2544] is using shorter wait
+ times, compared to those described in phases a), b), d) and e).
+
+ The [RFC2544] document itself seems to treat phase b) as any
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 20]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ type of configuration that cannot be performed only once (by the
+ Manager, before the Search starts), as some crucial SUT state could
+ time out during the Search. It is RECOMMENDED to interpret the
+ "learning frames" to be any such time-sensitive per-trial
+ configuration method, with bridge MAC learning being only one
+ possible example. Appendix C.2.4.1 of [RFC2544] lists another
+ example: ARP with a wait time of 5 seconds.
+
+ Some methodologies describe recurring tests. If those are based
+ on Trials, they are treated as multiple independent Trials.
+
+4.5. Trial Terms
+
+ This section defines new terms, and redefines existing terms, for
+ quantities relevant as inputs or outputs of a Trial, as used by the
+ Measurer component. This also includes any derived quantities
+ related to the results of one Trial.
+
+4.5.1. Trial Duration
+
+ Definition:
+
+ Trial Duration is the intended duration of the phase c) of a
+ Trial.
+
+ Discussion:
+
+ The value MUST be positive.
+
+ While any positive real value may be provided, some Measurer
+ implementations MAY limit possible values, e.g., by rounding down
+ to the nearest integer in seconds. In that case, it is RECOMMENDED
+ to communicate such limits to the Controller so that the Controller
+ only uses the accepted values.
+
+4.5.2. Trial Load
+
+ Definition:
+
+ Trial Load is the per-interface Intended Load for a Trial.
+
+ Discussion:
+
+ Trial Load is equivalent to the quantities defined as constant
+ load (Section 3.4 of [RFC1242]), data rate (Section 14 of
+ [RFC2544]), and Intended Load (Section 3.5.1 of [RFC2285]), in the
+ sense that all three definitions specify that this value applies
+ to one (input or output) interface.
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 21]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ For specification purposes, it is assumed that this is a constant
+ load by default, as specified in Section 3.4 of [RFC1242].
+ Informally, Trial Load is a single number that can "scale" any
+ traffic pattern as long as the intuition of load intended against
+ a single interface can be applied.
+
+ It MAY be possible to use a Trial Load value to describe non-
+ constant traffic (using the average load when the traffic consists
+ of repeated bursts of frames, e.g., as suggested in Section 21 of
+ [RFC2544]). In the case of a non-constant load, the Test Report
+ MUST explicitly mention how exactly the traffic is non-constant
+ and how it reacts to the Trial Load value. But the rest of the
+ MLRsearch Specification assumes that is not the case, to avoid
+ discussing corner cases (e.g., which values are possible within
+ medium limitations).
+
+ Similarly, traffic patterns where different interfaces are subject
+ to different loads MAY be described by a single Trial Load value
+ (e.g., using the largest load among interfaces), but again the Test
+ Report MUST explicitly describe how the traffic pattern reacts to
+ the Trial Load value, and this specification does not discuss all
+ the implications of that approach.
+
+ In the common case of bidirectional traffic, as described in
+ Section 14 of [RFC2544], Trial Load is the data rate per direction,
+ i.e., half of the aggregate data rate.
+
+ Traffic patterns where a single Trial Load does not describe their
+ scaling cannot be used for MLRsearch benchmarks.
+
+ Similarly to Trial Duration, some Measurers MAY limit the possible
+ values of Trial Load. Contrary to Trial Duration, documenting
+ such behavior in the test report is OPTIONAL. This is because the
+ load differences are negligible (and frequently undocumented) in
+ practice.
+
+ The Controller MAY select Trial Load and Trial Duration values in
+ a way that would not be possible to achieve using any integer
+ number of data frames.
+
+ If a particular Trial Load value is not tied to a single Trial,
+ e.g., if there are no Trials yet or if there are multiple Trials,
+ this document uses a shorthand *Load*.
+
+ The test report MAY present the aggregate load across multiple
+ interfaces, treating it as the same quantity expressed using
+ different units. Each reported Trial Load value MUST state
+ unambiguously whether it refers to (i) a single interface, (ii) a
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 22]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ specified subset of interfaces (e.g., all logical
+ interfaces mapped to one physical port), or (iii) the total across
+ every interface. For any aggregate load value, the report MUST
+ also give the fixed conversion factor that links the per-interface
+ and multi-interface load values.
+
+ The per-interface value remains the primary unit, consistent with
+ prevailing practice in [RFC1242], [RFC2544], and [RFC2285].
+
+ The last paragraph also applies to other terms related to Load.
+
+ For example, tests with symmetric bidirectional traffic can report
+ load-related values as "bidirectional load" (double of
+ "unidirectional load").
+
+4.5.3. Trial Input
+
+ Definition:
+
+ Trial Input is a composite quantity, consisting of two attributes:
+ Trial Duration and Trial Load.
+
+ Discussion:
+
+ When talking about multiple Trials, it is common to say "Trial
+ Inputs" to denote all corresponding Trial Input instances.
+
+ A Trial Input instance acts as the input for one call of the
+ Measurer component.
+
+ Contrary to other composite quantities, MLRsearch implementations
+ MUST NOT add optional attributes into Trial Input. This improves
+ interoperability between various implementations of a Controller
+ and a Measurer.
+
+ Note that both attributes are *intended* quantities, as only those
+ can be fully controlled by the Controller. The actual offered
+ quantities, as realized by the Measurer, can be different (and
+ must be different if the intended values do not multiply into an
+ integer number of frames), but questions around those offered
+ quantities are generally outside of the scope of this document.
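 A minimal sketch of this fixed two-attribute composite follows; the
 frozen dataclass is a hypothetical representation, as the
 specification prescribes the attributes, not any data structure.

```python
# Hypothetical representation of Trial Input as a frozen dataclass.
# Because MLRsearch forbids adding optional attributes to Trial
# Input, a composite with exactly these two fields fits naturally.
from dataclasses import dataclass

@dataclass(frozen=True)
class TrialInput:
    trial_duration: float  # intended duration in seconds, positive
    trial_load: float      # intended per-interface load, e.g. in fps

trial_input = TrialInput(trial_duration=60.0, trial_load=2.0e6)
```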
+
+4.5.4. Traffic Profile
+
+ Definition:
+
+ Traffic Profile is a composite quantity containing all attributes
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 23]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ other than Trial Load and Trial Duration, that are needed for
+ unique determination of the Trial to be performed.
+
+ Discussion:
+
+ All the attributes are assumed to be constant during the Search,
+ and the composite is configured on the Measurer by the Manager
+ before the Search starts. This is why the traffic profile is not
+ part of the Trial Input.
+
+ Specification of the traffic properties included in the Traffic
+ Profile is the responsibility of the Manager, but the specific
+ configuration mechanisms are outside of the scope of this
+ document.
+
+ Informally, implementations of the Manager and the Measurer must
+ be aware of their common set of capabilities, so that Traffic
+ Profile instance uniquely defines the traffic during the Search.
+ Typically, Manager and Measurer implementations are tightly
+ integrated.
+
+ Integration efforts between independent Manager and Measurer
+ implementations are outside of the scope of this document. An
+ example standardization effort is [Vassilev], a draft at the time
+ of writing.
+
+ Examples of traffic properties include:
+
+ * data link frame size: fixed sizes as listed in Section 3.5 of
+ [RFC1242] and in Section 9 of [RFC2544], or IMIX mixed sizes as
+ defined in [RFC6985],
+
+ * frame formats and protocol addresses: Sections 8 and 12 and
+ Appendix C of [RFC2544],
+
+ * symmetric bidirectional traffic: Section 14 of [RFC2544].
+
+ Other traffic properties that need to be somehow specified in
+ Traffic Profile, and MUST be mentioned in Test Report if they
+ apply to the benchmark, include:
+
+
+ * bidirectional traffic from Section 14 of [RFC2544],
+
+ * fully meshed traffic from Section 3.3.3 of [RFC2285],
+
+ * modifiers from Section 11 of [RFC2544],
+
+ * IP version mixing from Section 5.3 of [RFC8219].
+
+
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 24]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+4.5.5. Trial Forwarding Ratio
+
+ Definition:
+
+ The Trial Forwarding Ratio is a dimensionless floating point
+ value. It MUST range between 0.0 and 1.0, both inclusive. It is
+ calculated by dividing the number of frames successfully forwarded
+ by the SUT by the total number of frames expected to be forwarded
+ during the trial.
+
+ Discussion:
+
+ For most Traffic Profiles, "expected to be forwarded" means
+ "intended to be received by the SUT from the tester". This SHOULD
+ be the default interpretation. Only if this is not the case, the
+ test report MUST describe the Traffic Profile in detail sufficient
+ to imply how the Trial Forwarding Ratio should be calculated.
+
+ Trial Forwarding Ratio MAY be expressed in other units (e.g., as a
+ percentage) in the test report.
+
+ Note that, contrary to the Load terms, frame counts used to compute
+ Trial Forwarding Ratio are generally aggregates over all SUT
+ output interfaces, as most test procedures verify all outgoing
+ frames. The procedure for [RFC2544] Throughput counts received
+ frames, so it implicitly uses bidirectional counts for
+ bidirectional traffic, even though the final value is a "rate" that
+ is still per-interface.
+
+ For example, in a test with symmetric bidirectional traffic, if
+ one direction is forwarded without losses, but the opposite
+ direction does not forward at all, the Trial Forwarding Ratio
+ would be 0.5 (50%).
+
+ In future extensions, more general ways to compute Trial
+ Forwarding Ratio may be allowed, but the current MLRsearch
+ Specification relies on this specific averaged counters approach.
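 The bidirectional example above can be computed directly from
 aggregate frame counts. The helper name and inputs below are
 illustrative only, not part of the specification.

```python
# Sketch: Trial Forwarding Ratio from aggregate frame counts.
# Helper name and concrete counts are illustrative only.

def trial_forwarding_ratio(forwarded_frames, expected_frames):
    """Frames successfully forwarded by the SUT divided by frames
    expected to be forwarded during the trial."""
    return forwarded_frames / expected_frames

# Symmetric bidirectional traffic, 1_000_000 frames expected per
# direction: one direction forwards everything, the other nothing.
ratio = trial_forwarding_ratio(
    forwarded_frames=1_000_000 + 0,
    expected_frames=1_000_000 + 1_000_000,
)
# ratio == 0.5, i.e., 50%, matching the example in the text.
```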
+
+4.5.6. Trial Loss Ratio
+
+ Definition:
+
+ The Trial Loss Ratio is equal to one minus the Trial Forwarding
+ Ratio.
+
+ Discussion:
+
+ It equals 100% minus the Trial Forwarding Ratio, when expressed as a
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 25]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ percentage.
+
+ This is almost identical to Frame Loss Rate of Section 3.6 of
+ [RFC1242]. The only minor differences are that Trial Loss Ratio
+ does not need to be expressed as a percentage, and Trial Loss
+ Ratio is explicitly based on averaged frame counts when more than
+ one data stream is present.
+
+4.5.7. Trial Forwarding Rate
+
+ Definition:
+
+ The Trial Forwarding Rate is a derived quantity, calculated by
+ multiplying the Trial Load by the Trial Forwarding Ratio.
+
+ Discussion:
+
+ This quantity differs from the Forwarding Rate described in
+ Section 3.6.1 of [RFC2285]. Under the RFC 2285 method, each
+ output interface is measured separately, so every interface may
+ report a distinct rate. The Trial Forwarding Rate, by contrast,
+ uses a single set of frame counts and therefore yields one value
+ that represents the whole system, while still preserving the
+ direct link to the per-interface load.
+
+ When the Traffic Profile is symmetric and bidirectional, as
+ defined in Section 14 of [RFC2544], the Trial Forwarding Rate is
+ numerically equal to the arithmetic average of the individual per-
+ interface forwarding rates that would be produced by the RFC 2285
+ procedure.
+
+ For more complex traffic patterns, such as many-to-one as
+ mentioned in Section 3.3.2 Partially Meshed Traffic of [RFC2285],
+ the meaning of Trial Forwarding Rate is less straightforward. For
+ example, if two input interfaces receive one million frames per
+ second each, and a single interface outputs 1.4 million frames per
+ second (fps), Trial Load is 1 million fps, Trial Loss Ratio is
+ 30%, and Trial Forwarding Rate is 0.7 million fps.
+
+ Because this rate is anchored to the Load defined for one
+ interface, a test report MAY show it either as the single averaged
+ figure just described, or as the sum of the separate per-interface
+ forwarding rates. For the example above, the aggregate trial
+ forwarding rate is 1.4 million fps.
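 The two-to-one example above can be worked through numerically; the
 helper and variable names below are illustrative only.

```python
# Sketch of the derived quantities for the two-to-one example in the
# text: two input interfaces at 1.0e6 fps each, one output interface
# counted at 1.4e6 fps.

def trial_forwarding_rate(trial_load, forwarding_ratio):
    """Per-interface Trial Load times Trial Forwarding Ratio."""
    return trial_load * forwarding_ratio

trial_load = 1.0e6               # fps offered per input interface
offered_total = 2 * trial_load   # two input interfaces
forwarded_total = 1.4e6          # fps counted on the single output

forwarding_ratio = forwarded_total / offered_total  # = 0.7
loss_ratio = 1.0 - forwarding_ratio                 # = 0.3, i.e. 30%
per_interface_rate = trial_forwarding_rate(trial_load,
                                           forwarding_ratio)
aggregate_rate = 2 * per_interface_rate  # sum over input interfaces
```

 The per-interface rate comes out as 0.7 million fps and the
 aggregate as 1.4 million fps, matching the figures in the text.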
+
+
+
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 26]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+4.5.8. Trial Effective Duration
+
+ Definition:
+
+ Trial Effective Duration is a time quantity related to a Trial, by
+ default equal to the Trial Duration.
+
+ Discussion:
+
+ This is an optional feature. If the Measurer does not return any
+ Trial Effective Duration value, the Controller MUST use the Trial
+ Duration value instead.
+
+ Trial Effective Duration may be any positive time quantity chosen
+ by the Measurer to be used for time-based decisions in the
+ Controller.
+
+ The test report MUST explain how the Measurer computes the
+ returned Trial Effective Duration values, if they are not always
+ equal to the Trial Duration.
+
+ This feature can be beneficial for time-critical benchmarks
+ designed to manage the overall search duration, rather than solely
+ the traffic portion of it. An approach is to measure the duration
+ of the whole trial (including all wait times) and use that as the
+ Trial Effective Duration.
+
+ This is also a way for the Measurer to inform the Controller about
+ its surprising behavior, for example, when rounding the Trial
+ Duration value.
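 The required fallback (the Controller substituting the Trial
 Duration when the Measurer returns no Trial Effective Duration) can
 be sketched as follows; the dictionary shape of Trial Output is
 hypothetical.

```python
# Sketch of the Controller-side fallback: when the Measurer does not
# return a Trial Effective Duration, the Trial Duration value MUST
# be used instead. The dictionary shape here is hypothetical.

def effective_duration(trial_output, trial_duration):
    value = trial_output.get("effective_duration")
    return value if value is not None else trial_duration

# A Measurer tracking whole-trial wall-clock time (including waits):
with_feature = effective_duration({"effective_duration": 71.0}, 60.0)
# A Measurer without the optional feature:
without_feature = effective_duration({}, 60.0)
```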
+
+4.5.9. Trial Output
+
+ Definition:
+
+ Trial Output is a composite quantity consisting of several
+ attributes. Required attributes are: Trial Loss Ratio, Trial
+ Effective Duration and Trial Forwarding Rate.
+
+ Discussion:
+
+ When referring to more than one trial, the plural term "Trial
+ Outputs" is used to collectively describe multiple Trial Output
+ instances.
+
+ Measurer implementations MAY provide additional optional
+ attributes. Controller implementations SHOULD ignore values of
+ any optional attribute they are not familiar with, except when
+ passing Trial Output instances to the Manager.
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 27]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ Example of an optional attribute: The aggregate number of frames
+ expected to be forwarded during the trial, especially if it is not
+ (a rounded-down value) implied by Trial Load and Trial Duration.
+
+ While Section 3.5.2 of [RFC2285] requires the Offered Load value
+ to be reported for forwarding rate measurements, it is not
+ required in MLRsearch Specification, as search results do not
+ depend on it.
+
+4.5.10. Trial Result
+
+ Definition:
+
+ Trial Result is a composite quantity, consisting of the Trial
+ Input and the Trial Output.
+
+ Discussion:
+
+ When referring to more than one trial, the plural term "Trial
+ Results" is used to collectively describe multiple Trial Result
+ instances.
+
+4.6. Goal Terms
+
+ This section defines new terms for quantities relevant (directly or
+ indirectly) for inputs and outputs of the Controller component.
+
+ Several goal attributes are defined before introducing the main
+ composite quantity: the Search Goal.
+
+ Contrary to other sections, definitions in subsections of this
+ section are necessarily vague, as their fundamental meaning is to act
+ as coefficients in formulas for Controller Output, which are not
+ defined yet.
+
+ The discussions in this section relate the attributes to concepts
+ mentioned in Section Overview of RFC 2544 Problems (Section 2), but
+ even these discussion paragraphs are short, informal, and mostly
+ referencing later sections, where the impact on search results is
+ discussed after introducing the complete set of auxiliary terms.
+
+4.6.1. Goal Final Trial Duration
+
+ Definition:
+
+ Minimal value for Trial Duration that must be reached. The value
+ MUST be positive.
+
+ Discussion:
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 28]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ Certain trials must reach this minimum duration before a load can
+ be classified as a lower bound.
+
+ The Controller may choose shorter durations; the results of those
+ may be enough for classification as an Upper Bound.
+
+ It is RECOMMENDED for all search goals to share the same Goal
+ Final Trial Duration value. Otherwise, Trial Duration values
+ larger than the Goal Final Trial Duration may occur, weakening the
+ assumptions the Load Classification Logic (Section 6.1) is based
+ on.
+
+4.6.2. Goal Duration Sum
+
+ Definition:
+
+ A threshold value for a particular sum of Trial Effective Duration
+ values. The value MUST be positive.
+
+ Discussion:
+
+ Informally, this prescribes the sufficient number of trials
+ performed at a specific Trial Load and Goal Final Trial Duration
+ during the search.
+
+ If the Goal Duration Sum is larger than the Goal Final Trial
+ Duration, multiple trials may need to be performed at the same
+ load.
+
+ Refer to Section MLRsearch Compliant with TST009 (Section 4.10.3)
+ for an example where the possibility of multiple trials at the
+ same load is intended.
+
+ A Goal Duration Sum value shorter than the Goal Final Trial
+ Duration (of the same goal) could save some search time, but is
+ NOT RECOMMENDED, as the time savings come at the cost of decreased
+ repeatability.
+
+ In practice, the Search can spend less than Goal Duration Sum
+ measuring a Load value when the results are particularly one-
+ sided, but also, the Search can spend more than Goal Duration Sum
+ measuring a Load when the results are balanced and include trials
+ shorter than Goal Final Trial Duration.
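 Under the simplifying assumption that every trial runs at exactly
 the Goal Final Trial Duration, the number of trials implied by a
 Goal Duration Sum is a simple ceiling. The helper below is
 illustrative only; the Controller's real accounting also handles
 shorter trials contributing to the sum.

```python
# Illustrative only: number of full-length trials needed to reach a
# Goal Duration Sum when every trial runs at exactly the Goal Final
# Trial Duration.
import math

def full_length_trials_needed(goal_duration_sum,
                              goal_final_trial_duration):
    return math.ceil(goal_duration_sum / goal_final_trial_duration)

# A 120 s duration sum with 60 s trials needs 2 trials;
# a 121 s duration sum would need 3.
two = full_length_trials_needed(120.0, 60.0)
three = full_length_trials_needed(121.0, 60.0)
```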
+
+4.6.3. Goal Loss Ratio
+
+ Definition:
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 29]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ A threshold value for Trial Loss Ratio values. The value MUST be
+ non-negative and smaller than one.
+
+ Discussion:
+
+ A trial with Trial Loss Ratio larger than this value signals the
+ SUT may be unable to process this Trial Load well enough.
+
+ See Throughput with Non-Zero Loss (Section 2.4) for reasons why
+ users may want to set this value above zero.
+
+ Since multiple trials may be needed for one Load value, the Load
+ Classification may be more complicated than mere comparison of
+ Trial Loss Ratio to Goal Loss Ratio.
+
+4.6.4. Goal Exceed Ratio
+
+ Definition:
+
+ A threshold value for a particular ratio of sums of Trial
+ Effective Duration values. The value MUST be non-negative and
+ smaller than one.
+
+ Discussion:
+
+      Informally, up to this proportion of Trial Results with Trial Loss
+      Ratio above Goal Loss Ratio is tolerated at a Lower Bound.  This
+      description is exact only if every Trial is measured at Goal Final
+      Trial Duration.  The actual logic is more complicated, as shorter
+      Trials are also allowed.
+
+ For explainability reasons, the RECOMMENDED value for exceed ratio
+ is 0.5 (50%), as in practice that value leads to the smallest
+ variation in overall Search Duration.
+
+ Refer to Section Exceed Ratio and Multiple Trials (Section 5.4)
+ for more details.
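   Under the simplifying assumption that every trial is full-length, the
   tolerated proportion can be sketched as below (names are illustrative
   and not part of the specification):

```python
def high_loss_proportion(trial_loss_ratios, goal_loss_ratio):
    """Proportion of high-loss trials, all assumed full-length."""
    high = sum(1 for r in trial_loss_ratios if r > goal_loss_ratio)
    return high / len(trial_loss_ratios)

# With Goal Exceed Ratio 0.5, one high-loss trial out of two
# is still tolerated at a Lower Bound:
proportion = high_loss_proportion([0.0, 0.02], goal_loss_ratio=0.0)
print(proportion <= 0.5)  # -> True
```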
+
+4.6.5. Goal Width
+
+ Definition:
+
+ A threshold value for deciding whether two Trial Load values are
+ close enough. This is an OPTIONAL attribute. If present, the
+ value MUST be positive.
+
+ Discussion:
+
+ Informally, this acts as a stopping condition, controlling the
+ precision of the search result. The search stops if every goal
+ has reached its precision.
+
+ Implementations without this attribute MUST provide the Controller
+ with other means to control the search stopping conditions.
+
+ Absolute load difference and relative load difference are two
+ popular choices, but implementations may choose a different way to
+ specify width.
+
+ The test report MUST make it clear what specific quantity is used
+ as Goal Width.
+
+      It is RECOMMENDED to express Goal Width as a relative difference
+      and to set it to a value not lower than the Goal Loss Ratio.
+
+ Refer to Section Generalized Throughput (Section 5.6) for more
+ elaboration on the reasoning.
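   For instance, with Goal Width expressed as a relative difference, a
   stopping check might look like this sketch (the choice of denominator
   is implementation-specific; the names are assumptions):

```python
def precision_reached(relevant_lower_bound, relevant_upper_bound,
                      goal_width):
    """Relative-difference check: the bounds are close enough when
    their difference, relative to the upper bound, is within Goal
    Width."""
    gap = relevant_upper_bound - relevant_lower_bound
    return gap / relevant_upper_bound <= goal_width

# Bounds of 9.9 and 10.0 Mpps satisfy a 1% Goal Width:
print(precision_reached(9.9e6, 10.0e6, 0.01))  # -> True
```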
+
+4.6.6. Goal Initial Trial Duration
+
+ Definition:
+
+      The minimal Trial Duration value suggested for use with this
+      goal.  If present, this value MUST be positive.
+
+ Discussion:
+
+      This is an example of an optional Search Goal attribute.
+
+ A typical default value is equal to the Goal Final Trial Duration
+ value.
+
+ Informally, this is the shortest Trial Duration the Controller
+ should select when focusing on the goal.
+
+ Note that shorter Trial Duration values can still be used, for
+ example, selected while focusing on a different Search Goal. Such
+      results MUST still be accepted by the Load Classification logic.
+
+ Goal Initial Trial Duration is a mechanism for a user to
+ discourage trials with Trial Duration values deemed as too
+ unreliable for a particular SUT and a given Search Goal.
+
+4.6.7. Search Goal
+
+ Definition:
+
+      The Search Goal is a composite quantity consisting of several
+      attributes, some of which are required.
+
+ Required attributes: Goal Final Trial Duration, Goal Duration Sum,
+ Goal Loss Ratio and Goal Exceed Ratio.
+
+ Optional attributes: Goal Initial Trial Duration and Goal Width.
+
+ Discussion:
+
+ Implementations MAY add their own attributes. Those additional
+ attributes may be required by an implementation even if they are
+ not required by MLRsearch Specification. However, it is
+ RECOMMENDED for those implementations to support missing
+ attributes by providing typical default values.
+
+ For example, implementations with Goal Initial Trial Durations may
+ also require users to specify "how quickly" should Trial Durations
+ increase.
+
+      Refer to Section 4.10 for important Search Goal settings.
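   The attribute structure above can be summarized in a sketch like the
   following (a hypothetical rendering, not a normative data model;
   durations in seconds, ratios as fractions):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SearchGoal:
    # Required attributes:
    goal_final_trial_duration: float  # positive
    goal_duration_sum: float          # positive
    goal_loss_ratio: float            # non-negative, smaller than one
    goal_exceed_ratio: float          # non-negative, smaller than one
    # Optional attributes:
    goal_width: Optional[float] = None
    goal_initial_trial_duration: Optional[float] = None

    def __post_init__(self):
        if self.goal_final_trial_duration <= 0 or self.goal_duration_sum <= 0:
            raise ValueError("durations MUST be positive")
        if not (0 <= self.goal_loss_ratio < 1
                and 0 <= self.goal_exceed_ratio < 1):
            raise ValueError("ratios MUST be non-negative, smaller than one")

# The unconditionally RFC 2544 compliant goal from Section 4.10.2:
goal = SearchGoal(60.0, 60.0, 0.0, 0.0)
print(goal.goal_width)  # -> None
```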
+
+4.6.8. Controller Input
+
+ Definition:
+
+ Controller Input is a composite quantity required as an input for
+ the Controller. The only REQUIRED attribute is a list of Search
+ Goal instances.
+
+ Discussion:
+
+ MLRsearch implementations MAY use additional attributes. Those
+ additional attributes may be required by an implementation even if
+ they are not required by MLRsearch Specification.
+
+ Formally, the Manager does not apply any Controller configuration
+ apart from one Controller Input instance.
+
+ For example, Traffic Profile is configured on the Measurer by the
+ Manager, without explicit assistance of the Controller.
+
+ The order of Search Goal instances in a list SHOULD NOT have a big
+ impact on Controller Output, but MLRsearch implementations MAY
+ base their behavior on the order of Search Goal instances in a
+ list.
+
+4.6.8.1. Max Load
+
+ Definition:
+
+ Max Load is an optional attribute of Controller Input. It is the
+ maximal value the Controller is allowed to use for Trial Load
+ values.
+
+ Discussion:
+
+ Max Load is an example of an optional attribute (outside the list
+ of Search Goals) required by some implementations of MLRsearch.
+
+      If the Max Load value is provided, the Controller MUST NOT select
+ Trial Load values larger than that value.
+
+ In theory, each search goal could have its own Max Load value, but
+      as any Trial Result can potentially affect all Search Goals, it
+ makes more sense for a single Max Load value to apply to all
+ Search Goal instances.
+
+ While Max Load is a frequently used configuration parameter,
+ already governed (as maximum frame rate) by [RFC2544] (Section 20)
+ and (as maximum offered load) by [RFC2285] (Section 3.5.3), some
+ implementations may detect or discover it (instead of requiring a
+ user-supplied value).
+
+ In MLRsearch Specification, one reason for listing the Relevant
+ Upper Bound (Section 4.8.1) as a required attribute is that it
+ makes the search result independent of Max Load value.
+
+ Given that Max Load is a quantity based on Load, Test Report MAY
+ express this quantity using multi-interface values, as sum of per-
+ interface maximal loads.
+
+4.6.8.2. Min Load
+
+ Definition:
+
+ Min Load is an optional attribute of Controller Input. It is the
+ minimal value the Controller is allowed to use for Trial Load
+ values.
+
+ Discussion:
+
+ Min Load is another example of an optional attribute required by
+ some implementations of MLRsearch. Similarly to Max Load, it
+ makes more sense to prescribe one common value, as opposed to
+ using a different value for each Search Goal.
+
+      If the Min Load value is provided, the Controller MUST NOT select
+ Trial Load values smaller than that value.
+
+ Min Load is mainly useful for saving time by failing early,
+ arriving at an Irregular Goal Result when Min Load gets classified
+ as an Upper Bound.
+
+ For implementations, it is RECOMMENDED to require Min Load to be
+ non-zero and large enough to result in at least one frame being
+ forwarded even at shortest allowed Trial Duration, so that Trial
+ Loss Ratio is always well-defined, and the implementation can
+ apply relative Goal Width safely.
+
+ Given that Min Load is a quantity based on Load, Test Report MAY
+ express this quantity using multi-interface values, as sum of per-
+ interface minimal loads.
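   Putting the pieces together, a Controller Input with the optional
   load limits can be sketched as follows (illustrative only; the
   attribute and method names are assumptions):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ControllerInput:
    search_goals: List[object]        # REQUIRED: Search Goal instances
    max_load: Optional[float] = None  # optional upper limit on Trial Load
    min_load: Optional[float] = None  # optional lower limit on Trial Load

    def clamp(self, candidate_load: float) -> float:
        """Keep a candidate Trial Load within [Min Load, Max Load]."""
        if self.max_load is not None:
            candidate_load = min(candidate_load, self.max_load)
        if self.min_load is not None:
            candidate_load = max(candidate_load, self.min_load)
        return candidate_load

ci = ControllerInput(search_goals=["goal"], max_load=10e6, min_load=1e4)
print(ci.clamp(12e6))  # -> 10000000.0
```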
+
+4.7. Auxiliary Terms
+
+ While the terms defined in this section are not strictly needed when
+ formulating MLRsearch requirements, they simplify the language used
+ in discussion paragraphs and explanation sections.
+
+4.7.1. Trial Classification
+
+ When one Trial Result instance is compared to one Search Goal
+ instance, several relations can be named using short adjectives.
+
+ As trial results do not affect each other, this *Trial
+ Classification* does not change during a Search.
+
+4.7.1.1. High-Loss Trial
+
+ A trial with Trial Loss Ratio larger than a Goal Loss Ratio value is
+ called a *high-loss trial*, with respect to given Search Goal (or
+ lossy trial, if Goal Loss Ratio is zero).
+
+4.7.1.2. Low-Loss Trial
+
+ If a trial is not high-loss, it is called a *low-loss trial* (or
+ zero-loss trial, if Goal Loss Ratio is zero).
+
+4.7.1.3. Short Trial
+
+ A trial with Trial Duration shorter than the Goal Final Trial
+ Duration is called a *short trial* (with respect to the given Search
+ Goal).
+
+4.7.1.4. Full-Length Trial
+
+ A trial that is not short is called a *full-length* trial.
+
+ Note that this includes Trial Durations larger than Goal Final Trial
+ Duration.
+
+4.7.1.5. Long Trial
+
+ A trial with Trial Duration longer than the Goal Final Trial Duration
+ is called a *long trial*.
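   The adjectives above can be combined into a small classification
   sketch (non-normative; note that long trials are also full-length):

```python
def classify_trial(trial_loss_ratio, trial_duration,
                   goal_loss_ratio, goal_final_trial_duration):
    """Return the Section 4.7.1 adjectives for one trial and one goal."""
    adjectives = []
    if trial_loss_ratio > goal_loss_ratio:
        adjectives.append("high-loss")
    else:
        adjectives.append("low-loss")
    if trial_duration < goal_final_trial_duration:
        adjectives.append("short")
    else:
        adjectives.append("full-length")
        if trial_duration > goal_final_trial_duration:
            adjectives.append("long")
    return adjectives

print(classify_trial(0.0, 60.0, 0.005, 60.0))
# -> ['low-loss', 'full-length']
```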
+
+4.7.2. Load Classification
+
+ When a set of all Trial Result instances, performed so far at one
+ Trial Load, is compared to one Search Goal instance, their relation
+ can be named using the concept of a bound.
+
+   In general, such bounds are a current quantity, even though cases of
+   a Load changing its classification more than once during the Search
+   are rare in practice.
+
+4.7.2.1. Upper Bound
+
+ Definition:
+
+      A Load value is called an Upper Bound if and only if it is
+      classified as such by the algorithm in Appendix A for the given
+      Search Goal at the current moment of the Search.
+
+ Discussion:
+
+ In more detail, the set of all Trial Result instances performed so
+ far at the Trial Load (and any Trial Duration) is certain to fail
+ to uphold all the requirements of the given Search Goal, mainly
+ the Goal Loss Ratio in combination with the Goal Exceed Ratio. In
+ this context, "certain to fail" relates to any possible results
+ within the time remaining till Goal Duration Sum.
+
+ One search goal can have multiple different Trial Load values
+ classified as its Upper Bounds. While search progresses and more
+ trials are measured, any load value can become an Upper Bound in
+ principle.
+
+ Moreover, a Load can stop being an Upper Bound, but that can only
+ happen when more than Goal Duration Sum of trials are measured
+ (e.g., because another Search Goal needs more trials at this
+ load). Informally, the previous Upper Bound got invalidated. In
+ practice, the Load frequently becomes a Lower Bound
+ (Section 4.7.2.2) instead.
+
+4.7.2.2. Lower Bound
+
+ Definition:
+
+      A Load value is called a Lower Bound if and only if it is
+      classified as such by the algorithm in Appendix A for the given
+      Search Goal at the current moment of the Search.
+
+ Discussion:
+
+ In more detail, the set of all Trial Result instances performed so
+ far at the Trial Load (and any Trial Duration) is certain to
+ uphold all the requirements of the given Search Goal, mainly the
+ Goal Loss Ratio in combination with the Goal Exceed Ratio. Here
+ "certain to uphold" relates to any possible results within the
+ time remaining till Goal Duration Sum.
+
+ One search goal can have multiple different Trial Load values
+ classified as its Lower Bounds. As search progresses and more
+ trials are measured, any load value can become a Lower Bound in
+ principle.
+
+ No load can be both an Upper Bound and a Lower Bound for the same
+      Search Goal at the same time, but it is possible for a larger load
+ to be a Lower Bound while a smaller load is an Upper Bound.
+
+ Moreover, a Load can stop being a Lower Bound, but that can only
+ happen when more than Goal Duration Sum of trials are measured
+ (e.g., because another Search Goal needs more trials at this
+ load). Informally, the previous Lower Bound got invalidated. In
+ practice, the Load frequently becomes an Upper Bound
+ (Section 4.7.2.1) instead.
+
+4.7.2.3. Undecided
+
+ Definition:
+
+ A Load value is called Undecided if it is currently neither an
+ Upper Bound nor a Lower Bound.
+
+ Discussion:
+
+ A Load value that has not been measured so far is Undecided.
+
+ It is possible for a Load to transition from an Upper Bound to
+ Undecided by adding Short Trials with Low-Loss results. That is
+ yet another reason for users to avoid using Search Goal instances
+ with different Goal Final Trial Duration values.
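   The "certain to fail" and "certain to uphold" conditions can be
   illustrated with a simplified sketch that assumes all trials are
   full-length; the normative logic, which also handles short trials, is
   the Appendix A algorithm:

```python
def classify_load(trial_results, goal_loss_ratio, goal_exceed_ratio,
                  goal_duration_sum):
    """trial_results: list of (trial_loss_ratio, effective_duration).
    Returns 'upper bound', 'lower bound', or 'undecided'."""
    bad = sum(d for r, d in trial_results if r > goal_loss_ratio)
    good = sum(d for r, d in trial_results if r <= goal_loss_ratio)
    remaining = max(0.0, goal_duration_sum - bad - good)
    total = bad + good + remaining
    # Optimistic case: all remaining time produces low-loss trials.
    best_exceed = bad / total
    # Pessimistic case: all remaining time produces high-loss trials.
    worst_exceed = (bad + remaining) / total
    if best_exceed > goal_exceed_ratio:
        return "upper bound"  # certain to fail the goal
    if worst_exceed <= goal_exceed_ratio:
        return "lower bound"  # certain to uphold the goal
    return "undecided"

# One zero-loss 60 s trial already decides a 120 s duration sum
# with exceed ratio 0.5:
print(classify_load([(0.0, 60.0)], 0.0, 0.5, 120.0))  # -> lower bound
```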
+
+4.8. Result Terms
+
+ Before defining the full structure of a Controller Output, it is
+ useful to define the composite quantity, called Goal Result. The
+   following subsections define its attributes first, before describing
+ the Goal Result quantity.
+
+ There is a correspondence between Search Goals and Goal Results.
+ Most of the following subsections refer to a given Search Goal, when
+ defining their terms. Conversely, at the end of the search, each
+ Search Goal instance has its corresponding Goal Result instance.
+
+4.8.1. Relevant Upper Bound
+
+ Definition:
+
+ The Relevant Upper Bound is the smallest Trial Load value
+ classified as an Upper Bound for a given Search Goal at the end of
+ the Search.
+
+ Discussion:
+
+ If no measured load had enough High-Loss Trials, the Relevant
+ Upper Bound MAY be non-existent. For example, when Max Load is
+ classified as a Lower Bound.
+
+ Conversely, when Relevant Upper Bound does exist, it is not
+ affected by Max Load value.
+
+ Given that Relevant Upper Bound is a quantity based on Load, Test
+ Report MAY express this quantity using multi-interface values, as
+ sum of per-interface loads.
+
+4.8.2. Relevant Lower Bound
+
+ Definition:
+
+ The Relevant Lower Bound is the largest Trial Load value among
+ those smaller than the Relevant Upper Bound, that got classified
+ as a Lower Bound for a given Search Goal at the end of the search.
+
+ Discussion:
+
+ If no load had enough Low-Loss Trials, the Relevant Lower Bound
+ MAY be non-existent.
+
+ Strictly speaking, if the Relevant Upper Bound does not exist, the
+ Relevant Lower Bound also does not exist. In a typical case, Max
+ Load is classified as a Lower Bound, making it impossible to
+ increase the Load to continue the search for an Upper Bound.
+ Thus, it is not clear whether a larger value would be found for a
+ Relevant Lower Bound if larger Loads were possible.
+
+ Given that Relevant Lower Bound is a quantity based on Load, Test
+ Report MAY express this quantity using multi-interface values, as
+ sum of per-interface loads.
+
+4.8.3. Conditional Throughput
+
+ Definition:
+
+      Conditional Throughput is a value computed at the Relevant Lower
+      Bound according to the algorithm defined in Appendix B.
+
+ Discussion:
+
+ The Relevant Lower Bound is defined only at the end of the Search,
+ and so is the Conditional Throughput. But the algorithm can be
+ applied at any time on any Lower Bound load, so the final
+ Conditional Throughput value may appear sooner than at the end of
+ a Search.
+
+ Informally, the Conditional Throughput should be a typical Trial
+ Forwarding Rate, expected to be seen at the Relevant Lower Bound
+ of a given Search Goal.
+
+ But frequently it is only a conservative estimate thereof, as
+ MLRsearch implementations tend to stop measuring more Trials as
+ soon as they confirm the value cannot get worse than this estimate
+ within the Goal Duration Sum.
+
+ This value is RECOMMENDED to be used when evaluating repeatability
+ and comparability of different MLRsearch implementations.
+
+ Refer to Section Generalized Throughput (Section 5.6) for more
+ details.
+
+ Given that Conditional Throughput is a quantity based on Load,
+ Test Report MAY express this quantity using multi-interface
+ values, as sum of per-interface forwarding rates.
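   The idea can be sketched as a duration-weighted quantile of per-trial
   forwarding rates.  This simplification assumes all trials at the
   Relevant Lower Bound are full-length; the normative computation is
   the Appendix B algorithm:

```python
def conditional_throughput(relevant_lower_bound, trials,
                           goal_exceed_ratio):
    """trials: list of (trial_loss_ratio, effective_duration), all
    assumed full-length.  Walk trials from best to worst loss ratio
    until the non-tolerated proportion of duration is exhausted."""
    total = sum(duration for _, duration in trials)
    remaining = total * (1.0 - goal_exceed_ratio)
    for loss_ratio, duration in sorted(trials):
        remaining -= duration
        if remaining <= 0.0:
            return relevant_lower_bound * (1.0 - loss_ratio)
    raise ValueError("trials list must not be empty")

# With exceed ratio 0.5, the worse of two trials is ignored:
print(conditional_throughput(1.0e6, [(0.0, 60.0), (0.02, 60.0)], 0.5))
# -> 1000000.0
```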
+
+4.8.4. Goal Results
+
+ MLRsearch Specification is based on a set of requirements for a
+   "regular" result.  But in practice, it is not always possible for
+   such a result instance to exist, so "irregular" results also need to
+   be supported.
+
+4.8.4.1. Regular Goal Result
+
+ Definition:
+
+ Regular Goal Result is a composite quantity consisting of several
+ attributes. Relevant Upper Bound and Relevant Lower Bound are
+ REQUIRED attributes. Conditional Throughput is a RECOMMENDED
+ attribute.
+
+ Discussion:
+
+ Implementations MAY add their own attributes.
+
+ Test report MUST display Relevant Lower Bound. Displaying
+ Relevant Upper Bound is RECOMMENDED, especially if the
+ implementation does not use Goal Width.
+
+ In general, stopping conditions for the corresponding Search Goal
+ MUST be satisfied to produce a Regular Goal Result. Specifically,
+ if an implementation offers Goal Width as a Search Goal attribute,
+ the distance between the Relevant Lower Bound and the Relevant
+ Upper Bound MUST NOT be larger than the Goal Width.
+
+ For stopping conditions refer to Sections Goal Width
+ (Section 4.6.5) and Stopping Conditions and Precision
+ (Section 5.2).
+
+4.8.4.2. Irregular Goal Result
+
+ Definition:
+
+ Irregular Goal Result is a composite quantity. No attributes are
+ required.
+
+ Discussion:
+
+ It is RECOMMENDED to report any useful quantity even if it does
+ not satisfy all the requirements. For example, if Max Load is
+ classified as a Lower Bound, it is fine to report it as an
+ "effective" Relevant Lower Bound (although not a real one, as that
+ requires Relevant Upper Bound which does not exist in this case),
+ and compute Conditional Throughput for it. In this case, only the
+ missing Relevant Upper Bound signals this result instance is
+ irregular.
+
+ Similarly, if both relevant bounds exist, it is RECOMMENDED to
+ include them as Irregular Goal Result attributes, and let the
+ Manager decide if their distance is too far for Test Report
+ purposes.
+
+ If Test Report displays some Irregular Goal Result attribute
+ values, they MUST be clearly marked as coming from irregular
+ results.
+
+ The implementation MAY define additional attributes, for example
+ explicit flags for expected situations, so the Manager logic can
+ be simpler.
+
+4.8.4.3. Goal Result
+
+ Definition:
+
+ Goal Result is a composite quantity. Each instance is either a
+ Regular Goal Result or an Irregular Goal Result.
+
+ Discussion:
+
+ The Manager MUST be able to distinguish whether the instance is
+ regular or not.
+
+4.8.5. Search Result
+
+ Definition:
+
+ The Search Result is a single composite object that maps each
+ Search Goal instance to a corresponding Goal Result instance.
+
+ Discussion:
+
+ As an alternative to mapping, the Search Result may be represented
+ as an ordered list of Goal Result instances that appears in the
+ exact sequence of their corresponding Search Goal instances.
+
+ When the Search Result is expressed as a mapping, it MUST contain
+ an entry for every Search Goal instance supplied in the Controller
+ Input.
+
+ Identical Goal Result instances MAY be listed for different Search
+ Goals, but their status as regular or irregular may be different.
+ For example, if two goals differ only in Goal Width value, and the
+ relevant bound values are close enough according to only one of
+ them.
+
+4.8.6. Controller Output
+
+ Definition:
+
+ The Controller Output is a composite quantity returned from the
+ Controller to the Manager at the end of the search. The Search
+ Result instance is its only required attribute.
+
+ Discussion:
+
+ MLRsearch implementation MAY return additional data in the
+ Controller Output, e.g., number of trials performed and the total
+ Search Duration.
+
+4.9. Architecture Terms
+
+ MLRsearch architecture consists of three main system components: the
+ Manager, the Controller, and the Measurer. The components were
+ introduced in Architecture Overview (Section 4.2), and the following
+ subsections finalize their definitions using terms from previous
+ sections.
+
+ Note that the architecture also implies the presence of other
+ components, such as the SUT and the tester (as a sub-component of the
+ Measurer).
+
+ Communication protocols and interfaces between components are left
+ unspecified. For example, when MLRsearch Specification mentions
+ "Controller calls Measurer", it is possible that the Controller
+ notifies the Manager to call the Measurer indirectly instead. In
+ doing so, the Measurer implementations can be fully independent from
+ the Controller implementations, e.g., developed in different
+ programming languages.
+
+4.9.1. Measurer
+
+ Definition:
+
+ The Measurer is a functional element that when called with a Trial
+ Input (Section 4.5.3) instance, performs one Trial (Section 4.4.3)
+ and returns a Trial Output (Section 4.5.9) instance.
+
+ Discussion:
+
+ This definition assumes the Measurer is already initialized. In
+ practice, there may be additional steps before the Search, e.g.,
+ when the Manager configures the traffic profile (either on the
+ Measurer or on its tester sub-component directly) and performs a
+ warm-up (if the tester or the test procedure requires one).
+
+ It is the responsibility of the Measurer implementation to uphold
+ any requirements and assumptions present in MLRsearch
+ Specification, e.g., Trial Forwarding Ratio not being larger than
+ one.
+
+ Implementers have some freedom. For example, Section 10 of
+ [RFC2544] gives some suggestions (but not requirements) related to
+ duplicated or reordered frames. Implementations are RECOMMENDED
+ to document their behavior related to such freedoms in as detailed
+ a way as possible.
+
+ It is RECOMMENDED to benchmark the test equipment first, e.g.,
+ connect sender and receiver directly (without any SUT in the
+ path), find a load value that guarantees the Offered Load is not
+ too far from the Intended Load and use that value as the Max Load
+ value. When testing the real SUT, it is RECOMMENDED to turn any
+ severe deviation between the Intended Load and the Offered Load
+ into increased Trial Loss Ratio.
+
+ Neither of the two recommendations are made into mandatory
+ requirements, because it is not easy to provide guidance about
+ when the difference is severe enough, in a way that would be
+ disentangled from other Measurer freedoms.
+
+ For a sample situation where the Offered Load cannot keep up with
+ the Intended Load, and the consequences on MLRsearch result, refer
+ to Section Hard Performance Limit (Section 5.6.1).
+
+4.9.2. Controller
+
+ Definition:
+
+ The Controller is a functional element that, upon receiving a
+ Controller Input instance, repeatedly generates Trial Input
+ instances for the Measurer and collects the corresponding Trial
+ Output instances. This cycle continues until the stopping
+ conditions are met, at which point the Controller produces a final
+ Controller Output instance and terminates.
+
+ Discussion:
+
+ Informally, the Controller has big freedom in selection of Trial
+ Inputs, and the implementations want to achieve all the Search
+ Goals in the shortest average time.
+
+ The Controller's role in optimizing the overall Search Duration
+ distinguishes MLRsearch algorithms from simpler search procedures.
+
+ Informally, each implementation can have different stopping
+ conditions. Goal Width is only one example. In practice,
+ implementation details do not matter, as long as Goal Result
+ instances are regular.
+
+4.9.3. Manager
+
+ Definition:
+
+      The Manager is a functional element that is responsible for
+ provisioning other components, calling a Controller component
+ once, and for creating the test report following the reporting
+ format as defined in Section 26 of [RFC2544].
+
+ Discussion:
+
+ The Manager initializes the SUT, the Measurer (and the tester if
+ independent from Measurer) with their intended configurations
+ before calling the Controller.
+
+ Note that Section 7 of [RFC2544] already puts requirements on SUT
+ setups:
+
+ "It is expected that all of the tests will be run without changing
+ the configuration or setup of the DUT in any way other than that
+ required to do the specific test. For example, it is not
+ acceptable to change the size of frame handling buffers between
+ tests of frame handling rates or to disable all but one transport
+ protocol when testing the throughput of that protocol."
+
+ It is REQUIRED for the test report to encompass all the SUT
+ configuration details, including description of a "default"
+ configuration common for most tests and configuration changes if
+ required by a specific test.
+
+ For example, Section 5.1.1 of [RFC5180] recommends testing jumbo
+ frames if SUT can forward them, even though they are outside the
+ scope of the 802.3 IEEE standard. In this case, it is acceptable
+ for the SUT default configuration to not support jumbo frames, and
+ only enable this support when testing jumbo traffic profiles, as
+ the handling of jumbo frames typically has different packet buffer
+ requirements and potentially higher processing overhead. Non-
+ jumbo frame sizes should also be tested on the jumbo-enabled
+ setup.
+
+ The Manager does not need to be able to tweak any Search Goal
+ attributes, but it MUST report all applied attribute values even
+ if not tweaked.
+
+ A "user" - human or automated - invokes the Manager once to launch
+ a single Search and receive its report. Every new invocation is
+ treated as a fresh, independent Search; how the system behaves
+ across multiple calls (for example, combining or comparing their
+ results) is explicitly out of scope for this document.
+
+4.10. Compliance
+
+ This section discusses compliance relations between MLRsearch and
+ other test procedures.
+
+4.10.1. Test Procedure Compliant with MLRsearch
+
+ Any networking measurement setup that could be understood as
+ consisting of functional elements satisfying requirements for the
+ Measurer, the Controller and the Manager, is compliant with MLRsearch
+ Specification.
+
+ These components can be seen as abstractions present in any testing
+ procedure. For example, there can be a single component acting both
+ as the Manager and the Controller, but if values of required
+ attributes of Search Goals and Goal Results are visible in the test
+ report, the Controller Input instance and Controller Output instance
+ are implied.
+
+ For example, any setup for conditionally (or unconditionally)
+   compliant [RFC2544] throughput testing can be understood as an
+   MLRsearch architecture, if there is enough data to reconstruct the
+ Relevant Upper Bound.
+
+   Refer to Section MLRsearch Compliant with RFC 2544 (Section 4.10.2)
+ for an equivalent Search Goal.
+
+ Any test procedure that can be understood as one call to the Manager
+ of MLRsearch architecture is said to be compliant with MLRsearch
+ Specification.
+
+4.10.2. MLRsearch Compliant with RFC 2544
+
+ The following Search Goal instance makes the corresponding Search
+ Result unconditionally compliant with Section 24 of [RFC2544].
+
+ * Goal Final Trial Duration = 60 seconds
+
+ * Goal Duration Sum = 60 seconds
+
+ * Goal Loss Ratio = 0%
+
+ * Goal Exceed Ratio = 0%
+
+   The Goal Loss Ratio and Goal Exceed Ratio attributes are enough to
+   make the Search Goal conditionally compliant.  Adding Goal Final Trial
+ Duration makes the Search Goal unconditionally compliant.
+
+ Goal Duration Sum prevents MLRsearch from repeating zero-loss Full-
+ Length Trials.
+
+ The presence of other Search Goals does not affect the compliance of
+ this Goal Result. The Relevant Lower Bound and the Conditional
+ Throughput are in this case equal to each other, and the value is the
+ [RFC2544] throughput.
+
+ Non-zero exceed ratio is not strictly disallowed, but it could
+ needlessly prolong the search when Low-Loss short trials are present.
+
+4.10.3. MLRsearch Compliant with TST009
+
+ One of the alternatives to [RFC2544] is Binary search with loss
+ verification as described in Section 12.3.3 of [TST009].
+
+ The rationale of such search is to repeat high-loss trials, hoping
+ for zero loss on second try, so the results are closer to the
+ noiseless end of performance spectrum, thus more repeatable and
+ comparable.
+
+ Only the variant with "z = infinity" is achievable with MLRsearch.
+
+   For example, for the "max(r) = 2" variant, the following Search Goal
+   instance should be used to get a compatible Search Result:
+
+ * Goal Final Trial Duration = 60 seconds
+
+ * Goal Duration Sum = 120 seconds
+
+ * Goal Loss Ratio = 0%
+
+ * Goal Exceed Ratio = 50%
+
+ If the first 60 seconds trial has zero loss, it is enough for
+ MLRsearch to stop measuring at that load, as even a second high-loss
+ trial would still fit within the exceed ratio.
+
+   But if the first trial is high-loss, MLRsearch also needs to perform
+   the second trial to classify that load.  Goal Duration Sum is twice
+   as long as Goal Final Trial Duration, so a third full-length trial
+   is never needed.
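   The two cases can be checked numerically with a hypothetical helper
   that mirrors the optimistic/pessimistic reasoning, assuming all
   trials run at the Goal Final Trial Duration:

```python
def trials_needed(first_trial_lossy, trial_duration=60.0,
                  goal_duration_sum=120.0, goal_exceed_ratio=0.5):
    """Full-length trials needed at one load to classify it for the
    TST009-style goal above."""
    bad = trial_duration if first_trial_lossy else 0.0
    remaining = goal_duration_sum - trial_duration
    best_exceed = bad / goal_duration_sum            # rest is low-loss
    worst_exceed = (bad + remaining) / goal_duration_sum  # rest is lossy
    if best_exceed > goal_exceed_ratio or worst_exceed <= goal_exceed_ratio:
        return 1  # the load is classified after the first trial
    return 2      # a second trial exhausts the duration sum and decides

print(trials_needed(first_trial_lossy=False))  # -> 1
print(trials_needed(first_trial_lossy=True))   # -> 2
```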
+
+5. Methodology Rationale and Design Considerations
+
+ This section explains the Why behind MLRsearch. Building on the
+ normative specification in Section MLRsearch Specification
+ (Section 4), it contrasts MLRsearch with the classic [RFC2544]
+ single-ratio binary-search procedure and walks through the key design
+ choices: binary-search mechanics, stopping-rule precision, loss-
+ inversion for multiple goals, exceed-ratio handling, short-trial
+   strategies, and the generalized throughput concept.  Together, these
+ considerations show how the methodology reduces test time, supports
+ multiple loss ratios, and improves repeatability.
+
+5.1. Binary Search
+
+ A typical binary search implementation for [RFC2544] tracks only the
+ two tightest bounds. To start, the search needs both Max Load and
+ Min Load values. Then, one trial is used to confirm Max Load is an
+ Upper Bound, and one trial to confirm Min Load is a Lower Bound.
+
+ Then, next Trial Load is chosen as the mean of the current tightest
+ upper bound and the current tightest lower bound, and becomes a new
+ tightest bound depending on the Trial Loss Ratio.
+
+ After some number of trials, the tightest lower bound becomes the
+ throughput, but [RFC2544] does not specify when, if ever, the search
+ should stop. In practice, the search stops either at some distance
+ between the tightest upper bound and the tightest lower bound, or
+ after some number of Trials.
+
+   For a given pair of Max Load and Min Load values, there is a
+   one-to-one correspondence between the number of Trials and the
+   final distance between
+ the tightest bounds. Thus, the search always takes the same time,
+ assuming initial bounds are confirmed.
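   The procedure described above can be sketched as follows; `measure`
   stands in for one full-length trial and is a placeholder, not a
   specified interface:

```python
def binary_search(measure, min_load, max_load, num_trials=10):
    """Single-goal binary search: 'measure(load)' returns True on a
    zero-loss full-length trial.  Min Load and Max Load are assumed
    to be already confirmed as a lower and an upper bound."""
    lower, upper = min_load, max_load
    for _ in range(num_trials):
        mid = (lower + upper) / 2.0
        if measure(mid):
            lower = mid  # new tightest lower bound
        else:
            upper = mid  # new tightest upper bound
    return lower, upper  # distance: (max - min) / 2**num_trials

# Toy SUT forwarding up to 3.33 Mpps without loss:
lower, upper = binary_search(lambda load: load <= 3.33e6, 0.0, 10.0e6)
print(upper - lower == 10.0e6 / 2 ** 10)  # -> True
```

   With fixed initial bounds, the final distance depends only on the
   number of trials, which is the one-to-one correspondence noted in
   the paragraph above.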
+
+5.2. Stopping Conditions and Precision
+
+ MLRsearch Specification requires listing both Relevant Bounds for
+ each Search Goal, and the difference between the bounds implies
+ whether the result precision is achieved. Therefore, it is not
+ necessary to report the specific stopping condition used.
+
+ MLRsearch implementations may use Goal Width to allow direct control
+ of result precision and indirect control of the Search Duration.
+
+ Other MLRsearch implementations may use different stopping
+ conditions: for example based on the Search Duration, trading off
+ precision control for duration control.
+
+ Due to various possible time optimizations, there is no strict
+ correspondence between the Search Duration and Goal Width values. In
+ practice, noisy SUT performance increases both average search time
+ and its variance.
+
+5.3. Loss Ratios and Loss Inversion
+
+ The biggest difference between MLRsearch and [RFC2544] binary search
+ is in the goals of the search. [RFC2544] has a single goal, based on
+ classifying a single full-length trial as either zero-loss or non-
+ zero-loss. MLRsearch supports searching for multiple Search Goals at
+ once, usually differing in their Goal Loss Ratio values.
+
+5.3.1. Single Goal and Hard Bounds
+
+ Each bound in [RFC2544] simple binary search is "hard", in the sense
+ that all further Trial Load values are smaller than any current upper
+ bound and larger than any current lower bound.
+
+ This is also possible for MLRsearch implementations, when the search
+ is started with only one Search Goal instance.
+
+5.3.2. Multiple Goals and Loss Inversion
+
+ MLRsearch Specification supports multiple Search Goals, making the
+ search procedure more complicated compared to a binary search with a
+ single goal, but most of the complications do not affect the final
+ results much, except for one phenomenon: Loss Inversion.
+
+ Depending on Search Goal attributes, Load Classification results may
+ be resistant to small amounts of Inconsistent Trial Results
+ (Section 2.5). However, for larger amounts, a Load that is
+ classified as an Upper Bound for one Search Goal may still be a Lower
+ Bound for another Search Goal. Due to this other goal, MLRsearch
+ will probably perform subsequent Trials at Trial Loads even larger
+ than the original value.
+
+ This introduces questions that any multi-goal search algorithm has to
+ address. For example: What to do when all such larger-load trials
+ happen to have zero loss? Does it mean the earlier upper bound was
+ not real? Does it mean the later Low-Loss trials are not considered
+ a lower bound?
+
+ The situation where a smaller Load is classified as an Upper Bound,
+ while a larger Load is classified as a Lower Bound (for the same
+ search goal), is called Loss Inversion.
+
+ Conversely, only single-goal search algorithms can have hard bounds
+ that shield them from Loss Inversion.
+
+5.3.3. Conservativeness and Relevant Bounds
+
+ MLRsearch is conservative when dealing with Loss Inversion: the Upper
+ Bound is considered real, and the Lower Bound is considered to be a
+ fluke, at least when computing the final result.
+
+ This is formalized using definitions of Relevant Upper Bound
+ (Section 4.8.1) and Relevant Lower Bound (Section 4.8.2).
+
+ The Relevant Upper Bound (for specific goal) is the smallest Load
+ classified as an Upper Bound. But the Relevant Lower Bound is not
+ simply the largest among Lower Bounds. It is the largest Load among
+ Loads that are Lower Bounds while also being smaller than the
+ Relevant Upper Bound.
+
+ With these definitions, the Relevant Lower Bound is always smaller
+ than the Relevant Upper Bound (if both exist), and the two relevant
+ bounds are used analogously as the two tightest bounds in the binary
+ search. When they meet the stopping conditions, the Relevant Bounds
+ are used in the output.
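+
+ As a sketch only, the selection of Relevant Bounds from per-Load
+ classifications can be written in Python; the function name, the
+ dict-based input, and the "upper"/"lower" labels are illustrative
+ assumptions, not part of the specification.

```python
def relevant_bounds(classified_loads):
    """classified_loads: dict mapping a Load value to its
    classification ("upper" or "lower") for one Search Goal."""
    uppers = [load for load, kind in classified_loads.items()
              if kind == "upper"]
    # Relevant Upper Bound: the smallest Load classified as an
    # Upper Bound.
    rub = min(uppers) if uppers else None
    # Relevant Lower Bound: the largest Lower Bound that is also
    # smaller than the Relevant Upper Bound; larger Lower Bounds
    # are treated as flukes.
    lowers = [load for load, kind in classified_loads.items()
              if kind == "lower" and (rub is None or load < rub)]
    rlb = max(lowers) if lowers else None
    return rlb, rub
```

+ For example, with Loss Inversion present (Loads 1 and 3 classified
+ as Lower Bounds, Load 2 as an Upper Bound), the Lower Bound at Load
+ 3 is ignored and the result is (1, 2).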
+
+5.3.4. Consequences
+
+ The consequence of the way the Relevant Bounds are defined is that
+ every Trial Result can have an impact on any current Relevant Bound
+ larger than that Trial Load, namely when that Trial Load becomes a
+ new Upper Bound.
+
+ This also applies when that Load is measured before another Load gets
+ enough measurements to become a current Relevant Bound.
+
+ This also implies that if the SUT tested (or the Traffic Generator
+ used) needs a warm-up, it should be warmed up before starting the
+ Search, otherwise the first few measurements could become unjustly
+ limiting.
+
+ For MLRsearch implementations, it means it is better to measure at
+ smaller Loads first, so bounds found earlier are less likely to get
+ invalidated later.
+
+5.4. Exceed Ratio and Multiple Trials
+
+ The idea of performing multiple Trials at the same Trial Load comes
+ from a model where some Trial Results (those with a high Trial Loss
+ Ratio) are affected by infrequent effects, causing unsatisfactory
+ repeatability of [RFC2544] Throughput results. Refer to Section DUT
+ in SUT (Section 2.2) for a discussion about the noiseful and
+ noiseless ends of the SUT performance spectrum. Stable results are
+ closer to the
+ noiseless end of the SUT performance spectrum, so MLRsearch may need
+ to allow some frequency of high-loss trials to ignore the rare but
+ big effects near the noiseful end.
+
+ For MLRsearch to perform such Trial Result filtering, it needs a
+ configuration option to tell how frequent the "infrequent" big loss
+ can be. This option is called the Goal Exceed Ratio (Section 4.6.4).
+ It tells MLRsearch what ratio of trials (more specifically, what
+ ratio of Trial Effective Duration seconds) can have a Trial Loss
+ Ratio (Section 4.5.6) larger than the Goal Loss Ratio (Section 4.6.3)
+ and still be classified as a Lower Bound (Section 4.7.2.2).
+
+ Zero exceed ratio means all Trials must have a Trial Loss Ratio equal
+ to or lower than the Goal Loss Ratio.
+
+ When more than one Trial is intended to classify a Load, MLRsearch
+ also needs something that controls the number of trials needed.
+ Therefore, each goal also has an attribute called Goal Duration Sum.
+
+ The meaning of a Goal Duration Sum (Section 4.6.2) is that when a
+ Load has (Full-Length) Trials whose Trial Effective Durations when
+ summed up give a value at least as big as the Goal Duration Sum
+ value, the Load is guaranteed to be classified either as an Upper
+ Bound or a Lower Bound for that Search Goal instance.
+
+5.5. Short Trials and Duration Selection
+
+ MLRsearch requires each Search Goal to specify its Goal Final Trial
+ Duration.
+
+ Section 24 of [RFC2544] already anticipates possible time savings
+ when Short Trials are used.
+
+ An MLRsearch implementation MAY expose configuration parameters that
+ decide whether, when, and how short trial durations are used. The
+ exact heuristics and controls are left to the discretion of the
+ implementer.
+
+ While MLRsearch implementations are free to use any logic to select
+ Trial Input values, comparability between MLRsearch implementations
+ is only assured when the Load Classification logic handles any
+ possible set of Trial Results in the same way.
+
+ The presence of Short Trial Results complicates the Load
+ Classification logic, see more details in Section Load Classification
+ Logic (Section 6.1).
+
+ While the Load Classification algorithm is designed to avoid any
+ unneeded Trials, for explainability reasons it is recommended that
+ users choose Controller Input instances that lead to all Trial
+ Duration values selected by the Controller being the same, e.g., by
+ setting any Goal Initial Trial Duration to the single value also
+ used in all Goal Final Trial Duration attributes.
+
+5.6. Generalized Throughput
+
+ Because testing equipment takes the Intended Load as an input
+ parameter for a Trial measurement, any load search algorithm needs to
+ deal with Intended Load values internally.
+
+ But in the presence of Search Goals with a non-zero Goal Loss Ratio
+ (Section 4.6.3), the Load usually does not match the user's intuition
+ of what a throughput is. The forwarding rate as defined in
+ Section 3.6.1 of [RFC2285] is better, but it is not obvious how to
+ generalize it for Loads with multiple Trials and a non-zero Goal Loss
+ Ratio.
+
+ The clearest illustration - and the chief reason for adopting a
+ generalized throughput definition - is the presence of a hard
+ performance limit.
+
+5.6.1. Hard Performance Limit
+
+ Even if the bandwidth of a medium allows higher traffic forwarding
+ performance, the SUT interfaces may have their own additional
+ limitations, e.g., a specific frames-per-second limit on the NIC (a
+ common occurrence).
+
+ Those limitations should be known and provided as the Max Load
+ (Section 4.6.8.1).
+
+ But if Max Load is set larger than what the interface can receive or
+ transmit, there will be a "hard limit" behavior observed in Trial
+ Results.
+
+ Consider that the hard limit is at one hundred million frames per
+ second (100 Mfps), Max Load is larger, and the Goal Loss Ratio is
+ 0.5%. If the DUT has no additional losses, a 0.5% Trial Loss Ratio
+ will be achieved at a Relevant Lower Bound of 100.5025 Mfps.
+
+ Reporting a throughput that exceeds the SUT's verified hard limit is
+ counter-intuitive. Accordingly, the [RFC2544] Throughput metric
+ should be generalized - rather than relying solely on the Relevant
+ Lower Bound - to reflect realistic, limit-aware performance.
+
+ MLRsearch defines one such generalization, the Conditional Throughput
+ (Section 4.8.3). It is the Trial Forwarding Rate from one of the
+ Full-Length Trials performed at the Relevant Lower Bound. The
+ algorithm to determine which trial exactly is in Appendix B
+ (Appendix B).
+
+ In the hard limit example, 100.5025 Mfps Load will still have only
+ 100.0 Mfps forwarding rate, nicely confirming the known limitation.
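+
+ The arithmetic of this example can be verified with a short
+ computation (illustrative only):

```python
goal_loss_ratio = 0.005       # 0.5%
hard_limit = 100_000_000.0    # SUT forwards at most 100 Mfps

# Smallest load whose loss ratio reaches the goal: forwarded frames
# are capped at the hard limit, so loss_ratio = 1 - hard_limit / load.
relevant_lower_bound = hard_limit / (1.0 - goal_loss_ratio)
print(round(relevant_lower_bound / 1e6, 4))  # 100.5025 (Mfps)

# The forwarding rate observed at that load stays at the hard limit.
forwarding_rate = relevant_lower_bound * (1.0 - goal_loss_ratio)
print(round(forwarding_rate / 1e6, 4))  # 100.0 (Mfps)
```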
+
+5.6.2. Performance Variability
+
+ With non-zero Goal Loss Ratio, and without hard performance limits,
+ Low-Loss trials at the same Load may achieve different Trial
+ Forwarding Rate values just due to DUT performance variability.
+
+ By comparing the best case (all Relevant Lower Bound trials have zero
+ loss) and the worst case (all Trial Loss Ratios at Relevant Lower
+ Bound are equal to the Goal Loss Ratio), one can prove that
+ Conditional Throughput values may have up to the Goal Loss Ratio
+ relative difference.
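+
+ The best-case/worst-case comparison can be checked numerically, as
+ an illustration under the stated assumptions (a hypothetical bound
+ value; names are not part of the specification):

```python
goal_loss_ratio = 0.005
load = 1_000_000.0  # a hypothetical Relevant Lower Bound, in fps

best = load * (1.0 - 0.0)               # all trials zero-loss
worst = load * (1.0 - goal_loss_ratio)  # all trials at the goal ratio
relative_difference = (best - worst) / best
# relative_difference equals the Goal Loss Ratio (up to rounding)
```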
+
+ Setting the Goal Width below the Goal Loss Ratio may cause the
+ Conditional Throughput for a larger Goal Loss Ratio to become smaller
+ than a Conditional Throughput for a goal with a lower Goal Loss
+ Ratio, which is counter-intuitive, considering they come from the
+ same Search. Therefore, it is RECOMMENDED to set the Goal Width to a
+ value no lower than the Goal Loss Ratio of the higher-loss Search
+ Goal.
+
+ Although Conditional Throughput can fluctuate from one run to the
+ next, it still offers a more discriminating basis for comparison than
+ the Relevant Lower Bound - particularly when deterministic load
+ selection yields the same Lower Bound value across multiple runs.
+
+6. MLRsearch Logic and Example
+
+ This section uses informal language to describe two aspects of
+ MLRsearch logic: Load Classification and Conditional Throughput,
+ reflecting formal pseudocode representation provided in Appendix A
+ (Appendix A) and Appendix B (Appendix B). This is followed by
+ example search.
+
+ The logic is equivalent but not identical to the pseudocode on
+ appendices. The pseudocode is designed to be short and frequently
+ combines multiple operations into one expression. The logic as
+ described in this section lists each operation separately and uses
+ more intuitive names for the intermediate values.
+
+6.1. Load Classification Logic
+
+ Note: For clarity of explanation, variables are tagged as (I)nput,
+ (T)emporary, or (O)utput.
+
+ * Collect Trial Results:
+
+ - Take all Trial Result instances (I) measured at a given load.
+
+ * Aggregate Trial Durations:
+
+ - Full-length high-loss sum (T) is the sum of Trial Effective
+ Duration values of all full-length high-loss trials (I).
+
+ - Full-length low-loss sum (T) is the sum of Trial Effective
+ Duration values of all full-length low-loss trials (I).
+
+ - Short high-loss sum (T) is the sum of Trial Effective Duration
+ values of all short high-loss trials (I).
+
+ - Short low-loss sum (T) is the sum of Trial Effective Duration
+ values of all short low-loss trials (I).
+
+ * Derive goal-based ratios:
+
+ - Subceed ratio (T) is one minus the Goal Exceed Ratio (I).
+
+ - Exceed coefficient (T) is the Goal Exceed Ratio divided by the
+ subceed ratio.
+
+ * Balance short-trial effects:
+
+ - Balancing sum (T) is the short low-loss sum multiplied by the
+ exceed coefficient.
+
+ - Excess sum (T) is the short high-loss sum minus the balancing
+ sum.
+
+ - Positive excess sum (T) is the maximum of zero and excess sum.
+
+ * Compute effective duration totals:
+
+ - Effective high-loss sum (T) is the full-length high-loss sum
+ plus the positive excess sum.
+
+ - Effective full sum (T) is the effective high-loss sum plus the
+ full-length low-loss sum.
+
+ - Effective whole sum (T) is the larger of the effective full sum
+ and the Goal Duration Sum.
+
+ - Missing sum (T) is the effective whole sum minus the effective
+ full sum.
+
+ * Estimate exceed ratios:
+
+ - Pessimistic high-loss sum (T) is the effective high-loss sum
+ plus the missing sum.
+
+ - Optimistic exceed ratio (T) is the effective high-loss sum
+ divided by the effective whole sum.
+
+ - Pessimistic exceed ratio (T) is the pessimistic high-loss sum
+ divided by the effective whole sum.
+
+ * Classify the Load:
+
+ - The load is classified as an Upper Bound (O) if the optimistic
+ exceed ratio is larger than the Goal Exceed Ratio.
+
+ - The load is classified as a Lower Bound (O) if the pessimistic
+ exceed ratio is not larger than the Goal Exceed Ratio.
+
+ - The load is classified as undecided (O) otherwise.
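+
+ The steps above can be condensed into a short Python sketch.
+ Function and argument names are illustrative assumptions; the
+ reference pseudocode is in Appendix A. Inputs are the four duration
+ sums (in seconds) and the two Search Goal attributes:

```python
def classify_load(full_high_s, full_low_s, short_high_s, short_low_s,
                  goal_exceed_ratio, goal_duration_s):
    # Derive goal-based ratios.
    subceed_ratio = 1.0 - goal_exceed_ratio
    exceed_coefficient = goal_exceed_ratio / subceed_ratio
    # Balance short-trial effects.
    balancing_s = short_low_s * exceed_coefficient
    positive_excess_s = max(0.0, short_high_s - balancing_s)
    # Compute effective duration totals.
    # Assumes goal_duration_s > 0, so effect_whole_s is never zero.
    effect_high_s = full_high_s + positive_excess_s
    effect_full_s = effect_high_s + full_low_s
    effect_whole_s = max(effect_full_s, goal_duration_s)
    missing_s = effect_whole_s - effect_full_s
    # Estimate exceed ratios.
    optimistic = effect_high_s / effect_whole_s
    pessimistic = (effect_high_s + missing_s) / effect_whole_s
    # Classify the Load.
    if optimistic > goal_exceed_ratio:
        return "upper bound"
    if pessimistic <= goal_exceed_ratio:
        return "lower bound"
    return "undecided"
```

+ For example, with a 50% Goal Exceed Ratio and a 120-second Goal
+ Duration Sum, 120 seconds of full-length low-loss trials classify a
+ Load as a lower bound, while only 30 seconds leave it undecided.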
+
+6.2. Conditional Throughput Logic
+
+ * Collect Trial Results:
+
+ - Take all Trial Result instances (I) measured at a given Load.
+
+ * Sum Full-Length Durations:
+
+ - Full-length high-loss sum (T) is the sum of Trial Effective
+ Duration values of all full-length high-loss trials (I).
+
+ - Full-length low-loss sum (T) is the sum of Trial Effective
+ Duration values of all full-length low-loss trials (I).
+
+ - Full-length sum (T) is the full-length high-loss sum (I) plus
+ the full-length low-loss sum (I).
+
+ * Derive initial thresholds:
+
+ - Subceed ratio (T) is one minus the Goal Exceed Ratio (I).
+
+ - Remaining sum (T) initially is the full-length sum multiplied by
+ the subceed ratio.
+
+ - Current loss ratio (T) initially is 100%.
+
+ * Iterate through ordered trials:
+
+ - For each full-length trial result, sorted in increasing order
+ by Trial Loss Ratio:
+
+ o If remaining sum is not larger than zero, exit the loop.
+
+ o Set current loss ratio to this trial's Trial Loss Ratio (I).
+
+ o Decrease the remaining sum by this trial's Trial Effective
+ Duration (I).
+
+ * Compute Conditional Throughput:
+
+ - Current forwarding ratio (T) is one minus the current loss
+ ratio.
+
+ - Conditional Throughput (T) is the current forwarding ratio
+ multiplied by the Load value.
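+
+ A matching Python sketch follows (illustrative names; the reference
+ pseudocode is in Appendix B). Each trial is assumed to be given as
+ a (loss_ratio, effective_duration) pair, and the list is assumed
+ non-empty:

```python
def conditional_throughput(load, full_length_trials,
                           goal_exceed_ratio, goal_duration_s):
    # Sum full-length durations.
    full_s = sum(duration for _, duration in full_length_trials)
    whole_s = max(full_s, goal_duration_s)
    # Walk trials, sorted by increasing Trial Loss Ratio, until the
    # subceed-ratio share of the whole duration sum is consumed.
    remaining_s = whole_s * (1.0 - goal_exceed_ratio)
    current_loss_ratio = 1.0
    for loss_ratio, duration in sorted(full_length_trials):
        if remaining_s <= 0.0:
            break
        current_loss_ratio = loss_ratio
        remaining_s -= duration
    if remaining_s > 0.0:
        # Not enough measured duration; stay pessimistic.
        current_loss_ratio = 1.0
    return load * (1.0 - current_loss_ratio)
```

+ For example, with a 50% Goal Exceed Ratio, the result is driven by
+ the duration-weighted median Trial Loss Ratio at that Load.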
+
+6.2.1. Conditional Throughput and Load Classification
+
+ Conditional Throughput and results of Load Classification overlap but
+ are not identical.
+
+ * When a load is marked as a Relevant Lower Bound, its Conditional
+ Throughput is taken from a trial whose loss ratio never exceeds
+ the Goal Loss Ratio.
+
+ * The reverse is not guaranteed: if the Goal Width is narrower than
+ the Goal Loss Ratio, Conditional Throughput can still end up
+ higher than the Relevant Upper Bound.
+
+6.3. SUT Behaviors
+
+ In Section DUT in SUT (Section 2.2), the notion of noise has been
+ introduced. This section uses new terms to describe possible SUT
+ behaviors more precisely.
+
+ From a measurement point of view, noise is visible as inconsistent
+ trial results. See Inconsistent Trial Results (Section 2.5) for
+ general points and Loss Ratios and Loss Inversion (Section 5.3) for
+ specifics when comparing different Load values.
+
+ Load Classification and Conditional Throughput apply to a single Load
+ value, but even the set of Trial Results measured at that Trial Load
+ value may appear inconsistent.
+
+ As MLRsearch aims to save time, it executes only a small number of
+ Trials, getting only a limited amount of information about SUT
+ behavior. It is useful to introduce an "SUT expert" point of view to
+ contrast with that limited information.
+
+6.3.1. Expert Predictions
+
+ Imagine that before the Search starts, a human expert had unlimited
+ time to measure SUT and obtain all reliable information about it.
+ The information is not perfect, as there is still random noise
+ influencing SUT. But the expert is familiar with possible noise
+ events, even the rare ones, and thus the expert can do probabilistic
+ predictions about future Trial Outputs.
+
+ When several outcomes are possible, the expert can assess probability
+ of each outcome.
+
+6.3.2. Exceed Probability
+
+ When the Controller selects new Trial Duration and Trial Load, and
+ just before the Measurer starts performing the Trial, the SUT expert
+ can envision possible Trial Results.
+
+ With respect to a particular Search Goal instance, the possibilities
+ can be summarized into a single number: Exceed Probability. It is
+ the probability (according to the expert) that the measured Trial
+ Loss Ratio will be higher than the Goal Loss Ratio.
+
+6.3.3. Trial Duration Dependence
+
+ When comparing Exceed Probability values for the same Trial Load
+ value but different Trial Duration values, there are several patterns
+ that commonly occur in practice.
+
+6.3.3.1. Strong Increase
+
+ Exceed Probability is very low at short durations but very high at
+ full length. This SUT behavior is undesirable and may hint at a
+ faulty SUT, e.g., the SUT leaks resources and is unable to sustain
+ the desired performance.
+
+ But this behavior is also seen when the SUT uses a large amount of
+ buffering. This is the main reason users may want to set a large
+ Goal Final Trial Duration.
+
+6.3.3.2. Mild Increase
+
+ Short trials are slightly less likely to exceed the loss-ratio limit,
+ but the improvement is modest. This mild benefit is typical when
+ noise is dominated by rare, large loss spikes: during a full-length
+ trial, the good-performing periods cannot fully offset the heavy
+ frame loss that occurs in the brief low-performing bursts.
+
+6.3.3.3. Independence
+
+ Short trials have basically the same Exceed Probability as full-
+ length trials. This is possible only if loss spikes are small (so
+ other parts can compensate) and if Goal Loss Ratio is more than zero
+ (otherwise, other parts cannot compensate at all).
+
+6.3.3.4. Decrease
+
+ Short trials have a larger Exceed Probability than full-length
+ trials. This is possible only for a non-zero Goal Loss Ratio, for
+ example if the SUT needs to "warm up" to its best performance within
+ each trial. This is not commonly seen in practice.
+
+7. IANA Considerations
+
+ This document does not make any request to IANA.
+
+8. Security Considerations
+
+ Benchmarking activities as described in this memo are limited to
+ technology characterization of a DUT/SUT using controlled stimuli in
+ a laboratory environment, with dedicated address space and the
+ constraints specified in the sections above.
+
+ The benchmarking network topology will be an independent test setup
+ and MUST NOT be connected to devices that may forward the test
+ traffic into a production network or misroute traffic to the test
+ management network.
+
+ Further, benchmarking is performed on an "opaque" basis, relying
+ solely on measurements observable external to the DUT/SUT.
+
+ The DUT/SUT SHOULD NOT include features that serve only to boost
+ benchmark scores - such as a dedicated "fast-track" test mode that is
+ never used in normal operation.
+
+ Any implications for network security arising from the DUT/SUT SHOULD
+ be identical in the lab and in production networks.
+
+9. Acknowledgements
+
+ Special wholehearted gratitude and thanks to the late Al Morton for
+ his thorough reviews filled with very specific feedback and
+ constructive guidelines. Thank You Al for the close collaboration
+ over the years, Your Mentorship, Your continuous unwavering
+ encouragement full of empathy and energizing positive attitude. Al,
+ You are dearly missed.
+
+ Thanks to Gabor Lencse, Giuseppe Fioccola and BMWG contributors for
+ good discussions and thorough reviews, guiding and helping us to
+ improve the clarity and formality of this document.
+
+ Many thanks to Alec Hothan of the OPNFV NFVbench project for a
+ thorough review and numerous useful comments and suggestions in the
+ earlier versions of this document.
+
+10. References
+
+10.1. Normative References
+
+ [RFC1242] Bradner, S., "Benchmarking Terminology for Network
+ Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242,
+ July 1991, <https://www.rfc-editor.org/info/rfc1242>.
+
+ [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
+ Requirement Levels", BCP 14, RFC 2119,
+ DOI 10.17487/RFC2119, March 1997,
+ <https://www.rfc-editor.org/info/rfc2119>.
+
+ [RFC2285] Mandeville, R., "Benchmarking Terminology for LAN
+ Switching Devices", RFC 2285, DOI 10.17487/RFC2285,
+ February 1998, <https://www.rfc-editor.org/info/rfc2285>.
+
+ [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
+ Network Interconnect Devices", RFC 2544,
+ DOI 10.17487/RFC2544, March 1999,
+ <https://www.rfc-editor.org/info/rfc2544>.
+
+ [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
+ 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
+ May 2017, <https://www.rfc-editor.org/info/rfc8174>.
+
+10.2. Informative References
+
+ [FDio-CSIT-MLRsearch]
+ "FD.io CSIT Test Methodology - MLRsearch", October 2023,
+ <https://csit.fd.io/cdocs/methodology/measurements/
+ data_plane_throughput/mlr_search/>.
+
+ [Lencze-Kovacs-Shima]
+ "Gaming with the Throughput and the Latency Benchmarking
+ Measurement Procedures of RFC 2544", n.d.,
+ <http://dx.doi.org/10.11601/ijates.v9i2.288>.
+
+ [Lencze-Shima]
+ "An Upgrade to Benchmarking Methodology for Network
+ Interconnect Devices - expired", n.d.,
+ <https://datatracker.ietf.org/doc/html/draft-lencse-bmwg-
+ rfc2544-bis-00>.
+
+ [Ott-Mathis-Semke-Mahdavi]
+ "The Macroscopic Behavior of the TCP Congestion Avoidance
+ Algorithm", n.d.,
+ <https://www.cs.cornell.edu/people/egs/cornellonly/
+ syslunch/fall02/ott.pdf>.
+
+ [PyPI-MLRsearch]
+ "MLRsearch 1.2.1, Python Package Index", October 2023,
+ <https://pypi.org/project/MLRsearch/1.2.1/>.
+
+ [RFC5180] Popoviciu, C., Hamza, A., Van de Velde, G., and D.
+ Dugatkin, "IPv6 Benchmarking Methodology for Network
+ Interconnect Devices", RFC 5180, DOI 10.17487/RFC5180, May
+ 2008, <https://www.rfc-editor.org/info/rfc5180>.
+
+ [RFC6349] Constantine, B., Forget, G., Geib, R., and R. Schrage,
+ "Framework for TCP Throughput Testing", RFC 6349,
+ DOI 10.17487/RFC6349, August 2011,
+ <https://www.rfc-editor.org/info/rfc6349>.
+
+ [RFC6985] Morton, A., "IMIX Genome: Specification of Variable Packet
+ Sizes for Additional Testing", RFC 6985,
+ DOI 10.17487/RFC6985, July 2013,
+ <https://www.rfc-editor.org/info/rfc6985>.
+
+ [RFC8219] Georgescu, M., Pislaru, L., and G. Lencse, "Benchmarking
+ Methodology for IPv6 Transition Technologies", RFC 8219,
+ DOI 10.17487/RFC8219, August 2017,
+ <https://www.rfc-editor.org/info/rfc8219>.
+
+ [TST009] "TST 009", n.d., <https://www.etsi.org/deliver/etsi_gs/
+ NFV-TST/001_099/009/03.04.01_60/gs_NFV-
+ TST009v030401p.pdf>.
+
+ [Vassilev] "A YANG Data Model for Network Tester Management", n.d.,
+ <https://datatracker.ietf.org/doc/draft-ietf-bmwg-network-
+ tester-cfg/06>.
+
+ [Y.1564] "Y.1564", n.d., <https://www.itu.int/rec/
+ dologin_pub.asp?lang=e&id=T-REC-Y.1564-201602-I!!PDF-
+ E&type=items>.
+
+Appendix A. Load Classification Code
+
+ This appendix specifies how to perform the Load Classification.
+
+ Any Trial Load value can be classified, according to a given Search
+ Goal (Section 4.6.7) instance.
+
+ The algorithm uses (some subsets of) the set of all available Trial
+ Results from Trials measured at a given Load at the end of the
+ Search.
+
+ The block at the end of this appendix holds pseudocode which computes
+ two values, stored in variables named optimistic_is_lower and
+ pessimistic_is_lower.
+
+ Although presented as pseudocode, the listing is syntactically valid
+ Python and can be executed without modification.
+
+ If values of both variables are computed to be true, the Load in
+ question is classified as a Lower Bound according to the given Search
+ Goal instance. If values of both variables are false, the Load is
+ classified as an Upper Bound. Otherwise, the load is classified as
+ Undecided.
+
+ Some variable names are shortened to fit expressions in one line.
+ Namely, variables holding sum quantities end in _s instead of _sum,
+ and variables holding effective quantities start in effect_ instead
+ of effective_.
+
+ The pseudocode expects the following variables to hold the following
+ values:
+
+ * goal_duration_s: The Goal Duration Sum value of the given Search
+ Goal.
+
+ * goal_exceed_ratio: The Goal Exceed Ratio value of the given Search
+ Goal.
+
+ * full_length_low_loss_s: Sum of Trial Effective Durations across
+ Trials with Trial Duration at least equal to the Goal Final Trial
+ Duration and with Trial Loss Ratio not higher than the Goal Loss
+ Ratio (across Full-Length Low-Loss Trials).
+
+ * full_length_high_loss_s: Sum of Trial Effective Durations across
+ Trials with Trial Duration at least equal to the Goal Final Trial
+ Duration and with Trial Loss Ratio higher than the Goal Loss Ratio
+ (across Full-Length High-Loss Trials).
+
+ * short_low_loss_s: Sum of Trial Effective Durations across Trials
+ with Trial Duration shorter than the Goal Final Trial Duration and
+ with Trial Loss Ratio not higher than the Goal Loss Ratio (across
+ Short Low-Loss Trials).
+
+ * short_high_loss_s: Sum of Trial Effective Durations across Trials
+ with Trial Duration shorter than the Goal Final Trial Duration and
+ with Trial Loss Ratio higher than the Goal Loss Ratio (across
+ Short High-Loss Trials).
+
+ The code works correctly also when there are no Trial Results at a
+ given Load.
+
+ <CODE BEGINS>
+ exceed_coefficient = goal_exceed_ratio / (1.0 - goal_exceed_ratio)
+ balancing_s = short_low_loss_s * exceed_coefficient
+ positive_excess_s = max(0.0, short_high_loss_s - balancing_s)
+ effect_high_loss_s = full_length_high_loss_s + positive_excess_s
+ effect_full_length_s = full_length_low_loss_s + effect_high_loss_s
+ effect_whole_s = max(effect_full_length_s, goal_duration_s)
+ quantile_duration_s = effect_whole_s * goal_exceed_ratio
+ pessimistic_high_loss_s = effect_whole_s - full_length_low_loss_s
+ pessimistic_is_lower = pessimistic_high_loss_s <= quantile_duration_s
+ optimistic_is_lower = effect_high_loss_s <= quantile_duration_s
+ <CODE ENDS>
+
+Appendix B. Conditional Throughput Code
+
+ This section specifies an example of how to compute Conditional
+ Throughput, as referred to in Section Conditional Throughput
+ (Section 4.8.3).
+
+ Any Load value can be used as the basis for the following
+ computation, but only the Relevant Lower Bound (at the end of the
+ Search) leads to the value called the Conditional Throughput for a
+ given Search Goal.
+
+ The algorithm uses (some subsets of) the set of all available Trial
+ Results from Trials measured at a given Load at the end of the
+ Search.
+
+ The block at the end of this appendix holds pseudocode which computes
+ a value stored as variable conditional_throughput.
+
+ Although presented as pseudocode, the listing is syntactically valid
+ Python and can be executed without modification.
+
+ Some variable names are shortened in order to fit expressions in one
+ line. Namely, variables holding sum quantities end in _s instead of
+ _sum, and variables holding effective quantities start in effect_
+ instead of effective_.
+
+ The pseudocode expects the following variables to hold the following
+ values:
+
+ * goal_duration_s: The Goal Duration Sum value of the given Search
+ Goal.
+
+ * goal_exceed_ratio: The Goal Exceed Ratio value of the given Search
+ Goal.
+
+ * full_length_low_loss_s: Sum of Trial Effective Durations across
+ Trials with Trial Duration at least equal to the Goal Final Trial
+ Duration and with Trial Loss Ratio not higher than the Goal Loss
+ Ratio (across Full-Length Low-Loss Trials).
+
+ * full_length_high_loss_s: Sum of Trial Effective Durations across
+ Trials with Trial Duration at least equal to the Goal Final Trial
+ Duration and with Trial Loss Ratio higher than the Goal Loss Ratio
+ (across Full-Length High-Loss Trials).
+
+ * full_length_trials: An iterable of all Trial Results from Trials
+ with Trial Duration at least equal to the Goal Final Trial
+ Duration (all Full-Length Trials), sorted by increasing Trial Loss
+ Ratio. Each item, trial, is a composite with the following two
+ attributes available:
+
+ - trial.loss_ratio: The Trial Loss Ratio as measured for this
+ Trial.
+
+ - trial.effect_duration: The Trial Effective Duration of this
+ Trial.
+
+ * intended_load: The Intended Load value of the Load at which the
+ Trials were measured.
+
+ The code works correctly only when there is at least one Trial Result
+ measured at a given Load.
+
+ <CODE BEGINS>
+ full_length_s = full_length_low_loss_s + full_length_high_loss_s
+ whole_s = max(goal_duration_s, full_length_s)
+ remaining = whole_s * (1.0 - goal_exceed_ratio)
+ quantile_loss_ratio = None
+ for trial in full_length_trials:
+ if quantile_loss_ratio is None or remaining > 0.0:
+ quantile_loss_ratio = trial.loss_ratio
+ remaining -= trial.effect_duration
+ else:
+ break
+ else:
+ if remaining > 0.0:
+ quantile_loss_ratio = 1.0
+ conditional_throughput = intended_load * (1.0 - quantile_loss_ratio)
+ <CODE ENDS>
+
+Appendix C. Example Search
+
+ The following example Search is related to one hypothetical run of a
+ Search test procedure that has been started with multiple Search
+ Goals. Several points in time are chosen, to show how the logic
+ works, with specific sets of Trial Results available. The trial
+ results themselves are not very realistic, as the intention is to
+ show several corner cases of the logic.
+
+ In all Trials, the Effective Trial Duration is equal to Trial
+ Duration.
+
+ Only one Trial Load is in focus, its value is one million frames per
+ second. Trial Results at other Trial Loads are not mentioned, as the
+ parts of logic present here do not depend on those. In practice,
+ Trial Results at other Load values would be present, e.g., MLRsearch
+ will look for a Lower Bound smaller than any Upper Bound found.
+
+ At any given moment, exactly one Search Goal is designated as in
+ focus. This designation affects only the Trial Duration chosen for
+ new trials; it does not alter the rest of the decision logic.
+
+ An MLRsearch implementation is free to evaluate several goals
+ simultaneously - the "focus" mechanism is optional and appears here
+ only to show that a load can still be classified against goals that
+ are not currently in focus.
+
+
+
+
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 63]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+C.1. Example Goals
+
+ The following four Search Goal instances are selected for the example
+ Search. Each goal has a readable name and a dense code; the code is
+ useful for showing Search Goal attribute values.
+
+ As the variable "exceed coefficient" does not depend on trial
+ results, it is also precomputed here.
+
+ Goal 1:
+
+ name: RFC2544
+ Goal Final Trial Duration: 60s
+ Goal Duration Sum: 60s
+ Goal Loss Ratio: 0%
+ Goal Exceed Ratio: 0%
+ exceed coefficient: 0% / (100% - 0%) = 0.0
+ code: 60f60d0l0e
+
+ Goal 2:
+
+ name: TST009
+ Goal Final Trial Duration: 60s
+ Goal Duration Sum: 120s
+ Goal Loss Ratio: 0%
+ Goal Exceed Ratio: 50%
+ exceed coefficient: 50% / (100% - 50%) = 1.0
+ code: 60f120d0l50e
+
+ Goal 3:
+
+ name: 1s final
+ Goal Final Trial Duration: 1s
+ Goal Duration Sum: 120s
+ Goal Loss Ratio: 0.5%
+ Goal Exceed Ratio: 50%
+ exceed coefficient: 50% / (100% - 50%) = 1.0
+ code: 1f120d.5l50e
+
+ Goal 4:
+
+ name: 20% exceed
+ Goal Final Trial Duration: 60s
+ Goal Duration Sum: 60s
+ Goal Loss Ratio: 0.5%
+ Goal Exceed Ratio: 20%
+ exceed coefficient: 20% / (100% - 20%) = 0.25
+ code: 60f60d0.5l20e
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 64]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ The first two goals are important for compliance reasons; the other
+ two cover less frequent cases.
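The exceed coefficient values above can be reproduced with a small Python helper (the function name is illustrative):

```python
def exceed_coefficient(goal_exceed_ratio):
    # Seconds of tolerated high-loss trial time per second of
    # low-loss trial time; undefined for a 100% exceed ratio.
    return goal_exceed_ratio / (1.0 - goal_exceed_ratio)

# The three distinct Goal Exceed Ratio values used by the example goals:
print(exceed_coefficient(0.0))  # RFC2544
print(exceed_coefficient(0.5))  # TST009 and "1s final"
print(exceed_coefficient(0.2))  # "20% exceed"
```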
+
+C.2. Example Trial Results
+
+ The following six sets of trial results are selected for the example
+ Search. The sets are defined as points in time, describing which
+ Trial Results were added since the previous point.
+
+ Each point has a readable name and a dense code; the code is useful
+ for showing Trial Output attribute values and the number of times
+ identical results were added.
+
+ Point 1:
+
+ name: first short good
+ goal in focus: 1s final (1f120d.5l50e)
+ added Trial Results: 59 trials, each 1 second and 0% loss
+ code: 59x1s0l
+
+ Point 2:
+
+ name: first short bad
+ goal in focus: 1s final (1f120d.5l50e)
+ added Trial Result: one trial, 1 second, 1% loss
+ code: 59x1s0l+1x1s1l
+
+ Point 3:
+
+ name: last short bad
+ goal in focus: 1s final (1f120d.5l50e)
+ added Trial Results: 59 trials, 1 second each, 1% loss each
+ code: 59x1s0l+60x1s1l
+
+ Point 4:
+
+ name: last short good
+ goal in focus: 1s final (1f120d.5l50e)
+ added Trial Result: one trial, 1 second, 0% loss
+ code: 60x1s0l+60x1s1l
+
+ Point 5:
+
+ name: first long bad
+ goal in focus: TST009 (60f120d0l50e)
+ added Trial Result: one trial, 60 seconds, 0.1% loss
+ code: 60x1s0l+60x1s1l+1x60s.1l
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 65]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ Point 6:
+
+ name: first long good
+ goal in focus: TST009 (60f120d0l50e)
+ added Trial Result: one trial, 60 seconds, 0% loss
+ code: 60x1s0l+60x1s1l+1x60s.1l+1x60s0l
+
+ Comments on point-in-time naming:
+
+ * When a name contains "short", it means the added trial had a
+ Trial Duration of 1 second, which is a Short Trial for three of
+ the Search Goals, but a Full-Length Trial for the "1s final" goal.
+
+ * Similarly, "long" in a name means the added trial had a Trial
+ Duration of 60 seconds, which is a Full-Length Trial for three
+ goals but a Long Trial for the "1s final" goal.
+
+ * When a name contains "good", it means the added trial is a Low-
+ Loss Trial for all the goals.
+
+ * When a name contains "short bad", it means the added trial is a
+ High-Loss Trial for all the goals.
+
+ * When a name contains "long bad", it means the added trial is a
+ High-Loss Trial for goals "RFC2544" and "TST009", but a Low-Loss
+ Trial for the two other goals.
+
+C.3. Load Classification Computations
+
+ This section shows how the Load Classification logic is applied, by
+ listing all temporary values at each specific point in time.
+
+C.3.1. Point 1
+
+ This is the "first short good" point. Code for available results is:
+ 59x1s0l
+
+ +==============+==========+============+============+=============+
+ |Goal name |RFC2544 |TST009 |1s final |20% exceed |
+ +==============+==========+============+============+=============+
+ |Goal code |60f60d0l0e|60f120d0l50e|1f120d.5l50e|60f60d0.5l20e|
+ +--------------+----------+------------+------------+-------------+
+ |Full-length |0s |0s |0s |0s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Full-length |0s |0s |59s |0s |
+ |low-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 66]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ |Short high- |0s |0s |0s |0s |
+ |loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Short low-loss|59s |59s |0s |59s |
+ |sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Balancing sum |0s |59s |0s |14.75s |
+ +--------------+----------+------------+------------+-------------+
+ |Excess sum |0s |-59s |0s |-14.75s |
+ +--------------+----------+------------+------------+-------------+
+ |Positive |0s |0s |0s |0s |
+ |excess sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective |0s |0s |0s |0s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective full|0s |0s |59s |0s |
+ |sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective |60s |120s |120s |60s |
+ |whole sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Missing sum |60s |120s |61s |60s |
+ +--------------+----------+------------+------------+-------------+
+ |Pessimistic |60s |120s |61s |60s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Optimistic |0% |0% |0% |0% |
+ |exceed ratio | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Pessimistic |100% |100% |50.833% |100% |
+ |exceed ratio | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Classification|Undecided |Undecided |Undecided |Undecided |
+ |Result | | | | |
+ +--------------+----------+------------+------------+-------------+
+
+ Table 1
+
+ This is the last point in time where all goals have this Load as
+ Undecided.
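As a cross-check, the duration sums and exceed ratios in the table rows can be recomputed with the following Python sketch. The function and variable names are assumed for illustration (they mirror the row labels) and are not normative:

```python
def classification_sums(goal_duration_s, goal_exceed_ratio,
                        full_high_s, full_low_s, short_high_s, short_low_s):
    """Recompute selected table rows from per-goal duration sums (seconds).

    Returns the missing sum, the optimistic exceed ratio,
    and the pessimistic exceed ratio.
    """
    exceed_coefficient = goal_exceed_ratio / (1.0 - goal_exceed_ratio)
    # Short low-loss time "balances" short high-loss time at this rate.
    balancing_s = short_low_s * exceed_coefficient
    positive_excess_s = max(0.0, short_high_s - balancing_s)
    effective_high_loss_s = full_high_s + positive_excess_s
    effective_full_s = effective_high_loss_s + full_low_s
    effective_whole_s = max(effective_full_s, goal_duration_s)
    missing_s = effective_whole_s - effective_full_s
    pessimistic_high_loss_s = effective_high_loss_s + missing_s
    optimistic = effective_high_loss_s / effective_whole_s
    pessimistic = pessimistic_high_loss_s / effective_whole_s
    return missing_s, optimistic, pessimistic

# "1s final" goal at this point: only 59 s of full-length low-loss results.
print(classification_sums(120.0, 0.5, 0.0, 59.0, 0.0, 0.0))
```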
+
+C.3.2. Point 2
+
+ This is the "first short bad" point. Code for available results is:
+ 59x1s0l+1x1s1l
+
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 67]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ +==============+==========+============+============+=============+
+ |Goal name |RFC2544 |TST009 |1s final |20% exceed |
+ +==============+==========+============+============+=============+
+ |Goal code |60f60d0l0e|60f120d0l50e|1f120d.5l50e|60f60d0.5l20e|
+ +--------------+----------+------------+------------+-------------+
+ |Full-length |0s |0s |1s |0s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Full-length |0s |0s |59s |0s |
+ |low-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Short high- |1s |1s |0s |1s |
+ |loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Short low-loss|59s |59s |0s |59s |
+ |sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Balancing sum |0s |59s |0s |14.75s |
+ +--------------+----------+------------+------------+-------------+
+ |Excess sum |1s |-58s |0s |-13.75s |
+ +--------------+----------+------------+------------+-------------+
+ |Positive |1s |0s |0s |0s |
+ |excess sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective |1s |0s |1s |0s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective full|1s |0s |60s |0s |
+ |sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective |60s |120s |120s |60s |
+ |whole sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Missing sum |59s |120s |60s |60s |
+ +--------------+----------+------------+------------+-------------+
+ |Pessimistic |60s |120s |61s |60s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Optimistic |1.667% |0% |0.833% |0% |
+ |exceed ratio | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Pessimistic |100% |100% |50.833% |100% |
+ |exceed ratio | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Classification|Upper |Undecided |Undecided |Undecided |
+ |Result |Bound | | | |
+ +--------------+----------+------------+------------+-------------+
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 68]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ Table 2
+
+ Due to the zero Goal Loss Ratio, the RFC2544 goal must assume a mild
+ or strong increase of exceed probability with duration, so the one
+ lossy trial would have been lossy even if measured at the 60-second
+ duration. Due to the zero Goal Exceed Ratio, one High-Loss Trial is
+ enough to preclude this Load from becoming a Lower Bound for
+ RFC2544. That is why this Load is classified as an Upper Bound for
+ RFC2544 this early.
+
+ This is an example of how significant time can be saved, compared to
+ a search using only 60-second trials.
+
+C.3.3. Point 3
+
+ This is the "last short bad" point. Code for available trial results
+ is: 59x1s0l+60x1s1l
+
+ +==============+==========+============+============+=============+
+ |Goal name |RFC2544 |TST009 |1s final |20% exceed |
+ +==============+==========+============+============+=============+
+ |Goal code |60f60d0l0e|60f120d0l50e|1f120d.5l50e|60f60d0.5l20e|
+ +--------------+----------+------------+------------+-------------+
+ |Full-length |0s |0s |60s |0s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Full-length |0s |0s |59s |0s |
+ |low-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Short high- |60s |60s |0s |60s |
+ |loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Short low-loss|59s |59s |0s |59s |
+ |sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Balancing sum |0s |59s |0s |14.75s |
+ +--------------+----------+------------+------------+-------------+
+ |Excess sum |60s |1s |0s |45.25s |
+ +--------------+----------+------------+------------+-------------+
+ |Positive |60s |1s |0s |45.25s |
+ |excess sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective |60s |1s |60s |45.25s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective full|60s |1s |119s |45.25s |
+ |sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective |60s |120s |120s |60s |
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 69]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ |whole sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Missing sum |0s |119s |1s |14.75s |
+ +--------------+----------+------------+------------+-------------+
+ |Pessimistic |60s |120s |61s |60s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Optimistic |100% |0.833% |50% |75.417% |
+ |exceed ratio | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Pessimistic |100% |100% |50.833% |100% |
+ |exceed ratio | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Classification|Upper |Undecided |Undecided |Upper Bound |
+ |Result |Bound | | | |
+ +--------------+----------+------------+------------+-------------+
+
+ Table 3
+
+ This is the last point at which the "1s final" goal has this Load
+ still Undecided. Only one 1-second trial is missing within the
+ 120-second Goal Duration Sum, but its result will decide the
+ classification.
+
+ The "20% exceed" goal started to classify this Load as an Upper
+ Bound somewhere between Points 2 and 3.
+
+C.3.4. Point 4
+
+ This is the "last short good" point. Code for available trial
+ results is: 60x1s0l+60x1s1l
+
+ +==============+==========+============+============+=============+
+ |Goal name |RFC2544 |TST009 |1s final |20% exceed |
+ +==============+==========+============+============+=============+
+ |Goal code |60f60d0l0e|60f120d0l50e|1f120d.5l50e|60f60d0.5l20e|
+ +--------------+----------+------------+------------+-------------+
+ |Full-length |0s |0s |60s |0s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Full-length |0s |0s |60s |0s |
+ |low-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Short high- |60s |60s |0s |60s |
+ |loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Short low-loss|60s |60s |0s |60s |
+ |sum | | | | |
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 70]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ +--------------+----------+------------+------------+-------------+
+ |Balancing sum |0s |60s |0s |15s |
+ +--------------+----------+------------+------------+-------------+
+ |Excess sum |60s |0s |0s |45s |
+ +--------------+----------+------------+------------+-------------+
+ |Positive |60s |0s |0s |45s |
+ |excess sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective |60s |0s |60s |45s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective full|60s |0s |120s |45s |
+ |sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective |60s |120s |120s |60s |
+ |whole sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Missing sum |0s |120s |0s |15s |
+ +--------------+----------+------------+------------+-------------+
+ |Pessimistic |60s |120s |60s |60s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Optimistic |100% |0% |50% |75% |
+ |exceed ratio | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Pessimistic |100% |100% |50% |100% |
+ |exceed ratio | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Classification|Upper |Undecided |Lower Bound |Upper Bound |
+ |Result |Bound | | | |
+ +--------------+----------+------------+------------+-------------+
+
+ Table 4
+
+ The one missing trial for "1s final" was Low-Loss, so exactly half
+ of the trial results are Low-Loss, which matches the 50% Goal Exceed
+ Ratio. This shows that time savings are not guaranteed.
+
+C.3.5. Point 5
+
+ This is the "first long bad" point. Code for available trial results
+ is: 60x1s0l+60x1s1l+1x60s.1l
+
+
+
+
+
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 71]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ +==============+==========+============+============+=============+
+ |Goal name |RFC2544 |TST009 |1s final |20% exceed |
+ +==============+==========+============+============+=============+
+ |Goal code |60f60d0l0e|60f120d0l50e|1f120d.5l50e|60f60d0.5l20e|
+ +--------------+----------+------------+------------+-------------+
+ |Full-length |60s |60s |60s |0s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Full-length |0s |0s |120s |60s |
+ |low-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Short high- |60s |60s |0s |60s |
+ |loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Short low-loss|60s |60s |0s |60s |
+ |sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Balancing sum |0s |60s |0s |15s |
+ +--------------+----------+------------+------------+-------------+
+ |Excess sum |60s |0s |0s |45s |
+ +--------------+----------+------------+------------+-------------+
+ |Positive |60s |0s |0s |45s |
+ |excess sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective |120s |60s |60s |45s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective full|120s |60s |180s |105s |
+ |sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective |120s |120s |180s |105s |
+ |whole sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Missing sum |0s |60s |0s |0s |
+ +--------------+----------+------------+------------+-------------+
+ |Pessimistic |120s |120s |60s |45s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Optimistic |100% |50% |33.333% |42.857% |
+ |exceed ratio | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Pessimistic |100% |100% |33.333% |42.857% |
+ |exceed ratio | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Classification|Upper |Undecided |Lower Bound |Lower Bound |
+ |Result |Bound | | | |
+ +--------------+----------+------------+------------+-------------+
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 72]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ Table 5
+
+ As designed for the TST009 goal, one Full-Length High-Loss Trial can
+ be tolerated. The 120s worth of 1-second trials is not useful, as
+ counting it is allowed only when Exceed Probability does not depend
+ on Trial Duration. As the Goal Loss Ratio is zero, it is not
+ possible for 60-second trials to compensate for losses seen in
+ 1-second results. But the Load Classification logic does not have
+ that knowledge hardcoded, so the optimistic exceed ratio is still
+ only 50%.
+
+ But the 0.1% Trial Loss Ratio is lower than the "20% exceed" Goal
+ Loss Ratio, so this unexpected Full-Length Low-Loss Trial changed
+ the classification result of this Load to Lower Bound.
+
+C.3.6. Point 6
+
+ This is the "first long good" point. Code for available trial
+ results is: 60x1s0l+60x1s1l+1x60s.1l+1x60s0l
+
+ +==============+==========+============+============+=============+
+ |Goal name |RFC2544 |TST009 |1s final |20% exceed |
+ +==============+==========+============+============+=============+
+ |Goal code |60f60d0l0e|60f120d0l50e|1f120d.5l50e|60f60d0.5l20e|
+ +--------------+----------+------------+------------+-------------+
+ |Full-length |60s |60s |60s |0s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Full-length |60s |60s |180s |120s |
+ |low-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Short high- |60s |60s |0s |60s |
+ |loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Short low-loss|60s |60s |0s |60s |
+ |sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Balancing sum |0s |60s |0s |15s |
+ +--------------+----------+------------+------------+-------------+
+ |Excess sum |60s |0s |0s |45s |
+ +--------------+----------+------------+------------+-------------+
+ |Positive |60s |0s |0s |45s |
+ |excess sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective |120s |60s |60s |45s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Effective full|180s |120s |240s |165s |
+ |sum | | | | |
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 73]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ +--------------+----------+------------+------------+-------------+
+ |Effective |180s |120s |240s |165s |
+ |whole sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Missing sum |0s |0s |0s |0s |
+ +--------------+----------+------------+------------+-------------+
+ |Pessimistic |120s |60s |60s |45s |
+ |high-loss sum | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Optimistic |66.667% |50% |25% |27.273% |
+ |exceed ratio | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Pessimistic |66.667% |50% |25% |27.273% |
+ |exceed ratio | | | | |
+ +--------------+----------+------------+------------+-------------+
+ |Classification|Upper |Lower Bound |Lower Bound |Lower Bound |
+ |Result |Bound | | | |
+ +--------------+----------+------------+------------+-------------+
+
+ Table 6
+
+ This is the Low-Loss Trial the "TST009" goal was waiting for. This
+ Load is now classified for all goals; the search may end. Or, more
+ realistically, it can focus on larger Loads only, as the three goals
+ with this Load as a Lower Bound will want an Upper Bound (unless
+ this Load is the Max Load).
+
+C.4. Conditional Throughput Computations
+
+ At the end of this hypothetical search, the "RFC2544" goal labels
+ the Load as an Upper Bound, making it ineligible for Conditional
+ Throughput calculations. By contrast, the other three goals treat
+ the same Load as a Lower Bound; if it is also accepted as their
+ Relevant Lower Bound, we can compute Conditional Throughput values
+ for each of them.
+
+ (The load under discussion is 1 000 000 frames per second.)
+
+C.4.1. Goal 2
+
+ The Conditional Throughput is computed from the sorted list of Full-
+ Length Trial results. As the TST009 Goal Final Trial Duration is 60
+ seconds, only two of 122 Trials are considered Full-Length Trials.
+ One has Trial Loss Ratio of 0%, the other of 0.1%.
+
+ * Full-length high-loss sum is 60 seconds.
+
+ * Full-length low-loss sum is 60 seconds.
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 74]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ * Full-length sum is 120 seconds.
+
+ * Subceed ratio is 50%.
+
+ * Remaining sum initially is 0.5x120s = 60 seconds.
+
+ * Current loss ratio initially is 100%.
+
+ * For first result (duration 60s, loss 0%):
+
+ - Remaining sum is larger than zero, not exiting the loop.
+
+ - Set current loss ratio to this trial's Trial Loss Ratio which
+ is 0%.
+
+ - Decrease the remaining sum by this trial's Trial Effective
+ Duration.
+
+ - New remaining sum is 60s - 60s = 0s.
+
+ * For second result (duration 60s, loss 0.1%):
+
+ - Remaining sum is not larger than zero, exiting the loop.
+
+ * Current loss ratio was most recently set to 0%.
+
+ * Current forwarding ratio is one minus the current loss ratio, so
+ 100%.
+
+ * Conditional Throughput is the current forwarding ratio multiplied
+ by the Load value.
+
+ * Conditional Throughput is one million frames per second.
+
+C.4.2. Goal 3
+
+ The "1s final" goal has a Goal Final Trial Duration of 1 second, so
+ all 122 Trial Results are considered Full-Length Trials. They are
+ ordered like this:
+
+ 60 1-second 0% loss trials,
+ 1 60-second 0% loss trial,
+ 1 60-second 0.1% loss trial,
+ 60 1-second 1% loss trials.
+
+ The result does not depend on the order of 0% loss trials.
+
+ * Full-length high-loss sum is 60 seconds.
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 75]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ * Full-length low-loss sum is 180 seconds.
+
+ * Full-length sum is 240 seconds.
+
+ * Subceed ratio is 50%.
+
+ * Remaining sum initially is 0.5x240s = 120 seconds.
+
+ * Current loss ratio initially is 100%.
+
+ * For first 61 results (duration varies, loss 0%):
+
+ - Remaining sum is larger than zero, not exiting the loop.
+
+ - Set current loss ratio to this trial's Trial Loss Ratio which
+ is 0%.
+
+ - Decrease the remaining sum by this trial's Trial Effective
+ Duration.
+
+ - New remaining sum varies.
+
+ * After 61 trials, duration of 60x1s + 1x60s has been subtracted
+ from 120s, leaving 0s.
+
+ * For the 62nd result (duration 60s, loss 0.1%):
+
+ - Remaining sum is not larger than zero, exiting the loop.
+
+ * Current loss ratio was most recently set to 0%.
+
+ * Current forwarding ratio is one minus the current loss ratio, so
+ 100%.
+
+ * Conditional Throughput is the current forwarding ratio multiplied
+ by the Load value.
+
+ * Conditional Throughput is one million frames per second.
+
+C.4.3. Goal 4
+
+ The Conditional Throughput is computed from the sorted list of Full-
+ Length Trial results. As the "20% exceed" Goal Final Trial Duration
+ is 60 seconds, only two of 122 Trials are considered Full-Length
+ Trials.
+ One has Trial Loss Ratio of 0%, the other of 0.1%.
+
+ * Full-length high-loss sum is 60 seconds.
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 76]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ * Full-length low-loss sum is 60 seconds.
+
+ * Full-length sum is 120 seconds.
+
+ * Subceed ratio is 80%.
+
+ * Remaining sum initially is 0.8x120s = 96 seconds.
+
+ * Current loss ratio initially is 100%.
+
+ * For first result (duration 60s, loss 0%):
+
+ - Remaining sum is larger than zero, not exiting the loop.
+
+ - Set current loss ratio to this trial's Trial Loss Ratio which
+ is 0%.
+
+ - Decrease the remaining sum by this trial's Trial Effective
+ Duration.
+
+ - New remaining sum is 96s - 60s = 36s.
+
+ * For second result (duration 60s, loss 0.1%):
+
+ - Remaining sum is larger than zero, not exiting the loop.
+
+ - Set current loss ratio to this trial's Trial Loss Ratio which
+ is 0.1%.
+
+ - Decrease the remaining sum by this trial's Trial Effective
+ Duration.
+
+ - New remaining sum is 36s - 60s = -24s.
+
+ * No more trials (and remaining sum is not larger than zero),
+ exiting the loop.
+
+ * Current loss ratio was most recently set to 0.1%.
+
+ * Current forwarding ratio is one minus the current loss ratio, so
+ 99.9%.
+
+ * Conditional Throughput is the current forwarding ratio multiplied
+ by the Load value.
+
+ * Conditional Throughput is 999 thousand frames per second.
+
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 77]
+\f
+Internet-Draft MLRsearch September 2025
+
+
+ Due to stricter Goal Exceed Ratio, this Conditional Throughput is
+ smaller than Conditional Throughput of the other two goals.
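The three walkthroughs above can be cross-checked by running the Appendix B loop over all 122 Trial Results. In this compact Python sketch the names are illustrative, and Full-Length Trials are filtered out per goal before the loop is applied:

```python
def conditional_throughput(trials, final_duration_s, goal_duration_s,
                           goal_exceed_ratio, intended_load):
    """trials: (duration_s, loss_ratio) pairs; the Trial Effective
    Duration is taken to be equal to the Trial Duration, as in
    this example."""
    # Keep only Full-Length Trials for this goal.
    full = [t for t in trials if t[0] >= final_duration_s]
    full_length_s = sum(duration for duration, _ in full)
    whole_s = max(goal_duration_s, full_length_s)
    remaining = whole_s * (1.0 - goal_exceed_ratio)
    quantile_loss_ratio = None
    # Walk the Full-Length Trials sorted by increasing loss ratio.
    for duration, loss_ratio in sorted(full, key=lambda t: t[1]):
        if quantile_loss_ratio is None or remaining > 0.0:
            quantile_loss_ratio = loss_ratio
            remaining -= duration
        else:
            break
    else:
        if remaining > 0.0:
            quantile_loss_ratio = 1.0
    return intended_load * (1.0 - quantile_loss_ratio)

# All 122 Trial Results at this Load (code 60x1s0l+60x1s1l+1x60s.1l+1x60s0l):
results = [(1.0, 0.0)] * 60 + [(1.0, 0.01)] * 60 \
    + [(60.0, 0.001), (60.0, 0.0)]
print(conditional_throughput(results, 60.0, 120.0, 0.5, 1e6))  # TST009
print(conditional_throughput(results, 1.0, 120.0, 0.5, 1e6))   # "1s final"
print(conditional_throughput(results, 60.0, 60.0, 0.2, 1e6))   # "20% exceed"
```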
+
+Authors' Addresses
+
+ Maciek Konstantynowicz
+ Cisco Systems
+
+
+ Vratko Polak
+ Cisco Systems
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Konstantynowicz & Polak Expires 6 March 2026 [Page 78]
--- /dev/null
+<?xml version="1.0" encoding="us-ascii"?>
+ <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
+ <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 3.1.2) -->
+
+
+<!DOCTYPE rfc [
+ <!ENTITY nbsp " ">
+ <!ENTITY zwsp "​">
+ <!ENTITY nbhy "‑">
+ <!ENTITY wj "⁠">
+
+<!ENTITY RFC1242 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.1242.xml">
+<!ENTITY RFC2119 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2119.xml">
+<!ENTITY RFC2285 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2285.xml">
+<!ENTITY RFC2544 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2544.xml">
+<!ENTITY RFC8174 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8174.xml">
+<!ENTITY RFC5180 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.5180.xml">
+<!ENTITY RFC6349 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6349.xml">
+<!ENTITY RFC6985 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.6985.xml">
+<!ENTITY RFC8219 SYSTEM "https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8219.xml">
+]>
+
+
+<rfc ipr="trust200902" docName="draft-ietf-bmwg-mlrsearch-12" category="info" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true">
+ <front>
+ <title abbrev="MLRsearch">Multiple Loss Ratio Search</title>
+
+ <author initials="M." surname="Konstantynowicz" fullname="Maciek Konstantynowicz">
+ <organization>Cisco Systems</organization>
+ <address>
+ </address>
+ </author>
+ <author initials="V." surname="Polak" fullname="Vratko Polak">
+ <organization>Cisco Systems</organization>
+ <address>
+ </address>
+ </author>
+
+ <date year="2025" month="September" day="02"/>
+
+ <area>ops</area>
+ <workgroup>Benchmarking Working Group</workgroup>
+ <keyword>Internet-Draft</keyword>
+
+ <abstract>
+
+
+<?line 108?>
+
+<t>This document specifies extensions to "Benchmarking Methodology for
+Network Interconnect Devices" (RFC 2544) throughput search by
+defining a new methodology called Multiple Loss Ratio search
+(MLRsearch). MLRsearch aims to minimize search duration,
+support multiple loss ratio searches, and improve result repeatability
+and comparability.</t>
+
+
+
+<t>MLRsearch is motivated by the pressing need to address the challenges of
+evaluating and testing the various data plane solutions, especially in
+software-based networking systems based on Commercial Off-the-Shelf
+(COTS) CPU hardware vs purpose-built ASIC / NPU / FPGA hardware.</t>
+
+
+
+
+
+ </abstract>
+
+
+
+ </front>
+
+ <middle>
+
+
+<?line 174?>
+
+
+<section anchor="introduction"><name>Introduction</name>
+
+<t>This document describes the Multiple Loss Ratio search
+(MLRsearch) methodology, optimized for determining data plane
+throughput in software-based networking functions running on commodity systems with
+x86/ARM CPUs (vs purpose-built ASIC / NPU / FPGA). Such network
+functions can be deployed on a dedicated physical appliance (e.g., a
+standalone hardware device) or as a virtual appliance (e.g., a Virtual
+Network Function running on shared servers in the compute cloud).</t>
+
+<section anchor="purpose"><name>Purpose</name>
+
+
+<t>The purpose of this document is to describe the Multiple Loss Ratio search
+(MLRsearch) methodology, optimized for determining
+data plane throughput in software-based networking devices and functions.</t>
+
+
+<t>Applying the vanilla throughput binary search,
+as specified for example in <xref target="TST009"></xref> and <xref target="RFC2544"></xref>,
+to software devices under test (DUTs) results in several problems:</t>
+
+
+
+<t><list style="symbols">
+ <t>Binary search takes a long time, as most trials are done far from
+the eventually found throughput.</t>
+ <t>The required final trial duration and pauses between trials
+prolong the overall search duration.</t>
+ <t>Software DUTs show noisy trial results,
+leading to a big spread of possible discovered throughput values.</t>
+ <t>Throughput requires a loss of exactly zero frames, but industry best practices
+frequently allow for a low but non-zero loss tolerance (<xref target="Y.1564"></xref>, test-equipment manuals).</t>
+ <t>The definition of throughput is not clear when trial results are inconsistent
+(e.g., when successive trials at the same - or even a higher - offered
+load yield different loss ratios, the classical <xref target="RFC1242"></xref> / <xref target="RFC2544"></xref>
+throughput metric can no longer be pinned to a single, unambiguous
+value).</t>
+</list></t>
+
+
+
+
+<t>To address these problems,
+early MLRsearch implementations employed the following enhancements:</t>
+
+<t><list style="numbers" type="1">
+ <t>Allow multiple short trials instead of one big trial per load.
+ <list style="symbols">
+ <t>Optionally, tolerate a percentage of trial results with higher loss.</t>
+ </list></t>
+ <t>Allow searching for multiple Search Goals, with differing loss ratios.
+ <list style="symbols">
+ <t>Any trial result can affect each Search Goal in principle.</t>
+ </list></t>
+ <t>Insert multiple coarse targets for each Search Goal; earlier targets
+need to spend less time on trials.
+ <list style="symbols">
+ <t>Earlier targets also aim for lesser precision.</t>
+ <t>Use Forwarding Rate (FR) at Maximum Offered Load (FRMOL), as defined
+in Section 3.6.2 of <xref target="RFC2285"></xref>, to initialize bounds.</t>
+ </list></t>
+ <t>Be careful when dealing with inconsistent trial results.
+ <list style="symbols">
+ <t>Reported throughput is smaller than the smallest load with high loss.</t>
+ <t>Smaller load candidates are measured first.</t>
+ </list></t>
+ <t>Apply several time-saving load selection heuristics that deliberately
+prevent the bounds from narrowing unnecessarily.</t>
+</list></t>
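<t>As an informal illustration of enhancements 1 and 2 only (not part of the
specification; the function names, parameters, and the simulated Measurer
below are invented for this sketch), a load can be evaluated by several
short trials, tolerating a fraction of higher-loss results:</t>

```python
import random

def measure_trial(load):
    """Stand-in Measurer: returns a Trial Loss Ratio for one short trial.

    A real Measurer drives a traffic generator; here a noisy SUT is
    simulated whose loss grows once the load exceeds 8.0 Mfps."""
    noise = random.uniform(0.0, 0.002)
    return max(0.0, (load - 8.0) / load) + noise

def load_meets_goal(load, goal_loss_ratio, trials=6,
                    tolerated_high_loss_fraction=0.5):
    """Enhancement 1: several short trials per load; the load still
    qualifies if at most a tolerated fraction of trial results
    exceed the Goal Loss Ratio."""
    results = [measure_trial(load) for _ in range(trials)]
    high_loss = sum(1 for r in results if r > goal_loss_ratio)
    return high_loss / trials <= tolerated_high_loss_fraction
```

<t>With multiple Search Goals (enhancement 2), the same trial results would
be classified once per goal, each goal using its own loss ratio and
tolerance values.</t>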
+
+
+
+
+<t>Enhancements 1, 2, and partly 4 are formalized as the MLRsearch Specification
+within this document; other implementation details are out of its scope.</t>
+
+
+
+<t>The remaining enhancements are treated as implementation details,
+thus achieving high comparability without limiting future improvements.</t>
+
+<t>MLRsearch configuration
+supports both conservative settings and aggressive settings.
+Conservative enough settings lead to results
+unconditionally compliant with <xref target="RFC2544"></xref>,
+but without much improvement on search duration and repeatability - see
+<xref target="mlrsearch-compliant-with-rfc-2544">MLRsearch Compliant with RFC 2544</xref>.
+Conversely, aggressive settings lead to shorter search durations
+and better repeatability, but the results are not compliant with <xref target="RFC2544"></xref>.
+Exact settings are not specified, but see the discussion in
+<xref target="overview-of-rfc-2544-problems">Overview of RFC 2544 Problems</xref>
+for the impact of different settings on result quality.</t>
+
+
+
+
+<t>This document does not change or obsolete any part of <xref target="RFC2544"></xref>.</t>
+
+
+
+
+</section>
+<section anchor="positioning-within-bmwg-methodologies"><name>Positioning within BMWG Methodologies</name>
+
+<t>The Benchmarking Methodology Working Group (BMWG) produces recommendations (RFCs)
+that describe various benchmarking methodologies for use in a controlled laboratory environment.
+A large number of these benchmarks are based on the terminology from <xref target="RFC1242"></xref>
+and the foundational methodology from <xref target="RFC2544"></xref>.
+A common pattern has emerged where BMWG documents reference the methodology of <xref target="RFC2544"></xref>
+and augment it with specific requirements for testing particular network systems or protocols,
+without modifying the core benchmark definitions.</t>
+
+<t>While BMWG documents are formally recommendations,
+they are widely treated as industry norms to ensure the comparability of results between different labs.
+The set of benchmarks defined in <xref target="RFC2544"></xref>, in particular,
+became a de facto standard for performance testing.
+In this context, the MLRsearch Specification formally defines a new
+class of benchmarks that fits within the wider <xref target="RFC2544"></xref> framework
+(see <xref target="scope">Scope </xref>).</t>
+
+<t>A primary consideration in the design of MLRsearch is the trade-off
+between configurability and comparability. The methodology's flexibility,
+especially the ability to define various sets of Search Goals,
+supporting both single-goal and multiple-goal benchmarks in a unified way,
+is powerful for detailed characterization and internal testing.
+However, this same flexibility is detrimental to inter-lab comparability
+unless a specific, common set of Search Goals is agreed upon.</t>
+
+<t>Therefore, MLRsearch should be seen neither as a direct extension of,
+nor as a replacement for, the <xref target="RFC2544"></xref> Throughput benchmark.
+Instead, this document provides a foundational methodology
+that future BMWG documents can use to define new, specific, and comparable benchmarks
+by mandating particular Search Goal configurations.
+For operators of existing test procedures, it is worth noting
+that many test setups measuring <xref target="RFC2544"></xref> Throughput
+can be adapted to produce results compliant with the MLRsearch Specification,
+often without affecting Trials,
+merely by augmenting the content of the final test report.</t>
+
+</section>
+</section>
+<section anchor="overview-of-rfc-2544-problems"><name>Overview of RFC 2544 Problems</name>
+
+<t>This section describes the problems affecting the usability
+of various performance testing methodologies,
+mainly the binary search for throughput unconditionally compliant with <xref target="RFC2544"></xref>.</t>
+
+<section anchor="long-search-duration"><name>Long Search Duration</name>
+
+<t>The proliferation of software DUTs, with frequent software updates and a
+number of different frame processing modes and configurations,
+has increased both the number of performance tests
+required to verify the DUT update and the frequency of running those tests.
+This makes the overall test execution time even more important than before.</t>
+
+
+<t>The throughput definition per <xref target="RFC2544"></xref> restricts the potential
+for time-efficiency improvements.
+The bisection method, when used in a manner unconditionally compliant
+with <xref target="RFC2544"></xref>, is excessively slow due to two main factors.</t>
+
+<t>Firstly, a significant amount of time is spent on trials
+with loads that, in retrospect, are far from the final determined throughput.</t>
+
+
+
+
+
+<t>Secondly, <xref target="RFC2544"></xref> does not specify any stopping condition for
+the throughput search, so users of testing equipment implementing the
+procedure already have access to a limited trade-off
+between search duration and achieved precision.
+However, each halving of the search interval costs a full 60-second trial,
+so each such trial only doubles the precision.</t>

+<t>As such, not many trials can be removed without a substantial loss of precision.</t>
+
+<t>For reference, here is a brief <xref target="RFC2544"></xref> throughput binary
+(bisection) search reminder, based on Sections 24 and 26 of <xref target="RFC2544"></xref>:</t>
+
+<t><list style="symbols">
+ <t>Set Max = line-rate and Min = a proven loss-free load.</t>
+ <t>Run a single 60-s trial at the midpoint.</t>
+ <t>Zero loss -> midpoint becomes the new Min; any loss -> the new Max.</t>
+ <t>Repeat until the Max-Min gap meets the desired precision, then report
+the highest zero-loss rate for every mandatory frame size.</t>
+</list></t>
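<t>The reminder above corresponds to the following sketch (informative only;
the trial function is a placeholder for a real 60-second trial):</t>

```python
def rfc2544_throughput(run_trial, line_rate, min_rate, precision):
    """Plain RFC 2544 binary search sketch.

    run_trial(load) returns the number of frames lost in one
    60-second trial at the given offered load.
    min_rate must be a load already proven to be loss-free."""
    low, high = min_rate, line_rate
    while high - low > precision:
        mid = (low + high) / 2.0
        if run_trial(mid) == 0:
            low = mid    # zero loss: midpoint becomes the new Min
        else:
            high = mid   # any loss: midpoint becomes the new Max
    return low           # the highest zero-loss rate found

# Usage with a deterministic placeholder SUT that is loss-free up to 7.3:
result = rfc2544_throughput(lambda load: 0 if load <= 7.3 else 1,
                            line_rate=10.0, min_rate=0.0, precision=0.01)
```

<t>Each iteration halves the Max-Min gap at the cost of one full trial,
which is the root of the duration problem discussed above.</t>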
+
+</section>
+<section anchor="dut-in-sut"><name>DUT in SUT</name>
+
+<t><xref target="RFC2285"></xref> defines:</t>
+
+<t>DUT as:</t>
+
+<t><list style="symbols">
+ <t>The network frame forwarding device to which stimulus is offered and
+response measured (Section 3.1.1 of <xref target="RFC2285"></xref>).</t>
+</list></t>
+
+
+<t>SUT as:</t>
+
+<t><list style="symbols">
+ <t>The collective set of network devices as a single entity to which
+stimulus is offered and response measured (Section 3.1.2 of <xref target="RFC2285"></xref>).</t>
+</list></t>
+
+<t>Section 19 of <xref target="RFC2544"></xref> specifies a test setup with an external tester
+stimulating the networking system, treating it either as a single
+Device Under Test (DUT), or as a system of devices, a System Under
+Test (SUT).</t>
+
+<t>For software-based data-plane forwarding running on commodity x86/ARM
+CPUs, the SUT comprises not only the forwarding application itself, the
+DUT, but the entire execution environment: host hardware, firmware and
+kernel/hypervisor services, as well as any other software workloads
+that share the same CPUs, memory and I/O resources.</t>
+
+
+<t>Given that a SUT is a shared multi-tenant environment,
+the DUT might inadvertently
+experience interference from the operating system
+or other software operating on the same server.</t>
+
+
+<t>Some of this interference can be mitigated.
+For instance, in multi-core CPU systems, pinning DUT program threads to
+specific CPU cores
+and isolating those cores can prevent context switching.</t>
+
+
+<t>Despite taking all feasible precautions, some adverse effects may still impact
+the DUT's network performance.
+In this document, these effects are collectively
+referred to as SUT noise, even if the effects are not as unpredictable
+as what other engineering disciplines call noise.</t>
+
+<t>A DUT can also exhibit fluctuating performance itself,
+for reasons not related to the rest of SUT. For example, this can be
+due to pauses in execution as needed for internal stateful processing.
+In many cases this may be an expected per-design behavior,
+as it would be observable even in a hypothetical scenario
+where all sources of SUT noise are eliminated.
+Such behavior affects trial results in a way similar to SUT noise.
+As the two phenomena are hard to distinguish,
+in this document the term 'noise' is used to encompass
+both the internal performance fluctuations of the DUT
+and the genuine noise of the SUT.</t>
+
+<t>A simple model of SUT performance consists of an idealized noiseless performance,
+and additional noise effects.
+For a specific SUT, the noiseless performance is assumed to be constant,
+with all observed performance variations being attributed to noise.
+The impact of the noise can vary in time, sometimes wildly,
+even within a single trial.
+The noise can sometimes be negligible, but frequently
+it lowers the SUT performance as observed in trial results.</t>
+
+<t>In this simple model, a SUT does not have a single performance value; it has a spectrum.
+One end of the spectrum is the idealized noiseless performance value,
+the other end can be called a noiseful performance.
+In practice, trial results close to the noiseful end of the spectrum
+happen only rarely.
+The worse a possible performance value is, the more rarely it is seen in a trial.
+Therefore, the extreme noiseful end of the SUT spectrum is not observable
+among trial results.</t>
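<t>The spectrum model can be illustrated numerically (all constants below
are invented for illustration and imply nothing about real SUTs):
frequent minor noise keeps trial results slightly below the noiseless
performance, while rare major noise events produce outliers toward the
noiseful end that are seldom observed:</t>

```python
import random

def simulated_trial_rate(noiseless_rate=10.0):
    """One simulated trial result under the simple SUT model:
    a constant noiseless performance degraded by noise."""
    minor_noise = random.uniform(0.0, 0.05)               # almost always present
    major_noise = 3.0 if random.random() < 0.01 else 0.0  # rare big event
    return noiseless_rate - minor_noise - major_noise

random.seed(42)  # fixed seed so the sketch is reproducible
rates = sorted(simulated_trial_rate() for _ in range(1000))
# The bulk of the sorted results sits just under the noiseless rate;
# only a small tail approaches the noiseful end of the spectrum.
```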
+
+<t>Furthermore, the extreme noiseless end of the SUT spectrum is unlikely
+to be observable, this time because minor noise events almost always
+occur during each trial, nudging the measured performance slightly
+below the theoretical maximum.</t>
+
+
+<t>Unless specified otherwise, this document's focus is
+on the potentially observable ends of the SUT performance spectrum,
+as opposed to the extreme ones.</t>
+
+<t>When focusing on the DUT, the benchmarking effort should ideally aim
+to eliminate only the SUT noise from SUT measurements.
+However, this is currently not feasible in practice,
+as there are no models realistic enough to be capable
+of distinguishing SUT noise from DUT fluctuations
+(based on the available literature at the time of writing).</t>
+
+
+<t>Provided the SUT execution environment and any co-resident workloads place
+only negligible demands on shared SUT resources, so that
+the DUT remains the principal performance limiter,
+the DUT's ideal noiseless performance is defined
+as the noiseless end of the SUT performance spectrum.</t>
+
+
+
+
+<t>Note that by this definition, DUT noiseless performance
+also minimizes the impact of DUT fluctuations, as much as realistically possible
+for a given trial duration.</t>
+
+<t>The MLRsearch methodology aims to solve the DUT in SUT problem
+by estimating the noiseless end of the SUT performance spectrum
+using a limited number of trial results.</t>
+
+<t>Improvements to the throughput search algorithm, aimed at better dealing
+with software networking SUT and DUT setups, should adopt methods that
+explicitly model SUT-generated noise, enabling the derivation of surrogate
+metrics that approximate (serve as proxies for) the DUT noiseless performance
+across a range of SUT noise-tolerance levels.</t>
+
+
+</section>
+<section anchor="repeatability-and-comparability"><name>Repeatability and Comparability</name>
+
+<t><xref target="RFC2544"></xref> does not suggest repeating the throughput search. Also, note that
+from a single discovered throughput value alone,
+it cannot be determined how repeatable that value is.
+Unsatisfactory repeatability then leads to unacceptable comparability,
+as different benchmarking teams may obtain varying throughput values
+for the same SUT, exceeding the expected differences from search precision.
+Repeatability is important also when the test procedure is kept the same,
+but SUT is varied in small ways. For example, during development
+of software-based DUTs, repeatability is needed to detect small regressions.</t>
+
+<t><xref target="RFC2544"></xref> throughput requirements (a 60-second trial duration and
+no tolerance of even a single frame loss) affect the throughput result as follows:</t>
+
+<t>The SUT behavior close to the noiseful end of its performance spectrum
+consists of rare occasions of significantly low performance,
+but the long trial duration makes those occasions not so rare on the trial level.
+Therefore, the binary search results tend to wander away from the noiseless end
+of SUT performance spectrum, more frequently and more widely than shorter
+trials would, thus causing unacceptable throughput repeatability.</t>
+
+<t>The repeatability problem can be better addressed by defining a search procedure
+that identifies a consistent level of performance,
+even if it does not meet the strict definition of throughput in <xref target="RFC2544"></xref>.</t>
+
+<t>According to the SUT performance spectrum model, better repeatability
+will be at the noiseless end of the spectrum.
+Therefore, solutions to the DUT in SUT problem
+will help also with the repeatability problem.</t>
+
+<t>Conversely, any alteration to <xref target="RFC2544"></xref> throughput search
+that improves repeatability should be considered
+less dependent on the SUT noise.</t>
+
+<t>An alternative option is to simply run a search multiple times, and
+report some statistics (e.g., average and standard deviation, and/or
+percentiles like p95).</t>
+
+
+<t>This can be used for a subset of tests deemed more important,
+but it makes the search duration problem even more pronounced.</t>
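<t>For example (the values below are invented), statistics over ten repeated
search results could be computed as follows using Python's standard library;
the exact percentile method used should also be reported:</t>

```python
import statistics

# Hypothetical throughput results (Mfps) from 10 repeated searches.
results = [14.8, 15.1, 14.9, 13.2, 15.0, 14.7, 15.2, 14.6, 14.9, 15.0]

average = statistics.mean(results)
stdev = statistics.stdev(results)   # sample standard deviation
# p95 via the inclusive quantile method (linear interpolation).
p95 = statistics.quantiles(results, n=20, method='inclusive')[-1]
```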
+
+</section>
+<section anchor="throughput-with-non-zero-loss"><name>Throughput with Non-Zero Loss</name>
+
+<dl>
+ <dt>Section 3.17 of <xref target="RFC1242"></xref> defines throughput as:</dt>
+ <dd>
+ <t>The maximum rate at which none of the offered frames
+are dropped by the device.</t>
+ </dd>
+ <dt>Then, it says:</dt>
+ <dd>
+ <t>Since even the loss of one frame in a
+data stream can cause significant delays while
+waiting for the higher-level protocols to time out,
+it is useful to know the actual maximum data
+rate that the device can support.</t>
+ </dd>
+</dl>
+
+<t>However, many benchmarking teams accept a low,
+non-zero loss ratio as the goal for their load search.</t>
+
+<t>Motivations are many:</t>
+
+<t><list style="symbols">
+ <t>Networking protocols tolerate frame loss better,
+compared to the time when <xref target="RFC1242"></xref> and <xref target="RFC2544"></xref> were specified.</t>
+ <t>Increased link speeds require trials sending way more frames within the same duration,
+increasing the chance of a small SUT performance fluctuation
+being enough to cause frame loss.</t>
+ <t>Because noise-related drops usually arrive in small bursts, their
+impact on the trial's overall frame loss ratio is diluted by the
+longer intervals in which the SUT operates close to its noiseless
+performance; consequently, the averaged Trial Loss Ratio can still
+end up below the specified Goal Loss Ratio value.</t>
+ <t>If an approximation of the SUT noise impact on the Trial Loss Ratio is known,
+it can be set as the Goal Loss Ratio (see definitions of
+Trial and Goal terms in <xref target="trial-terms">Trial Terms</xref> and <xref target="goal-terms">Goal Terms</xref>).</t>
+ <t>For more information, see an earlier draft <xref target="Lencze-Shima"></xref> (Section 5)
+and references there.</t>
+</list></t>
+
+<t>Regardless of the validity of all similar motivations,
+support for non-zero loss goals makes a
+search algorithm more user-friendly.
+<xref target="RFC2544"></xref> throughput is not user-friendly in this regard.</t>
+
+
+<t>Furthermore, allowing users to specify multiple loss ratio values,
+and enabling a single search to find all relevant bounds,
+significantly enhances the usefulness of the search algorithm.</t>
+
+<t>Searching for multiple Search Goals also helps to describe the SUT performance
+spectrum better than the result of a single Search Goal.
+For example, the repeated wide gap between zero and non-zero loss loads
+indicates the noise has a large impact on the observed performance,
+which is not evident from a single goal load search procedure result.</t>
+
+<t>It is easy to modify the vanilla bisection to find a lower bound
+for the load that satisfies a non-zero Goal Loss Ratio.
+But it is not that obvious how to search for multiple goals at once,
+hence the support for multiple Search Goals remains a problem.</t>
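<t>One conceivable way to let every trial result contribute to every Search
Goal (a sketch only; the names are illustrative and real bound selection
is more involved) is to classify each measured load separately per goal:</t>

```python
def classify_loads(trial_results, goal_loss_ratios):
    """For each Goal Loss Ratio, split the measured loads into
    lower bounds (loss within the goal) and upper bounds (loss too high).

    trial_results: dict mapping offered load -> observed Trial Loss Ratio.
    Returns: dict mapping goal -> (best lower bound, worst upper bound),
    with None where no such bound has been measured yet."""
    bounds = {}
    for goal in goal_loss_ratios:
        lowers = [load for load, loss in trial_results.items() if loss <= goal]
        uppers = [load for load, loss in trial_results.items() if loss > goal]
        bounds[goal] = (max(lowers, default=None), min(uppers, default=None))
    return bounds
```

<t>For instance, trials at loads 1.0, 2.0, and 3.0 with loss ratios 0.0,
0.002, and 0.1 yield bounds (1.0, 2.0) for a zero-loss goal and
(2.0, 3.0) for a 0.005 loss ratio goal, from the very same trial results.</t>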
+
+<t>At the time of writing there does not seem to be a consensus in the industry
+on which ratio value is the best.
+For users, the performance of higher protocol layers is important, for
+example the goodput of a TCP connection (TCP throughput, <xref target="RFC6349"></xref>), but the relationship
+between goodput and loss ratio is not simple. Refer to
+<xref target="Lencze-Kovacs-Shima"></xref> for examples of various corner cases,
+Section 3 of <xref target="RFC6349"></xref> for loss ratios acceptable for an accurate
+measurement of TCP throughput, and <xref target="Ott-Mathis-Semke-Mahdavi"></xref> for
+models and calculations of TCP performance in presence of packet loss.</t>
+
+
+</section>
+<section anchor="inconsistent-trial-results"><name>Inconsistent Trial Results</name>
+
+<t>While performing throughput search by executing a sequence of
+measurement trials, there is a risk of encountering inconsistencies
+between trial results.</t>
+
+<t>Examples include, but are not limited to:</t>
+
+<t><list style="symbols">
+ <t>A trial at the same load (same or different trial duration) results
+in a different Trial Loss Ratio.</t>
+ <t>A trial at a larger load (same or different trial duration) results
+in a lower Trial Loss Ratio.</t>
+</list></t>
+
+<t>The plain bisection never encounters inconsistent trials.
+But <xref target="RFC2544"></xref> hints about the possibility of inconsistent trial results,
+in two places in its text.
+The first place is Section 24 of <xref target="RFC2544"></xref>,
+where full trial durations are required,
+presumably because they can be inconsistent with the results
+from short trial durations.
+The second place is Section 26.3 of <xref target="RFC2544"></xref>,
+where two successive zero-loss trials
+are recommended, presumably because after one zero-loss trial
+there can be a subsequent inconsistent non-zero-loss trial.</t>
+
+
+
+<t>A robust throughput search algorithm needs to decide how to continue
+the search in the presence of such inconsistencies.
+Definitions of throughput in <xref target="RFC1242"></xref> and <xref target="RFC2544"></xref> are not specific enough
+to imply a unique way of handling such inconsistencies.</t>
+
+<t>Ideally, there will be a definition of a new quantity which both generalizes
+throughput for non-zero Goal Loss Ratio values
+(and other possible repeatability enhancements) and is precise enough
+to force a specific way to resolve trial result inconsistencies.
+But until such a definition is agreed upon, the correct way to handle
+inconsistent trial results remains an open problem.</t>
+
+<t>Relevant Lower Bound is the MLRsearch term that addresses this problem.</t>
+
+</section>
+</section>
+<section anchor="requirements-language"><name>Requirements Language</name>
+
+
+<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
+"SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL"
+in this document are to be interpreted as described in BCP 14, <xref target="RFC2119"></xref>
+and <xref target="RFC8174"></xref> when, and only when, they appear in all capitals, as shown here.</t>
+
+
+<t>This document is categorized as an Informational RFC.
+While it does not mandate the adoption of the MLRsearch methodology,
+it uses the normative language of BCP 14 to provide an unambiguous specification.
+This ensures that if a test procedure or test report claims compliance with the MLRsearch Specification,
+it MUST adhere to all the absolute requirements defined herein.
+The use of normative language is intended to promote repeatable and comparable results
+among those who choose to implement this methodology.</t>
+
+
+</section>
+<section anchor="mlrsearch-specification"><name>MLRsearch Specification</name>
+
+<t>This chapter provides all technical definitions
+needed for evaluating whether a particular test procedure
+complies with MLRsearch Specification.</t>
+
+<t>Some terms used in the specification are capitalized.
+It is just a stylistic choice for this document,
+reminding the reader this term is introduced, defined or explained
+elsewhere in the document. Lowercase variants are equally valid.</t>
+
+<t>This document does not separate terminology from methodology. Terms are
+fully specified and discussed in their own subsections, under sections
+titled "Terms". This way, the list of terms is visible in table of
+contents.</t>
+
+
+
+
+<t>Each per-term subsection contains a short <em>Definition</em> paragraph
+containing a minimal definition and all strict requirements, followed
+by <em>Discussion</em> paragraphs focusing on important consequences and
+recommendations. Requirements about how other components can use the
+defined quantity are also included in the discussion.</t>
+
+
+
+<section anchor="scope"><name>Scope</name>
+
+<t>This document specifies the Multiple Loss Ratio search (MLRsearch) methodology.
+The MLRsearch Specification details a new class of benchmarks
+by listing all terminology definitions and methodology requirements.
+The definitions support "multi-goal" benchmarks, with "single-goal" as a subset.</t>
+
+<t>The normative scope of this specification includes:</t>
+
+<t><list style="symbols">
+ <t>The terminology for all required quantities and their attributes.</t>
+ <t>An abstract architecture consisting of functional components
+(Manager, Controller, Measurer) and the requirements for their inputs and outputs.</t>
+ <t>The required structure and attributes of the Controller Input,
+including one or more Search Goal instances.</t>
+ <t>The required logic for Load Classification, which determines whether a given Trial Load
+qualifies as a Lower Bound or an Upper Bound for a Search Goal.</t>
+ <t>The required structure and attributes of the Controller Output,
+including a Goal Result for each Search Goal.</t>
+</list></t>
+
+<section anchor="relationship-to-rfc-2544"><name>Relationship to RFC 2544</name>
+
+<t>MLRsearch Specification is an independent methodology
+and does not change or obsolete any part of <xref target="RFC2544"></xref>.</t>
+
+<t>This specification permits deviations from the Trial procedure
+as described in <xref target="RFC2544"></xref>. Any deviation from the <xref target="RFC2544"></xref> procedure
+must be documented explicitly in the Test Report,
+and such variations remain outside the scope of the original <xref target="RFC2544"></xref> benchmarks.</t>
+
+<t>A specific single-goal MLRsearch benchmark can be configured
+to be compliant with <xref target="RFC2544"></xref> Throughput,
+and most procedures reporting <xref target="RFC2544"></xref> Throughput
+can be adapted to also satisfy the MLRsearch requirements for a specific Search Goal.</t>
+
+</section>
+<section anchor="applicability-of-other-specifications"><name>Applicability of Other Specifications</name>
+
+<t>Methodology extensions from other BMWG documents that specify details
+for testing particular DUTs, configurations, or protocols
+(e.g., by defining a particular Traffic Profile) are considered orthogonal
+to MLRsearch and are applicable to a benchmark conducted using MLRsearch methodology.</t>
+
+</section>
+<section anchor="out-of-scope"><name>Out of Scope</name>
+
+<t>The following aspects are explicitly out of the normative scope of this document:</t>
+
+<t><list style="symbols">
+ <t>This specification does not mandate or recommend any single,
+universal Search Goal configuration for all use cases.
+The selection of Search Goal parameters is left
+to the operator of the test procedure or may be defined by future specifications.</t>
+ <t>The internal heuristics or algorithms used by the Controller to select Trial Input values
+(e.g., the load selection strategy) are considered implementation details.</t>
+ <t>The potential for, and the effects of, interference between different Search Goal instances
+within a multiple-goal search are considered outside the normative scope of this specification.</t>
+</list></t>
+
+</section>
+</section>
+<section anchor="architecture-overview"><name>Architecture Overview</name>
+
+
+<t>Although the normative text references only terminology that has already
+been introduced, explanatory passages beside it sometimes benefit from
+terms that are defined later in the document. To keep the initial
+read-through clear, this informative section offers a concise, top-down
+sketch of the complete MLRsearch architecture.</t>
+
+<t>The architecture is modeled as a set of abstract, interacting
+components. Information exchange between components is expressed in an
+imperative-programming style: one component "calls" another, supplying
+inputs (arguments) and receiving outputs (return values). This notation
+is purely conceptual; actual implementations need not exchange explicit
+messages. When the text contrasts alternative behaviors, it refers to
+different implementations of the same component.</t>
+
+
+<t>A test procedure is considered compliant with the MLRsearch
+Specification if it can be conceptually decomposed into the abstract
+components defined herein, and each component satisfies the
+requirements defined for its corresponding MLRsearch element.</t>
+
+<t>The Measurer component is tasked to perform Trials,
+the Controller component is tasked to select Trial Durations and Loads,
+the Manager component is tasked to pre-configure involved entities
+and to produce the Test Report.
+The Test Report explicitly states Search Goals (as Controller Input)
+and corresponding Goal Results (Controller Output).</t>
+
+<t>This constitutes one benchmark (single-goal or multi-goal).
+Repeated or slightly differing benchmarks are realized
+by calling Controller once for each benchmark.</t>
+
+<t>The Manager calls the Controller once,
+and the Controller then invokes the Measurer repeatedly
+until the Controller decides it has enough information to return its outputs.</t>
+
+
+
+<t>The part during which the Controller invokes the Measurer is termed the
+Search. Any work the Manager performs, either before invoking the
+Controller or after the Controller returns, falls outside the scope of the
+Search.</t>
+
+<t>MLRsearch Specification prescribes Regular Search Results and recommends
+corresponding search completion conditions.</t>
+
+
+<t>Irregular Search Results are also allowed;
+they have different requirements, and their corresponding stopping conditions are out of scope.</t>
+
+<t>Search Results are based on Load Classification. When measured enough,
+a chosen Load can either achieve or fail each Search Goal
+(separately), thus becoming a Lower Bound or an Upper Bound for that
+Search Goal.</t>
+
+<t>When the Relevant Lower Bound is close enough to the Relevant Upper Bound
+according to the Goal Width, the Regular Goal Result is found.
+The Search stops when all Regular Goal Results are found,
+or when some Search Goals are proven to have only Irregular Goal Results.</t>
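<t>As an informal sketch (treating Goal Width as a relative width compared
to the upper bound, which is only one of several possible conventions;
the names are invented), the completion condition could be expressed as:</t>

```python
def goal_result_found(lower_bound, upper_bound, goal_width):
    """A Regular Goal Result exists once the Relevant Lower Bound and
    the Relevant Upper Bound are within the Goal Width of each other."""
    if lower_bound is None or upper_bound is None:
        return False
    return (upper_bound - lower_bound) / upper_bound <= goal_width

def search_complete(bounds_per_goal, width_per_goal):
    """The Search stops when every goal has its Regular Goal Result
    (ignoring, for simplicity, goals proven to end irregularly)."""
    return all(goal_result_found(lo, up, width_per_goal[goal])
               for goal, (lo, up) in bounds_per_goal.items())
```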
+
+
+
+<section anchor="test-report"><name>Test Report</name>
+
+<t>A primary responsibility of the Manager is to produce a Test Report,
+which serves as the final and formal output of the test procedure.</t>
+
+<t>This document does not provide a single, complete, normative definition
+for the structure of the Test Report. For example, a Test Report may contain
+results for a single benchmark, or it could aggregate results of many benchmarks.</t>
+
+<t>Instead, normative requirements for the content of the Test Report
+are specified throughout this document in conjunction
+with the definitions of the quantities and procedures to which they apply.
+Readers should note that any clause requiring a value to be "reported"
+or "stated in the test report" constitutes a normative requirement
+on the content of this final artifact.</t>
+
+<t>Even where not stated explicitly, the "Reporting format"
+paragraphs in <xref target="RFC2544"></xref> sections are still requirements on Test Report
+if they apply to a MLRsearch benchmark.</t>
+
+</section>
+<section anchor="behavior-correctness"><name>Behavior Correctness</name>
+
+<t>MLRsearch Specification by itself does not guarantee that
+the Search ends in finite time, as the freedom the Controller has
+for Load selection also allows for clearly deficient choices.</t>
+
+
+<t>For deeper insights on these matters, refer to <xref target="FDio-CSIT-MLRsearch"></xref>.</t>
+
+<t>The primary MLRsearch implementation, used as the prototype
+for this specification, is <xref target="PyPI-MLRsearch"></xref>.</t>
+
+</section>
+</section>
+<section anchor="quantities"><name>Quantities</name>
+
+<t>MLRsearch Specification
+uses a number of specific quantities,
+some of which can be expressed in several different units.</t>
+
+
+<t>In general, MLRsearch Specification does not require particular units to be used,
+but it is REQUIRED for the test report to state all the units.
+For example, ratio quantities can be dimensionless numbers between zero and one,
+but may be expressed as percentages instead.</t>
+
+<t>For convenience, a group of quantities can be treated as a composite quantity.
+One constituent of a composite quantity is called an attribute.
+A group of attribute values is called an instance of that composite quantity.</t>
+
+
+<t>Some attributes may depend on others and can be calculated from other
+attributes. Such quantities are called derived quantities.</t>
+
+<section anchor="current-and-final-values"><name>Current and Final Values</name>
+
+<t>Some quantities are defined in a way that makes it possible to compute their
+values in the middle of a Search. Other quantities are specified so
+that their values can be computed only after a Search ends. Some
+quantities are important only after a Search has ended, but their values
+are computable also before the Search ends.</t>
+
+<t>For a quantity that is computable before a Search ends,
+the adjective <strong>current</strong> is used to mark a value of that quantity
+available before the Search ends.
+When such a value is relevant for the search result, the adjective <strong>final</strong>
+is used to denote the value of that quantity at the end of the Search.</t>
+
+<t>If a time evolution of such a dynamic quantity is guided by
+configuration quantities, those adjectives can be used to distinguish
+quantities. For example, if the current value of "duration"
+(dynamic quantity) increases from "initial duration" to "final
+duration" (configuration quantities), all the quoted names denote
+separate but related quantities. As the naming suggests, the final
+value of "duration" is expected to be equal to "final duration" value.</t>
+
+</section>
+</section>
+<section anchor="existing-terms"><name>Existing Terms</name>
+
+
+<t>This specification relies on the following three documents that should
+be consulted before attempting to make use of this document:</t>
+
+<t><list style="symbols">
+ <t>"Benchmarking Terminology for Network Interconnect Devices" <xref target="RFC1242"></xref>
+contains basic term definitions.</t>
+ <t>"Benchmarking Terminology for LAN Switching Devices" <xref target="RFC2285"></xref> adds
+more terms and discussions, describing some known network
+benchmarking situations in a more precise way.</t>
+ <t>"Benchmarking Methodology for Network Interconnect Devices"
+ <xref target="RFC2544"></xref> contains discussions about terms and additional
+ methodology requirements.</t>
+</list></t>
+
+<t>Definitions of some central terms from above documents are copied and
+discussed in the following subsections.</t>
+
+
+
+<section anchor="sut"><name>SUT</name>
+
+<t>Defined in Section 3.1.2 of <xref target="RFC2285"></xref> as follows.</t>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The collective set of network devices to which stimulus is offered
+as a single entity and response measured.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>An SUT consisting of a single network device is allowed by this definition.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>In software-based networking, an SUT may comprise a multitude of
+networking applications and the entire host hardware and software
+execution environment.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>The SUT is the only entity that can be benchmarked directly,
+even though only the performance of some of its sub-components may be of interest.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="dut"><name>DUT</name>
+
+<t>Defined in Section 3.1.1 of <xref target="RFC2285"></xref> as follows.</t>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The network forwarding device
+to which stimulus is offered and response measured.</t>
+ </dd>
+</dl>
+
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Contrary to SUT, the DUT stimulus and response are frequently
+initiated and observed only indirectly, on different parts of SUT.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>DUT, as a sub-component of SUT, is only indirectly mentioned in
+MLRsearch Specification, but is of key relevance for its motivation.
+The device can represent software-based networking functions running
+on commodity x86/ARM CPUs (vs purpose-built ASIC / NPU / FPGA).</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>A well-designed SUT should have the primary DUT as its performance bottleneck.
+The ways to achieve that are outside of MLRsearch Specification scope.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="trial"><name>Trial</name>
+
+<t>A trial is the part of the test described in Section 23 of <xref target="RFC2544"></xref>.</t>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>A particular test consists of multiple trials. Each trial returns
+one piece of information, for example the loss rate at a particular
+input frame rate. Each trial consists of a number of phases:</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>a) If the DUT is a router, send the routing update to the "input"
+port and pause two seconds to be sure that the routing has settled.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>b) Send the "learning frames" to the "output" port and wait 2
+seconds to be sure that the learning has settled. Bridge learning
+frames are frames with source addresses that are the same as the
+destination addresses used by the test frames. Learning frames for
+other protocols are used to prime the address resolution tables in
+the DUT. The formats of the learning frame that should be used are
+shown in the Test Frame Formats document.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>c) Run the test trial.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>d) Wait for two seconds for any residual frames to be received.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>e) Wait for at least five seconds for the DUT to restabilize.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The traffic is sent only in phase c) and received in phases c) and d).</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Trials are the only stimuli the SUT is expected to experience during the Search.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>In some discussion paragraphs, it is useful to consider the traffic
+as sent and received by a tester, as implicitly defined
+in Section 6 of <xref target="RFC2544"></xref>.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The definition describes some traits without using capitalized key words
+to signify the strength of the requirements.
+For the purposes of the MLRsearch Specification,
+the test procedure MAY deviate from the <xref target="RFC2544"></xref> description,
+but any such deviation MUST be described explicitly in the Test Report.
+It is still RECOMMENDED not to deviate from the description,
+as any deviation weakens comparability.</t>
+ </dd>
+</dl>
+
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>An example of deviation from <xref target="RFC2544"></xref> is using shorter wait times,
+compared to those described in phases a), b), d) and e).</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>The <xref target="RFC2544"></xref> document itself seems to treat phase b)
+as covering any configuration that cannot be applied only once (by the Manager,
+before the Search starts), as some crucial SUT state could time out during the Search.
+It is RECOMMENDED to interpret the "learning frames" as
+any such time-sensitive per-trial configuration method,
+with bridge MAC learning being only one possible example.
+Appendix C.2.4.1 of <xref target="RFC2544"></xref> lists another example: ARP with a wait time of 5 seconds.</t>
+ </dd>
+</dl>
+
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Some methodologies describe recurring tests.
+If those are based on Trials, they are treated as multiple independent Trials.</t>
+ </dd>
+</dl>
+
+</section>
+</section>
+<section anchor="trial-terms"><name>Trial Terms</name>
+
+
+<t>This section defines new terms and redefines existing terms for quantities
+relevant as inputs or outputs of a Trial, as used by the Measurer component.
+This also includes any derived quantities related to the results of one Trial.</t>
+
+<section anchor="trial-duration"><name>Trial Duration</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Trial Duration is the intended duration of phase c) of a Trial.</t>
+ </dd>
+</dl>
+
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The value MUST be positive.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>While any positive real value may be provided, some Measurer
+implementations MAY limit possible values, e.g., by rounding down to
+the nearest integer in seconds. In that case, it is RECOMMENDED to make
+such restrictions known to the Controller, so that the Controller
+only uses the accepted values.</t>
+ </dd>
+</dl>
+
+
+</section>
+<section anchor="trial-load"><name>Trial Load</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Trial Load is the per-interface Intended Load for a Trial.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Trial Load is equivalent to the quantities defined
+as constant load (Section 3.4 of <xref target="RFC1242"></xref>),
+data rate (Section 14 of <xref target="RFC2544"></xref>),
+and Intended Load (Section 3.5.1 of <xref target="RFC2285"></xref>),
+in the sense that all three definitions specify that this value
+applies to one (input or output) interface.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>For specification purposes, it is assumed that this is a constant load by default,
+as specified in Section 3.4 of <xref target="RFC1242"></xref>.
+Informally, Trial Load is a single number that can "scale" any traffic pattern,
+as long as the intuition of load intended against a single interface can be applied.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>It MAY be possible to use a Trial Load value to describe non-constant traffic
+(using the average load when the traffic consists of repeated bursts of frames,
+e.g., as suggested in Section 21 of <xref target="RFC2544"></xref>).
+In the case of a non-constant load, the Test Report
+MUST explicitly describe how exactly the traffic is non-constant
+and how it reacts to the Trial Load value.
+But the rest of the MLRsearch Specification assumes that is not the case,
+to avoid discussing corner cases (e.g., which values are possible within medium limitations).</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Similarly, traffic patterns where different interfaces are subject to different loads
+MAY be described by a single Trial Load value (e.g., using the largest load among interfaces),
+but again the Test Report MUST explicitly describe how the traffic pattern
+reacts to the Trial Load value,
+and this specification does not discuss all the implications of that approach.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>In the common case of bidirectional traffic, as described in
+Section 14 (Bidirectional Traffic) of <xref target="RFC2544"></xref>,
+Trial Load is the data rate per direction, half of the aggregate data rate.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Traffic patterns where a single Trial Load does not describe their scaling
+cannot be used for MLRsearch benchmarks.</t>
+ </dd>
+</dl>
+
+
+
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Similarly to Trial Duration, some Measurers MAY limit the possible values
+of Trial Load. Contrary to Trial Duration,
+documenting such behavior in the test report is OPTIONAL.
+This is because the load differences are negligible (and frequently
+undocumented) in practice.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Controller MAY select Trial Load and Trial Duration values in a way
+that would not be possible to achieve using any integer number of data frames.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>If a particular Trial Load value is not tied to a single Trial,
+e.g., if there are no Trials yet or if there are multiple Trials,
+this document uses a shorthand <strong>Load</strong>.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>The test report MAY present the aggregate load across multiple
+interfaces, treating it as the same quantity expressed using different
+units. Each reported Trial Load value MUST state unambiguously whether
+it refers to (i) a single interface, (ii) a specified subset of
+interfaces (e.g., such as all logical interfaces mapped to one physical
+port), or (iii) the total across every interface. For any aggregate
+load value, the report MUST also give the fixed conversion factor that
+links the per-interface and multi-interface load values.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The per-interface value remains the primary unit, consistent
+with prevailing practice in <xref target="RFC1242"></xref>, <xref target="RFC2544"></xref>, and <xref target="RFC2285"></xref>.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The last paragraph also applies to other terms related to Load.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>For example, tests with symmetric bidirectional traffic
+can report load-related values as "bidirectional load"
+(double of "unidirectional load").</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="trial-input"><name>Trial Input</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Trial Input is a composite quantity, consisting of two attributes:
+Trial Duration and Trial Load.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>When talking about multiple Trials, it is common to say "Trial Inputs"
+to denote all corresponding Trial Input instances.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>A Trial Input instance acts as the input for one call of the Measurer component.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Contrary to other composite quantities, MLRsearch implementations
+MUST NOT add optional attributes into Trial Input.
+This improves interoperability between various implementations of
+a Controller and a Measurer.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Note that both attributes are <strong>intended</strong> quantities,
+as only those can be fully controlled by the Controller.
+The actual offered quantities, as realized by the Measurer, can be different
+(and must differ when the intended values do not multiply into an integer number of frames),
+but questions around those offered quantities are generally
+outside of the scope of this document.</t>
+ </dd>
+</dl>
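+
+<t>The Trial Input composite can be sketched as a plain data class.
+The following Python fragment is a non-normative illustration;
+the class and attribute names are our own, not mandated by this specification:</t>
+
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrialInput:
    """Composite quantity with exactly two attributes.

    Per the specification, implementations MUST NOT add
    optional attributes to Trial Input.
    """
    trial_duration: float  # intended duration of phase c), in seconds
    trial_load: float      # per-interface Intended Load, in frames per second

    def __post_init__(self) -> None:
        if self.trial_duration <= 0.0:
            raise ValueError("Trial Duration must be positive")
        if self.trial_load <= 0.0:
            raise ValueError("Trial Load must be positive")

# One Trial Input instance, acting as the input for one call of the Measurer:
# a 60-second trial at an intended load of 14,880,952 frames per second.
trial_input = TrialInput(trial_duration=60.0, trial_load=14_880_952.0)
```
+
+<t>Keeping the class frozen mirrors the fact that both intended values
+are fully determined by the Controller before the Measurer is called.</t>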
+
+</section>
+<section anchor="traffic-profile"><name>Traffic Profile</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Traffic Profile is a composite quantity containing
+all attributes, other than Trial Load and Trial Duration,
+that are needed for unique determination of the Trial to be performed.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>All the attributes are assumed to be constant during the Search,
+and the composite is configured on the Measurer by the Manager
+before the Search starts.
+This is why the traffic profile is not part of the Trial Input.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Specification of traffic properties included in the Traffic Profile is
+the responsibility of the Manager, but the specific configuration mechanisms
+are outside of the scope of this document.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Informally, implementations of the Manager and the Measurer
+must be aware of their common set of capabilities,
+so that Traffic Profile instance uniquely defines the traffic during the Search.
+Typically, Manager and Measurer implementations are tightly integrated.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Integration efforts between independent Manager and Measurer implementations
+are outside of the scope of this document.
+An example standardization effort is <xref target="Vassilev"></xref>,
+a draft at the time of writing.</t>
+ </dd>
+</dl>
+
+
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Examples of traffic properties include:
+<list style="symbols">
+  <t>data link frame size: fixed sizes as listed in Section 3.5 of
+  <xref target="RFC1242"></xref> and in Section 9 of <xref target="RFC2544"></xref>,
+  or IMIX mixed sizes as defined in <xref target="RFC6985"></xref>,</t>
+  <t>frame formats and protocol addresses: Sections 8 and 12
+  and Appendix C of <xref target="RFC2544"></xref>,</t>
+  <t>symmetric bidirectional traffic: Section 14 of <xref target="RFC2544"></xref>.</t>
+</list></t>
+ </dd>
+</dl>
+
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Other traffic properties that need to be somehow specified
+in Traffic Profile, and MUST be mentioned in Test Report
+if they apply to the benchmark, include:</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t><list style="symbols">
+ <t>bidirectional traffic from Section 14 of <xref target="RFC2544"></xref>,</t>
+ <t>fully meshed traffic from Section 3.3.3 of <xref target="RFC2285"></xref>,</t>
+ <t>modifiers from Section 11 of <xref target="RFC2544"></xref>,</t>
+ <t>IP version mixing from Section 5.3 of <xref target="RFC8219"></xref>.</t>
+ </list></t>
+ </dd>
+</dl>
+
+
+</section>
+<section anchor="trial-forwarding-ratio"><name>Trial Forwarding Ratio</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Trial Forwarding Ratio is a dimensionless floating point value.
+It MUST range between 0.0 and 1.0, both inclusive.
+It is calculated by dividing the number of frames
+successfully forwarded by the SUT
+by the total number of frames expected to be forwarded during the trial.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>For most Traffic Profiles, "expected to be forwarded" means
+"intended to get received by the SUT from the tester".
+This SHOULD be the default interpretation.
+Only if this is not the case, the test report MUST describe the Traffic Profile
+in detail sufficient to imply how the Trial Forwarding Ratio should be calculated.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Trial Forwarding Ratio MAY be expressed in other units
+(e.g., as a percentage) in the test report.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Note that, contrary to Load terms, the frame counts used to compute
+Trial Forwarding Ratio are generally aggregates over all SUT output interfaces,
+as most test procedures verify all outgoing frames.
+The procedure for <xref target="RFC2544"></xref> Throughput counts received frames,
+so for bidirectional traffic it implicitly uses bidirectional counts,
+even though the final value is a "rate" that is still per-interface.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>For example, in a test with symmetric bidirectional traffic,
+if one direction is forwarded without losses, but the opposite direction
+does not forward at all, the Trial Forwarding Ratio would be 0.5 (50%).</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>In future extensions, more general ways to compute Trial Forwarding Ratio
+may be allowed, but the current MLRsearch Specification relies on this specific
+averaged counters approach.</t>
+ </dd>
+</dl>
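+
+<t>The aggregate-counter computation, including the symmetric bidirectional
+example above, can be sketched as follows (a non-normative illustration
+with hypothetical frame counts; function names are our own):</t>
+
```python
def trial_forwarding_ratio(frames_forwarded: int, frames_expected: int) -> float:
    """Frames forwarded by the SUT divided by frames expected to be forwarded.

    Both counts are aggregated over all SUT output interfaces.
    """
    if frames_expected <= 0:
        raise ValueError("expected frame count must be positive")
    return frames_forwarded / frames_expected

# Symmetric bidirectional traffic: one direction forwards every frame,
# the opposite direction forwards none, so the aggregate ratio is 0.5 (50%).
ratio = trial_forwarding_ratio(
    frames_forwarded=1_000_000 + 0,
    frames_expected=1_000_000 + 1_000_000,
)
assert ratio == 0.5
```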
+
+</section>
+<section anchor="trial-loss-ratio"><name>Trial Loss Ratio</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Trial Loss Ratio is equal to one minus the Trial Forwarding Ratio.</t>
+ </dd>
+</dl>
+
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>100% minus the Trial Forwarding Ratio, when expressed as a percentage.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>This is almost identical to Frame Loss Rate of Section 3.6 of <xref target="RFC1242"></xref>.
+The only minor differences are that Trial Loss Ratio does not need to
+be expressed as a percentage, and Trial Loss Ratio is explicitly
+based on averaged frame counts when more than one data stream is present.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="trial-forwarding-rate"><name>Trial Forwarding Rate</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Trial Forwarding Rate is a derived quantity, calculated by
+multiplying the Trial Load by the Trial Forwarding Ratio.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>This quantity differs from the Forwarding Rate described in Section
+3.6.1 of <xref target="RFC2285"></xref>. Under the RFC 2285 method, each output interface is
+measured separately, so every interface may report a distinct rate. The
+Trial Forwarding Rate, by contrast, uses a single set of frame counts
+and therefore yields one value that represents the whole system,
+while still preserving the direct link to the per-interface load.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>When the Traffic Profile is symmetric and bidirectional, as defined in
+Section 14 of <xref target="RFC2544"></xref>, the Trial Forwarding Rate is numerically equal
+to the arithmetic average of the individual per-interface forwarding rates
+that would be produced by the RFC 2285 procedure.</t>
+ </dd>
+</dl>
+
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>For more complex traffic patterns, such as many-to-one as mentioned
+in Section 3.3.2 Partially Meshed Traffic of <xref target="RFC2285"></xref>,
+the meaning of Trial Forwarding Rate is less straightforward.
+For example, if two input interfaces receive one million frames per second each,
+and a single interface outputs 1.4 million frames per second (fps),
+Trial Load is 1 million fps, Trial Loss Ratio is 30%,
+and Trial Forwarding Rate is 0.7 million fps.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Because this rate is anchored to the Load defined for one interface,
+a test report MAY show it either as the single averaged figure just described,
+or as the sum of the separate per-interface forwarding rates.
+For the example above, the aggregate trial forwarding rate is 1.4 million fps.</t>
+ </dd>
+</dl>
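+
+<t>Applied to the many-to-one example above, the derived-quantity computation
+can be sketched as follows (a non-normative illustration; names are our own):</t>
+
```python
def trial_forwarding_rate(trial_load: float, forwarding_ratio: float) -> float:
    """Derived quantity: per-interface Trial Load times Trial Forwarding Ratio."""
    return trial_load * forwarding_ratio

# Many-to-one example: two input interfaces at 1.0 million fps each,
# one output interface forwarding 1.4 million fps in total.
trial_load = 1_000_000.0             # per-interface Trial Load (fps)
ratio = 1_400_000 / (2 * 1_000_000)  # aggregate Trial Forwarding Ratio, 0.7
rate = trial_forwarding_rate(trial_load, ratio)
assert abs(rate - 700_000.0) < 1e-3  # Trial Loss Ratio is 0.3 (30%)
```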
+
+</section>
+<section anchor="trial-effective-duration"><name>Trial Effective Duration</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Trial Effective Duration is a time quantity related to a Trial,
+by default equal to the Trial Duration.</t>
+ </dd>
+</dl>
+
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>This is an optional feature.
+If the Measurer does not return any Trial Effective Duration value,
+the Controller MUST use the Trial Duration value instead.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Trial Effective Duration may be any positive time quantity
+chosen by the Measurer to be used for time-based decisions in the Controller.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The test report MUST explain how the Measurer computes the returned
+Trial Effective Duration values, if they are not always
+equal to the Trial Duration.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>This feature can be beneficial for time-critical benchmarks
+designed to manage the overall search duration,
+rather than solely the traffic portion of it.
+An approach is to measure the duration of the whole trial (including all wait times)
+and use that as the Trial Effective Duration.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>This is also a way for the Measurer to inform the Controller about
+behavior that would otherwise be surprising, for example, rounding of the Trial Duration value.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="trial-output"><name>Trial Output</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Trial Output is a composite quantity consisting of several attributes.
+Required attributes are: Trial Loss Ratio, Trial Effective Duration and
+Trial Forwarding Rate.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>When referring to more than one trial, plural term "Trial Outputs" is
+used to collectively describe multiple Trial Output instances.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Measurer implementations may provide additional optional attributes.
+Controller implementations SHOULD
+ignore the values of any optional attributes
+they are not familiar with,
+except when passing Trial Output instances to the Manager.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Example of an optional attribute:
+The aggregate number of frames expected to be forwarded during the trial,
+especially if it is not (a rounded-down value)
+implied by Trial Load and Trial Duration.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>While Section 3.5.2 of <xref target="RFC2285"></xref> requires the Offered Load value
+to be reported for forwarding rate measurements,
+it is not required in MLRsearch Specification,
+as search results do not depend on it.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="trial-result"><name>Trial Result</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Trial Result is a composite quantity,
+consisting of the Trial Input and the Trial Output.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>When referring to more than one trial, plural term "Trial Results" is
+used to collectively describe multiple Trial Result instances.</t>
+ </dd>
+</dl>
+
+
+</section>
+</section>
+<section anchor="goal-terms"><name>Goal Terms</name>
+
+<t>This section defines new terms for quantities relevant (directly or indirectly)
+for inputs and outputs of the Controller component.</t>
+
+<t>Several goal attributes are defined before introducing
+the main composite quantity: the Search Goal.</t>
+
+<t>Contrary to other sections, definitions in subsections of this section
+are necessarily vague, as their fundamental meaning is to act as
+coefficients in formulas for Controller Output, which are not defined yet.</t>
+
+<t>The discussions in this section relate the attributes to concepts mentioned in Section
+<xref target="overview-of-rfc-2544-problems">Overview of RFC 2544 Problems</xref>, but even these discussion
+paragraphs are short, informal, and mostly referencing later sections,
+where the impact on search results is discussed after introducing
+the complete set of auxiliary terms.</t>
+
+<section anchor="goal-final-trial-duration"><name>Goal Final Trial Duration</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Minimal value for Trial Duration that must be reached.
+The value MUST be positive.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Certain trials must reach this minimum duration before a load can be
+classified as a lower bound.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Controller may choose shorter durations;
+the results of those may be enough for classification as an Upper Bound.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>It is RECOMMENDED for all search goals to share the same
+Goal Final Trial Duration value. Otherwise, Trial Duration values larger than
+the Goal Final Trial Duration may occur, weakening the assumptions
+the <xref target="load-classification-logic">Load Classification Logic</xref> is based on.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="goal-duration-sum"><name>Goal Duration Sum</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>A threshold value for a particular sum of Trial Effective Duration values.
+The value MUST be positive.</t>
+ </dd>
+</dl>
+
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Informally, this prescribes the sufficient number of trials performed
+at a specific Trial Load and Goal Final Trial Duration during the search.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>If the Goal Duration Sum is larger than the Goal Final Trial Duration,
+multiple trials may need to be performed at the same load.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Refer to Section <xref target="mlrsearch-compliant-with-tst009">MLRsearch Compliant with TST009</xref>
+for an example where the possibility of multiple trials
+at the same load is intended.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>A Goal Duration Sum value shorter than the Goal Final Trial Duration
+(of the same goal) could save some search time, but is NOT RECOMMENDED,
+as the time savings come at the cost of decreased repeatability.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>In practice, the Search can spend less than Goal Duration Sum measuring
+a Load value when the results are particularly one-sided,
+but also, the Search can spend more than Goal Duration Sum measuring a Load
+when the results are balanced and include
+trials shorter than Goal Final Trial Duration.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="goal-loss-ratio"><name>Goal Loss Ratio</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>A threshold value for Trial Loss Ratio values.
+The value MUST be non-negative and smaller than one.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>A trial with Trial Loss Ratio larger than this value
+signals the SUT may be unable to process this Trial Load well enough.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>See <xref target="throughput-with-non-zero-loss">Throughput with Non-Zero Loss</xref>
+for reasons why users may want to set this value above zero.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Since multiple trials may be needed for one Load value,
+the Load Classification may be more complicated than mere comparison
+of Trial Loss Ratio to Goal Loss Ratio.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="goal-exceed-ratio"><name>Goal Exceed Ratio</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>A threshold value for a particular ratio of sums
+of Trial Effective Duration values.
+The value MUST be non-negative and smaller than one.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Informally, up to this proportion of Trial Results
+with Trial Loss Ratio above the Goal Loss Ratio is tolerated at a Lower Bound.
+This would be the full effect if every Trial were measured at the Goal Final Trial Duration.
+The actual logic is more complicated, as shorter Trials are also allowed.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>For explainability reasons, the RECOMMENDED value for exceed ratio is 0.5 (50%),
+as in practice that value leads to
+the smallest variation in overall Search Duration.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Refer to Section <xref target="exceed-ratio-and-multiple-trials">Exceed Ratio and Multiple Trials</xref>
+for more details.</t>
+ </dd>
+</dl>
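+
+<t>Restricted to the simple case where every Trial used the Goal Final
+Trial Duration, the informal description above can be sketched as follows
+(a non-normative simplification; the full Load Classification Logic,
+which also handles shorter Trials, is described in a later section):</t>
+
```python
def load_exceeds_goal(trial_loss_ratios: list[float],
                      effective_durations: list[float],
                      goal_loss_ratio: float,
                      goal_exceed_ratio: float) -> bool:
    """Simplified check for one Load: is too much trial time 'bad'?

    A trial is 'bad' when its Trial Loss Ratio is above the Goal Loss Ratio.
    Assumes every trial was measured at the Goal Final Trial Duration.
    """
    bad_sum = sum(duration
                  for loss_ratio, duration
                  in zip(trial_loss_ratios, effective_durations)
                  if loss_ratio > goal_loss_ratio)
    return bad_sum / sum(effective_durations) > goal_exceed_ratio

# Six 60-second trials at one Load, two of them with loss above the goal:
# the bad proportion is 2/6, tolerated under the recommended 0.5 exceed ratio.
assert not load_exceeds_goal(
    trial_loss_ratios=[0.0, 0.02, 0.0, 0.0, 0.01, 0.0],
    effective_durations=[60.0] * 6,
    goal_loss_ratio=0.005,
    goal_exceed_ratio=0.5,
)
```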
+
+</section>
+<section anchor="goal-width"><name>Goal Width</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>A threshold value for deciding whether two Trial Load values are close enough.
+This is an OPTIONAL attribute. If present, the value MUST be positive.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Informally, this acts as a stopping condition,
+controlling the precision of the search result.
+The search stops if every goal has reached its precision.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Implementations without this attribute
+MUST provide the Controller with other means to control the search stopping conditions.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Absolute load difference and relative load difference are two popular choices,
+but implementations may choose a different way to specify width.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>The test report MUST make it clear what specific quantity is used as Goal Width.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>It is RECOMMENDED to express Goal Width as a relative difference
+and to set it to a value not lower than the Goal Loss Ratio.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Refer to Section
+<xref target="generalized-throughput">Generalized Throughput</xref> for more elaboration on the reasoning.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="goal-initial-trial-duration"><name>Goal Initial Trial Duration</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Minimal Trial Duration value suggested for use with this goal.
+If present, this value MUST be positive.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>This is an example of an optional Search Goal attribute.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>A typical default value is equal to the Goal Final Trial Duration value.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Informally, this is the shortest Trial Duration the Controller should select
+when focusing on the goal.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Note that shorter Trial Duration values can still be used,
+for example, selected while focusing on a different Search Goal.
+Such results MUST be still accepted by the Load Classification logic.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Goal Initial Trial Duration is a mechanism for a user to discourage
+trials with Trial Duration values deemed as too unreliable
+for a particular SUT and a given Search Goal.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="search-goal"><name>Search Goal</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Search Goal is a composite quantity consisting of several attributes,
+some of which are required.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Required attributes: Goal Final Trial Duration, Goal Duration Sum, Goal
+Loss Ratio and Goal Exceed Ratio.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Optional attributes: Goal Initial Trial Duration and Goal Width.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Implementations MAY add their own attributes.
+Those additional attributes may be required by an implementation
+even if they are not required by MLRsearch Specification.
+However, it is RECOMMENDED for those implementations
+to support missing attributes by providing typical default values.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>For example, implementations with Goal Initial Trial Duration
+may also require users to specify how quickly Trial Durations should increase.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Refer to Section <xref target="compliance"></xref> for important Search Goal settings.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="controller-input"><name>Controller Input</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Controller Input is a composite quantity
+required as an input for the Controller.
+The only REQUIRED attribute is a list of Search Goal instances.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>MLRsearch implementations MAY use additional attributes.
+Those additional attributes may be required by an implementation
+even if they are not required by MLRsearch Specification.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Formally, the Manager does not apply any Controller configuration
+apart from one Controller Input instance.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>For example, Traffic Profile is configured on the Measurer by the Manager,
+without explicit assistance of the Controller.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>The order of Search Goal instances in a list SHOULD NOT
+have a significant impact on Controller Output,
+but MLRsearch implementations MAY base their behavior on the order
+of Search Goal instances in a list.</t>
+ </dd>
+</dl>
+
+<section anchor="max-load"><name>Max Load</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Max Load is an optional attribute of Controller Input.
+It is the maximal value the Controller is allowed to use for Trial Load values.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Max Load is an example of an optional attribute (outside the list of Search Goals)
+required by some implementations of MLRsearch.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>If the Max Load value is provided, Controller MUST NOT select
+Trial Load values larger than that value.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>In theory, each Search Goal could have its own Max Load value,
+but as all Trial Results can affect all Search Goals,
+it makes more sense for a single Max Load value to apply
+to all Search Goal instances.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>While Max Load is a frequently used configuration parameter, already governed
+(as maximum frame rate) by <xref target="RFC2544"></xref> (Section 20)
+and (as maximum offered load) by <xref target="RFC2285"></xref> (Section 3.5.3),
+some implementations may detect or discover it
+(instead of requiring a user-supplied value).</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>In MLRsearch Specification, one reason for listing
+the <xref target="relevant-upper-bound">Relevant Upper Bound</xref> as a required attribute
+is that it makes the search result independent of Max Load value.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Given that Max Load is a quantity based on Load,
+Test Report MAY express this quantity using multi-interface values,
+as sum of per-interface maximal loads.</t>
+ </dd>
+</dl>
+
+
+</section>
+<section anchor="min-load"><name>Min Load</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Min Load is an optional attribute of Controller Input.
+It is the minimal value the Controller is allowed to use for Trial Load values.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Min Load is another example of an optional attribute
+required by some implementations of MLRsearch.
+Similarly to Max Load, it makes more sense to prescribe one common value,
+as opposed to using a different value for each Search Goal.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>If the Min Load value is provided, Controller MUST NOT select
+Trial Load values smaller than that value.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Min Load is mainly useful for saving time by failing early,
+arriving at an Irregular Goal Result when Min Load gets classified
+as an Upper Bound.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>For implementations, it is RECOMMENDED to require Min Load to be non-zero
+and large enough to result in at least one frame being forwarded
+even at the shortest allowed Trial Duration,
+so that Trial Loss Ratio is always well-defined,
+and the implementation can apply relative Goal Width safely.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Given that Min Load is a quantity based on Load,
+Test Report MAY express this quantity using multi-interface values,
+as sum of per-interface minimal loads.</t>
+ </dd>
+</dl>
+
+
+</section>
+</section>
+</section>
+<section anchor="auxiliary-terms"><name>Auxiliary Terms</name>
+
+<t>While the terms defined in this section are not strictly needed
+when formulating MLRsearch requirements, they simplify the language used
+in discussion paragraphs and explanation sections.</t>
+
+<section anchor="trial-classification"><name>Trial Classification</name>
+
+<t>When one Trial Result instance is compared to one Search Goal instance,
+several relations can be named using short adjectives.</t>
+
+<t>As trial results do not affect each other, this <strong>Trial Classification</strong>
+does not change during a Search.</t>
+
+<section anchor="high-loss-trial"><name>High-Loss Trial</name>
+
+<t>A trial with Trial Loss Ratio larger than a Goal Loss Ratio value
+is called a <strong>high-loss trial</strong>, with respect to given Search Goal
+(or lossy trial, if Goal Loss Ratio is zero).</t>
+
+</section>
+<section anchor="low-loss-trial"><name>Low-Loss Trial</name>
+
+<t>If a trial is not high-loss, it is called a <strong>low-loss trial</strong>
+(or zero-loss trial, if Goal Loss Ratio is zero).</t>
+
+</section>
+<section anchor="short-trial"><name>Short Trial</name>
+
+<t>A trial with Trial Duration shorter than the Goal Final Trial Duration
+is called a <strong>short trial</strong> (with respect to the given Search Goal).</t>
+
+</section>
+<section anchor="full-length-trial"><name>Full-Length Trial</name>
+
+<t>A trial that is not short is called a <strong>full-length</strong> trial.</t>
+
+<t>Note that this includes Trial Durations larger than Goal Final Trial Duration.</t>
+
+</section>
+<section anchor="long-trial"><name>Long Trial</name>
+
+<t>A trial with Trial Duration longer than the Goal Final Trial Duration
+is called a <strong>long trial</strong>.</t>
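+
+<t>As an illustration only, the adjectives defined in this section can be
+computed by a small helper; the attribute and function names below are
+hypothetical and not part of MLRsearch Specification:</t>
+
```python
def classify_trial(trial_loss_ratio, trial_duration,
                   goal_loss_ratio, goal_final_trial_duration):
    """Name one Trial Result's relations to one Search Goal.

    Returns a (loss, length) pair of adjectives as defined above.
    Illustrative sketch; parameter names are not normative.
    """
    loss = "high-loss" if trial_loss_ratio > goal_loss_ratio else "low-loss"
    if trial_duration < goal_final_trial_duration:
        length = "short"
    elif trial_duration > goal_final_trial_duration:
        length = "long"  # long trials also count as full-length
    else:
        length = "full-length"
    return loss, length
```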
+
+</section>
+</section>
+<section anchor="load-classification"><name>Load Classification</name>
+
+<t>When a set of all Trial Result instances, performed so far
+at one Trial Load, is compared to one Search Goal instance,
+their relation can be named using the concept of a bound.</t>
+
+<t>In general, such bounds are a current quantity,
+although it is rare in practice for a Load
+to change its classification more than once during a Search.</t>
+
+<section anchor="upper-bound"><name>Upper Bound</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>A Load value is called an Upper Bound if and only if it is classified
+as such by <xref target="load-classification-code">Appendix A</xref>
+algorithm for the given Search Goal at the current moment of the Search.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>In more detail, the set of all Trial Result instances
+performed so far at the Trial Load (and any Trial Duration)
+is certain to fail to uphold all the requirements of the given Search Goal,
+mainly the Goal Loss Ratio in combination with the Goal Exceed Ratio.
+In this context, "certain to fail" relates to any possible results within the time
+remaining till Goal Duration Sum.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>One Search Goal can have multiple different Trial Load values
+classified as its Upper Bounds.
+As the search progresses and more trials are measured,
+any load value can become an Upper Bound in principle.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Moreover, a Load can stop being an Upper Bound, but that
+can only happen when more than Goal Duration Sum of trials are measured
+(e.g., because another Search Goal needs more trials at this load).
+Informally, the previous Upper Bound got invalidated.
+In practice, the Load frequently becomes a <xref target="lower-bound">Lower Bound</xref> instead.</t>
+ </dd>
+</dl>
+
+
+</section>
+<section anchor="lower-bound"><name>Lower Bound</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>A Load value is called a Lower Bound if and only if it is classified
+as such by <xref target="load-classification-code">Appendix A</xref>
+algorithm for the given Search Goal at the current moment of the search.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>In more detail, the set of all Trial Result instances
+performed so far at the Trial Load (and any Trial Duration)
+is certain to uphold all the requirements of the given Search Goal,
+mainly the Goal Loss Ratio in combination with the Goal Exceed Ratio.
+Here "certain to uphold" relates to any possible results within the time
+remaining till Goal Duration Sum.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>One Search Goal can have multiple different Trial Load values
+classified as its Lower Bounds.
+As search progresses and more trials are measured,
+any load value can become a Lower Bound in principle.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>No load can be both an Upper Bound and a Lower Bound for the same Search Goal
+at the same time, but it is possible for a larger load to be a Lower Bound
+while a smaller load is an Upper Bound.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Moreover, a Load can stop being a Lower Bound, but that
+can only happen when more than Goal Duration Sum of trials are measured
+(e.g., because another Search Goal needs more trials at this load).
+Informally, the previous Lower Bound got invalidated.
+In practice, the Load frequently becomes an <xref target="upper-bound">Upper Bound</xref> instead.</t>
+ </dd>
+</dl>
+
+
+</section>
+<section anchor="undecided"><name>Undecided</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>A Load value is called Undecided if it is currently
+neither an Upper Bound nor a Lower Bound.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>A Load value that has not been measured so far is Undecided.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>It is possible for a Load to transition from an Upper Bound to Undecided
+by adding Short Trials with Low-Loss results.
+That is yet another reason for users to avoid using Search Goal instances
+with different Goal Final Trial Duration values.</t>
+ </dd>
+</dl>
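+
+<t>The three classifications can be sketched in a simplified way, ignoring
+the short-trial subtleties of the normative
+<xref target="load-classification-code">Appendix A</xref> algorithm;
+all names below are illustrative:</t>
+
```python
def classify_load(trials, goal_loss_ratio, goal_exceed_ratio, goal_duration_sum):
    """Simplified Load Classification sketch; illustrative only.

    `trials` is a list of (duration, loss_ratio) pairs measured at one load.
    Ignores the short-trial handling of the normative Appendix A algorithm.
    """
    bad_sum = sum(d for d, lr in trials if lr > goal_loss_ratio)
    good_sum = sum(d for d, lr in trials if lr <= goal_loss_ratio)
    remaining = max(0.0, goal_duration_sum - bad_sum - good_sum)
    # Best case: all time remaining until Goal Duration Sum turns out low-loss.
    optimistic_exceed = bad_sum / goal_duration_sum
    # Worst case: all the remaining time turns out high-loss.
    pessimistic_exceed = (bad_sum + remaining) / goal_duration_sum
    if optimistic_exceed > goal_exceed_ratio:
        return "upper bound"  # certain to fail the Search Goal
    if pessimistic_exceed <= goal_exceed_ratio:
        return "lower bound"  # certain to uphold the Search Goal
    return "undecided"
```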
+
+</section>
+</section>
+</section>
+<section anchor="result-terms"><name>Result Terms</name>
+
+<t>Before defining the full structure of a Controller Output,
+it is useful to define the composite quantity, called Goal Result.
+The following subsections define its attributes first,
+before describing the Goal Result quantity.</t>
+
+<t>There is a correspondence between Search Goals and Goal Results.
+Most of the following subsections refer to a given Search Goal,
+when defining their terms.
+Conversely, at the end of the search, each Search Goal instance
+has its corresponding Goal Result instance.</t>
+
+<section anchor="relevant-upper-bound"><name>Relevant Upper Bound</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Relevant Upper Bound is the smallest Trial Load value
+classified as an Upper Bound for a given Search Goal at the end of the Search.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>If no measured load had enough High-Loss Trials,
+the Relevant Upper Bound MAY be non-existent.
+For example, when Max Load is classified as a Lower Bound.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Conversely, when Relevant Upper Bound does exist,
+it is not affected by Max Load value.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Given that Relevant Upper Bound is a quantity based on Load,
+Test Report MAY express this quantity using multi-interface values,
+as sum of per-interface loads.</t>
+ </dd>
+</dl>
+
+
+</section>
+<section anchor="relevant-lower-bound"><name>Relevant Lower Bound</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Relevant Lower Bound is the largest Trial Load value
+among those smaller than the Relevant Upper Bound, that got classified
+as a Lower Bound for a given Search Goal at the end of the search.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>If no load had enough Low-Loss Trials, the Relevant Lower Bound
+MAY be non-existent.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Strictly speaking, if the Relevant Upper Bound does not exist,
+the Relevant Lower Bound also does not exist.
+In a typical case, Max Load is classified as a Lower Bound,
+making it impossible to increase the Load to continue the search
+for an Upper Bound.
+Thus, it is not clear whether a larger value would be found
+for a Relevant Lower Bound if larger Loads were possible.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Given that Relevant Lower Bound is a quantity based on Load,
+Test Report MAY express this quantity using multi-interface values,
+as sum of per-interface loads.</t>
+ </dd>
+</dl>
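+
+<t>A minimal sketch of how both relevant bounds could be derived from the
+final Load Classification results; the names below are illustrative,
+not normative:</t>
+
```python
def relevant_bounds(classified):
    """Derive relevant bounds from final Load Classification.

    `classified` maps each measured Load to its final classification:
    "upper", "lower", or "undecided". Illustrative sketch only.
    Returns (relevant_lower_bound, relevant_upper_bound); None if missing.
    """
    uppers = [load for load, c in classified.items() if c == "upper"]
    if not uppers:
        # Without a Relevant Upper Bound, strictly speaking
        # the Relevant Lower Bound does not exist either.
        return None, None
    rub = min(uppers)  # the smallest Upper Bound
    lowers = [load for load, c in classified.items()
              if c == "lower" and load < rub]
    rlb = max(lowers) if lowers else None  # largest Lower Bound below RUB
    return rlb, rub
```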
+
+
+</section>
+<section anchor="conditional-throughput"><name>Conditional Throughput</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Conditional Throughput is a value computed at the Relevant Lower Bound
+according to the algorithm defined in
+<xref target="conditional-throughput-code">Appendix B</xref>.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Relevant Lower Bound is defined only at the end of the Search,
+and so is the Conditional Throughput.
+But the algorithm can be applied at any time on any Lower Bound load,
+so the final Conditional Throughput value may appear sooner
+than at the end of a Search.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Informally, the Conditional Throughput should be
+a typical Trial Forwarding Rate, expected to be seen
+at the Relevant Lower Bound of a given Search Goal.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>But frequently it is only a conservative estimate thereof,
+as MLRsearch implementations tend to stop measuring more Trials
+as soon as they confirm the value cannot get worse than this estimate
+within the Goal Duration Sum.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>This value is RECOMMENDED to be used when evaluating repeatability
+and comparability of different MLRsearch implementations.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Refer to Section <xref target="generalized-throughput">Generalized Throughput</xref> for more details.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Given that Conditional Throughput is a quantity based on Load,
+Test Report MAY express this quantity using multi-interface values,
+as sum of per-interface forwarding rates.</t>
+ </dd>
+</dl>
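+
+<t>A rough sketch of the idea follows; it is not the normative
+<xref target="conditional-throughput-code">Appendix B</xref> algorithm,
+boundary handling is simplified, and all names are illustrative:</t>
+
```python
def conditional_throughput(load, trials, goal_exceed_ratio):
    """Rough sketch: the forwarding rate implied by the worst Trial Loss
    Ratio among the best trials that together cover a
    (1 - Goal Exceed Ratio) fraction of the measured duration.

    `trials` is a list of (duration, loss_ratio) pairs measured at the
    Relevant Lower Bound. Illustrative only; see Appendix B.
    """
    ordered = sorted(trials, key=lambda t: t[1])  # best loss ratio first
    needed = (1.0 - goal_exceed_ratio) * sum(d for d, _ in ordered)
    covered = 0.0
    for duration, loss_ratio in ordered:
        covered += duration
        if covered >= needed:
            return load * (1.0 - loss_ratio)
```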
+
+
+</section>
+<section anchor="goal-results"><name>Goal Results</name>
+
+<t>MLRsearch Specification is based on a set of requirements
+for a "regular" result. But in practice, it is not always possible
+for such a result instance to exist, so "irregular" results
+also need to be supported.</t>
+
+<section anchor="regular-goal-result"><name>Regular Goal Result</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Regular Goal Result is a composite quantity consisting of several attributes.
+Relevant Upper Bound and Relevant Lower Bound are REQUIRED attributes.
+Conditional Throughput is a RECOMMENDED attribute.</t>
+ </dd>
+</dl>
+
+
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Implementations MAY add their own attributes.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Test report MUST display Relevant Lower Bound.
+Displaying Relevant Upper Bound is RECOMMENDED,
+especially if the implementation does not use Goal Width.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>In general, stopping conditions for the corresponding Search Goal MUST
+be satisfied to produce a Regular Goal Result.
+Specifically, if an implementation offers Goal Width as a Search Goal attribute,
+the distance between the Relevant Lower Bound
+and the Relevant Upper Bound MUST NOT be larger than the Goal Width.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>For stopping conditions refer to Sections <xref target="goal-width">Goal Width</xref> and
+<xref target="stopping-conditions-and-precision">Stopping Conditions and Precision</xref>.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="irregular-goal-result"><name>Irregular Goal Result</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Irregular Goal Result is a composite quantity. No attributes are required.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>It is RECOMMENDED to report any useful quantity even if it does not
+satisfy all the requirements. For example, if Max Load is classified
+as a Lower Bound, it is fine to report it as an "effective" Relevant Lower Bound
+(although not a real one, as that requires
+Relevant Upper Bound which does not exist in this case),
+and compute Conditional Throughput for it. In this case,
+only the missing Relevant Upper Bound signals this result instance is irregular.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Similarly, if both relevant bounds exist, it is RECOMMENDED
+to include them as Irregular Goal Result attributes,
+and let the Manager decide if their distance is too far for Test Report purposes.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>If Test Report displays some Irregular Goal Result attribute values,
+they MUST be clearly marked as coming from irregular results.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>The implementation MAY define additional attributes,
+for example explicit flags for expected situations, so the Manager logic can be simpler.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="goal-result"><name>Goal Result</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Goal Result is a composite quantity.
+Each instance is either a Regular Goal Result or an Irregular Goal Result.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Manager MUST be able to distinguish whether the instance is regular or not.</t>
+ </dd>
+</dl>
+
+</section>
+</section>
+<section anchor="search-result"><name>Search Result</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Search Result is a single composite object
+that maps each Search Goal instance to a corresponding Goal Result instance.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>As an alternative to mapping, the Search Result may be represented
+as an ordered list of Goal Result instances, appearing in the same
+order as their corresponding Search Goal instances.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>When the Search Result is expressed as a mapping, it MUST contain an
+entry for every Search Goal instance supplied in the Controller Input.</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Identical Goal Result instances MAY be listed for different Search Goals,
+but their status as regular or irregular may be different.
+For example, if two goals differ only in Goal Width value,
+and the relevant bound values are close enough according to only one of them.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="controller-output"><name>Controller Output</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Controller Output is a composite quantity returned from the Controller
+to the Manager at the end of the search.
+The Search Result instance is its only required attribute.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>MLRsearch implementation MAY return additional data in the Controller Output,
+e.g., number of trials performed and the total Search Duration.</t>
+ </dd>
+</dl>
+
+
+
+</section>
+</section>
+<section anchor="architecture-terms"><name>Architecture Terms</name>
+
+<t>MLRsearch architecture consists of three main system components:
+the Manager, the Controller, and the Measurer.
+The components were introduced in <xref target="architecture-overview">Architecture Overview</xref>,
+and the following subsections finalize their definitions
+using terms from previous sections.</t>
+
+
+<t>Note that the architecture also implies the presence of other components,
+such as the SUT and the tester (as a sub-component of the Measurer).</t>
+
+<t>Communication protocols and interfaces between components are left
+unspecified. For example, when MLRsearch Specification mentions
+"Controller calls Measurer",
+it is possible that the Controller notifies the Manager
+to call the Measurer indirectly instead. This way, Measurer implementations
+can be fully independent of Controller implementations,
+e.g., developed in different programming languages.</t>
+
+
+<section anchor="measurer"><name>Measurer</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Measurer is a functional element that when called
+with a <xref target="trial-input">Trial Input</xref> instance, performs one <xref target="trial">Trial</xref>
+and returns a <xref target="trial-output">Trial Output</xref> instance.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>This definition assumes the Measurer is already initialized.
+In practice, there may be additional steps before the Search,
+e.g., when the Manager configures the traffic profile
+(either on the Measurer or on its tester sub-component directly)
+and performs a warm-up (if the tester or the test procedure requires one).</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>It is the responsibility of the Measurer implementation to uphold
+any requirements and assumptions present in MLRsearch Specification,
+e.g., Trial Forwarding Ratio not being larger than one.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Implementers have some freedom.
+For example, Section 10 of <xref target="RFC2544"></xref>
+gives some suggestions (but not requirements) related to
+duplicated or reordered frames.
+Implementations are RECOMMENDED to document their behavior
+related to such freedoms in as detailed a way as possible.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>It is RECOMMENDED to benchmark the test equipment first,
+e.g., connect sender and receiver directly (without any SUT in the path),
+find a load value that guarantees the Offered Load is not too far
+from the Intended Load and use that value as the Max Load value.
+When testing the real SUT, it is RECOMMENDED to turn any severe deviation
+between the Intended Load and the Offered Load into increased Trial Loss Ratio.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Neither of the two recommendations is made a mandatory requirement,
+because it is not easy to provide guidance about when the difference is severe enough,
+in a way that would be disentangled from other Measurer freedoms.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>For a sample situation where the Offered Load cannot keep up
+with the Intended Load, and the consequences on MLRsearch result,
+refer to Section <xref target="hard-performance-limit">Hard Performance Limit</xref>.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="controller"><name>Controller</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Controller is a functional element that, upon receiving a Controller
+Input instance, repeatedly generates Trial Input instances for the
+Measurer and collects the corresponding Trial Output instances. This
+cycle continues until the stopping conditions are met, at which point
+the Controller produces a final Controller Output instance and
+terminates.</t>
+ </dd>
+</dl>
+
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>Informally, the Controller has considerable freedom in selecting Trial Inputs,
+and implementations aim to achieve all the Search Goals
+in the shortest average time.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>The Controller's role in optimizing the overall Search Duration
+distinguishes MLRsearch algorithms from simpler search procedures.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Informally, each implementation can have different stopping conditions.
+Goal Width is only one example.
+In practice, implementation details do not matter,
+as long as Goal Result instances are regular.</t>
+ </dd>
+</dl>
+
+</section>
+<section anchor="manager"><name>Manager</name>
+
+<t>Definition:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Manager is a functional element that is responsible for
+provisioning other components, calling a Controller component once,
+and for creating the test report following the reporting format as
+defined in Section 26 of <xref target="RFC2544"></xref>.</t>
+ </dd>
+</dl>
+
+<t>Discussion:</t>
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>The Manager initializes the SUT, the Measurer
+(and the tester if independent from Measurer)
+with their intended configurations before calling the Controller.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>Note that Section 7 of <xref target="RFC2544"></xref> already puts requirements on SUT setups:</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>"It is expected that all of the tests will be run without changing the
+configuration or setup of the DUT in any way other than that required
+to do the specific test. For example, it is not acceptable to change
+the size of frame handling buffers between tests of frame handling
+rates or to disable all but one transport protocol when testing the
+throughput of that protocol."</t>
+ </dd>
+</dl>
+
+
+<dl>
+ <dt> </dt>
+ <dd>
+ <t>It is REQUIRED for the test report to encompass all the SUT configuration
+details, including description of a "default" configuration common for most tests
+and configuration changes if required by a specific test.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>For example, Section 5.1.1 of <xref target="RFC5180"></xref> recommends testing jumbo frames
+if SUT can forward them, even though they are outside the scope
+of the 802.3 IEEE standard. In this case, it is acceptable
+for the SUT default configuration to not support jumbo frames,
+and only enable this support when testing jumbo traffic profiles,
+as the handling of jumbo frames typically has different packet buffer
+requirements and potentially higher processing overhead.
+Non-jumbo frame sizes should also be tested on the jumbo-enabled setup.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>The Manager does not need to be able to tweak any Search Goal attributes,
+but it MUST report all applied attribute values even if not tweaked.</t>
+ </dd>
+ <dt> </dt>
+ <dd>
+ <t>A "user" - human or automated - invokes the Manager once to launch a
+single Search and receive its report. Every new invocation is treated
+as a fresh, independent Search; how the system behaves across multiple
+calls (for example, combining or comparing their results) is explicitly
+out of scope for this document.</t>
+ </dd>
+</dl>
+
+
+
+</section>
+</section>
+<section anchor="compliance"><name>Compliance</name>
+
+<t>This section discusses compliance relations between MLRsearch
+and other test procedures.</t>
+
+<section anchor="test-procedure-compliant-with-mlrsearch"><name>Test Procedure Compliant with MLRsearch</name>
+
+<t>Any networking measurement setup that could be understood as consisting of
+functional elements satisfying requirements
+for the Measurer, the Controller and the Manager,
+is compliant with MLRsearch Specification.</t>
+
+<t>These components can be seen as abstractions present in any testing procedure.
+For example, there can be a single component acting both
+as the Manager and the Controller, but if values of required attributes
+of Search Goals and Goal Results are visible in the test report,
+the Controller Input instance and Controller Output instance are implied.</t>
+
+<t>For example, any setup for conditionally (or unconditionally)
+compliant <xref target="RFC2544"></xref> throughput testing
+can be understood as an MLRsearch architecture,
+if there is enough data to reconstruct the Relevant Upper Bound.</t>
+
+<t>Refer to section
+<xref target="mlrsearch-compliant-with-rfc-2544">MLRsearch Compliant with RFC 2544</xref>
+for an equivalent Search Goal.</t>
+
+<t>Any test procedure that can be understood as one call to the Manager of
+the MLRsearch architecture is said to be compliant with MLRsearch Specification.</t>
+
+</section>
+<section anchor="mlrsearch-compliant-with-rfc-2544"><name>MLRsearch Compliant with RFC 2544</name>
+
+<t>The following Search Goal instance makes the corresponding Search Result
+unconditionally compliant with Section 24 of <xref target="RFC2544"></xref>.</t>
+
+<t><list style="symbols">
+ <t>Goal Final Trial Duration = 60 seconds</t>
+ <t>Goal Duration Sum = 60 seconds</t>
+ <t>Goal Loss Ratio = 0%</t>
+ <t>Goal Exceed Ratio = 0%</t>
+</list></t>
+
+
+<t>The Goal Loss Ratio and Goal Exceed Ratio attributes
+are enough to make the Search Goal conditionally compliant.
+Adding Goal Final Trial Duration
+makes the Search Goal unconditionally compliant.</t>
+
+<t>Goal Duration Sum prevents MLRsearch
+from repeating zero-loss Full-Length Trials.</t>
+
+<t>The presence of other Search Goals does not affect the compliance
+of this Goal Result.
+The Relevant Lower Bound and the Conditional Throughput are in this case
+equal to each other, and the value is the <xref target="RFC2544"></xref> throughput.</t>
+
+<t>A non-zero Goal Exceed Ratio is not strictly disallowed, but it could
+needlessly prolong the search when Low-Loss short trials are present.</t>
+
+</section>
+<section anchor="mlrsearch-compliant-with-tst009"><name>MLRsearch Compliant with TST009</name>
+
+<t>One of the alternatives to <xref target="RFC2544"></xref> is Binary search with loss verification
+as described in Section 12.3.3 of <xref target="TST009"></xref>.</t>
+
+<t>The rationale of such a search is to repeat high-loss trials, hoping for zero loss on the second try,
+so the results are closer to the noiseless end of performance spectrum,
+thus more repeatable and comparable.</t>
+
+<t>Only the variant with "z = infinity" is achievable with MLRsearch.</t>
+
+<t>For example, for the "max(r) = 2" variant, the following Search Goal instance
+should be used to get a compatible Search Result:</t>
+
+<t><list style="symbols">
+ <t>Goal Final Trial Duration = 60 seconds</t>
+ <t>Goal Duration Sum = 120 seconds</t>
+ <t>Goal Loss Ratio = 0%</t>
+ <t>Goal Exceed Ratio = 50%</t>
+</list></t>
+
+<t>If the first 60 seconds trial has zero loss, it is enough for MLRsearch to stop
+measuring at that load, as even a second high-loss trial
+would still fit within the exceed ratio.</t>
+
+<t>But if the first trial is high-loss, MLRsearch also needs to perform
+the second trial to classify that load.
+The Goal Duration Sum is twice the Goal Final Trial Duration,
+so a third full-length trial is never needed.</t>
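+
+<t>One plausible generalization of this mapping to other "max(r)" values can
+be sketched as follows; the attribute names are illustrative and the
+generalization itself is not part of <xref target="TST009"></xref>
+or this specification:</t>
+
```python
def tst009_goal(max_r, trial_duration=60.0):
    """Map TST009 Binary search with loss verification (z = infinity,
    up to max_r trials per load) onto Search Goal attributes.

    Illustrative sketch: a load passes if at least one of its max_r
    full-length trials is low-loss, hence the exceed ratio below.
    """
    return {
        "final_trial_duration": trial_duration,
        "duration_sum": max_r * trial_duration,
        "loss_ratio": 0.0,
        "exceed_ratio": (max_r - 1) / max_r,
    }
```
+
+<t>For "max(r) = 2" this reproduces the Search Goal instance listed above.</t>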
+
+</section>
+</section>
+</section>
+<section anchor="methodology-rationale-and-design-considerations"><name>Methodology Rationale and Design Considerations</name>
+
+
+
+<t>This section explains the Why behind MLRsearch. Building on the
+normative specification in Section
+<xref target="mlrsearch-specification">MLRsearch Specification</xref>,
+it contrasts MLRsearch with the classic
+<xref target="RFC2544"></xref> single-ratio binary-search procedure and walks through the
+key design choices: binary-search mechanics, stopping-rule precision,
+loss-inversion for multiple goals, exceed-ratio handling, short-trial
+strategies, and the generalised throughput concept. Together, these
+considerations show how the methodology reduces test time, supports
+multiple loss ratios, and improves repeatability.</t>
+
+<section anchor="binary-search"><name>Binary Search</name>
+
+<t>A typical binary search implementation for <xref target="RFC2544"></xref>
+tracks only the two tightest bounds.
+To start, the search needs both Max Load and Min Load values.
+Then, one trial is used to confirm Max Load is an Upper Bound,
+and one trial to confirm Min Load is a Lower Bound.</t>
+
+<t>Then, the next Trial Load is chosen as the mean of the current tightest upper bound
+and the current tightest lower bound, and becomes a new tightest bound
+depending on the Trial Loss Ratio.</t>
+
+<t>After some number of trials, the tightest lower bound becomes the throughput,
+but <xref target="RFC2544"></xref> does not specify when, if ever, the search should stop.
+In practice, the search stops either at some distance
+between the tightest upper bound and the tightest lower bound,
+or after some number of Trials.</t>
+
+<t>For a given pair of Max Load and Min Load values,
+there is a one-to-one correspondence between the number of Trials
+and final distance between the tightest bounds.
+Thus, the search always takes the same time,
+assuming initial bounds are confirmed.</t>
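+
+<t>The procedure described above can be sketched as follows; this is
+illustrative only, where the "measure" callback stands for performing one
+full-length trial and returning its Trial Loss Ratio:</t>
+
```python
def rfc2544_binary_search(measure, min_load, max_load, width):
    """Classic RFC 2544-style halving, as sketched above.

    `measure(load)` performs one full-length trial and returns its
    Trial Loss Ratio. Illustrative only; names are not normative.
    """
    # One trial each to confirm the initial bounds.
    assert measure(max_load) > 0.0, "Max Load must be an upper bound"
    assert measure(min_load) == 0.0, "Min Load must be a lower bound"
    lower, upper = min_load, max_load
    # Each iteration halves the distance between the tightest bounds,
    # so trial count and final width correspond one-to-one.
    while upper - lower > width:
        mid = (lower + upper) / 2.0
        if measure(mid) > 0.0:
            upper = mid  # mid becomes the tightest upper bound
        else:
            lower = mid  # mid becomes the tightest lower bound
    return lower  # the tightest lower bound is reported as throughput
```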
+
+</section>
+<section anchor="stopping-conditions-and-precision"><name>Stopping Conditions and Precision</name>
+
+<t>MLRsearch Specification requires listing both Relevant Bounds for each
+Search Goal, and the difference between the bounds implies
+whether the result precision is achieved.
+Therefore, it is not necessary to report the specific stopping condition used.</t>
+
+<t>MLRsearch implementations may use Goal Width
+to allow direct control of result precision
+and indirect control of the Search Duration.</t>
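<t>A minimal sketch of such a stopping condition, assuming a relative
Goal Width convention (implementations may use an absolute width instead):</t>

```python
def precision_reached(relevant_lower, relevant_upper, goal_width):
    """True when the relative distance between the Relevant Bounds
    is no larger than the Goal Width (relative-width convention assumed)."""
    return goal_width >= (relevant_upper - relevant_lower) / relevant_upper
```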
+
+<t>Other MLRsearch implementations may use different stopping conditions:
+for example based on the Search Duration, trading off precision control
+for duration control.</t>
+
+<t>Due to various possible time optimizations, there is no strict
+correspondence between the Search Duration and Goal Width values.
+In practice, noisy SUT performance increases both average search time
+and its variance.</t>
+
+</section>
+<section anchor="loss-ratios-and-loss-inversion"><name>Loss Ratios and Loss Inversion</name>
+
+<t>The biggest difference between MLRsearch and <xref target="RFC2544"></xref> binary search
+is in the goals of the search.
+<xref target="RFC2544"></xref> has a single goal, based on classifying a single full-length trial
+as either zero-loss or non-zero-loss.
+MLRsearch supports searching for multiple Search Goals at once,
+usually differing in their Goal Loss Ratio values.</t>
+
+<section anchor="single-goal-and-hard-bounds"><name>Single Goal and Hard Bounds</name>
+
+<t>Each bound in <xref target="RFC2544"></xref> simple binary search is "hard",
+in the sense that all further Trial Load values
+are smaller than any current upper bound and larger than any current lower bound.</t>
+
+<t>This is also possible for MLRsearch implementations,
+when the search is started with only one Search Goal instance.</t>
+
+</section>
+<section anchor="multiple-goals-and-loss-inversion"><name>Multiple Goals and Loss Inversion</name>
+
+<t>The MLRsearch Specification supports multiple Search Goals, making the search procedure
+more complicated compared to a binary search with a single goal,
+but most of the complications do not affect the final results much.
+Except for one phenomenon: Loss Inversion.</t>
+
+<t>Depending on Search Goal attributes, Load Classification results may be resistant
+to small amounts of inconsistent trial results
+(see <xref target="inconsistent-trial-results">Inconsistent Trial Results</xref>).
+However, for larger amounts, a Load that is classified
+as an Upper Bound for one Search Goal
+may still be a Lower Bound for another Search Goal.
+Due to this other goal, MLRsearch will probably perform subsequent Trials
+at Trial Loads even larger than the original value.</t>
+
+<t>This introduces questions that any multi-goal search algorithm has to address.
+For example: What to do when all such larger load trials happen to have zero loss?
+Does it mean the earlier upper bound was not real?
+Does it mean the later Low-Loss trials are not considered a lower bound?</t>
+
+<t>The situation where a smaller Load is classified as an Upper Bound,
+while a larger Load is classified as a Lower Bound (for the same search goal),
+is called Loss Inversion.</t>
+
+<t>By contrast, only single-goal search algorithms can have hard bounds,
+which shield them from Loss Inversion.</t>
+
+</section>
+<section anchor="conservativeness-and-relevant-bounds"><name>Conservativeness and Relevant Bounds</name>
+
+<t>MLRsearch is conservative when dealing with Loss Inversion:
+the Upper Bound is considered real, and the Lower Bound
+is considered to be a fluke, at least when computing the final result.</t>
+
+<t>This is formalized using definitions of
+<xref target="relevant-upper-bound">Relevant Upper Bound</xref> and
+<xref target="relevant-lower-bound">Relevant Lower Bound</xref>.</t>
+
+<t>The Relevant Upper Bound (for specific goal) is the smallest Load classified
+as an Upper Bound. But the Relevant Lower Bound is not simply
+the largest among Lower Bounds. It is the largest Load among Loads
+that are Lower Bounds while also being smaller than the Relevant Upper Bound.</t>
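<t>Under the stated definitions, deriving the Relevant Bounds can be sketched
as follows (illustrative, not normative; the per-Load classification mapping
is assumed as input):</t>

```python
def relevant_bounds(classified):
    """Derive the Relevant Bounds from per-Load classifications,
    resolving Loss Inversion conservatively.

    classified: dict mapping Load to "upper", "lower" or "undecided".
    """
    uppers = [load for load, cls in classified.items() if cls == "upper"]
    # Relevant Upper Bound: the smallest Load classified as an Upper Bound.
    relevant_upper = min(uppers) if uppers else None
    # Relevant Lower Bound: the largest Lower Bound that is also
    # smaller than the Relevant Upper Bound (inverted Lower Bounds
    # above the Relevant Upper Bound are treated as flukes).
    lowers = [
        load for load, cls in classified.items()
        if cls == "lower" and (relevant_upper is None or relevant_upper > load)
    ]
    relevant_lower = max(lowers) if lowers else None
    return relevant_lower, relevant_upper
```

With a Loss Inversion such as a Lower Bound at a Load above an Upper Bound, the inverted Lower Bound is ignored, matching the conservative treatment described above.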
+
+<t>With these definitions, the Relevant Lower Bound is always smaller
+than the Relevant Upper Bound (if both exist), and the two relevant bounds
+are used analogously as the two tightest bounds in the binary search.
+When they meet the stopping conditions, the Relevant Bounds are used in the output.</t>
+
+</section>
+<section anchor="consequences"><name>Consequences</name>
+
+<t>The consequence of the way the Relevant Bounds are defined is that
+every Trial Result can have an impact
+on any current Relevant Bound larger than its Trial Load,
+namely by turning that Trial Load into a new Upper Bound.</t>
+
+<t>This also applies when that Load is measured
+before another Load gets enough measurements to become a current Relevant Bound.</t>
+
+<t>This also implies that if the SUT tested (or the Traffic Generator used)
+needs a warm-up, it should be warmed up before starting the Search,
+otherwise the first few measurements could become unjustly limiting.</t>
+
+<t>For MLRsearch implementations, it means it is better to measure
+at smaller Loads first, so bounds found earlier are less likely
+to get invalidated later.</t>
+
+</section>
+</section>
+<section anchor="exceed-ratio-and-multiple-trials"><name>Exceed Ratio and Multiple Trials</name>
+
+<t>The idea of performing multiple Trials at the same Trial Load comes from
+a model where some Trial Results (those with high Trial Loss Ratio) are affected
+by infrequent effects, causing unsatisfactory repeatability
+of <xref target="RFC2544"></xref> Throughput results. Refer to Section <xref target="dut-in-sut">DUT in SUT</xref>
+for a discussion about noiseful and noiseless ends
+of the SUT performance spectrum.
+Stable results are closer to the noiseless end of the SUT performance spectrum,
+so MLRsearch may need to allow some frequency of high-loss trials
+to ignore the rare but big effects near the noiseful end.</t>
+
+<t>For MLRsearch to perform such Trial Result filtering, it needs
+a configuration option to tell how frequent the "infrequent" big loss can be.
+This option is called the <xref target="goal-exceed-ratio">Goal Exceed Ratio</xref>.
+It tells MLRsearch what ratio of trials (more specifically,
+what ratio of Trial Effective Duration seconds)
+can have a <xref target="trial-loss-ratio">Trial Loss Ratio</xref>
+larger than the <xref target="goal-loss-ratio">Goal Loss Ratio</xref>
+and still be classified as a <xref target="lower-bound">Lower Bound</xref>.</t>
+
+<t>Zero exceed ratio means all Trials must have a Trial Loss Ratio
+equal to or lower than the Goal Loss Ratio.</t>
+
+<t>When more than one Trial is intended to classify a Load,
+MLRsearch also needs a parameter that controls the number of Trials needed.
+Therefore, each goal also has an attribute called Goal Duration Sum.</t>
+
+<t>The meaning of a <xref target="goal-duration-sum">Goal Duration Sum</xref> is that
+when a Load has (Full-Length) Trials
+whose Trial Effective Durations when summed up give a value at least as big
+as the Goal Duration Sum value,
+the Load is guaranteed to be classified either as an Upper Bound
+or a Lower Bound for that Search Goal instance.</t>
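<t>For the special case where all Trials are full-length, the interplay of the
Goal Exceed Ratio and the Goal Duration Sum can be sketched as follows
(a simplified illustration with assumed helper names; the normative pseudocode,
which also covers short trials, is in
<xref target="load-classification-code">Appendix A</xref>):</t>

```python
def classify_full_length(high_loss_s, low_loss_s,
                         goal_duration_s, goal_exceed_ratio):
    """Classify a Load from full-length trial duration sums (in seconds).

    Returns "upper", "lower", or "undecided".
    """
    measured_s = high_loss_s + low_loss_s
    whole_s = max(measured_s, goal_duration_s)
    quantile_s = whole_s * goal_exceed_ratio
    missing_s = whole_s - measured_s  # duration not yet measured
    if high_loss_s > quantile_s:
        # Even optimistically, too much high-loss duration already.
        return "upper"
    if high_loss_s + missing_s > quantile_s:
        # Pessimistically, the missing duration could all be high-loss.
        return "undecided"
    return "lower"

# Once the measured duration sum reaches the Goal Duration Sum,
# missing_s is zero, so the Load is guaranteed to be classified
# as either an Upper Bound or a Lower Bound.
```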
+
+</section>
+<section anchor="short-trials-and-duration-selection"><name>Short Trials and Duration Selection</name>
+
+<t>MLRsearch requires each Search Goal to specify its Goal Final Trial Duration.</t>
+
+<t>Section 24 of <xref target="RFC2544"></xref> already anticipates possible time savings
+when Short Trials are used.</t>
+
+<t>An MLRsearch implementation MAY expose configuration parameters that
+decide whether, when, and how short trial durations are used. The exact
+heuristics and controls are left to the discretion of the implementer.</t>
+
+
+<t>While MLRsearch implementations are free to use any logic to select
+Trial Input values, comparability between MLRsearch implementations
+is only assured when the Load Classification logic
+handles any possible set of Trial Results in the same way.</t>
+
+<t>The presence of Short Trial Results complicates
+the Load Classification logic; see more details in Section
+<xref target="load-classification-logic">Load Classification Logic</xref>.</t>
+
+<t>While the Load Classification algorithm is designed to avoid any unneeded Trials,
+for explainability reasons it is recommended that users choose
+Controller Input instances which lead the Controller to select
+the same Trial Duration for all Trials,
+e.g., by setting any Goal Initial Trial Duration to the single value
+also used in all Goal Final Trial Duration attributes.</t>
+
+</section>
+<section anchor="generalized-throughput"><name>Generalized Throughput</name>
+
+<t>Because testing equipment takes the Intended Load
+as an input parameter for a Trial measurement,
+any load search algorithm needs to deal with Intended Load values internally.</t>
+
+<t>But in the presence of Search Goals with a non-zero
+<xref target="goal-loss-ratio">Goal Loss Ratio</xref>, the Load usually does not match
+the user's intuition of what a throughput is.
+The forwarding rate as defined in Section 3.6.1 of <xref target="RFC2285"></xref> is better,
+but it is not obvious how to generalize it
+for Loads with multiple Trials and a non-zero Goal Loss Ratio.</t>
+
+<t>The clearest illustration - and the chief reason for adopting a
+generalized throughput definition - is the presence of a hard
+performance limit.</t>
+
+
+<section anchor="hard-performance-limit"><name>Hard Performance Limit</name>
+
+<t>Even if the bandwidth of a medium allows higher traffic forwarding performance,
+the SUT interfaces may have additional limitations of their own,
+e.g., a specific frames-per-second limit on the NIC (a common occurrence).</t>
+
+<t>Any such limitation should be known and provided as Max Load
+(Section <xref target="max-load">Max Load</xref>).</t>
+
+
+<t>But if Max Load is set larger than what the interface can receive or transmit,
+there will be a "hard limit" behavior observed in Trial Results.</t>
+
+<t>Consider a hard limit at one hundred million frames per second (100 Mfps),
+a Max Load larger than that, and a Goal Loss Ratio of 0.5%.
+If the DUT adds no losses of its own, a 0.5% Trial Loss Ratio is achieved
+at a Relevant Lower Bound of 100.5025 Mfps.</t>
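<t>The arithmetic behind the 100.5025 Mfps figure can be checked directly
(a worked example, not part of the specification):</t>

```python
# At offered loads above a hard limit of 100 Mfps, the received rate
# stays at 100 Mfps, so the loss ratio at load L is (L - 100) / L.
# Solving for the load where that equals the 0.5% Goal Loss Ratio:
hard_limit_mfps = 100.0
goal_loss_ratio = 0.005
lower_bound_mfps = hard_limit_mfps / (1.0 - goal_loss_ratio)
# lower_bound_mfps is approximately 100.5025 Mfps.
```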
+
+<t>Reporting a throughput that exceeds the SUT's verified hard limit is
+counter-intuitive. Accordingly, the <xref target="RFC2544"></xref> Throughput metric should
+be generalized - rather than relying solely on the Relevant Lower
+Bound - to reflect realistic, limit-aware performance.</t>
+
+
+<t>MLRsearch defines one such generalization,
+the <xref target="conditional-throughput">Conditional Throughput</xref>.
+It is the Trial Forwarding Rate from one of the Full-Length Trials
+performed at the Relevant Lower Bound.
+The algorithm to determine which trial exactly is in
+<xref target="conditional-throughput-code">Appendix B</xref>.</t>
+
+<t>In the hard limit example, 100.5025 Mfps Load will still have
+only 100.0 Mfps forwarding rate, nicely confirming the known limitation.</t>
+
+</section>
+<section anchor="performance-variability"><name>Performance Variability</name>
+
+<t>With non-zero Goal Loss Ratio, and without hard performance limits,
+Low-Loss trials at the same Load may achieve different Trial Forwarding Rate
+values just due to DUT performance variability.</t>
+
+<t>By comparing the best case (all Relevant Lower Bound trials have zero loss)
+and the worst case (all Trial Loss Ratios at Relevant Lower Bound
+are equal to the Goal Loss Ratio),
+one can prove that Conditional Throughput
+values may have up to the Goal Loss Ratio relative difference.</t>
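<t>A short numeric sketch of the two extreme cases (illustrative values only):</t>

```python
lower_bound = 100.0      # some Relevant Lower Bound (frames per second)
goal_loss_ratio = 0.005
# Best case: every trial at the bound is zero-loss.
best_conditional_throughput = lower_bound * (1.0 - 0.0)
# Worst case: every trial loses exactly the Goal Loss Ratio.
worst_conditional_throughput = lower_bound * (1.0 - goal_loss_ratio)
relative_difference = (
    best_conditional_throughput - worst_conditional_throughput
) / best_conditional_throughput
# relative_difference matches goal_loss_ratio, the bound claimed above.
```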
+
+
+<t>Setting the Goal Width below the Goal Loss Ratio
+may cause the Conditional Throughput for a larger Goal Loss Ratio to become smaller
+than a Conditional Throughput for a goal with a lower Goal Loss Ratio,
+which is counter-intuitive, considering they come from the same Search.
+Therefore, it is RECOMMENDED to set the Goal Width to a value no lower
+than the Goal Loss Ratio of the higher-loss Search Goal.</t>
+
+<t>Although Conditional Throughput can fluctuate from one run to the next,
+it still offers a more discriminating basis for comparison than the
+Relevant Lower Bound - particularly when deterministic load selection
+yields the same Lower Bound value across multiple runs.</t>
+
+</section>
+</section>
+</section>
+<section anchor="mlrsearch-logic-and-example"><name>MLRsearch Logic and Example</name>
+
+<t>This section uses informal language to describe two aspects of MLRsearch logic:
+Load Classification and Conditional Throughput,
+reflecting formal pseudocode representation provided in
+<xref target="load-classification-code">Appendix A</xref>
+and <xref target="conditional-throughput-code">Appendix B</xref>.
+This is followed by example search.</t>
+
+<t>The logic is equivalent but not identical to the pseudocode
+in the appendices. The pseudocode is designed to be short and frequently
+combines multiple operations into one expression.
+The logic as described in this section lists each operation separately
+and uses more intuitive names for the intermediate values.</t>
+
+<section anchor="load-classification-logic"><name>Load Classification Logic</name>
+
+<t>Note: For clarity of explanation, variables are tagged as (I)nput,
+(T)emporary, or (O)utput.</t>
+
+
+<t><list style="symbols">
+ <t>Collect Trial Results: <list style="symbols">
+ <t>Take all Trial Result instances (I) measured at a given load.</t>
+ </list></t>
+ <t>Aggregate Trial Durations: <list style="symbols">
+ <t>Full-length high-loss sum (T) is the sum of Trial Effective Duration
+values of all full-length high-loss trials (I).</t>
+ <t>Full-length low-loss sum (T) is the sum of Trial Effective Duration
+values of all full-length low-loss trials (I).</t>
+ <t>Short high-loss sum (T) is the sum of Trial Effective Duration values
+of all short high-loss trials (I).</t>
+ <t>Short low-loss sum (T) is the sum of Trial Effective Duration values
+of all short low-loss trials (I).</t>
+ </list></t>
+ <t>Derive goal-based ratios: <list style="symbols">
+ <t>Subceed ratio (T) is One minus the Goal Exceed Ratio (I).</t>
+ <t>Exceed coefficient (T) is the Goal Exceed Ratio divided by the subceed
+ratio.</t>
+ </list></t>
+ <t>Balance short-trial effects: <list style="symbols">
+ <t>Balancing sum (T) is the short low-loss sum
+multiplied by the exceed coefficient.</t>
+ <t>Excess sum (T) is the short high-loss sum minus the balancing sum.</t>
+ <t>Positive excess sum (T) is the maximum of zero and excess sum.</t>
+ </list></t>
+ <t>Compute effective duration totals: <list style="symbols">
+ <t>Effective high-loss sum (T) is the full-length high-loss sum
+plus the positive excess sum.</t>
+ <t>Effective full sum (T) is the effective high-loss sum
+plus the full-length low-loss sum.</t>
+ <t>Effective whole sum (T) is the larger of the effective full sum
+and the Goal Duration Sum.</t>
+ <t>Missing sum (T) is the effective whole sum minus the effective full sum.</t>
+ </list></t>
+ <t>Estimate exceed ratios: <list style="symbols">
+ <t>Pessimistic high-loss sum (T) is the effective high-loss sum
+plus the missing sum.</t>
+ <t>Optimistic exceed ratio (T) is the effective high-loss sum
+divided by the effective whole sum.</t>
+ <t>Pessimistic exceed ratio (T) is the pessimistic high-loss sum
+divided by the effective whole sum.</t>
+ </list></t>
+ <t>Classify the Load: <list style="symbols">
+ <t>The load is classified as an Upper Bound (O) if the optimistic exceed
+ratio is larger than the Goal Exceed Ratio.</t>
+ <t>The load is classified as a Lower Bound (O) if the pessimistic exceed
+ratio is not larger than the Goal Exceed Ratio.</t>
+ <t>The load is classified as undecided (O) otherwise.</t>
+ </list></t>
+</list></t>
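<t>The steps above can be exercised with sample numbers (illustrative values
only; the normative pseudocode is in
<xref target="load-classification-code">Appendix A</xref>):</t>

```python
# Sample inputs: duration sums in seconds for one Load and one Search Goal.
full_length_high_loss_s = 10.0
full_length_low_loss_s = 50.0
short_high_loss_s = 5.0
short_low_loss_s = 5.0
goal_exceed_ratio = 0.5
goal_duration_s = 120.0

subceed_ratio = 1.0 - goal_exceed_ratio                              # 0.5
exceed_coefficient = goal_exceed_ratio / subceed_ratio               # 1.0
balancing_s = short_low_loss_s * exceed_coefficient                  # 5.0
excess_s = short_high_loss_s - balancing_s                           # 0.0
positive_excess_s = max(0.0, excess_s)                               # 0.0
effective_high_loss_s = full_length_high_loss_s + positive_excess_s  # 10.0
effective_full_s = effective_high_loss_s + full_length_low_loss_s    # 60.0
effective_whole_s = max(effective_full_s, goal_duration_s)           # 120.0
missing_s = effective_whole_s - effective_full_s                     # 60.0
pessimistic_high_loss_s = effective_high_loss_s + missing_s          # 70.0
optimistic_exceed_ratio = effective_high_loss_s / effective_whole_s
pessimistic_exceed_ratio = pessimistic_high_loss_s / effective_whole_s
# The optimistic estimate (about 0.083) does not exceed the 0.5 goal,
# but the pessimistic one (about 0.583) does, so this Load is still
# classified as undecided.
```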
+
+</section>
+<section anchor="conditional-throughput-logic"><name>Conditional Throughput Logic</name>
+
+<t><list style="symbols">
+ <t>Collect Trial Results: <list style="symbols">
+ <t>Take all Trial Result instances (I) measured at a given Load.</t>
+ </list></t>
+ <t>Sum Full-Length Durations: <list style="symbols">
+ <t>Full-length high-loss sum (T) is the sum of Trial Effective Duration
+values of all full-length high-loss trials (I).</t>
+ <t>Full-length low-loss sum (T) is the sum of Trial Effective Duration
+values of all full-length low-loss trials (I).</t>
+ <t>Full-length sum (T) is the full-length high-loss sum (T) plus the
+full-length low-loss sum (T).</t>
+ </list></t>
+ <t>Derive initial thresholds: <list style="symbols">
+ <t>Subceed ratio (T) is One minus the Goal Exceed Ratio (I).</t>
+ <t>Remaining sum (T) initially is the full-length sum multiplied by the subceed
+ratio.</t>
+ <t>Current loss ratio (T) initially is 100%.</t>
+ </list></t>
+ <t>Iterate through ordered trials: <list style="symbols">
+ <t>For each full-length trial result, sorted in increasing order by Trial
+Loss Ratio: <list style="symbols">
+ <t>If remaining sum is not larger than zero, exit the loop.</t>
+ <t>Set current loss ratio to this trial's Trial Loss Ratio (I).</t>
+ <t>Decrease the remaining sum by this trial's Trial Effective Duration (I).</t>
+ </list></t>
+ </list></t>
+ <t>Compute Conditional Throughput: <list style="symbols">
+ <t>Current forwarding ratio (T) is One minus the current loss ratio.</t>
+ <t>Conditional Throughput (T) is the current forwarding ratio multiplied
+by the Load value.</t>
+ </list></t>
+</list></t>
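<t>The iteration above can be sketched as a small function (illustrative;
the normative pseudocode in
<xref target="conditional-throughput-code">Appendix B</xref> additionally
accounts for the Goal Duration Sum):</t>

```python
def conditional_throughput(load, full_length_trials, goal_exceed_ratio):
    """full_length_trials: iterable of (trial_loss_ratio,
    trial_effective_duration) pairs measured at the given Load."""
    trials = sorted(full_length_trials)  # increasing Trial Loss Ratio
    full_length_s = sum(duration for _, duration in trials)
    remaining_s = full_length_s * (1.0 - goal_exceed_ratio)
    current_loss_ratio = 1.0  # initially 100%
    for loss_ratio, duration in trials:
        # If the remaining sum is not larger than zero, exit the loop.
        if not remaining_s > 0.0:
            break
        current_loss_ratio = loss_ratio
        remaining_s -= duration
    return load * (1.0 - current_loss_ratio)
```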
+
+<section anchor="conditional-throughput-and-load-classification"><name>Conditional Throughput and Load Classification</name>
+
+<t>Conditional Throughput and results of Load Classification overlap but
+are not identical.</t>
+
+<t><list style="symbols">
+ <t>When a load is marked as a Relevant Lower Bound, its Conditional
+Throughput is taken from a trial whose loss ratio never exceeds the
+Goal Loss Ratio.</t>
+ <t>The reverse is not guaranteed: if the Goal Width is narrower than the
+Goal Loss Ratio, Conditional Throughput can still end up higher than
+the Relevant Upper Bound.</t>
+</list></t>
+
+</section>
+</section>
+<section anchor="sut-behaviors"><name>SUT Behaviors</name>
+
+<t>In Section <xref target="dut-in-sut">DUT in SUT</xref>, the notion of noise has been introduced.
+This section uses new terms
+to describe possible SUT behaviors more precisely.</t>
+
+<t>From a measurement point of view, noise is visible as inconsistent trial results.
+See <xref target="inconsistent-trial-results">Inconsistent Trial Results</xref> for general points
+and <xref target="loss-ratios-and-loss-inversion">Loss Ratios and Loss Inversion</xref>
+for specifics when comparing different Load values.</t>
+
+<t>Load Classification and Conditional Throughput apply to a single Load value,
+but even the set of Trial Results measured at that Trial Load value
+may appear inconsistent.</t>
+
+<t>As MLRsearch aims to save time, it executes only a small number of Trials,
+getting only a limited amount of information about SUT behavior.
+It is useful to introduce an "SUT expert" point of view to contrast
+with that limited information.</t>
+
+<section anchor="expert-predictions"><name>Expert Predictions</name>
+
+<t>Imagine that before the Search starts, a human expert had unlimited time
+to measure the SUT and obtain all reliable information about it.
+The information is not perfect, as there is still random noise influencing SUT.
+But the expert is familiar with possible noise events, even the rare ones,
+and thus the expert can do probabilistic predictions about future Trial Outputs.</t>
+
+<t>When several outcomes are possible,
+the expert can assess the probability of each outcome.</t>
+
+</section>
+<section anchor="exceed-probability"><name>Exceed Probability</name>
+
+<t>When the Controller selects new Trial Duration and Trial Load,
+and just before the Measurer starts performing the Trial,
+the SUT expert can envision possible Trial Results.</t>
+
+<t>With respect to a particular Search Goal instance, the possibilities
+can be summarized into a single number: Exceed Probability.
+It is the probability (according to the expert) that the measured
+Trial Loss Ratio will be higher than the Goal Loss Ratio.</t>
+
+</section>
+<section anchor="trial-duration-dependence"><name>Trial Duration Dependence</name>
+
+<t>When comparing Exceed Probability values for the same Trial Load value
+but different Trial Duration values,
+there are several patterns that commonly occur in practice.</t>
+
+<section anchor="strong-increase"><name>Strong Increase</name>
+
+<t>Exceed Probability is very low at short durations but very high at full-length.
+This SUT behavior is undesirable, and may hint at a faulty SUT,
+e.g., SUT leaks resources and is unable to sustain the desired performance.</t>
+
+<t>But this behavior is also seen when the SUT uses a large amount of buffers.
+This is the main reason users may want to set a large Goal Final Trial Duration.</t>
+
+</section>
+<section anchor="mild-increase"><name>Mild Increase</name>
+
+<t>Short trials are slightly less likely to exceed the loss-ratio limit,
+but the improvement is modest. This mild benefit is typical when noise
+is dominated by rare, large loss spikes: during a full-length trial,
+the good-performing periods cannot fully offset the heavy frame loss
+that occurs in the brief low-performing bursts.</t>
+
+</section>
+<section anchor="independence"><name>Independence</name>
+
+<t>Short trials have basically the same Exceed Probability as full-length trials.
+This is possible only if loss spikes are small (so other parts can compensate)
+and if Goal Loss Ratio is more than zero (otherwise, other parts
+cannot compensate at all).</t>
+
+</section>
+<section anchor="decrease"><name>Decrease</name>
+
+<t>Short trials have a larger Exceed Probability than full-length trials.
+This is possible only for a non-zero Goal Loss Ratio,
+for example if the SUT needs to "warm up" to its best performance within each trial.
+This pattern is not commonly seen in practice.</t>
+
+</section>
+</section>
+</section>
+</section>
+<section anchor="iana-considerations"><name>IANA Considerations</name>
+
+<t>This document does not make any request to IANA.</t>
+
+</section>
+<section anchor="security-considerations"><name>Security Considerations</name>
+
+<t>Benchmarking activities as described in this memo are limited to
+technology characterization of a DUT/SUT using controlled stimuli in a
+laboratory environment, with dedicated address space and the constraints
+specified in the sections above.</t>
+
+<t>The benchmarking network topology will be an independent test setup and
+MUST NOT be connected to devices that may forward the test traffic into
+a production network or misroute traffic to the test management network.</t>
+
+<t>Further, benchmarking is performed on an "opaque" basis, relying
+solely on measurements observable external to the DUT/SUT.</t>
+
+<t>The DUT/SUT SHOULD NOT include features that serve only to boost
+benchmark scores - such as a dedicated "fast-track" test mode that is
+never used in normal operation.</t>
+
+
+<t>Any implications for network security arising from the DUT/SUT SHOULD be
+identical in the lab and in production networks.</t>
+
+
+
+</section>
+<section anchor="acknowledgements"><name>Acknowledgements</name>
+
+<t>Special wholehearted gratitude and thanks to the late Al Morton for his
+thorough reviews filled with very specific feedback and constructive
+guidelines. Thank You Al for the close collaboration over the years, Your Mentorship,
+Your continuous unwavering encouragement full of empathy and energizing
+positive attitude. Al, You are dearly missed.</t>
+
+<t>Thanks to Gabor Lencse, Giuseppe Fioccola and BMWG contributors for good
+discussions and thorough reviews, guiding and helping us to improve the
+clarity and formality of this document.</t>
+
+<t>Many thanks to Alec Hothan of the OPNFV NFVbench project for a thorough
+review and numerous useful comments and suggestions in the earlier
+versions of this document.</t>
+
+</section>
+
+
+ </middle>
+
+ <back>
+
+
+<references title='References' anchor="sec-combined-references">
+
+ <references title='Normative References' anchor="sec-normative-references">
+
+&RFC1242;
+&RFC2119;
+&RFC2285;
+&RFC2544;
+&RFC8174;
+
+
+ </references>
+
+ <references title='Informative References' anchor="sec-informative-references">
+
+&RFC5180;
+&RFC6349;
+&RFC6985;
+&RFC8219;
+<reference anchor="TST009" target="https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/009/03.04.01_60/gs_NFV-TST009v030401p.pdf">
+ <front>
+ <title>ETSI GS NFV-TST 009: Specification of Networking Benchmarks and Measurement Methods for NFVI</title>
+ <author >
+ <organization></organization>
+ </author>
+ <date year="n.d."/>
+ </front>
+</reference>
+<reference anchor="Y.1564" target="https://www.itu.int/rec/dologin_pub.asp?lang=e&amp;id=T-REC-Y.1564-201602-I!!PDF-E&amp;type=items">
+ <front>
+ <title>ITU-T Y.1564: Ethernet Service Activation Test Methodology</title>
+ <author >
+ <organization></organization>
+ </author>
+ <date year="n.d."/>
+ </front>
+</reference>
+<reference anchor="FDio-CSIT-MLRsearch" target="https://csit.fd.io/cdocs/methodology/measurements/data_plane_throughput/mlr_search/">
+ <front>
+ <title>FD.io CSIT Test Methodology - MLRsearch</title>
+ <author >
+ <organization></organization>
+ </author>
+ <date year="2023" month="October"/>
+ </front>
+</reference>
+<reference anchor="PyPI-MLRsearch" target="https://pypi.org/project/MLRsearch/1.2.1/">
+ <front>
+ <title>MLRsearch 1.2.1, Python Package Index</title>
+ <author >
+ <organization></organization>
+ </author>
+ <date year="2023" month="October"/>
+ </front>
+</reference>
+<reference anchor="Lencze-Shima" target="https://datatracker.ietf.org/doc/html/draft-lencse-bmwg-rfc2544-bis-00">
+ <front>
+ <title>An Upgrade to Benchmarking Methodology for Network Interconnect Devices - expired</title>
+ <author >
+ <organization></organization>
+ </author>
+ <date year="n.d."/>
+ </front>
+</reference>
+<reference anchor="Lencze-Kovacs-Shima" target="http://dx.doi.org/10.11601/ijates.v9i2.288">
+ <front>
+ <title>Gaming with the Throughput and the Latency Benchmarking Measurement Procedures of RFC 2544</title>
+ <author >
+ <organization></organization>
+ </author>
+ <date year="n.d."/>
+ </front>
+</reference>
+<reference anchor="Ott-Mathis-Semke-Mahdavi" target="https://www.cs.cornell.edu/people/egs/cornellonly/syslunch/fall02/ott.pdf">
+ <front>
+ <title>The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm</title>
+ <author >
+ <organization></organization>
+ </author>
+ <date year="n.d."/>
+ </front>
+</reference>
+<reference anchor="Vassilev" target="https://datatracker.ietf.org/doc/draft-ietf-bmwg-network-tester-cfg/06">
+ <front>
+ <title>A YANG Data Model for Network Tester Management</title>
+ <author >
+ <organization></organization>
+ </author>
+ <date year="n.d."/>
+ </front>
+</reference>
+
+
+ </references>
+
+</references>
+
+
+<?line 4412?>
+
+<section anchor="load-classification-code"><name>Load Classification Code</name>
+
+
+<t>This appendix specifies how to perform the Load Classification.</t>
+
+<t>Any Trial Load value can be classified,
+according to a given <xref target="search-goal">Search Goal</xref> instance.</t>
+
+<t>The algorithm uses (some subsets of) the set of all available Trial Results
+from Trials measured at a given Load at the end of the Search.</t>
+
+<t>The block at the end of this appendix holds pseudocode
+which computes two values, stored in variables named
+<spanx style="verb">optimistic_is_lower</spanx> and <spanx style="verb">pessimistic_is_lower</spanx>.</t>
+
+<t>Although presented as pseudocode, the listing is syntactically valid
+Python and can be executed without modification.</t>
+
+
+<t>If values of both variables are computed to be true, the Load in question
+is classified as a Lower Bound according to the given Search Goal instance.
+If values of both variables are false, the Load is classified as an Upper Bound.
+Otherwise, the load is classified as Undecided.</t>
+
+<t>Some variable names are shortened to fit expressions in one line.
+Namely, variables holding sum quantities end in <spanx style="verb">_s</spanx> instead of <spanx style="verb">_sum</spanx>,
+and variables holding effective quantities start in <spanx style="verb">effect_</spanx>
+instead of <spanx style="verb">effective_</spanx>.</t>
+
+<t>The pseudocode expects the following variables to hold the following values:</t>
+
+<t><list style="symbols">
+ <t><spanx style="verb">goal_duration_s</spanx>: The Goal Duration Sum value of the given Search Goal.</t>
+ <t><spanx style="verb">goal_exceed_ratio</spanx>: The Goal Exceed Ratio value of the given Search Goal.</t>
+ <t><spanx style="verb">full_length_low_loss_s</spanx>: Sum of Trial Effective Durations across Trials
+with Trial Duration at least equal to the Goal Final Trial Duration
+and with Trial Loss Ratio not higher than the Goal Loss Ratio
+(across Full-Length Low-Loss Trials).</t>
+ <t><spanx style="verb">full_length_high_loss_s</spanx>: Sum of Trial Effective Durations across Trials
+with Trial Duration at least equal to the Goal Final Trial Duration
+and with Trial Loss Ratio higher than the Goal Loss Ratio
+(across Full-Length High-Loss Trials).</t>
+ <t><spanx style="verb">short_low_loss_s</spanx>: Sum of Trial Effective Durations across Trials
+with Trial Duration shorter than the Goal Final Trial Duration
+and with Trial Loss Ratio not higher than the Goal Loss Ratio
+(across Short Low-Loss Trials).</t>
+ <t><spanx style="verb">short_high_loss_s</spanx>: Sum of Trial Effective Durations across Trials
+with Trial Duration shorter than the Goal Final Trial Duration
+and with Trial Loss Ratio higher than the Goal Loss Ratio
+(across Short High-Loss Trials).</t>
+</list></t>
+
+<t>The code also works correctly when there are no Trial Results at a given Load.</t>
+
+<figure><sourcecode type="python"><![CDATA[
+<CODE BEGINS>
+exceed_coefficient = goal_exceed_ratio / (1.0 - goal_exceed_ratio)
+balancing_s = short_low_loss_s * exceed_coefficient
+positive_excess_s = max(0.0, short_high_loss_s - balancing_s)
+effect_high_loss_s = full_length_high_loss_s + positive_excess_s
+effect_full_length_s = full_length_low_loss_s + effect_high_loss_s
+effect_whole_s = max(effect_full_length_s, goal_duration_s)
+quantile_duration_s = effect_whole_s * goal_exceed_ratio
+pessimistic_high_loss_s = effect_whole_s - full_length_low_loss_s
+pessimistic_is_lower = pessimistic_high_loss_s <= quantile_duration_s
+optimistic_is_lower = effect_high_loss_s <= quantile_duration_s
+<CODE ENDS>
+]]></sourcecode></figure>
+
+
+</section>
+<section anchor="conditional-throughput-code"><name>Conditional Throughput Code</name>
+
+<t>This appendix specifies an example of how to compute Conditional Throughput,
+as referred to in Section <xref target="conditional-throughput">Conditional Throughput</xref>.</t>
+
+<t>Any Load value can be used as the basis for the following computation,
+but only the Relevant Lower Bound (at the end of the Search)
+leads to the value called the Conditional Throughput for a given Search Goal.</t>
+
+<t>The algorithm uses (some subsets of) the set of all available Trial Results
+from Trials measured at a given Load at the end of the Search.</t>
+
+<t>The block at the end of this appendix holds pseudocode
+which computes a value stored as variable <spanx style="verb">conditional_throughput</spanx>.</t>
+
+<t>Although presented as pseudocode, the listing is syntactically valid
+Python and can be executed without modification.</t>
+
+<t>Some variable names are shortened in order to fit expressions in one line.
+Namely, variables holding sum quantities end in <spanx style="verb">_s</spanx> instead of <spanx style="verb">_sum</spanx>,
+and variables holding effective quantities start in <spanx style="verb">effect_</spanx>
+instead of <spanx style="verb">effective_</spanx>.</t>
+
+<t>The pseudocode expects the following variables to hold the following values:</t>
+
+<t><list style="symbols">
+ <t><spanx style="verb">goal_duration_s</spanx>: The Goal Duration Sum value of the given Search Goal.</t>
+ <t><spanx style="verb">goal_exceed_ratio</spanx>: The Goal Exceed Ratio value of the given Search Goal.</t>
+ <t><spanx style="verb">full_length_low_loss_s</spanx>: Sum of Trial Effective Durations across Trials
+with Trial Duration at least equal to the Goal Final Trial Duration
+and with Trial Loss Ratio not higher than the Goal Loss Ratio
+(across Full-Length Low-Loss Trials).</t>
+ <t><spanx style="verb">full_length_high_loss_s</spanx>: Sum of Trial Effective Durations across Trials
+with Trial Duration at least equal to the Goal Final Trial Duration
+and with Trial Loss Ratio higher than the Goal Loss Ratio
+(across Full-Length High-Loss Trials).</t>
+ <t><spanx style="verb">full_length_trials</spanx>: An iterable of all Trial Results from Trials
+with Trial Duration at least equal to the Goal Final Trial Duration
+(all Full-Length Trials), sorted by increasing Trial Loss Ratio.
+One item <spanx style="verb">trial</spanx> is a composite with the following two attributes available: <list style="symbols">
+ <t><spanx style="verb">trial.loss_ratio</spanx>: The Trial Loss Ratio as measured for this Trial.</t>
+ <t><spanx style="verb">trial.effect_duration</spanx>: The Trial Effective Duration of this Trial.</t>
 </list></t>
 <t><spanx style="verb">intended_load</spanx>: The Intended Load value the Trials were measured at
(used in the final multiplication).</t>
</list></t>
+
+<t>The code works correctly only when there is at least one
+Trial Result measured at a given Load.</t>
+
+<figure><sourcecode type="python"><![CDATA[
+<CODE BEGINS>
+full_length_s = full_length_low_loss_s + full_length_high_loss_s
+whole_s = max(goal_duration_s, full_length_s)
+remaining = whole_s * (1.0 - goal_exceed_ratio)
+quantile_loss_ratio = None
+for trial in full_length_trials:
+ if quantile_loss_ratio is None or remaining > 0.0:
+ quantile_loss_ratio = trial.loss_ratio
+ remaining -= trial.effect_duration
+ else:
+ break
+else:
+ if remaining > 0.0:
+ quantile_loss_ratio = 1.0
+conditional_throughput = intended_load * (1.0 - quantile_loss_ratio)
+<CODE ENDS>
+]]></sourcecode></figure>
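<t>As an illustration only (not part of the specification), the pseudocode above
can be exercised on the Full-Length Trial Results used later for the TST009 goal
in the Example Search appendix. The function wrapper and input values below
are assumptions chosen for this example.</t>

<figure><sourcecode type="python"><![CDATA[
from collections import namedtuple

Trial = namedtuple("Trial", ["loss_ratio", "effect_duration"])

def conditional_throughput(intended_load, goal_duration_s, goal_exceed_ratio,
                           full_length_low_loss_s, full_length_high_loss_s,
                           full_length_trials):
    # The pseudocode above, wrapped as a function for convenience.
    full_length_s = full_length_low_loss_s + full_length_high_loss_s
    whole_s = max(goal_duration_s, full_length_s)
    remaining = whole_s * (1.0 - goal_exceed_ratio)
    quantile_loss_ratio = None
    for trial in full_length_trials:
        if quantile_loss_ratio is None or remaining > 0.0:
            quantile_loss_ratio = trial.loss_ratio
            remaining -= trial.effect_duration
        else:
            break
    else:
        if remaining > 0.0:
            quantile_loss_ratio = 1.0
    return intended_load * (1.0 - quantile_loss_ratio)

# TST009 goal: 120 s Goal Duration Sum, 50% Goal Exceed Ratio,
# two Full-Length Trials of 60 s each, one with 0% and one with 0.1% loss.
trials = sorted([Trial(0.001, 60.0), Trial(0.0, 60.0)],
                key=lambda t: t.loss_ratio)
ct = conditional_throughput(1_000_000.0, 120.0, 0.5, 60.0, 60.0, trials)
]]></sourcecode></figure>

<t>The computed value is one million frames per second,
matching the worked TST009 computation later in this appendix.</t>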
+
+
+</section>
+<section anchor="example-search"><name>Example Search</name>
+
+
+<t>The following example Search is related to
+one hypothetical run of a Search test procedure
+that has been started with multiple Search Goals.
Several points in time are chosen to show how the logic works,
each with a specific set of Trial Results available.
+The trial results themselves are not very realistic, as
+the intention is to show several corner cases of the logic.</t>
+
+<t>In all Trials, the Effective Trial Duration is equal to Trial Duration.</t>
+
<t>Only one Trial Load is in focus; its value is one million frames per second.
+Trial Results at other Trial Loads are not mentioned,
+as the parts of logic present here do not depend on those.
+In practice, Trial Results at other Load values would be present,
+e.g., MLRsearch will look for a Lower Bound smaller than any Upper Bound found.</t>
+
+<t>At any given moment, exactly one Search Goal is designated as in focus.
+This designation affects only the Trial Duration chosen for new trials;
+it does not alter the rest of the decision logic.</t>
+
<t>An MLRsearch implementation is free to evaluate several goals
simultaneously; the "focus" mechanism is optional and appears here only
to show that a Load can still be classified against goals that are not
currently in focus.</t>
+
+<section anchor="example-goals"><name>Example Goals</name>
+
<t>The following four Search Goal instances are selected for the example Search.
Each goal has a readable name and a dense code;
the code shows Search Goal attribute values.</t>
+
+<t>As the variable "exceed coefficient" does not depend on trial results,
+it is also precomputed here.</t>
+
+<t>Goal 1:</t>
+
+<figure><artwork><![CDATA[
+name: RFC2544
+Goal Final Trial Duration: 60s
+Goal Duration Sum: 60s
+Goal Loss Ratio: 0%
+Goal Exceed Ratio: 0%
exceed coefficient: 0% / (100% - 0%) = 0.0
+code: 60f60d0l0e
+]]></artwork></figure>
+
+<t>Goal 2:</t>
+
+<figure><artwork><![CDATA[
+name: TST009
+Goal Final Trial Duration: 60s
+Goal Duration Sum: 120s
+Goal Loss Ratio: 0%
+Goal Exceed Ratio: 50%
+exceed coefficient: 50% / (100% - 50%) = 1.0
+code: 60f120d0l50e
+]]></artwork></figure>
+
+<t>Goal 3:</t>
+
+<figure><artwork><![CDATA[
+name: 1s final
+Goal Final Trial Duration: 1s
+Goal Duration Sum: 120s
+Goal Loss Ratio: 0.5%
+Goal Exceed Ratio: 50%
+exceed coefficient: 50% / (100% - 50%) = 1.0
+code: 1f120d.5l50e
+]]></artwork></figure>
+
+<t>Goal 4:</t>
+
+<figure><artwork><![CDATA[
+name: 20% exceed
+Goal Final Trial Duration: 60s
+Goal Duration Sum: 60s
+Goal Loss Ratio: 0.5%
+Goal Exceed Ratio: 20%
+exceed coefficient: 20% / (100% - 20%) = 0.25
+code: 60f60d0.5l20e
+]]></artwork></figure>
+
<t>The first two goals are important for compliance reasons;
the other two cover less frequent cases.</t>
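<t>For readers following along in code, the goal attributes and exceed coefficients
above can be written down as follows. This is an illustrative sketch;
the attribute names are assumptions, not normative.</t>

<figure><sourcecode type="python"><![CDATA[
# The four example goals; the exceed coefficient is
# exceed_ratio / (1 - exceed_ratio), as in the Load Classification code.
goals = {
    "RFC2544":    dict(final_trial_duration=60.0, duration_sum=60.0,
                       loss_ratio=0.0,   exceed_ratio=0.0),
    "TST009":     dict(final_trial_duration=60.0, duration_sum=120.0,
                       loss_ratio=0.0,   exceed_ratio=0.5),
    "1s final":   dict(final_trial_duration=1.0,  duration_sum=120.0,
                       loss_ratio=0.005, exceed_ratio=0.5),
    "20% exceed": dict(final_trial_duration=60.0, duration_sum=60.0,
                       loss_ratio=0.005, exceed_ratio=0.2),
}
coefficients = {name: goal["exceed_ratio"] / (1.0 - goal["exceed_ratio"])
                for name, goal in goals.items()}
]]></sourcecode></figure>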
+
+</section>
+<section anchor="example-trial-results"><name>Example Trial Results</name>
+
+<t>The following six sets of trial results are selected for the example Search.
+The sets are defined as points in time, describing which Trial Results
+were added since the previous point.</t>
+
<t>Each point has a readable name and a dense code;
the code shows Trial Output attribute values
and the number of times identical results were added.</t>
+
+<t>Point 1:</t>
+
+<figure><artwork><![CDATA[
+name: first short good
+goal in focus: 1s final (1f120d.5l50e)
+added Trial Results: 59 trials, each 1 second and 0% loss
+code: 59x1s0l
+]]></artwork></figure>
+
+<t>Point 2:</t>
+
+<figure><artwork><![CDATA[
+name: first short bad
+goal in focus: 1s final (1f120d.5l50e)
+added Trial Result: one trial, 1 second, 1% loss
+code: 59x1s0l+1x1s1l
+]]></artwork></figure>
+
+<t>Point 3:</t>
+
+<figure><artwork><![CDATA[
+name: last short bad
+goal in focus: 1s final (1f120d.5l50e)
+added Trial Results: 59 trials, 1 second each, 1% loss each
+code: 59x1s0l+60x1s1l
+]]></artwork></figure>
+
+<t>Point 4:</t>
+
+<figure><artwork><![CDATA[
+name: last short good
+goal in focus: 1s final (1f120d.5l50e)
added Trial Result: one trial, 1 second, 0% loss
+code: 60x1s0l+60x1s1l
+]]></artwork></figure>
+
+<t>Point 5:</t>
+
+<figure><artwork><![CDATA[
+name: first long bad
+goal in focus: TST009 (60f120d0l50e)
added Trial Result: one trial, 60 seconds, 0.1% loss
+code: 60x1s0l+60x1s1l+1x60s.1l
+]]></artwork></figure>
+
+<t>Point 6:</t>
+
+<figure><artwork><![CDATA[
+name: first long good
+goal in focus: TST009 (60f120d0l50e)
added Trial Result: one trial, 60 seconds, 0% loss
+code: 60x1s0l+60x1s1l+1x60s.1l+1x60s0l
+]]></artwork></figure>
+
+<t>Comments on point in time naming:</t>
+
+<t><list style="symbols">
+ <t>When a name contains "short", it means the added trial
+had Trial Duration of 1 second, which is Short Trial for 3 of the Search Goals,
+but it is a Full-Length Trial for the "1s final" goal.</t>
+ <t>Similarly, "long" in name means the added trial
+had Trial Duration of 60 seconds, which is Full-Length Trial for 3 goals
+but Long Trial for the "1s final" goal.</t>
 <t>When a name contains "good", it means the added trial is a Low-Loss Trial
for all the goals.</t>
 <t>When a name contains "short bad", it means the added trial is a High-Loss Trial
for all the goals.</t>
+ <t>When a name contains "long bad", it means the added trial
+is a High-Loss Trial for goals "RFC2544" and "TST009",
+but it is a Low-Loss Trial for the two other goals.</t>
+</list></t>
+
+</section>
+<section anchor="load-classification-computations"><name>Load Classification Computations</name>
+
<t>This section shows how the Load Classification logic is applied,
by listing all temporary values at each chosen point in time.</t>
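<t>The table rows below can be reproduced with a short sketch.
The formulas come from the Load Classification pseudocode earlier in this document;
the function and argument names here are assumptions for illustration only.</t>

<figure><sourcecode type="python"><![CDATA[
def classify_sums(goal_duration_s, goal_exceed_ratio,
                  full_high_s, full_low_s, short_high_s, short_low_s):
    # Recompute the intermediate sums and exceed ratios for one goal.
    exceed_coefficient = goal_exceed_ratio / (1.0 - goal_exceed_ratio)
    balancing_s = short_low_s * exceed_coefficient
    positive_excess_s = max(0.0, short_high_s - balancing_s)
    effect_high_loss_s = full_high_s + positive_excess_s
    effect_full_s = full_low_s + effect_high_loss_s
    effect_whole_s = max(effect_full_s, goal_duration_s)
    missing_s = effect_whole_s - effect_full_s
    pessimistic_high_loss_s = effect_whole_s - full_low_s
    optimistic_ratio = effect_high_loss_s / effect_whole_s
    pessimistic_ratio = pessimistic_high_loss_s / effect_whole_s
    return missing_s, optimistic_ratio, pessimistic_ratio

# Point 1, "1s final" goal (1f120d.5l50e): 59 s of Full-Length Low-Loss Trials.
missing_s, optimistic, pessimistic = classify_sums(
    120.0, 0.5, full_high_s=0.0, full_low_s=59.0,
    short_high_s=0.0, short_low_s=0.0)
]]></sourcecode></figure>

<t>This reproduces the "1s final" column of the Point 1 table:
missing sum 61 s, optimistic exceed ratio 0%, pessimistic exceed ratio 50.833%.</t>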
+
+<section anchor="point-1"><name>Point 1</name>
+
+<t>This is the "first short good" point.
+Code for available results is: 59x1s0l</t>
+
+<texttable>
+ <ttcol align='left'>Goal name</ttcol>
+ <ttcol align='left'>RFC2544</ttcol>
+ <ttcol align='left'>TST009</ttcol>
+ <ttcol align='left'>1s final</ttcol>
+ <ttcol align='left'>20% exceed</ttcol>
+ <c>Goal code</c>
+ <c>60f60d0l0e</c>
+ <c>60f120d0l50e</c>
+ <c>1f120d.5l50e</c>
+ <c>60f60d0.5l20e</c>
+ <c>Full-length high-loss sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>Full-length low-loss sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>59s</c>
+ <c>0s</c>
+ <c>Short high-loss sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>Short low-loss sum</c>
+ <c>59s</c>
+ <c>59s</c>
+ <c>0s</c>
+ <c>59s</c>
+ <c>Balancing sum</c>
+ <c>0s</c>
+ <c>59s</c>
+ <c>0s</c>
+ <c>14.75s</c>
+ <c>Excess sum</c>
+ <c>0s</c>
+ <c>-59s</c>
+ <c>0s</c>
+ <c>-14.75s</c>
+ <c>Positive excess sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>Effective high-loss sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>Effective full sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>59s</c>
+ <c>0s</c>
+ <c>Effective whole sum</c>
+ <c>60s</c>
+ <c>120s</c>
+ <c>120s</c>
+ <c>60s</c>
+ <c>Missing sum</c>
+ <c>60s</c>
+ <c>120s</c>
+ <c>61s</c>
+ <c>60s</c>
+ <c>Pessimistic high-loss sum</c>
+ <c>60s</c>
+ <c>120s</c>
+ <c>61s</c>
+ <c>60s</c>
+ <c>Optimistic exceed ratio</c>
+ <c>0%</c>
+ <c>0%</c>
+ <c>0%</c>
+ <c>0%</c>
+ <c>Pessimistic exceed ratio</c>
+ <c>100%</c>
+ <c>100%</c>
+ <c>50.833%</c>
+ <c>100%</c>
+ <c>Classification Result</c>
+ <c>Undecided</c>
+ <c>Undecided</c>
+ <c>Undecided</c>
+ <c>Undecided</c>
+</texttable>
+
+
+<t>This is the last point in time where all goals have this load as Undecided.</t>
+
+</section>
+<section anchor="point-2"><name>Point 2</name>
+
+<t>This is the "first short bad" point.
+Code for available results is: 59x1s0l+1x1s1l</t>
+
+<texttable>
+ <ttcol align='left'>Goal name</ttcol>
+ <ttcol align='left'>RFC2544</ttcol>
+ <ttcol align='left'>TST009</ttcol>
+ <ttcol align='left'>1s final</ttcol>
+ <ttcol align='left'>20% exceed</ttcol>
+ <c>Goal code</c>
+ <c>60f60d0l0e</c>
+ <c>60f120d0l50e</c>
+ <c>1f120d.5l50e</c>
+ <c>60f60d0.5l20e</c>
+ <c>Full-length high-loss sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>1s</c>
+ <c>0s</c>
+ <c>Full-length low-loss sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>59s</c>
+ <c>0s</c>
+ <c>Short high-loss sum</c>
+ <c>1s</c>
+ <c>1s</c>
+ <c>0s</c>
+ <c>1s</c>
+ <c>Short low-loss sum</c>
+ <c>59s</c>
+ <c>59s</c>
+ <c>0s</c>
+ <c>59s</c>
+ <c>Balancing sum</c>
+ <c>0s</c>
+ <c>59s</c>
+ <c>0s</c>
+ <c>14.75s</c>
+ <c>Excess sum</c>
+ <c>1s</c>
+ <c>-58s</c>
+ <c>0s</c>
+ <c>-13.75s</c>
+ <c>Positive excess sum</c>
+ <c>1s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>Effective high-loss sum</c>
+ <c>1s</c>
+ <c>0s</c>
+ <c>1s</c>
+ <c>0s</c>
+ <c>Effective full sum</c>
+ <c>1s</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>Effective whole sum</c>
+ <c>60s</c>
+ <c>120s</c>
+ <c>120s</c>
+ <c>60s</c>
+ <c>Missing sum</c>
+ <c>59s</c>
+ <c>120s</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>Pessimistic high-loss sum</c>
+ <c>60s</c>
+ <c>120s</c>
+ <c>61s</c>
+ <c>60s</c>
+ <c>Optimistic exceed ratio</c>
+ <c>1.667%</c>
+ <c>0%</c>
+ <c>0.833%</c>
+ <c>0%</c>
+ <c>Pessimistic exceed ratio</c>
+ <c>100%</c>
+ <c>100%</c>
+ <c>50.833%</c>
+ <c>100%</c>
+ <c>Classification Result</c>
+ <c>Upper Bound</c>
+ <c>Undecided</c>
+ <c>Undecided</c>
+ <c>Undecided</c>
+</texttable>
+
<t>Due to the zero Goal Loss Ratio, the RFC2544 goal is subject to
the mild or strong increase of Exceed Probability with Trial Duration,
so the one lossy trial would be lossy even if measured at the 60-second duration.
Due to the zero Goal Exceed Ratio, one High-Loss Trial is enough to preclude this Load
from becoming a Lower Bound for RFC2544. That is why this Load
is classified as an Upper Bound for RFC2544 this early.</t>

<t>This is an example of how significant time can be saved, compared to running only 60-second trials.</t>
+
+</section>
+<section anchor="point-3"><name>Point 3</name>
+
+<t>This is the "last short bad" point.
+Code for available trial results is: 59x1s0l+60x1s1l</t>
+
+<texttable>
+ <ttcol align='left'>Goal name</ttcol>
+ <ttcol align='left'>RFC2544</ttcol>
+ <ttcol align='left'>TST009</ttcol>
+ <ttcol align='left'>1s final</ttcol>
+ <ttcol align='left'>20% exceed</ttcol>
+ <c>Goal code</c>
+ <c>60f60d0l0e</c>
+ <c>60f120d0l50e</c>
+ <c>1f120d.5l50e</c>
+ <c>60f60d0.5l20e</c>
+ <c>Full-length high-loss sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>Full-length low-loss sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>59s</c>
+ <c>0s</c>
+ <c>Short high-loss sum</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>Short low-loss sum</c>
+ <c>59s</c>
+ <c>59s</c>
+ <c>0s</c>
+ <c>59s</c>
+ <c>Balancing sum</c>
+ <c>0s</c>
+ <c>59s</c>
+ <c>0s</c>
+ <c>14.75s</c>
+ <c>Excess sum</c>
+ <c>60s</c>
+ <c>1s</c>
+ <c>0s</c>
+ <c>45.25s</c>
+ <c>Positive excess sum</c>
+ <c>60s</c>
+ <c>1s</c>
+ <c>0s</c>
+ <c>45.25s</c>
+ <c>Effective high-loss sum</c>
+ <c>60s</c>
+ <c>1s</c>
+ <c>60s</c>
+ <c>45.25s</c>
+ <c>Effective full sum</c>
+ <c>60s</c>
+ <c>1s</c>
+ <c>119s</c>
+ <c>45.25s</c>
+ <c>Effective whole sum</c>
+ <c>60s</c>
+ <c>120s</c>
+ <c>120s</c>
+ <c>60s</c>
+ <c>Missing sum</c>
+ <c>0s</c>
+ <c>119s</c>
+ <c>1s</c>
+ <c>14.75s</c>
+ <c>Pessimistic high-loss sum</c>
+ <c>60s</c>
+ <c>120s</c>
+ <c>61s</c>
+ <c>60s</c>
+ <c>Optimistic exceed ratio</c>
+ <c>100%</c>
+ <c>0.833%</c>
+ <c>50%</c>
+ <c>75.417%</c>
+ <c>Pessimistic exceed ratio</c>
+ <c>100%</c>
+ <c>100%</c>
+ <c>50.833%</c>
+ <c>100%</c>
+ <c>Classification Result</c>
+ <c>Upper Bound</c>
+ <c>Undecided</c>
+ <c>Undecided</c>
+ <c>Upper Bound</c>
+</texttable>
+
<t>This is the last point at which the "1s final" goal has this Load still Undecided.
Only one 1-second trial is missing within the 120-second Goal Duration Sum,
but its result will decide the classification.</t>

<t>The "20% exceed" goal started to classify this Load as an Upper Bound
somewhere between points 2 and 3.</t>
+
+</section>
+<section anchor="point-4"><name>Point 4</name>
+
+<t>This is the "last short good" point.
+Code for available trial results is: 60x1s0l+60x1s1l</t>
+
+<texttable>
+ <ttcol align='left'>Goal name</ttcol>
+ <ttcol align='left'>RFC2544</ttcol>
+ <ttcol align='left'>TST009</ttcol>
+ <ttcol align='left'>1s final</ttcol>
+ <ttcol align='left'>20% exceed</ttcol>
+ <c>Goal code</c>
+ <c>60f60d0l0e</c>
+ <c>60f120d0l50e</c>
+ <c>1f120d.5l50e</c>
+ <c>60f60d0.5l20e</c>
+ <c>Full-length high-loss sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>Full-length low-loss sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>Short high-loss sum</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>Short low-loss sum</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>Balancing sum</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>15s</c>
+ <c>Excess sum</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>45s</c>
+ <c>Positive excess sum</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>45s</c>
+ <c>Effective high-loss sum</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>45s</c>
+ <c>Effective full sum</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>120s</c>
+ <c>45s</c>
+ <c>Effective whole sum</c>
+ <c>60s</c>
+ <c>120s</c>
+ <c>120s</c>
+ <c>60s</c>
+ <c>Missing sum</c>
+ <c>0s</c>
+ <c>120s</c>
+ <c>0s</c>
+ <c>15s</c>
+ <c>Pessimistic high-loss sum</c>
+ <c>60s</c>
+ <c>120s</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>Optimistic exceed ratio</c>
+ <c>100%</c>
+ <c>0%</c>
+ <c>50%</c>
+ <c>75%</c>
+ <c>Pessimistic exceed ratio</c>
+ <c>100%</c>
+ <c>100%</c>
+ <c>50%</c>
+ <c>100%</c>
+ <c>Classification Result</c>
+ <c>Upper Bound</c>
+ <c>Undecided</c>
+ <c>Lower Bound</c>
+ <c>Upper Bound</c>
+</texttable>
+
<t>The one missing trial for the "1s final" goal turned out to be a Low-Loss Trial,
so exactly half of the trial results are Low-Loss, which exactly matches
the 50% Goal Exceed Ratio. This shows that time savings are not guaranteed.</t>
+
+</section>
+<section anchor="point-5"><name>Point 5</name>
+
+<t>This is the "first long bad" point.
+Code for available trial results is: 60x1s0l+60x1s1l+1x60s.1l</t>
+
+<texttable>
+ <ttcol align='left'>Goal name</ttcol>
+ <ttcol align='left'>RFC2544</ttcol>
+ <ttcol align='left'>TST009</ttcol>
+ <ttcol align='left'>1s final</ttcol>
+ <ttcol align='left'>20% exceed</ttcol>
+ <c>Goal code</c>
+ <c>60f60d0l0e</c>
+ <c>60f120d0l50e</c>
+ <c>1f120d.5l50e</c>
+ <c>60f60d0.5l20e</c>
+ <c>Full-length high-loss sum</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>Full-length low-loss sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>120s</c>
+ <c>60s</c>
+ <c>Short high-loss sum</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>Short low-loss sum</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>Balancing sum</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>15s</c>
+ <c>Excess sum</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>45s</c>
+ <c>Positive excess sum</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>45s</c>
+ <c>Effective high-loss sum</c>
+ <c>120s</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>45s</c>
+ <c>Effective full sum</c>
+ <c>120s</c>
+ <c>60s</c>
+ <c>180s</c>
+ <c>105s</c>
+ <c>Effective whole sum</c>
+ <c>120s</c>
+ <c>120s</c>
+ <c>180s</c>
+ <c>105s</c>
+ <c>Missing sum</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>Pessimistic high-loss sum</c>
+ <c>120s</c>
+ <c>120s</c>
+ <c>60s</c>
+ <c>45s</c>
+ <c>Optimistic exceed ratio</c>
+ <c>100%</c>
+ <c>50%</c>
+ <c>33.333%</c>
+ <c>42.857%</c>
+ <c>Pessimistic exceed ratio</c>
+ <c>100%</c>
+ <c>100%</c>
+ <c>33.333%</c>
+ <c>42.857%</c>
+ <c>Classification Result</c>
+ <c>Upper Bound</c>
+ <c>Undecided</c>
+ <c>Lower Bound</c>
+ <c>Lower Bound</c>
+</texttable>
+
<t>As designed, the TST009 goal tolerates one Full-Length High-Loss Trial.
The 120 seconds worth of 1-second trials is not useful to it,
as relying on Short Trials is appropriate only when
Exceed Probability does not depend on Trial Duration.
As the Goal Loss Ratio is zero, it is not possible for 60-second trials
to compensate for losses seen in the 1-second results.
But the Load Classification logic does not have that knowledge hardcoded,
so the optimistic exceed ratio is still only 50%.</t>
+
+<t>But the 0.1% Trial Loss Ratio is lower than "20% exceed" Goal Loss Ratio,
+so this unexpected Full-Length Low-Loss trial changed the classification result
+of this Load to Lower Bound.</t>
+
+</section>
+<section anchor="point-6"><name>Point 6</name>
+
+<t>This is the "first long good" point.
+Code for available trial results is: 60x1s0l+60x1s1l+1x60s.1l+1x60s0l</t>
+
+<texttable>
+ <ttcol align='left'>Goal name</ttcol>
+ <ttcol align='left'>RFC2544</ttcol>
+ <ttcol align='left'>TST009</ttcol>
+ <ttcol align='left'>1s final</ttcol>
+ <ttcol align='left'>20% exceed</ttcol>
+ <c>Goal code</c>
+ <c>60f60d0l0e</c>
+ <c>60f120d0l50e</c>
+ <c>1f120d.5l50e</c>
+ <c>60f60d0.5l20e</c>
+ <c>Full-length high-loss sum</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>Full-length low-loss sum</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>180s</c>
+ <c>120s</c>
+ <c>Short high-loss sum</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>Short low-loss sum</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>Balancing sum</c>
+ <c>0s</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>15s</c>
+ <c>Excess sum</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>45s</c>
+ <c>Positive excess sum</c>
+ <c>60s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>45s</c>
+ <c>Effective high-loss sum</c>
+ <c>120s</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>45s</c>
+ <c>Effective full sum</c>
+ <c>180s</c>
+ <c>120s</c>
+ <c>240s</c>
+ <c>165s</c>
+ <c>Effective whole sum</c>
+ <c>180s</c>
+ <c>120s</c>
+ <c>240s</c>
+ <c>165s</c>
+ <c>Missing sum</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>0s</c>
+ <c>Pessimistic high-loss sum</c>
+ <c>120s</c>
+ <c>60s</c>
+ <c>60s</c>
+ <c>45s</c>
+ <c>Optimistic exceed ratio</c>
+ <c>66.667%</c>
+ <c>50%</c>
+ <c>25%</c>
+ <c>27.273%</c>
+ <c>Pessimistic exceed ratio</c>
+ <c>66.667%</c>
+ <c>50%</c>
+ <c>25%</c>
+ <c>27.273%</c>
+ <c>Classification Result</c>
+ <c>Upper Bound</c>
+ <c>Lower Bound</c>
+ <c>Lower Bound</c>
+ <c>Lower Bound</c>
+</texttable>
+
<t>This is the Low-Loss Trial the "TST009" goal was waiting for.
This Load is now classified for all goals, so the Search may end.
Or, more realistically, it can focus on larger Loads only,
as three of the goals will want an Upper Bound (unless this Load is the Max Load).</t>
+
+</section>
+</section>
+<section anchor="conditional-throughput-computations"><name>Conditional Throughput Computations</name>
+
+<t>At the end of this hypothetical search, the "RFC2544" goal labels the
+load as an Upper Bound, making it ineligible for Conditional-Throughput
+calculations. By contrast, the other three goals treat the same load as
+a Lower Bound; if it is also accepted as their Relevant Lower Bound, we
+can compute Conditional-Throughput values for each of them.</t>
+
+<t>(The load under discussion is 1 000 000 frames per second.)</t>
+
+<section anchor="goal-2"><name>Goal 2</name>
+
<t>The Conditional Throughput is computed from the sorted list
of Full-Length Trial Results. As the TST009 Goal Final Trial Duration is 60 seconds,
only two of the 122 Trials are considered Full-Length Trials.
One has a Trial Loss Ratio of 0%, the other of 0.1%.</t>
+
+<t><list style="symbols">
+ <t>Full-length high-loss sum is 60 seconds.</t>
+ <t>Full-length low-loss sum is 60 seconds.</t>
 <t>Full-length sum is 120 seconds.</t>
 <t>Subceed ratio is 50%.</t>
 <t>Remaining sum initially is 0.5x120s = 60 seconds.</t>
+ <t>Current loss ratio initially is 100%.</t>
+ <t>For first result (duration 60s, loss 0%):
+ <list style="symbols">
+ <t>Remaining sum is larger than zero, not exiting the loop.</t>
+ <t>Set current loss ratio to this trial's Trial Loss Ratio which is 0%.</t>
+ <t>Decrease the remaining sum by this trial's Trial Effective Duration.</t>
+ <t>New remaining sum is 60s - 60s = 0s.</t>
+ </list></t>
 <t>For second result (duration 60s, loss 0.1%):
 <list style="symbols">
  <t>Remaining sum is not larger than zero, exiting the loop.</t>
 </list></t>
 <t>Current loss ratio was most recently set to 0%.</t>
+ <t>Current forwarding ratio is one minus the current loss ratio, so 100%.</t>
+ <t>Conditional Throughput is the current forwarding ratio multiplied by the Load value.</t>
+ <t>Conditional Throughput is one million frames per second.</t>
+</list></t>
+
+</section>
+<section anchor="goal-3"><name>Goal 3</name>
+
<t>The "1s final" goal has a Goal Final Trial Duration of 1 second,
so all 122 Trial Results are considered Full-Length Trials.
They are ordered like this:</t>
+
+<figure><artwork><![CDATA[
+60 1-second 0% loss trials,
+1 60-second 0% loss trial,
+1 60-second 0.1% loss trial,
+60 1-second 1% loss trials.
+]]></artwork></figure>
+
+<t>The result does not depend on the order of 0% loss trials.</t>
+
+<t><list style="symbols">
+ <t>Full-length high-loss sum is 60 seconds.</t>
+ <t>Full-length low-loss sum is 180 seconds.</t>
 <t>Full-length sum is 240 seconds.</t>
+ <t>Subceed ratio is 50%.</t>
+ <t>Remaining sum initially is 0.5x240s = 120 seconds.</t>
+ <t>Current loss ratio initially is 100%.</t>
+ <t>For first 61 results (duration varies, loss 0%):
+ <list style="symbols">
+ <t>Remaining sum is larger than zero, not exiting the loop.</t>
+ <t>Set current loss ratio to this trial's Trial Loss Ratio which is 0%.</t>
+ <t>Decrease the remaining sum by this trial's Trial Effective Duration.</t>
+ <t>New remaining sum varies.</t>
+ </list></t>
+ <t>After 61 trials, duration of 60x1s + 1x60s has been subtracted from 120s, leaving 0s.</t>
 <t>For the 62nd result (duration 60s, loss 0.1%):
+ <list style="symbols">
+ <t>Remaining sum is not larger than zero, exiting the loop.</t>
+ </list></t>
 <t>Current loss ratio was most recently set to 0%.</t>
+ <t>Current forwarding ratio is one minus the current loss ratio, so 100%.</t>
+ <t>Conditional Throughput is the current forwarding ratio multiplied by the Load value.</t>
+ <t>Conditional Throughput is one million frames per second.</t>
+</list></t>
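<t>The remaining-sum bookkeeping above can be checked mechanically.
In this illustrative sketch, each tuple is an assumed
(Trial Loss Ratio, Trial Effective Duration) pair.</t>

<figure><sourcecode type="python"><![CDATA[
# All 122 Full-Length Trials of the "1s final" goal,
# sorted by increasing Trial Loss Ratio.
trials = ([(0.0, 1.0)] * 60 + [(0.0, 60.0)]
          + [(0.001, 60.0)] + [(0.01, 1.0)] * 60)
remaining = 0.5 * 240.0  # Subceed ratio times full-length sum: 120 s.
# Subtracting the durations of the 61 zero-loss trials leaves exactly 0 s,
# so the 62nd result (0.1% loss) exits the loop without being counted.
for loss_ratio, duration in trials[:61]:
    remaining -= duration
]]></sourcecode></figure>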
+
+</section>
+<section anchor="goal-4"><name>Goal 4</name>
+
<t>The Conditional Throughput is computed from the sorted list
of Full-Length Trial Results. As the "20% exceed" Goal Final Trial Duration
is 60 seconds, only two of the 122 Trials are considered Full-Length Trials.
One has a Trial Loss Ratio of 0%, the other of 0.1%.</t>
+
+<t><list style="symbols">
+ <t>Full-length high-loss sum is 60 seconds.</t>
+ <t>Full-length low-loss sum is 60 seconds.</t>
 <t>Full-length sum is 120 seconds.</t>
+ <t>Subceed ratio is 80%.</t>
+ <t>Remaining sum initially is 0.8x120s = 96 seconds.</t>
+ <t>Current loss ratio initially is 100%.</t>
+ <t>For first result (duration 60s, loss 0%):
+ <list style="symbols">
+ <t>Remaining sum is larger than zero, not exiting the loop.</t>
+ <t>Set current loss ratio to this trial's Trial Loss Ratio which is 0%.</t>
+ <t>Decrease the remaining sum by this trial's Trial Effective Duration.</t>
+ <t>New remaining sum is 96s - 60s = 36s.</t>
+ </list></t>
+ <t>For second result (duration 60s, loss 0.1%):
+ <list style="symbols">
+ <t>Remaining sum is larger than zero, not exiting the loop.</t>
+ <t>Set current loss ratio to this trial's Trial Loss Ratio which is 0.1%.</t>
+ <t>Decrease the remaining sum by this trial's Trial Effective Duration.</t>
+ <t>New remaining sum is 36s - 60s = -24s.</t>
+ </list></t>
 <t>No more trials (and remaining sum is not larger than zero), exiting the loop.</t>
 <t>Current loss ratio was most recently set to 0.1%.</t>
+ <t>Current forwarding ratio is one minus the current loss ratio, so 99.9%.</t>
+ <t>Conditional Throughput is the current forwarding ratio multiplied by the Load value.</t>
+ <t>Conditional Throughput is 999 thousand frames per second.</t>
+</list></t>
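<t>The walkthrough above corresponds to the following direct computation
(an illustrative sketch using the loop from the Conditional Throughput code):</t>

<figure><sourcecode type="python"><![CDATA[
intended_load = 1_000_000.0  # frames per second
remaining = 0.8 * 120.0      # Subceed ratio times full-length sum: 96 s.
quantile_loss_ratio = None
for loss_ratio, duration in [(0.0, 60.0), (0.001, 60.0)]:
    if quantile_loss_ratio is None or remaining > 0.0:
        quantile_loss_ratio = loss_ratio  # ends at 0.1%
        remaining -= duration             # 96 s -> 36 s -> -24 s
conditional_throughput = intended_load * (1.0 - quantile_loss_ratio)
]]></sourcecode></figure>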
+
<t>Due to the stricter Goal Exceed Ratio, this Conditional Throughput
is smaller than the Conditional Throughput of the other two goals.</t>
+
+
+
+
+</section>
+</section>
+</section>
+
+
+ </back>
+
+<!-- ##markdown-source:
+egdOPEMOrTdgMXYqjcerhuO1a+IEJIeJm3yPvbFfzhCtIScLrsE+B2dOOXy+
+5DQywr5ztXpC3LBsq6tVGuDRG6PjcBJ5aZeZLXXw63U3thHjuIycIP+JZBLO
+hPiimcPFws/SBwFrxCsS9ELL0XYaVZJasFACyyiCCZIrnFxxnI4HY0kB+Y1b
+TUFTblaSbbst7jIuhKR7q+EguYKTBJpJRj/nfskt+y36/nBHkCC3ff8zfflG
+LjW2rFhu3EzlfqK7E67cQkL1XPZAMpouWbZZIf1QhCM9Nj2iUuyXAB4ZwJ5k
+E+J+wo3NtZxjt4PqLrAPc8GngVO8gDqQpuAkixVEcNkp4pKowYGCMf87CpnL
+qTuMuFqM455Dq9lOftkuxSk3gcaQejrEKo6jQND5GLul8SK38Lo9oyt4WeCF
+qfLzyaQkMEW6gi1JOii4TRQhENwfgACsqsegXXTWSOxoXdlnxzyWbCXh07Rd
+m3WreQ1+xtASZgp6LOYFECg0Ub04g2LrxWXu0SfjrL6knQyRLMmhwatS8D3d
+PqxsaW30AonqXZCCav8o+oOn0SD9w/jG/N5QjnrNZqykiFaL57ghbVqTeHqa
+HfMBlZpaATQFchlQwpEgfSCncYF3R/485IXtn/eMWVGZCMF8AUOSfVFdmn7k
++IqHrmhe07Ag8Y+SudEQZpZF6yNeTFDTIk1SeUBXrX4jlchxdo37cMb4TTYr
+axWA+NT+irVZgPCQLHE06VJ0Kw06ZJXMlJHBz+QeVcCrhMXwqGmmdTSfdRsN
+1wPBKL+UM9Q3SKoUwBGAAqslS4wsNqSb1dGO4qEjsqRbQ99jq+iAe0SGAh7H
+8NGkvj+6Oq8QkJnnIzGraA1GKOlKtm40zU9pwnelIFScf5rusK5K1YQJCNiT
+BvRg9v86ga06/M46uZ/pSMN/0kNR85GjgylBTk5vloo3oC1Jo+n8hovKDpcc
+CQX1mB9ZsO4hx3y38Ge9sDfPrvyiiCDOtLITMd9ImPC2Zld4JYZPw1beW3bF
+EZDO2UiA2mFw8RLJMdYcLAeCoZWwuYLD8FpO2YoZAzuroduwZj3eSWDIQ9VU
+/Rg+uEyBaoNydHxCXsjqTzSaqxI+s/fUcEOOHZSniWHJeQqq9ZHB2ayETtYg
+goFpIXN/ILF5/PiEqyvpLHcI67PWv+QAgyQzJN+o9yVQpiGqyQKlmT12nG8r
+K5iIAQa9RppQsmGzdxFY4KnwRq3VqpKCs1Mpfz5Xh1ZVdnyHLpQWNamCXwtz
+Q1u7Y6WDkCH2YjX31JUMLdmTPpQruEhOR7ytVinwqAfyuZUUeq7lm4pfStE2
+YZha1RBn4yDOfwTTdPyUE4kwXUZ0Upt25Dbnr6yTYjiXC37qJoI1tHKOtFQJ
+gpHtqcVaxN2DOitZHbDERUUUQtoKFdbYTr1e60KJ+kAJ7FZlKyhAMBeJH8s1
+HeGMIWehxkUWEcvFgiGcd7qbACMKyg+5RJ7QlpMxlHqT9CSOZsDaBAMcaB+z
+YzZ0Wz07nLRYAwNLzesNibZZJuEJg9v3LEigRIBFCJcVcOX6FthBhhks2mrW
+vsz3uPBK4YQc3yibCxR4sGV2db3vxPdDcXfRh/Zf0n3PANftRMX9e33aApYy
+Vu4WA1a1Pf0QLcQkBmy4WL8mb1GTr2drnBv+pdBylChSW1D+bC9cWPthycYx
+3nJmwavjJ5IkeObDHCdZ9og+AohU/o8IOU8aM14+0AXyj5ym4xtyhXFPyJgp
+Fen2KP9xswrQUey/Ips0Crus5qr9HuX/vWzqCWY++afwB1orWlRgw275bf+A
+A4L3/JP8rviC9yC1SjduVy3EMi++THh0V8WabulSb3r2qhsvrrBEVmpbZxIp
+B8SO7KrfwoAw34jPEXeG41xiQLbVb6UYsWzbMTjsp/Ms60FxSgYP8t8L/kHw
+xRb/kcdcRmyaVriQyN4CEEEnm66ODe4DBQJrzoIsmTWn4iNsK2KCjqZHCSZo
+x8n6luuQ+CDSlVrOPu+o/AZIiQvIfWXiDkWXznLG4cGZ4Qp4RDbvUDnURiFh
+dSWRB0ydXrVj8g9M/bg/dfvj0Yskiuf4CArnL4pXgRoTF2Eom0xGUwSHbatm
+eCwhNE3xlIIdcjPMhLUg/wma69wqYvbHWiJX6HPgqsgCsaUnlFXyrUy+dSbV
+Cki79Mqgiq6YSBWXk6rBGkItG8y4bFCCZLx9bKo2VauXEypV5J4OD0MFn4bP
+qo5ZBsSqp6lEaANvJqqTzENxgeETUrBt54q02CCC58aS/ZlpTxYH13drdnVb
+niL/IIuh2RitDNMSGvP7eDdg40ocQPDlAVsv01yS+m0kTvbu4CNLUr1pxGQZ
+OiHPA5HBZ0hru1lq1a6Gq8PLAcPtSk32/RyLsPmruQZPxa4jN6lk0L3PD7yN
+9VXjUAlXaMXjy/xNNzNAoxIM8CaguCE3igfS7MuKqUVkucb52Vu+1gsOFHz7
+7NnTcf6XH/Q3XFIkGSx3C54G9y4B2N82VcchDX4p3StdmFOlNXU6naSSjW6p
+xfpBY+376gaZFM4yQfhkzlLsicjhhN6MJHiUHgSuoW+XnBcmrVvMuVQY5SAZ
+2Yo0GETwEd4L1WHm2EhwKR7ajANOqRzFj9gmQ9djK3bIyYsTKaktPHot1qc7
+b4fUKN7G56xeIcTGKHkJTeR7V6wsORuuwtDu40glczERKhYDlRAkaWtWmRL6
+lmJWQORuLVssvipfhOU8Mbwf1O4M+rfC1GREatswGO1KQCAszxWI/FgS6X2y
+nchhMGGCJj7GKDQBSRJt6JqJkAreJ7ZRWbizkFXhL/G3BVdFeiFoYg6W4C8Y
+hgGNLevdkjoHQH4H3cLhCeN+2VB7mX9k1CZwO3zQSynXgG9cLG6Lu1Y+9hC2
+5oNgPmONogRwhz2x4ZV+TRccAyO6QniuSOWZPYqNK4zVgiGQOeSfFqFEZE/k
+ScizBLJlB+ZPbbh/XcwqJkxiSlbSdPY8SWraVU6HTKpQtDyoxcnlCjLaaISg
+KrHJ/deBtuVaUBr9vJp1KLpkbc5n3zMASIUC59mr9QIJGeaLkccjd8JyguoE
+hg6XX64rVkmXC0ZZaKDZheP0ekJ0hwN4rLh5KE25KDTWqzg8iev/dD5NNbEk
+kiDdmYZltPquWrnLrWgNf3oJuTcUNzvu7EbGQCNWG27BrGitolfLZKSuldaM
+7dSymWgm6EI5olCRy9lG81TrCwAxWShk3VE2drfm9QTPTt7OSH02VZ1JshOI
+dbnvdLqysNiikh3AlZxe6DJ7r0aM215RCt7GfBGGgae1CU+csh+ElNUtrRgZ
+2iSnqwLvuUa1a+25RsbZFgwvwFn+hOf9yZghJBeJfEPbZiEaG5bc736QCoUU
+6TEIeWRSohskNLAG+gGWAZazFj404sILWyz/cC3ZwINp4ypUdDCCGk9DAsl9
+fCwp47nFBPWdekREXcZ8E79MLLLBh+GahIYy33amdKuSOsY+i3CIJIUvcpC/
+MLwCNIsB4vEk3brzBOwZhoFzcMMB/0oCzqJ++CcO2Sw4wJFBEDU/Gix7CI48
+Nz4pfhcVYleL6or1m5iQscAzq7ic5ZZDHbi+bVr9/aCzEf5WrfplNEHH+W0d
+q9URQjEa4dJRpwu32AibwHVhmcGu2Syn2ccVG7tzWyn7g6VsH5ALfTA0tKnB
+uV2nSlBWyFehR3pq22pix72zOVso5CpsHn97YJjZNRmStGOw85uCU1OyT3RL
+tChLszrfrUHTDEVGhYcK37WivdK0Udx5S4niZvjSMT5icGS8IX4R4YUERZcV
+S1Q493b3rcBQl8NvwKLf84rNalF95ntNDlN8m+p/BNUZicB5UAanNHZ8bwRm
+sUDpt9gIGUqBOFiGkBxHwjDacb7azK/MdwyOq1/WdsEGLY1DsERQgdclzUmU
++VIq1XbYMZzGYQjUnzo2Myw8zSwjkaOTw7C7+N1+LNfXDbuQ0/zPDFkAtwIb
+YQphX9UP5mB+krx5JI0IsfJeTpoxCCCOqmi5Vmkmhgtd3L3GhWJu25Ll0i3E
+vVivNUFQJ/vP8FigY8qVvNJZ9a9NySaIJ1LJXNSphjuOL+c6qyVLR7gjo2Mc
+L1G4GPxPT4jbhyWwOSFwUU60kWwHu65yxzkTcHkE/nFmDuV1VnEBBaaYlGAQ
+zIo1Dkl6u/aHyBaUvxqzvQSAVdwUdJ3ziBaCyEXUWG/kSjwA9gnp6fs7ZPH4
+RFmozAemw8/WI57hqXN+/v6AidTHMdYg7OPtA17ppyQftKjrz0KY8jUxqk9G
+a83LMhifkAA3m2j1hHRMBTa8EFzIgczIMNp4beXzkoODiAtAv4gTG+ILiNjz
+bgXvVWqELASOetqe/SIR+Wbs7HfI426rwGpVlSx7p/YbOkY79vIxR0AQjnjQ
+8ZGyxurhGPnQe54EYvKCSU/z0W05EruGEdRtqrYMLqBFQKCrZaOHzEA20KVS
+hx4z88MzHM3tNWdzIrA/VpuBvgosPzPFwgV0eYCCfz2clCb1NKQuZsC5BwL1
+tKZMapeF0TaO961w65FzatVKqLioVgr+D/nY/OMqP4ZNy7/pPP4f2QZEAD3z
+gr7KRwc6C+sh5XH/9v2AOhYWChQEmNxVkoh8bdqmL6CZVHsqaaqaR8HM7Gsl
+xC6WElOJ6g/K2GwSuHVFfiVRpIQnRWBYDqXjoZzG39rWi5syIDMkem9JVwYv
+cVpt6eK9v+c0ZXLVxLSaw8D2TVMHM7D7a5vLNlAIjXn8JSpP9GrWYnKx/UMU
+y4WnEZanMfM0BRMVUvHg3dHVkesks7jF4k69H/r6RAJOndmx5OmvSFQ0kTsn
+r/0GQc6m5vhPJqwiBiDn0q0vqGLF3PbwT5H6/fvEhWOZbGw3UvPjHNZJ5HRB
+Se2O8O3jZyf5Hyxc/uYbzSx5ZOWrBCaYDSVzlWBOCv7Sc6o7OdU6hZWdogy3
+MhyTO9QC3EOpM2Z3iNwDxRE6fAbnLK3McKHH0+z0KZlmLQ2nFSTJXa9wEomw
+hUbcmOUFsAs8JgFGwiyJGKrEbOrKYimBjPqCo9FwFHvzF1KgUE+IqCo8XUbA
+lKGkKcRAIkualjjpUXDJ0XSPqtYhn6BtAlQ/RQ/yJz/THMM4pP5UQ9DsJWt9
+1BKFfGTW9+JCat7PWfhq5NwzB0/TXIyA1Jr+EDVShHPDCQN9S1NKjanguQez
+ugmyfO/ZYS5J9ZBaXc2zVe34jjg0Yd6sZBo5qblvzB49RaP1m0WroAzO452r
+iguxoHs9S8ZGDypDHyxpEGKfzYrW4jIOyrS4A41TEjmxlNIien4BimDAOFAm
+hmcKSkbfpPuP70FZbLmjKYzRfGiwM3MyskC6ruBQV8giJHdBNhAeCs6JuMee
+sGql1M0Gu2eEnhYRZ4oVgEXPg9tw9FGukuRcJtvm5Gtq9ede5gxHpGEFvTWU
+LEdKZRxNejhkelYkmVYZM1IrpR9GVxIYDZItszCwJ9LkXLycNylg2k2xtUrq
+OE+NgMOkbtdSW1BnqOaa7sbFAjHWbvddHi1hJyCBQtpeP2Ar4OGc8VKlYyDh
+wW2gKSWl5SvmMLPSR37J4NFXXk/ZCzEY2t7zI4TJ6grEE8A0I623HggXqs1O
+VzKEleDghC5DGUf1XmoEzqHGlDH9IH4nRZQCpJCEBMe9lQzGGGK1OJaFP1Rw
+BIZyPOGgbjIlQaoWIBP8TI7Ri6e7nMxvlVxLmDgKfbUE0WgwJNtSs31T5u6x
+L3eEPwTM5T74MOA0ZAZi2ZyAeaywmBXevCyXxtUe7ibRaZw/DbjePsbKzmzE
+9NJvyPVXKhgyT1wRAATuh3o1YSQN2GAj6OHx9Ohbgz0IF5xVujjpYsyGlC1q
+kEmALxxbAAplxWaJnhIDYghbXwamQ3LH15HOXalYoYpWCJq2dIPyG87QOaa8
+0UvZ0E4gSsT9xEFDYaJtGUshOksCbx7wSuec03Hgps9uC+XAULtCiLImoplC
+VRWOL8IXmw5WlCQU+Paiv4CEHNGPGViKbRl4KJmUF1v9qCJ0EMKW4heaaQjy
+ILUzYBmJ7gbz4e04SygKtS+E+uyok9GpVI3RLMFwzLIPwpyvpP4l3ga0zQ/R
+2PczVnaxePercmQXUUy7GDLD4sBeiqKSsLuSG9uUMbq341g+P0EROrocKL3P
+nEvvfASp48ozPA0Efe01jeIf9p3bmuf+eHGEjLHRXbEQ2vg/tRokgPk8fKA5
+ENRn73wXYPwLLhylqZTzwDlvOL1WW0jwfa93dyEZjlD0Bcs1dvvIrT4glHJc
+O/MLxl3/znKeLn1dcjEa2WNEBwQ+7tqOpX7RR1wnQRKlBKmx3uDomFfzQKBw
+UwrenbkHLRh7iXsaiU3c7rNyfzocf9tJj6pBcnHSLMvK6oEPm/CeFg2cxWBd
+X2waUpRjkfcsDyEBZ7fRflvdgxNlOTYcfagWm9hNIsuNxhJJwRvh5lNFZjef
+QDxKlympwHKvVgE9w+3WPwhFjhpwY42S4kKbKwmVY+CGYuDsOz2EjYvNOo/x
+/BgbR/WT+5rQaQ63KjgMsbGWXHNmFKuFq9OFSbWxg2AnBcZ+QabiXXT053Y9
+acsCkgwEsny8yArcmXaI2QXZLJ1ATrz0sptnElzawZD8Olunk3CLYffc4/eE
+27RV5kT64aj1nJdnPD0tp2XBlZQF0uYPiR56CbhYQ7ArfYg+la2traukG4Uc
+6YDo5/tclXN/01Dj6blDam73dG7emHxeaulBmo0/gF/j171vsDgT/HVfNC0+
+bn/mi0D/usMEenJ0kv+ZrU5O1HCClGODRtBSIywe2logrF6pPRRH7DQG3/70
+kSsUfgSkCu81Ag1WS8bwqnpezQzTwN2cJNLBCB2jr1XgSdx5lAknpVXmQ3cN
+V0TMRF0hoa9wiK5mgEfNSY+AoGlTLoD4/Hexvlk8a1Tg91A3Ar2ShBSfB74F
+fi++e2gfjv899+FUB2b6WKYyf/k7xzlBwELsTt/hkyWW49BKo4m2Hf2ed3tm
+PD7lS7lIajskL0Vv+xEERnAu9JyRCqvmWhPuyRGX0WqJrbEukUX1ZtBVLZSx
+QFpm/bCnTIVrHiaXDPCbc6Z60FXSeHby0cDnI7RLO87TY+P/B2F/tdyyL0RI
+YdZoFo/NBVyVAAbMqkZKXM3dmt2FqvgwWVw8TnThExnyw48afQemO8KWczSQ
+uF8GkrR4YYU2UjcSaJnuBruUSaxOYCsh1BuCScZ/XouoS/hK+YqE3ZM2Oonq
+KA+eqFGxu1dOdPq7DdT2g8y2shOAmm41x+hZXFmIEmh4IJCmatjLx8rcOwSa
+4/Bg5tWjhGNeoszAthjbyyuWCrZgkjlvM4PNERE1AiYRkoz0bhqC74wzMWJU
+wMubKvbpCmOH/+C8Bhf0lJlyxgGHhO5roO2FZUJPsDSFiHWOYYsFhCO7G0K4
+eI+ArRFelvBQmH3vupxm34nbqxPAF+uLG9Qds7kReb+TLRfFwJ/FIlwHfg6v
+S4YFxJKshYu+nA7msjXfHiP55LgrukoCXsyAEchdjSCD4QuyKe7YGPaHaetF
+fnDkxokNRW9WRmdz2EgO7tDdxkWxx6j/CuLHOe61UM1xp0grp6NB7PG/oxKU
+SjPut/zrvigtGOOsgq+rdajssuexzKY2NZYAWCkoINAxZb8MNPz81bcGwXm2
+SnL0xmwE5ziO8QgLRmBwStwf+KhzF+O0K5PxNJJUit09dQH8hGFB7ertiRdl
+CppAlXexmG0WERvIT0uS6UASt6Vu1Rrk5Pc4Y0+e0L3M2CShEpYUjx35fDSM
+nxhNdzivX6Pcv+F+Vi4SK8blj0q8qYQwOqXBRBQokgT/oIFflKLzAJO11raB
+ekAA0W8qpuu6BBJzw14Wal7iaGbMWJT0p3D5zjcmK1rOp5eqooRDGWKNwMZp
+WjbWivfHFN34kWlCQkIqzQ2EPjPwz0G1YR/sG/7T9EWqkZs/+iLRlNsvEU4D
+huA7BbviuFFcyHaIR18UZzR26FbsjIJYwFOcEA98PLv5xAVoy5BchkxAm1Vd
+qzWs59fWfV4AFVVkmZZmL46tVYxb1HamiyFxKSNAGGd8iDZLOKKGoEP0R32r
+ZKgubi7rKUm/SLIf32KkQqgp3R7us+njwQHz1F0Hjljnp2XyMnb1I7gwfWD4
+ZC8LbX3/65kcEKMTEV9bqCmSadr16L66Q6c8Pclf+irrzcK1wY6pxN+pOIbe
+9Cx9E6sv2d3Ydfv3veQ0p/t203YDaiea9CvE32C5zdiWUhuASzeq1abMnGWo
+967XyWB/7ymdafY68cYH8kpDkc0en66h7BhIJ+mPgpmKaC+tQSIZj3OYxMOD
+yN4JbtCUZsg+9bJe0pT4XzeFlD6KJQEfWAt8GCjjy78Tr2kwhNRme2jbgTxI
+QO+mmSJPSr0/ljC6BiElt166BaBXzkoPTuclEIpnwc/4vhFbK8GKS2pzsVLJ
+AqTMSUZG2IAQSV+CdS6z3RotWngrDuqtnJ0XGFTfQx9/h0p+Nc0iMAhFBoJT
+0YSoFmXEB30zTOY4eJKeKreraAkxFbumvidIbT9/6NFGvAowUX7KCI8B/69w
+pnL4D6BCKfuS942M5WfEeF08qq0D8LStELCbS9BTCjZ63IpbnCrcw5BjcG0+
++vDT2floLP/Nf/iIn398819/evfjm9f889mfT9+/Dz/YJ87+/PGn9/T3TH+K
+33z18cOHNz+8li/Tb/Perz6c/lW7h40+fjp/9/GH0/ej7UKRQoIsF1oDsmZu
+j7n07XBklN+ReXf0RPkXjo5eCIUf/+v50bec2kCmSvrdLO70n7ioGB9fNLjY
+6QzPinXVFUr/zK2zVnnAAvZk4VyrX1zoRPrxooMyoMm6a9YDlaupkNa4qJkA
+dQ3GEDorkSWVJeHLcnHcXM5cTVG+d1teWJXsPu/0xebq6k6TrU1F3jF9gacd
+u+zgke83s88Va6lr6emhtHnC/CrwYx6PejMbabks0Mg2nNQpDYjshqRzODNv
+n+R/idRH0+5Lp71iIYmoTqs/38/6jAxQV/Jtoa0J6JC/i8EsNnffvpqqqZtg
+DVCnL04iEG4uGjyICURGcNMG59woYRZG3EpfFhlSfjCjiHE9sfrBSUxGeBMV
+CVddWqV5dMqVH1KJCCTuFBnHZuVXUI7R0HEmi7mFHMEQdY2W8Axf6LUJN+ZF
+/nS1ElNqIxVQAzPXWs/VPHCjLesu3iiLss8AZ9ablmoAm3N7zR1Xa8u1GD1J
+rs2pwz4MHSSU0ho1K+vXUshqHcVSOj8NnEqPNtiQm7YwwqAknBMo1M71TnFP
+WbL1wtlHqeO/rmltGCXf+M7q9qFGO/TIOz6yDlnqMDmfNmFK4y0HbmenDQUX
+XDMhXeMI/MD7NbteoRzEpR0yV3zoGqSTFsPQC0++l4pfJoKm6c17SHdR+yuZ
+jMiPXqYib/TirCH5xE410PQvvEgFemdKDQNJQhU6dvrC00xISiyPKvF0+Yxw
+W7e5td1m69wEGVuqtc1ZAGhvN+KBIcDxCCmEM4rU8l8lM4n49XQn9/wg0zjE
+ywuw5G/AmoymoC7lx+dEE2xhBStyJW5Xnkl4rB1T7d8ZWhTTJYgHj5Qeiuwj
+MZhSEnlGL7oG3TiP5M4rt+Bg7CLq6zeyhtHR5xZLNmm8x6ihdfZyMXkWZBm6
+DjVMQS5pt2oVar/xbY6sIJHa1VcirpBEyx0nR9ItoMIke06IkKyrppK7lPVU
+FwxuP1Zcj5vZdUwjnVm1fhE6d6Ds6St6GkmvidjlWBroBRy16gLkE/grCAbp
+7hjzo/UyQ820cJLEFePl/4pmEU++Dalitgv16CjAY5sgH9+Sw3IrnusVCKRh
+6jhucX/kBwonEpv23EnPECf/GrJtCRmfew7nlIkklnehX8ZdvWliAYYOY9gk
+j0+LfSbSZkhckLcuRYx9OlxZNISAgm/iR9GTfBRziJl+TkJmKGZIdLHUDnHy
+S1COXn5dT4WLO3p+kORHnvzMl6lFNHNAIGi/7axH5zxNF0RiQ+xPi2np2CYC
+oet1mZkCDR6ocB20tcXm4q6Hwe6IVTyPTJEidRcNQCdcwpGm0CSh+W8NWpx7
+0bIuC/jLmUaFUCNWLE6iOR6tm2Gm+fQ200DOvMTitdoJAyVGxtvLSn12N+Me
+o9i9sR4mlH2u6rmGS0csbCM5bGj417V9kfFyIpzNnAMO0i65eKPiD9lFpmPN
+R6wpm5FINA8aLsyFhOeM2HpuR138GB7BSHPAI707CscOFzq0y4rGy185gGVJ
+2A9V0cFIvHadGRh1YyvH7ny9mPfwEtFWtVJ00+vCe1bMQ9ukkMemDRxFgRwp
+K6S9sJgnnSUc9FpST6LYqlJ60yCkh4oo0QlxnRwWxiXM+TPTKFJIA89cow/c
+MrYIlaClGa+7rEB8ryFx8Gu4vLj3nWIWE+aOMJTHlpssEjVISoEt+dK97F3j
+PwiIC5n9seI3vBEaPu1KRFPOerZpRG+42/a+1xwduRfNi+by6NjhFd6Fsphb
+EH82IDXll5gxJ2s8z0XxvT79PjdWnqDRCzLztQsBf1MORMfl0QgUqf0nIUQR
+Z0aCRCURHyS4Sdgc211+tnIsIHDv24SROQwKxFKPLhanGmUvKJf91M053+3Q
+xe55iA4OUNHz9bFQWmxxCqIu9KcLpQau8C1RMhiC/7RlU0dC1cNZ15F7qdIt
+jxzD+0hZy4B51vxG1LGhUeE2cseulxYcg+c948SaawYa5ahjHEVwoi8e5Yxh
+v6BTyDl0z8VlxQra/edys5qpzxjvRJKNvQ8QiWbMBLHS14J+/iBpsGY/EDZv
+N3cQvuLVeqMGKl28/PPUZhZmEVUEbIQwfLuH4ovzd/w4BZZqzyMgsBXQk3bj
+FZKlgRfyeRHsFJravkJf8BA20GhzKCFrnb8oJZWWwSpYf6OtmyT4ec99TFXy
+tD+t1+E3goVPYBT/htX4iBVNl6OQ2Uu6c7DFMFDyHLqNeW9WFOby++6I6fGr
+EGdirkwrmfBU+fBB/kA7svPtQ7DmhYcpEOIJocBI1j766f2AZnw0mjeHR8Qn
+xOxGfIqFK0yRlXPvpaitAYpBaTks+B9E7h11jITcWdC5ykTiAPGsOxRhHEJU
+I0KuExrCuXYRcT9ikxNjQlHgIFkSRngz3CnRVUXI2MHMEbsGaKDmd/QCEGDL
+nVjFcYhbeiBOST5wFUXwVCurQ4JWGOASqWtJHp2mDv0gktu415pBgDcK4tJL
+I9vRcEZqEXvc9knbmUxLddJqMPeIcwFqMvkCh3v3lRfMSo1ybsRQX7Fq5U2K
+S4Xz3ZSuwFwIh90uk72+gTUtBsBgiFYX86PAX8K17CmkC4C8NKgT5VqbM3b3
+3E62qHofbR3VrcgybFMDnIK6GZLMSmqzqri6iyR6Z6OLcMVtQIbUSrmRpLSt
+o3baZwQe4rLsFCC0KC9BWiuevTXMsFluh5gVUmv+Hu2xdvtIphnvkMCn5fp6
+Y8iautWQgBb8OF0NCBe4PkSD4SazzGRu5WABOxany1d3V17dbYnVcP/NMNJA
+HMNrOg73tJHP1ZfjlKpwu0nR4G2a5ZHFKu1hY1LdE36nDL/KBJIirlNvqVhz
+jwFvV0zuVyQy7UG1uqlndoJlbsXMCtbec/VsU/Eu3cH90RqnS9TulgWO9WYt
+O6BRE/W9Yb+bA3hZfZH9Jdf+RT/wdLoQqonebOGGuXSXcOQ4ww4KC5hHeUt2
+IURRMaJrVO9s1TOPB42Oa5awtFXnOMPWrIQE+5ilzeqCkItPt92Nvc4/l+Va
+c7PoJJ7xaCbWkxJ5g3HifimFsR3MSz6FQAXOhNyoXk+43XfWfi67SH0eGsE5
+XehZYkV9JcYqZ0MYpLbQfJfxJptpqxtegEM/ixbs1OfFuJ5eDJPYJiqEf9Bw
+Yq2Vv2jClNEJQ/npTTlRBk4AxjheX57A8Azfz//X//y/mYSj/V//8/+xvOUY
+nsOCq/0ztYP3iuZKLql9jW7MygopRrWO872mpAmvVDPsa1SbHigZEM6/b8Ao
+xotcrrko7x+sOC/VCIKCEUSszduUf0aiAhma5j9HJoAvwg7aFC1irrHwVavc
+N410F4Iog4VUImCmMPrvNwwz48TCSt0Xaj8+/JbZKCZg3ZbkFD/kXfJcPZUP
+ZZFOB6gNnF66r3VR1rN7L13RS1x2hJAsBCbhL00tQiKdEPYSi6KMYZZH+Yk4
+YY4/DqYlQaHZaYKZib7nqU1QyiIZxYu6aO4dHPIu6ChKvlIQkKH/Uu++2vGt
+5A57HbFtNCH2h/Q56jPufHVDW2yWa85qe8FQjVJ9WeGhjN2mera3OOjuF96k
+AbNom2Kd90hh9H3I/UySs34lneNEX9rys/bNWQH5UtWJU7byrQf3vNlu2Gv8
+a98oOSQrZ4x2enjQJS7t/dgoPSKHNFixoJAoDqleaZ4QYuSancne2/qzRiJl
+2fviODB9egPlGlfOTW2V10GADNe/4A5wjFjSb4FhZ4YMrPI/aqWmq6sRQBQ0
+WvT/h8LkL7izL7/8ZVLuKpevlSbUHPQBdiIwgc/0KC5L1Fjc5SNM2iOITVVY
+iEOXJXTwDZUqwR4Z6sa9IxhvL3kX7orc3xX+DteRGYpccSL3luYOAakOtS9V
+2SjbHg3sZX4KksV08d6FtiBIlbe6H0DbcIV0WltVxIWQLAnLtlVYx4e+sstb
+OgMAc8VWtOmTUG8gX+XwtzgvWjAXbovQJ9VY7XgJQqoEG3xXb7CxdIt3wQLj
+mvl8tN0Rph3Fgk2AE2PxgpWEx/37Aa266cJ7//35hyF8F0IVWokXS2TdeRk8
+KpqPB0FRqZU6EogAu7TXjbpereEopBeZPNa60/hz2ygoyv1ODhanzyDQu+IO
+No7dkR02e7SM8cfyyvciNHWoxor4dW2WKk59ppp1CQRlx3F/enQijP04b1I3
+htLrETeW6kb5ni5HfOZ+hD1KDWzZNGy7hzelsv+xCfA+hAYLEZlIbDh6mYq/
+Cb/lCBtdiPJLxRnKqgm/4Pe2GpbnqCCPuHVh/bPkPWklV1/U3oXH9hfcEo6F
+ZEa1LS3ObjxDWygAibr2tmf7qODx6v5DVEJdWTKAwH85ECdVyzEQtorup6uF
+USy0mPIdtpqs4Yd0RmJJviR7bSsuyY1hBVOxuNtXjh/0upGAy8OBVYAI0kBn
+MG53YV6lzD0yDITPuYdnhWfagZXwczXvrsf6ZNk/H3ZlOkL+5tSGw1vQxizj
+wHesKTF9a8ytF/BZyHla2CeEJyAKUU2OOyDKkX/m0OFjksITTdM5VCNoKSD4
+iD/Mkr02VmyWNnJMSmmtx9dbavGLfv25tNpcsvni9TqGexnglBeLmmx8QKCw
+eQzARGxu07BFHUAs3z09Zl0RGq11xZd6VS/ruwO+GdrQdQNubhjgrAxlfApx
+RzFE78xLKtk2VTdvOC3gr0DGbsttJuaCXZt6sSvozHJo0ZrwqT/NuAqbXeVa
+Tj2ZPpseD40A5/vZ9MgNZOhT75HckM8+mT4e+kzvNt+E3XvA+PjRkfeF5h/G
+v2mQ+sjnK+KUVMZOQ9CGTTVF/wgNpxrHSUC8n3T8xpv9vmG0Nj5ytT7+qhWo
+j7kTRRrI13ZSXFTaGp2BNHksRK9wklls1+Ew4m5gXIDDWhA0RD7GLjoUU42R
+AdAnzfvuT4rU9H4Qm2EKw8msDEDZl2QTwuIiwl0ZNqC4uuLy7y429q0ve4Q9
+QhWvfZTj2Ieyfv2GvX7LCk+UYzUoUq2VAJxhQfyL2utZ8NDn/RqWsp8CdRmN
+0CrM4OpcGP+jMhHEttOqOUBrvEAhk0xKbh2pW5XcysjArCNW0CO4mgEH5LDK
+o8RDLIaXy/i9k8USGlsWvaarmJOSqwLROABmLMCW8tbo8sotNPox5G/E9xpl
+Djrl82Ip+FAA58k20sD8lknTEl1ASVAMJKQ0CfGd0SG+EueGS9l3m50Xd9qI
+JB6Zqw2NmlZEGUB5anr/gfEcPtsKHWDQZ8HOKxfLaHrP2cfkkmYhyxsj6tGs
+akP4t1FA0AzwF8Hh7rJdH/eQG4wo0ggcUjqAVypufHRRLtjqGQmqss4TSou0
+5uXzUB9S+yhqBip2IyNNusSi0YFvkHn5LTDZ5RreSssRB2vPxSCbgn1q8HBK
+FXP+y9vXVT15dfbufBJ27FerzlRFG7cyvfrHkvcoQk/tru7u1mU2TFCC9r6/
+fLr79C59E0nQfw3neafcZKhHKBxzcMgxOtBV1oauTJHiMYnwtiW4k5w5vSHR
+2rXrT07y0dkotFuJdXdSpq+napw4IYo4XVdk6dTSeU4a3F4E+EUZG3/vQpGe
+uRZKO3DoQtFdMNPVVoQi9TdWVkE33g2zscNoBGAu14kVUn3I6xAY+2hdrOYp
+XAS+foNjhx248xTWpmudXGZSZe9UukEN6bAj6wtWFdn3dptdgtZAxqNZvbjb
+RWsUhkicVHKTTeWIBNwb26iFIn1JcLbH0Sn7eWHwtQTqdydNT0z5q1Yf+qBU
+8EgyYxUxHlOyaMLbw281FZB+x/JxIuBFNziaQTl+GoDQvd6Ww3Qqb6Fj1nwV
+b6IZetHUn9EAVyzfr2qQ5qAsaOwGFAnrIyRLjH3AGrzMpAW2y/JnDtgkTeX8
+3d+EtjBCuu3BUXo5aeEgXvQW1+xfJP0q4+s9zaLuoZ0TlllYeEjeQwWplK2t
+N1JVVTWZbZcoiGU1ny+U8s7iQQJz6L0vmkVtnRmpIvny+riQfsCrtBxPy+T8
+FTkFKDXrPTzipXd8jy8WLaQL78yKxl4o5Ux8b2pcJn1npn2agnx3Gp513x76
+omQLivm/aDPUR4/0cnv0yPe1QmzdbDETeHtXFvty6Ct6RsNUogAK1lEaksDK
+E0xuT3KsVHZuXDDLHj3K3KjmpVqP5Y6RGTeDZ8W3INA7VLwx60V5Y23LrXS7
+yOd3q2IZrzPoi6sNmnNc3GUpYsLjjKWmLIw75WFNm584Cel5FNqlznicwtxG
+RjIwyvb6A9w3pkeDoY40gxyYCUb8/hGWMYu/29s1l/1xuCj+dVODXh9kk7Lo
+Wai+CQwu6YHPtcsajxLF6DDWtC+TDGJgXpoDFvy63HEogYpDd9MxXkJSLG++
+KI4S5T3DavcZG43SkCbB+bPmDUHvu9KnA0SXju22ku63HAx9mPZvAKnTlChn
+U7cjYoO4p2S5BZ6Ce5SpmUInAp65HGAyG7nXq0TDEHrZtIOAoUk++s4Tvp73
+oKzKzZq/48y9Uvbk0gq4HUVKgiyPNShoYS6AfucLTh9+1/vTH/Iz63iZvkQ6
+UhfzOWNbECuOOGkH8x4bzhACxRcGKBGtnUSWp+y2UlMuYHdgZYSqWEgE6DoZ
+GLPHuT24PiwA0acLC+QGbDwoYTKxyx5/eTf+uU8Vgcmy7dQE0kah1rrgSqoo
+OXJdrLWiL+tX9DmZc/UQO6yUHeVaxWDllB5vfi8PyftXWMhI2ghCl6kn1nQV
+VXwAXUWVgci+tqCK455SDUwGXAx1lj31EEb9L4I55UPlnjZceGPLDm+1aKzv
+BlPrllIMUjHrAeL17mGc9CTP+KKYfZYtY6Dkg2E+WENx9zx40HWR8OR8Cayx
+V4QtkxJz4Pb6Tr1Pmc9V2QmbSlLm6fQd1ij0G48Qql5J/fHhc44R365olzUB
+tih8ZheTF0pHrokAYKQP3mITEW3qX0fLb3f7dNcOIjkxpPf+fnXRrv9BecMf
+bvV+X0v7bKAP/GCjdx5DOPp+DKcr7V7uwf7hkelYELSVdNNAB6MdJ/W5Reet
+p5mWMODrkpOziw+2Z8iI97HZF8LjW87SnlY5sI2BEH9ldyXHj/gbW1yhfKw5
+BVEPjijEjHzvn1jnFYqyIugh4HQvN6t5gYjHQvrBcCSROfstpyFVvz0wo81k
++KIO+/Ru1e+S0utTJMFd6UEvSfaO51RfZu6DrgV9GyGf0nI+6S2v4Sh5YTbY
+7W3qRqcNYICpXYG1Mpr5oXuGXmVoUMPBv9D2VEGRoTVgj+5PWiNsLiYOMVVI
+7BugPrAF4ni+vud4Hv2h42kHgMZDKzHX3jV0FrL7juXOMzh0QF6cSKbN+izL
+tTyS14xCw6lbzqjokqJIukPE026+kaFXRmmLtCldL+jI4gpRmfNZT7Te7Mwl
+Ma82ywPAzFsfpXKoGOyjiVIgmwZJehRObj/hvr6nDIA3JXdQQv8s6VJfLqWh
+0hXLoueyRhmioQ1vBPzEs3yw9/egfkO4t5Bat9AdGP28bNuSvSqSnjOZeCmd
+7mjgFlVFFcQYOihECTkcZn2i/SFB30wr/IrCrJ9EzLP3YMuAQZp3hTvFMYfw
+gXVIXVfFf3E4LtZLWvFa6MQQGn7wqHbqF5OBltuZcD4uA4ZjyQyodMy/PH92
+cPrjByaybvO9m9ZytJOLTbXo8tOzd6/yg/yHTz/R/3376fvTHbTczxi5NC+X
+2ho9FeOdJpks8rDSPAW7v/YDL9G5MqR2JFnsQtcsFKL7qyZRQRd11y1Ksq4/
+a4tfBhixsamAhQCgNoANbcSu2KmBKZCrBPldZgSKqj99hTUCpEn5UmDqS3n6
+duqw0y0KE99SKvajEbrEPH8T+u4aeijjyDEZ7TNVuI4S2xGYamWC8JGWQgYZ
+3yzwZu1BwB9I35S0BHdh+/U1Bwz8dIp91l12hIVSk1YdWOrSKg1rIebcrIXK
+SKz0EYYwyhBsRiZQOBWZ2NCagcGj10J8jc3Ywxi9SPYZE4r4I32xz2Xz+uIR
+54mQrZbOF6PwbkkPj/Lwcm69kh9n9705PMy/Os+/a6r5Vfxrpk02irTfhnar
+T6jZVEhDal5kPZuj5klDt+HjvkoFYiMPpwG8TycJflglzQsNVMxSkrS6di6x
+pwsBnpgTCP2xB2ytWqdSzSNCFpK46cL6EESIXrGZIrxivhrvLT7/Vp/mcidh
+B2f7+Y8bl6I1Wsnwgfl+/jPvFgKBTlqEZRfQgmrOQSBdD9lLAe6nwlK6JxXc
+AYxdkEutkAjPNOEWpkChHvyt3Gm/82ppJ4BcsSZ2i8j54RnGUgLRInKw7C/z
+fT9KwXsHUcGz5K6sgl3ci4Pxzw3yI6E3gY9lpjbs0nNiOP6Ocd7vLWRofNkc
+mWNW6ByTKV3cKT8YK4ICNNAG+DYWB6c7n/VV59BddORQtGg1Bi8XheXizZ69
+9AG6sUS6kCp2d9VfyzAvh+WRUqyBsIfFPHhXnQ0VvWpcDgE4pClFsur513CS
+OBhqG2DlpJ4qxvWb1fsm3QyEtgHK6lrDHyibDeg7lRzRnrZdfxtqWD1Ph9Bb
+dZsmwp6G+Gt2XePnCbTD9dSQJj1NAQSm9C4AjjFSaZHd2Vy0GfqxMa0+OE/K
+1VUXio3S2NZbPYGGMtvmOumFL7bxPvmH07/q0pRDVcUy/LV8HQzOXAW5QUm5
+ObygpXPtQx4oNzbCMEFrODpIllXphdEbTjIIEAT6WujbkoR51aZdTHcclGOJ
+TxcDFHgP5Q7rz1OtBNc83un79x9/llFrnGEqVeV34mQwRZnC517eA7XnoNYi
+wMzC6ZlioJLgEtyNZJ56cQYrdI9MKAIIaV0b6h2Uf44u72dojsCTB1A6w9Wx
+s/OaDDOSNP7Fq9MfmMOTfvWdvNEV8RZ36vMZSD1E1BxUV7A//Ki5bLaxZdpb
+2M4nq/Q7K4xi25cWnCkyjBsKPQOZoULKaaQairvLioT0D0poFWIVuwqtZ6X8
+DkMexTELnWAc8ODTYu2vpZdSLLKVP8194QZ8u81MOyWvG4j4aAe94aiXjZbK
+0LOAzZTekA/zJ3kiF+3KVgeespGFx2gW5Dy15Sjln1HSl5y5j7bgOD4m5+Ss
+J55RkeCyxL0hTVDFqpTWklnapa5uy9SPUAug4CYH9P/nYgmUqSWQqq0AvhNI
+Fvd5MGsHuAeAc2FxXOybSmGMjzK1uTSehYW0GXMkKbB6FdqPPS6hVWqRTFNL
+ASzNDva+UMki90BCwH7EGdx66TrPtiEvxYSDKQNWyTuFpCSqMjDh7rbnL8os
+6Gs8n/tbVNYtcxIcGjddyaSMBaZ4Ifb7h9NX0aoV8LTV6kTogDWHmGanawZD
+VF/yV9Pj6RMXy8LWLOA8GRuufovE6MdP4gwEueCvPTVrc4c+f7ytz0HvNAxK
+YxX+4TsIz4d/zss5hxsQZPwDDaEcoeKSBYnub3OHFDBsYWk10vlLpzDBORkt
+PSNbFEBuVmg9FpkntYgRGt3T6V9VvXqJiIsfOplIfcTMWFVGdpH+KKYZPNVa
+g0qhVOI8doUQ2jgHGgr+uCdROddWBpmFDHamkQPZ6wtWbDp1IeVz970aa5fS
+4yjhhczfeKwgPxIN82A51NrzanJ0NJxNVuvauqSypSxGulKflZbmirmqmJHP
+AuCCzXcphmZeBq16RmAAk8ep997pdh2rMggbVZJe3zBw+sifgA0Qb8uwzXwK
+z9URDJGaUM66M16cfMrs68ABHJr7qUkZvLM4uR1H8okvhpI6KMxJJA4Jr7S4
+6awsx7kwRsieVWV3OVmV3bKeT9rZNdmpi3JyR2bL5PBpzj08XnEwb5X/9fSH
+7/PX3Eb2A1fTY5PO5PMcbPDvCPKAZ9frtri9smfTZyd1sZzgIEwOD/uuUhlZ
+PWO/Ravfq6zzyWDmhr/vGatlOdxK4NwHrBTn1pGGJtfduxw8csn12MO0zDi3
+5QFwVvLdwd7Qk2oJYClRKVL7o5+7fjBCfR4AQmb1A6hHkuovY+HqBkmS/hUl
+wPpNBTRqQcF8LPeiHY2sX3jPHgr6xcS7RvuW5YHLpuFSI+Q7OKjS1dmKrit2
+OVigpUtouEnYs9dLnWkdqqG7lTVtpl0fcKY1OObw2KDc1/BX/HWGizFwjEuj
+I/bsMOIdR+YprWv98iEHRC4msP/D2xvl//hP+YhfNXq4i9A3nufrXpUAfLmF
+d8lICNwjgG5AOeAjUothmmCHtCRPdAVXupxOt1noo9CSdNau0pkn5sVCfxpA
+afbH0kMaF0f41FHaxGZfysPTobtHPu2l2valeQ4wdKvWwuUAbzW9LqDGyRRv
+fWxyJhYFRIZV856Ek8P9sB/ZXPyReevYpbSOVb16E9GibTdSjJtYGelqCa9T
+wbC/rK9bdi4k3f5KQMnpIWOAsm2L+XWJdYc86aid0W6OxHbWL60BxF+hI30N
+sia7VzaBnhkDDTdNccVYny6+Jgqc8XSZiTZ4eJ4FcM0lmZy+I2W9ishVcgju
+MQql9gBHV1oe1coJbR9CkRc/39cn3H/mYiCxgwq7KBOYrTCSugMSinOCjSY9
+9sL+WlBxT/wo7U0sq3kbeE90H3yWIpajoQkzCBKlrbvoT5aTADryaZvUcoeY
+iLtbtAoCTgbIIxn3oz0Z7okBlmzm8CTrH5nDdJ5JkBjHlz8LqhbQZtMaJTKq
+4MXvtHUG9P79kTA9S9awwRoVyszGWaAVd+Swvs+dEVxJSl0xzShmtQ1WVinJ
+U8vtJbdZ4rOeiaCi7Ck9PxYocXQ0gQJKQgybC8bFCg7WPiMtKFXYogeNgLMe
+ri2Bw1TUNUdDtFY1idbEhtfua/zvqtgO6eX9TQ4yfK0tsfv64b6tNEaP3fxs
+ui8BVSshdE/UY3ZOsRXUl3gNDEiT44tKctgShgmtintsjFm8Yqb5d8lXbA5p
+N7Lt+zTeV2s4M40Fxa+LxSVOVCheDB9N0x2DUjK0u3GtXL/Uiu4Y0tlglQoR
+DfgnrPEGKuEe6CXwiZs5s1IRXJG49Fs6ofW55s74HB7kuY+v+RnbCaRJYGmL
+t4S8QtuBCMS5L7sv73miO1iIP8TjpKd3VqyLGSMH9i5okrdcLA9KxXW7/xVc
+/c++5ZYLIT4Wo7LT/GTfbi7pgL6rlu6rG7w9e87scRwR9ywUOM1aMERmec/7
+YoO+uAGhSa4CEWAWGlCpkYag4xaQFckTsGpsx03m87pF0eO8TfLtPXQ4bk99
+sdxht/VnXf14AEKrRKtQQ8Leu1lxCB+UjUW1QhhLterzWQ7T4UR7GOqAJTpq
+LPmiIKZSkolUpgLVly84kKaLuwI0pv9FD3qHvOcUeS8IhnnqCWXc2zSc/WmC
+I+o9N7PQaOhrd2FlrtvFvzxn68plgYrWx6REuOzI2OW0Kq8W1RXGhzZ1DqNE
+blpgpd1HbBc0d7PB5lok0WBS8h7rzhZGjiLJpAF957y4O8fKeucB/hRFvDd5
+d832YOVfafydp04j72LCOgZdzWvUi8fEuivUa0kN1a2VefeNSQP3qGW4ugt+
+b8Sn4EJRYMS92van1tM9YsQkioM1ufFeveyzyPbMDLOxKu3aktxYYzVEJZui
+mZhVbdn9uxLuU/LXEHyMjG++5l6LahFOZJqd/NEjHsyjR/2Ugd9vnqoBzOC8
+B10kBtGsYQ1kb87ixTGOOYWqM5cHsJVQ8xSrN2WPwi2TSemoIIysJn97/WBg
+SbrANSOTJna8KplnMiSfc3/AnxrT7+UPsToPpO6Mu3XXoFq3Ur4lRhZIxsGc
+Gj615J55c/Ny19d3LX8CkKV9MDLQy+ht0Cg1o4x1/eBBOR8Yni9AC7bc2SKa
+gmrRRyMTFxsC4VL99AUhrhXn45B1Khi5CQMwU0KUrTgGGKNBZpcaEfcGab49
+VAg8qb5C+muCfNZfnKFXu5XhyepKnMlANNOvi/Pcq1HS+YiEWItMjxJk2Rq7
+4mzJ6nCZd1EtpBGoaF8jVUA8YOyM2NhClaMjO5bmCBE/pkRdJbbW+6rtvL0d
+U0S/Z7IofohVLxIgdzEW6HEJ07sAOS7CXnwlVAVK5kWwZ3f0Uu7iM+wFZIo7
+ZfljGZnYK8zra/NR+kX+FBcV1htt/DSiXdj6wH4SrAeL4/1hOaFYDq1G0uro
+sMdaEsHQr1hgfJL1Lpd43+gqDUfvhG+qWAgYPzG/Q2LI4ELYXxCp3+UjN+J2
+lMXqUmk45dm8krm5HgcOEjr0kdA+S2JL1jMXJLb8EvP+B7Itw2Br1z9pq9/N
+LpaINrMeqQwWzKX/JHqLh9Jw8Ki6CZgNJel7abnYgNdbuX6MBYB5+JmIYZuE
+NksIMFHNEebpZ/dDIINBp2E3KL4+Hz2y+NujRwnPRGGEzsgFagBOWgwFisMB
+RnDBGitviNUV+EUs2kAC2s+FjSMxgt2Je6KitY1BjIGIj6dCeKeUlPWAsSN2
+joYsyPZsDZwBqjSZ3PYwsTTKKEG26pDXulUaGs5x4mbsPsupN7LjPFsJJDvq
+LM++cYaoOzJq7rcgx1kA0bq+kdrf2tqBJCk++b5gJBRRfl8p1lZDJ2H5syB1
+bbQgCANsYRoia2ucvbDRRnRFeoZNagRmkW2XxwvSIropWp/Xd+TMFPXQ9eSI
+Ov8siT7xZ+OjaIUgMv0Wats7nGlQcjenV2ArSCBTCSiDkWdVuxQagwcFc6vk
+ycf3dxBbG7+Y7UvIx9k5LG6LwOFVNabztQ6QgySYm7LViOrZWgzT3yKHAfHa
+Jhs1AIA5v1uzdcnD9+OMVKW9OQGqoKzE0A4cSZunKyK/RZ3Y5SWzXgbt6wEN
+X/O6r9sU2RMHl0K5Hddp/eaHATahvzDD3aK8+ZUpKKXUU1ONhoy5bSqpnb+H
+gPzoMPIF83eRMNenseIJLmRxU642D0fkyNQ7DhEm1x7ZJoRgr2MwTDLpazL1
+zwprzwbWl/4ODNmFfkJfEQL79rEWqKHQrKaL9GpDL/5JyHZuLdTGZP5GxExu
+ExzyHtAvJT96x5TJf0LebfXZ2mhaEGQAXMmO6r/R0NeoYXu/4jnJJoKBYG9H
+Sw4YjERvmSjfDf8T1hJDr/qpwKdJKhBiHv+Oob5Iwtp47rsP7/5bvkwf7ihm
++MPPXpC/QGOTogYrkVBWO1RexAIOPNOG9HycHx3jgxFHlo6APvuA0e6f10sI
+73BhnvyBONPv2dWhdz6NFWSAfUzFw9X7yQ/GYZz+sDgJT8+AIEFTW3XxhfAR
+80kOUYJsO4wqfqFBQHzh3/10e3xaHG9jkOE4zMnwlgoMbMeejkkkxEYlo++a
+JzL0pcdT+n9Jsp+/x1WBNMmm7b3hqCc1JPSfcosykOgL0tJ942l8+PPjoxeD
+goboZCxj6wWY92KdTTjGLYKkOgzjHxhgJD376fw/i83wn83OcXSG/DUJ7r3j
+r1+v0iC1Y4tmA7iujQB5N47kbSwxRivE+8qSh78htm9KhnZJvrFAc+vK2HqA
+foWkNUm3k8PpIYTwaHo4Fh8H0tQCiqSMtJF5i3ERFd1XZlj0XQUG+9ACtCJG
+WkAdfRWu8LQCMwSz+t/vU+3EJzhrxsq1hi3qt2j8125ZTdz8ddfTR2CUb7NR
+gFMweKnskmojRhpLLQWKjkZqIp/9+eNP71/nkhk01EhEFWsJLrrcV5dB6pJM
++Va+QAoGXMZxyzFCRFtritrNpZFUshfHNPewIXYITCyiixu7Q50/Owm6tEDL
+SK7hS3H8e9W0lCZVoZF1qCHYP0ku/yPVzNKTncwHxDdl7hx2GhlZ4Ih/Pzr7
+8+n796PUfuBtingxVx90zwjJ2feP8P7BFotzbtu5PcbI/DVyKLf7xtcIRg8Y
+WD6EaLAu9f7VylfK+ifMqoZsXA3hYD+vVsKR5vIC3BI+ejt82JJnJHl88zuU
+moK1krDqNaWc0tuSLPyEIiY3XqfZdV1LLiTp0bBp2t9xj3rT8wT9/e604Xfp
+SkTSwg0t/H8DUpaT0LB296d3BT6Hj4BiOxLmT4kIII1g3f3Eug8ckfsDqb3B
+WNFYWyZJUAyRBUj4WG8k5GQjaZzy92U7BptEU2J4nwxaAQVLDYSyUbtsCgeh
+oAjTUrWWr98KTRrBYX1Vx2qHqfK6WlUbxzqGWj/aDIJ6lK/DY3V1mJW0f2Lr
+KLVG9Ov8+GHQSMIRIhmKVQC+0skcsSM6CrgjqYFLwvk7g9bQnFiSrwlcj9n4
+4oBo+JsQ+du1xM8QgvwWGEM7k/VaozHhe1mAkui3c4FEjl3oZGvvb01VH5KH
+sff08O920Ch8++1JfhY4QxC8vbQGuOQcrjZKEKVhKYUMH8yuC85ckDz8xtV1
+CWDbCHJiJxymnL7zhDq6+jz+5CWGBpOAGSvClyFL6NRM0obLiMkk+H4mvAwh
++dbetVB5FnFqU61eddt6Zi7xl2UlwLNYCLabbEfbPMZ+omNdQTl9gQTCuD13
+2HAKzLYmHUEmzE7chabzbHwOtJUpvGMesRwOk+UxydZQ+2Er0jXfFkSxhCxZ
+0JfVatPeI5M7BPC5HDMARwNr11pwOZXEgCycxaw2oUhHrvEDV5F800596cr9
+aB7atjlQEPkxLZhkkW9ZGXwmjQQUe8MGHUebOo6nVD6aciqIWAyC2Vxh3V13
+3bo9OTigK3j2mVUsWdO3U5rvQXHw9MWzo+Ojo0NHFvwVYP+jw8O/e3BZxwI+
+TeiJ/eWTZtzVdVlAvyNbirwybaEECmx/S+mMap7bsyRKIdoeeQqmz2u2ACga
+fOwJTNBj6uxmfVZlP+xxkiVLhC7AHLNQJxUkPbknsTBCwsjxemhjDtUw1qRY
+os+EwA2mu12r3amEHTuiSYVe+RCnCL03lPkkSnedQAfV0dl5inZVhtB7QwJD
+dsSV2vcHOUQUk9FG9/D4HLozVgXUJ9MvrUZRmu707QeOuIc+PrEJDwOq+tgD
+WLnquRTKJzvrlPOF1nfQsClRcGKtJccBamKdQbrgFKoYWKajEdP0rioXc+m1
+p5BvFtaAuZOjdntdx8uDu3osrLsBPtbc2LbJLS3BP42vpOCARS8PHpoIDaSi
+okkBWmxvVozTEF+2KxKzU3LEx9osSSwRyBfdnVmfSfQbLjt+tWICNY7O/FI3
+wlmSTsyRnPGGtR4iJaYCOtWZNAfh8a1Ohu6DF0mDHvbNNiRtTdVZ1wvrMSzY
+W/OLdW16PeuQsAzXCaeJi5X3/ujNJJDSsSHY6FucoM5GYOsgDGHgVjkFH1kc
+JHeBkr35iojk80OWjztEU3rNhaxlLzwzh9q+F1Zpg2JkvjjvHELnDr+MEmg3
+S9rkkWfIHEU5cOuoEjF6aWH8EJkc36MBVw5QFJ/VFNKvpsulkUSARklxHZjt
+IvpV6DliT3f3oNc/nR+wsYdQbcRRCQCsrWdCxCZ4G+f/BvE92e8bfq8YPAqK
+ZMmQyKx3mn1vzUyWNjxftkoIIoKLu99MunqCtgptDNdmSSbg8fQ4/8RDxfH8
+IJHUPrJdwqawT7XNdICeDmyC7BZznlxdd3pge10SKkGrVKvUCTQPTS27xUIo
+BRBq452TYj7cAJJKHigfsnrco+mTex6xd7lmoEAK1j+KX1i340Ez4PHh38mb
+d87+cPqtf4zXwt8FCG2lJGQQ2RkpGmNBKBXH7/rl8lpEkc2KLfRiq4Uy1qRO
+kYiyMNFGkVa1oKMN1zD6tNkXcDLlR7N371e+kYvGcoHIaIx7OEople19Gevt
+t2jdJvbQG+DsWBa+rqZ5+/NiESFzGgwUBxWzYu0sFs9FzyLeZ/a04Xvj+ZEc
+Sb2Bq3pezQ5cpe9YC3etCDgQKMTP9CuWYTsB1cADAQzOZ6ZBcDexcHhkPZW3
+exzeB3YgQNgmM7HYxspXZQ/pbdJvE9wI84QUV3CofN/CTLL4qdDehGvsSy+C
+mbIB6q1kCUepew7dnELLU7eCPtz5s2A7Vr2BmWkuTDSjZH6jrSKQUzSOODc+
+wxYEO2MuH6SnXCpl8C73+x4jWG6fgAe7LAtpBP+uh0pzrWnQU5hBsDtFWCuj
+UtiVhKUtDT0EIXcO38PHxPx/X7ednJpMu2H22QxiCx3BvjLViPhG3FNZCNxV
+LjxmbPAcHVs/wfoCzYgkDJHyyo5seH3+2N8FLN1KXFi01orWEvwgIE6CEeDd
+otvz/r1qDc9+p3B2DppxICa7X7eksqTC43iJ0ddLtKgs9IyRIOxEx6qtLJCI
+Ao7BCBYJ7qFH1ML6g8wDVoz+G2BlLbkeix50iruxCQSqEgyLBXK0HaE6XOKS
+9KgjxJcR1b8nwTlE5hcLxwEkLcw3oeLaRxy2V3iH6DD2o5baR7k0HzKsf1fB
+1UAUg5HIEoBRze9PhLCP9oRewoUZM8/QB5n/Wjh1pAQnqZrSwEqgNth1xJPL
+Urq7339BflSPeTf20EGJra+Y61eUaROBeQ/8d7JlJ4136xpuqjBoOd0PR4Zn
+01i/jiS00gnjynqxsdYOhkSWCaNZeRZzF0Zl78tVU3xzWKkhZPJO7Bn6hxs4
+KnSpGMIHSxzLt+HuPUpyeplm0hRyjirCu4HnZYmyuSzIoKrI52AfZJxJblNE
+al1IaHl4kqaZFPm246w9OeG2NM5NDsnTHfm1eHm/cskw6N1roatLHe3Qxnse
+s2c+Z6n5ziu0BgoLJzEnbaBYSx7QSl85qJr08t4678E+IHni1tcP5Ppcg9yH
+UFxa+7m9aSeCnA4m8h8HFtAewzGG+1ZdqhvJQ9wrRIuU8wkiyxCk/UzMQERI
+7oUSbzO7eP6MXieJwNHO4/qoEOtYyJQZ46wWOrG+6/sDeplgicdZnEZjaocU
++k6KSxBpue5XrRENxhZtVRpvlU7J96vM2Ap7sPwi65VfpJDiAKn15+3/AzWn
+fbJ/v5qz2Tk1t03XhY1/UFehHMIycE77RaWHpyEu4p0ZM5mgwtiVtWcMNsBT
+HfQ0gFHDG5VBLOT+1CkwzgK6bz9bGYFQ2fbSh7eCkFp9BqFSMikBlLY0/9XV
+NH8jLjZd/mg6OtRskET7LyTQn+tp/q5V8GgbmKMcxa6Xi5fITjo9J+n9oeoS
+4O2lRwaDeNkRG0tpnBVa+Wgk8H+33G+j6v4kcykuODYz68b0+HI10c55obow
+dnnNp9NpntRb8wWEgV3Qvk3az0IAwKWWk9vi5i5p21Yr7SSyNfOByNdrabC+
+BTaTTuZKILeTt22IoS22xNsL7QlQ4Wz/2kczVyV3AndBJG/rWWy+eOhMraGr
+ensrzOhUCIq1huEaDsTLBAfS1x0nvoSB50tv2a5MiplJz0DEpFaRwCuAzc24
+ldoPRrUVTUULACy0tYyh+9C3gLFonnamYprSlrRaaQAt6VYsUi6L7VZIJNf4
+UMwAsfWQPnDn12XSz6vHC63xGIkXxXUVYm02LNoUZWoZo18+3nA2hMRAYgpC
+YPupqelELNtf976p9e+T+nLSXM4m/PfJWv++L86lYja4lVMcom85DcYVVixj
+6ymwkPwgpzEXdyHeLiQqnd+wzNIEYCnhhUXRRHJDVa3vUIWoRF96rNe6JZiK
+zRdYd3ci/nqh4bxIN9CvYwH8QL9bBnwKb2vPw5A+oYrEYtKWa7Z1eDN3ssDt
+aG1SNl1hbHkK7sLzRAyWPJDNMjqPocMmKnPF/c1mC7ZcUbmMdC3rb1JBbNvs
+MFPRqVCg+7SfGnH99yjBdeKPZkZiaBq5rU2D9t8xNUrdmfXzBcLWuqBhWoGc
+iE3Fn9YcpP5O5+bZpHp0dZcBvQChYtUkjAbXvoNCtlM21IuMF8R48AOt0ANJ
+kAAyufuJPMV6NtuQQyvs3GaoojxMiD3xiF9gGb5K5/+ei83p6KLeNl2bCQrR
+98FCobl3L/phAGeb5e4mI0zlRhu1mDuxT1CKGv9+IL7zwDEYksZvWRpBXy3d
+u1j53JYe2BpLtfkrUDFawFOp8+brAmzr5ZKQ3G2bBDUkq5S2VTcp14STCI+7
+RwC4dHevcwnORdlES8CQKEXrB/Z1wVJflAYlEFw+S0MEeHD0ilR/hALFDN1b
+Qt1cz5HZLaLOe2q3ez+IEbAlU8hpxVOQ33sKxlmvY42dfIWX9SotrcALvBH9
+rP2P1tbePK9fogP0iq+Gim0dWNXnZ+eHhy/o+CwXjXwC3ZvwiQl/YtK1HX1C
+LCBXvRUvKqkLC5WKvWlk/YGCt17x52nt9vYCynExNfnwImZ7lojit7Fu21e+
+7JbjDtJtTdaB44ihvRRb7E5Jwi/srHyOvkp7r5ZpYRQFwho3L6X171xp82I/
+AQ+4M96Esbfe+JJq4WQi74nJba+AeLeo7PUEH4HBz+6LIulVL8miSQvC1Mx4
+/ne8PjqM97w+l9dngy++KBbsCc41PQPnKlMpTjZv58Z5tfwwwG9YKW+lXXcr
+XiYdW3EEpbrRdoCsWGyUtHS7C5k1NC2Hp//G9LQHjk+OruOaldoQO9mbVaGM
+PECZtMqC7JQSt/iKxTWx0riku9DBlDGYH2hO/71saoyHDnQX/i7nmOf8G/19
+whheOc4G0+fa503LwTAe2W0hFRZt6YlKNefGT0hJCdlvvl91WTLac/aFjHXv
+-->
+
+</rfc>
+