Steps and statistics

Learn how to use more steps and how to gather run statistics

In the previous quickstart you created a benchmark that fires only one HTTP request. Our next example is going to hit this server with requests to random URLs, at a rate of 10 requests per second. We’ll see how to generate random data and how to collect statistics for the different URLs.

Let’s start a container that will serve the requests:

podman run --rm -p 8080:8083 quay.io/hyperfoil/hyperfoil-examples

If you prefer running this in Docker, just replace podman with docker. You can explore how this example handles the requests on GitHub.
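
For example, the Docker equivalent of the command above:

docker run --rm -p 8080:8083 quay.io/hyperfoil/hyperfoil-examples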

Here is the benchmark we’re going to run:

name: random-urls
http:
  host: http://localhost:8080
# 10 users will be starting the scenario every second
usersPerSec: 10
duration: 5s
scenario:
- test:
  # Step `randomItem` randomly picks one item from the list below...
  - randomItem:
      list:
      - index.html
      - foo.png
      - bar.png
      - this-returns-404.png
      # ... and stores it in the user's session under key `my-random-path`
      toVar: my-random-path
  - httpRequest:
      # HTTP request will read the variable from the session and format
      # the path for the GET request
      GET: /quickstarts/random-urls/${my-random-path}
      # We'll use different statistics metric for webpages and images
      metric:
      - .*\.html -> pages
      - .*\.png -> images
      - -> other
      # Handler processes the response
      handler:
        # We'll check that the response was successful (status 200-299)
        status:
          range: 2xx
        # When the response is fully processed we'll set variable `completed`
        # in the session.
        onCompletion:
          set: completed <- yes
      # For demonstration purposes we will set `sync: false`.
      # Next step is executed immediately after we fire the request, not
      # waiting for the response.
      sync: false
  # We'll wait for the `completed` var to be set in this step, though.
  - awaitVar: completed
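
For comparison, sync defaults to true, in which case the httpRequest step blocks the sequence until the response is fully processed and neither the completed variable nor the awaitVar step is needed. A minimal sketch of that synchronous variant:

  - httpRequest:
      GET: /quickstarts/random-urls/${my-random-path}
      # sync defaults to true: the sequence does not continue
      # until the response has been fully processed
      handler:
        status:
          range: 2xx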

So let’s run this through the CLI:

[hyperfoil]$ start-local
...
[hyperfoil@in-vm]$ upload .../random-urls.hf.yaml
...
[hyperfoil@in-vm]$ run
Started run 0002
Run 0002, benchmark random-urls
Agents: in-vm[STARTING]
Started: 2019/11/15 17:49:45.859    Terminated: 2019/11/15 17:49:50.904
NAME  STATUS      STARTED       REMAINING  COMPLETED     TOTAL DURATION               DESCRIPTION
main  TERMINATED  17:49:45.859             17:49:50.903  5044 ms (exceeded by 44 ms)  10.00 users per second

[hyperfoil@in-vm]$ stats
Total stats from run 0002
PHASE  METRIC  REQUESTS  MEAN       p50        p90        p99        p99.9      p99.99     2xx  3xx  4xx  5xx  CACHE  TIMEOUTS  ERRORS  BLOCKED
main   images        34    3.25 ms    3.39 ms    4.39 ms   12.58 ms   12.58 ms   12.58 ms   12   13   12    0      0         0       0    1.11 ms
main   pages         13    2.89 ms    3.19 ms    4.15 ms    4.33 ms    4.33 ms    4.33 ms   13    0    0    0      0         0       0       0 ns

main/images: Progress was blocked waiting for a free connection. Hint: increase http.sharedConnections.

There are several things worth mentioning in this example:

  • The run command does not take any argument here. The benchmark name random-urls is optional because you have just uploaded it, so the CLI knows you are going to work with it. The same holds for stats: you don’t have to spell out run ID 0002 when displaying statistics, as the implicit run ID is set automatically by the run/status commands.

  • The test executed only 47 requests in 5 seconds, instead of 50. Hyperfoil does not fire one request sharply every 100 ms; it randomizes the request times to simulate a Poisson point process. With 10 requests per second over 5 seconds, the total count is Poisson-distributed with mean 50 and standard deviation √50 ≈ 7, so 47 is well within the expected variation; longer runs would show lower relative variance in the totals.

  • In the metric images the test reports 1.11 ms spent blocked, and there’s an SLA failure below the stats table. This happens because in the default configuration Hyperfoil opens only one connection to the target server: all (possibly concurrent) requests have to share a common pool of one connection, and whenever a request cannot be executed immediately we report the wait as blocked time. All practical benchmarks should increase the pool size to a value that reflects the simulated load and prevents this situation; see the sketch after this list.

  • The test took 44 ms longer than the configured 5 seconds. We terminate the test only after all responses for sent requests arrive (or time out).
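
Following the hint from that failure, the fix is to size the connection pool in the http section of the benchmark. A minimal sketch, assuming a pool of 10 connections (pick a value that matches the load you simulate):

http:
  host: http://localhost:8080
  # assumed pool size: large enough that concurrent requests
  # rarely queue for a free connection
  sharedConnections: 10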

In the next quickstart you’ll see a more complex scenario.