Getting Started

A collection of quickstarts that make adopting Hyperfoil a lot easier.

Embark on this journey with our collection of 8 quickstarts, which guide you through a range of use cases you might encounter in daily operations.

1 - First benchmark

Download, set up, and run your first Hyperfoil benchmark

1. Download the latest release and unpack it

wget {{ site.last_release.url }} \
    && unzip {{ site.last_release.zip }} \
    && cd {{ site.last_release.dir }}

2. Start Hyperfoil in interactive mode (CLI)

bin/cli.sh

For our first benchmark we’ll start an embedded server (controller) within the CLI:

[hyperfoil]$ start-local
Starting controller in default directory (/tmp/hyperfoil)
Controller started, listening on 127.0.0.1:41621
Connecting to the controller...
Connected!

3. Upload the minimalistic benchmark and run it

As you can see below, the benchmark is really minimalistic: it performs only a single request to http://hyperfoil.io.

# This is the name of the benchmark. It's recommended to keep this in sync with
# the name of this file, adding the extension `.hf.yaml`.
name: single-request
# We must define at least one HTTP target, in this case it becomes a default
# for all HTTP requests.
http:
  host: http://hyperfoil.io
# Simulation consists of phases - potentially independent workloads.
# We'll discuss phases in more detail in the next quickstarts.
phases:
# `example` is the name of the single phase in this benchmark.
- example:
    # `atOnce` with `users: 1` results in running the scenario below just once
    atOnce:
      users: 1
      scenario:
      # The only sequence in this scenario is called `test`.
      - test:
        # In the only step in this sequence we'll do an HTTP GET request
        # to `http://hyperfoil.io/`
        - httpRequest:
            GET: /
            # Inject helpers to make this request synchronous, i.e. keep
            # the sequence blocked until Hyperfoil processes the response.
            sync: true

Create the same benchmark in your local environment or download it. After that, upload it using the upload command as follows:

[hyperfoil@in-vm]$ upload .../single-request.hf.yaml
Loaded benchmark single-request, uploading...
... done.
[hyperfoil@in-vm]$ run single-request
Started run 0001
Run 0001, benchmark single-request
Agents: in-vm[STARTING]
Started: 2019/11/15 16:11:43.725    Terminated: 2019/11/15 16:11:43.899
NAME     STATUS      STARTED       REMAINING  COMPLETED     TOTAL DURATION               DESCRIPTION
example  TERMINATED  16:11:43.725             16:11:43.899  174 ms (exceeded by 174 ms)  1 users at once

4. Check out performance results:

[hyperfoil@in-vm]$ stats
Total stats from run 000A
PHASE    METRIC  REQUESTS  MEAN       p50        p90        p99        p99.9      p99.99     2xx  3xx  4xx  5xx  CACHE  TIMEOUTS  ERRORS  BLOCKED
example  test           1  172.49 ms  173.02 ms  173.02 ms  173.02 ms  173.02 ms  173.02 ms    0    1    0    0      0         0       0       0 ns

Doing one request is not much of a benchmark and the statistics above are moot, but hey, this is a quickstart.

In the future you might find editing with a schema useful, but at this point any editor with YAML syntax highlighting will do the job.

Ready? Let’s continue with something a bit more realistic…

2 - Steps and statistics

Learn how to create more steps and how to gather run statistics

In the previous quickstart you created a benchmark that fires only one HTTP request. Our next example is going to hit random URLs on a local server with 10 requests per second. We’ll see how to generate random data and collect statistics for different URLs.

Let’s start a container that will serve the requests:

podman run --rm -p 8080:8083 quay.io/hyperfoil/hyperfoil-examples

If you prefer running this in Docker just replace podman with docker. You can explore the handling of requests from this example on GitHub.

Here is the benchmark we’re going to run:

name: random-urls
http:
  host: http://localhost:8080
  sharedConnections: 10
# 10 users will be starting the scenario every second
usersPerSec: 10
duration: 5s
scenario:
- test:
  # Step `randomItem` randomly picks one item from the list below...
  - randomItem:
      list:
      - index.html
      - foo.png
      - bar.png
      - this-returns-404.png
      # ... and stores it in the user's session under key `my-random-path`
      toVar: my-random-path
  - httpRequest:
      # HTTP request will read the variable from the session and format
      # the path for the GET request
      GET: /quickstarts/random-urls/${my-random-path}
      # We'll use a different statistics metric for webpages and images
      metric:
      - .*\.html -> pages
      - .*\.png -> images
      - -> other
      # Handler processes the response
      handler:
        # We'll check that the response was successful (status 200-299)
        status:
          range: 2xx
        # When the response is fully processed we'll set variable `completed`
        # in the session.
        onCompletion:
          set: completed <- yes
      # For demonstration purposes we will set `sync: false`.
      # Next step is executed immediately after we fire the request, not
      # waiting for the response.
      sync: false
  # We'll wait for the `completed` var to be set in this step, though.
  - awaitVar: completed

So let’s run this through the CLI:

[hyperfoil]$ start-local
...
[hyperfoil@in-vm]$ upload .../random-urls.hf.yaml
...
[hyperfoil@in-vm]$ run
Started run 0002
Run 0002, benchmark random-urls
Agents: in-vm[STARTING]
Started: 2019/11/15 17:49:45.859    Terminated: 2019/11/15 17:49:50.904
NAME  STATUS      STARTED       REMAINING  COMPLETED     TOTAL DURATION               DESCRIPTION
main  TERMINATED  17:49:45.859             17:49:50.903  5044 ms (exceeded by 44 ms)  10.00 users per second

[hyperfoil@in-vm]$ stats
Total stats from run 0002
PHASE  METRIC  REQUESTS  MEAN       p50        p90        p99        p99.9      p99.99     2xx  3xx  4xx  5xx  CACHE  TIMEOUTS  ERRORS  BLOCKED
main   images        34    3.25 ms    3.39 ms    4.39 ms   12.58 ms   12.58 ms   12.58 ms   12   13   12    0      0         0       0    1.11 ms
main   pages         13    2.89 ms    3.19 ms    4.15 ms    4.33 ms    4.33 ms    4.33 ms   13    0    0    0      0         0       0       0 ns

main/images: Progress was blocked waiting for a free connection. Hint: increase http.sharedConnections.

There are several things worth mentioning in this example:

  • The run command does not need any argument. In this case the benchmark name random-urls is optional because you’ve just uploaded it, so the CLI knows you are going to work with it. The same holds for stats: you don’t have to write down the run ID 0002 when displaying statistics, as the implicit run ID is set automatically by the run/status command.

  • The test did only 47 requests in 5 seconds instead of 50. Hyperfoil does not execute one request every 100 ms sharp; it randomizes the times of the requests to simulate a Poisson point process. Longer runs would have lower variance in the total numbers.

  • In the metric images the test reports 1.11 ms spent blocked, and there’s an SLA failure below the stats. This happens because in the default configuration Hyperfoil opens only one connection to the target server. All (possibly concurrent) requests have to share the common pool of 1 connection, and if some request cannot be executed immediately we report this as blocked time. All practical benchmarks should increase the pool size to a value that reflects the simulated load and prevents this situation (see the snippet after this list).

  • The test took 44 ms longer than the configured 5 seconds. We terminate the test only after all responses for sent requests arrive (or time out).
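
Regarding the blocked-time note above: as a minimal sketch, raising the connection pool just means changing the http section like this (the value 50 is an illustrative choice; pick one that matches the concurrency you simulate):

http:
  host: http://localhost:8080
  # Enough connections that concurrent requests rarely have to wait
  # for a free one.
  sharedConnections: 50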

In the next quickstart you’ll see a more complex scenario

3 - Complex workflow

Start creating a more complex workflow

The previous example was the first ‘real’ benchmark, but it didn’t do anything different from what you could run through wrk, ab, siege or similar tools.

Of course, the results did not suffer from the coordinated omission problem, but Hyperfoil can do more. Let’s try a more complex scenario:

name: choose-movie
http:
  host: http://localhost:8080
  # Use 80 concurrent HTTP connections to the server. Default is 1,
  # therefore we couldn't issue two concurrent requests (as HTTP pipelining
  # is disabled by default and we use HTTP 1.1 connections).
  sharedConnections: 80
usersPerSec: 10
duration: 5s
# Each session will take at least 3 seconds (see the sleep time below),
# and we'll be running ~10 per second. That makes 30, let's give it
# some margin and set this to 40.
maxSessions: 40
scenario:
  # In previous scenarios we used only a single sequence, so we could
  # define the list of sequences right away. In this scenario we're going
  # to use 3 different sequences.
  # Initial sequences are scheduled at session start and are not linked
  # to the other sessions.
  initialSequences:
  - home:
    # Pick a random username from a file
    - randomItem:
        file: usernames.txt
        toVar: username
    # The page would load a profile, e.g. to display full name.
    - httpRequest:
        GET: /quickstarts/choose-movie/profile?user=${username}
        sync: false
        metric: profile
    # Fetch movies user could watch
    - httpRequest:
        GET: /quickstarts/choose-movie/movies
        sync: false
        metric: movies
        handler:
          body:
            # Parse the returned JSON that is an array and for each
            # element fire the processor.
            json:
              query: .[]
              processor:
                # Store each element in a collection `movies`
                array:
                  toVar: movies
                  # Store as byte[] to avoid encoding UTF-8 into String
                  format: BYTES
                  # Every data structure in session has maximum size.
                  # This space is pre-allocated.
                  maxSize: 10
    # This step waits until responses for all sent requests are received and processed.
    - awaitAllResponses
    # Wait 3 seconds to simulate user-interaction
    - thinkTime:
        duration: 3s
    # Set variables `quality` and `movieNames` to uninitialized arrays
    # of 10 elements. We will use them later on.
    - set:
        var: quality
        objectArray:
          size: 10
    - set:
        var: movieNames
        objectArray:
          size: 10
    # For each element in variable `movies` schedule one (new) instance
    # of sequence `addComment`, defined below. These instances differ in
    # one intrinsic "variable" - their index.
    - foreach:
        fromVar: movies
        sequence: addComment
    # Schedule one more sequence
    - newSequence: watchMovie
  # These sequences are defined but don't get scheduled at session start. We activate
  # them explicitly (and multiple times in parallel) in the foreach step above.
  sequences:
  # Sequences that can run multiple instances concurrently must declare the maximum
  # concurrency level explicitly using the brackets.
  - addComment[10]:
    # Variable `movies` holds an array, and in the foreach step
    # we've created one sequence for each element. We'll access
    # the element through the '[.]' notation below.
    - json:
        fromVar: movies[.]
        query: .quality
        # We'll extract quality to another collection under
        # this sequence's index. We shouldn't use global variable
        # as the execution of sequences may interleave.
        toVar: quality[.]
    # For high-quality movies we won't post insults (we haven't seen
    # the movie yet anyway). Therefore, we'll stop executing
    # the sequence prematurely.
    - breakSequence:
        intCondition:
          fromVar: quality[.]
          # Note: ideally we could filter the JSON directly using query
          #     .[] | select(.quality >= 80)
          # but this feature is not implemented yet.
          greaterOrEqualTo: 80
    - json:
        fromVar: movies[.]
        query: .name
        toVar: movieNames[.]
    - httpRequest:
        # URLs with spaces and other characters don't work well;
        # let's encode it (e.g. space -> %20)
        POST: /quickstarts/choose-movie/movie/${urlencode:movieNames[.]}/comments
        body:
          text: This movie sucks.
        # The sync shortcut actually sets up a bit in the session state
        # cleared before the request and set when the request is complete,
        # automatically waiting on it after this step.
        # You can write your own handlers (using sequence-scoped vars)
        # to change this behaviour.
        sync: true
    # Set value to variable `commented`. The actual value does not matter.
    - set: commented <- true
  - watchMovie:
    # This sequence is blocked in its first step until the variable gets
    # set. Therefore we could define it in `initialSequences` and omit
    # the `newSequence` step at the end of `home` sequence.
    - awaitVar: commented
    # Choose one of the movies (including the bad ones, for simplicity)
    - randomItem: selectedMovie <- movies
    - json:
        fromVar: selectedMovie
        query: .name
        # This sequence is executed only once so we can use global var.
        toVar: movieName
    # Finally, go watch the movie!
    - httpRequest:
        GET: /quickstarts/choose-movie/movie/${urlencode:movieName}/watch
        sync: true

Start the server and fire the scenario the usual way:

# start the server to interact with
podman run --rm -d -p 8080:8083 quay.io/hyperfoil/hyperfoil-examples

# start Hyperfoil CLI
bin/cli.sh
[hyperfoil]$ start-local
...
[hyperfoil@in-vm]$ upload .../choose-movie.hf.yaml
...
[hyperfoil@in-vm]$ run
...

Is this scenario too simplistic? Let’s define phases

4 - Phases - basics

Deep dive into the basics of phases

So far the benchmark contained only one type of load: a certain number of users hitting the system, always doing the same thing (though the data could be randomized). In practice you might want to simulate several types of workloads at once: in an eshop users would come browsing or buying products, and operators would restock the virtual warehouse.

Also, driving constant load may not be the best way to run the benchmark: often you want to slowly ramp the load up to let the system adjust (scale up, perform JIT, fill pools) and push the full load only after that. When trying to find system limits, you do the same repeatedly: ramp up the load, measure latencies, and if the system meets its SLAs (latencies below limits) continue ramping up the load until it breaks.

In Hyperfoil, this is all expressed through phases. We’ve already seen phases in the first quickstart, as we wanted to execute a non-default type of load - running the workload only once. Let’s take a look at the “eshop” case first:

# This benchmark simulates operations in an eshop, with browsing/shopping users
# and operators restocking the warehouse.
name: eshop
http:
  host: http://localhost:8080
  sharedConnections: 80
phases:
# This defines a workload where users just look through the pages.
- browsingUser:
    # This is the default type of workload, starting constant number of users
    # each second. Note that we don't speak about 'requests per second' as
    # the scenario may issue any number of requests.
    constantRate:
      duration: 10s
      usersPerSec: 10
      scenario:
      # Browse is the name of our only sequence. We will avoid steps generating
      # random data for browsing for the sake of brevity.
      - browse:
        - httpRequest:
            GET: /quickstarts/eshop/items
# Workload simulating users that are going to buy something
- buyingUser:
    constantRate:
      # The length of this phase is not synchronized with other phases.
      # You might think that this is too flexible at first.
      duration: 10s
      usersPerSec: 5
      scenario:
      - browse:
        - httpRequest:
            GET: /quickstarts/eshop/items
            handler:
              body:
                json:
                  query: .[].id
                  # This is a shortcut to store in array-typed variable
                  # `itemIds` holding at most 10 elements.
                  toArray: itemIds[10]
      - buy:
        # Pick id for a random item
        - randomItem: itemId <- itemIds
        - httpRequest:
            POST: /quickstarts/eshop/items/${itemId}/buy
- operator:
    # This is a different type of phase, running a fixed number of users.
    # It is what most benchmarks do when you set the number of threads; here
    # we use it as we know we have a fixed number of employees (operators)
    # who are restocking the warehouse.
    always:
      users: 5
      duration: 10s
      scenario:
      - restock:
        # Select an id for random item to restock
        # Variables in different scenarios are completely unrelated.
        - randomInt: itemId <- 1 .. 999
        - randomInt: units <- 1 .. 10
        - httpRequest:
            POST: /quickstarts/eshop/items/${itemId}/restock
            body:
              # We are using url-encoded form data
              form:
              - name: addUnits
                fromVar: units
        # Operators need some pauses - otherwise we would start another
        # scenario execution (and fire another request) right away.
        - thinkTime:
            duration: 2s

Start the same server as you did in the previous quickstarts:

podman run --rm -p 8080:8083 quay.io/hyperfoil/hyperfoil-examples

In the next quickstart you’ll learn how to repeat and link the phases.

5 - Phases - advanced

Delve into more advanced phase configuration

The previous quickstart presented a benchmark with three phases that all started at the same moment (when the benchmark was started) and had the same duration - different phases represented different workflows (types of users). In this example we will adjust the benchmark to scale the load up gradually.

At this point it would be useful to mention the lifecycle of phases; a phase is in one of these states:

  • not started: As the name clearly says, the phase is not yet started.
  • running: The agent started running the phase, i.e., performing the configured load.
  • finished: When the duration elapses, no more new users are started. However, some might be still executing their scenarios.
  • terminated: When all users complete their scenarios the phase becomes terminated. Users may be forcibly interrupted by setting maxDuration on the phase (see the sketch after this list).
  • cancelled: If the benchmark cannot continue further, all remaining phases are cancelled.
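
For illustration, here is a sketch of maxDuration on a constantRate phase; the values, including the 15s cap, are made up for this example, and maxDuration sits at the same level as duration:

phases:
- example:
    constantRate:
      usersPerSec: 10
      duration: 10s
      # If some users are still executing their scenario 5 seconds after
      # the phase stopped starting new users, interrupt them and move
      # the phase straight to terminated.
      maxDuration: 15s
      scenario:
      - test:
        - httpRequest:
            GET: /quickstarts/eshop/items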

Let’s take a look at the example, where we’ll slowly (over 5 seconds) increase the load to 10+5 users/sec, run with this load for 10 seconds, then increase it by another 10+5 users/sec and so forth until we reach 100+50 users per second. As we define maxIterations for these phases, the benchmark will actually contain phases browsingUserRampUp/0, browsingUserRampUp/1, browsingUserRampUp/2 and so forth.

name: eshop-scale
http:
  host: http://localhost:8080
  sharedConnections: 80
phases:
- browsingUserRampUp:
    # This type of phase is similar to constantRate in the way new users
    # are started, but it gradually increases the rate from `initialUsersPerSec`
    # to `targetUsersPerSec`.
    increasingRate:
      duration: 5s
      # In Hyperfoil, everything is pre-allocated = limited in size. Here we'll
      # set that we won't run more than 10 iterations of this phase.
      maxIterations: 10
      # Number of started users per sec increases with the iteration; in first
      # iteration we'll go from 0 to 10 users/second, in second from 10 to 20
      # and in last (10th) we'll reach 100 users/second.
      initialUsersPerSec:
        base: 0
        increment: 10
      targetUsersPerSec:
        base: 10
        increment: 10
      # The Nth iteration of this phase will start when the (N-1)th iteration
      # of the steady-state phases is finished. The first iteration can start
      # immediately, of course.
      startAfter:
      - phase: browsingUserSteady
        iteration: previous
      - phase: buyingUserSteady
        iteration: previous
      # The &browsingUser syntax below creates YAML alias: we can later
      # reference this scenario and it will be used verbatim in another phase.
      # It is possible to use aliases for both scenarios and sequences.
      scenario: &browsingUser
      # We'll use the same scenario as in eshop.hf.yaml
      - browse:
        - httpRequest:
            GET: /quickstarts/eshop/items
- browsingUserSteady:
    constantRate:
      duration: 10s
      maxIterations: 10
      usersPerSec:
        base: 10
        increment: 10
      # The Nth iteration of this phase will start when the Nth iteration of
      # the ramp-up phases is finished.
      # Note that there's an implicit rule that the Nth iteration of a given
      # phase will start only after its (N-1)th iteration terminates.
      startAfter:
      - phase: browsingUserRampUp
        iteration: same
      - phase: buyingUserRampUp
        iteration: same
      # This refers to the alias created above; in steady state we'll use the
      # same scenario.
      scenario: *browsingUser
# These two phases will be very similar to browsingUserSteady and RampUp
- buyingUserRampUp:
    increasingRate:
      duration: 5s
      maxIterations: 10
      initialUsersPerSec:
        base: 0
        increment: 5
      targetUsersPerSec:
        base: 5
        increment: 5
      startAfter:
      - phase: browsingUserSteady
        iteration: previous
      - phase: buyingUserSteady
        iteration: previous
      # Again we'll use the same scenario as in eshop.hf.yaml
      scenario: &buyingUser
      - browse:
        - httpRequest:
            GET: /quickstarts/eshop/items
            handler:
              body:
                json:
                  query: .[].id
                  toArray: itemIds[10]
      - buy:
        - randomItem: itemId <- itemIds
        - httpRequest:
            POST: /quickstarts/eshop/items/${itemId}/buy
- buyingUserSteady:
    constantRate:
      duration: 10s
      maxIterations: 10
      usersPerSec:
        base: 5
        increment: 5
      startAfter:
      - phase: browsingUserRampUp
        iteration: same
      - phase: buyingUserRampUp
        iteration: same
      scenario: *buyingUser
# Operator phase is omitted for brevity as we wouldn't scale that up

Don’t forget to start the mock server we’ve used in the previous quickstart.

podman run --rm -p 8080:8083 quay.io/hyperfoil/hyperfoil-examples

Synchronizing multiple workloads across iterations can become a bit cumbersome. That’s why we can keep similar types of workload together and split a phase into forks. Under the hood forks become separate phases, but they are linked together so that you can refer to all of them as a single phase. Take a look at the benchmark rewritten to use forks:

name: eshop-forks
http:
  host: http://localhost:8080
  sharedConnections: 80
phases:
- rampUp:
    increasingRate:
      duration: 5s
      maxIterations: 10
      # Note that base and increment are now 15 - the combined 10 (browsing)
      # + 5 (buying) from before. This value is split between the forks
      # based on their weight.
      initialUsersPerSec:
        base: 0
        increment: 15
      targetUsersPerSec:
        base: 15
        increment: 15
      startAfter:
        phase: steadyState
        iteration: previous
      forks:
        browsingUser:
          weight: 2
          scenario: &browsingUser
          - browse:
            - httpRequest:
                GET: /quickstarts/eshop/items
        buyingUser:
          weight: 1
          scenario: &buyingUser
          - browse:
            - httpRequest:
                GET: /quickstarts/eshop/items
                handler:
                  body:
                    json:
                      query: .[].id
                      toArray: itemIds[10]
          - buy:
            - randomItem: itemId <- itemIds
            - httpRequest:
                POST: /quickstarts/eshop/items/${itemId}/buy
- steadyState:
    constantRate:
      duration: 10s
      maxIterations: 10
      usersPerSec:
        base: 15
        increment: 15
      startAfter:
        phase: rampUp
        iteration: same
      forks:
        browsingUser:
          weight: 2
          scenario: *browsingUser
        buyingUser:
          weight: 1
          scenario: *buyingUser
# Operator phase is omitted for brevity as we wouldn't scale that up

This definition will create phases rampUp/0/browsingUser, rampUp/0/buyingUser, rampUp/1/browsingUser etc. - you’ll see them in statistics.

You can orchestrate the phases as it suits you, using startAfter, startAfterStrict (which requires the referenced phase to be terminated rather than just finished, as with startAfter) or startTime with a time relative to the benchmark start.
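
For illustration, here is a sketch combining these options; the phase names, rates and times are made up, and the scenario just reuses the browse sequence from the examples above:

phases:
- warmup:
    constantRate:
      # Start 10 seconds after the benchmark run begins.
      startTime: 10s
      usersPerSec: 10
      duration: 30s
      scenario: &browse
      - browse:
        - httpRequest:
            GET: /quickstarts/eshop/items
- measurement:
    constantRate:
      # Start only once warmup is terminated (all of its users finished),
      # not merely finished.
      startAfterStrict: warmup
      usersPerSec: 50
      duration: 60s
      scenario: *browse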

This sums up the basic principles; in the next quickstart you’ll see how to start and use Hyperfoil in distributed mode.

6 - Running the server

Learn how to start the Hyperfoil server in standalone mode

Until now we have always started our benchmarks using an embedded controller in the CLI, via the start-local command. This spawns a server in the CLI JVM; the CLI communicates with it using the standard REST API, though the port is randomized and the server listens on localhost only. All the benchmarks and run results are stored in /tmp/hyperfoil/ - you can change the directory by passing it as an argument to the start-local command. While the embedded controller might be convenient for a quick test or when developing a scenario, it’s not something you’d use for a full-fledged benchmark.

When testing a reasonably performing system you need multiple nodes driving the load - we call them agents. These agents sync up, receive commands and report statistics to a master node, the controller. This node exposes a RESTful API to upload & start the benchmark, watch its progress and download results.

There are two other scripts in the bin/ directory:

  • standalone.sh starts both the controller and (one) agent in a single JVM. This is not too different from the controller embedded in the CLI.
  • controller.sh starts clustered Vert.x and deploys the controller. Agents are started as needed in different nodes. You’ll see this in the next quickstart.

Also note that it is possible to run Hyperfoil in OpenShift.

Open two terminals; in one terminal start the standalone server and in the second terminal start the CLI.

bin/standalone.sh

and

bin/cli.sh

Then, let’s try to connect to the server (by default running on http://localhost:8090) and upload the single-request benchmark:

# This is the name of the benchmark. It's recommended to keep this in sync with
# the name of this file, adding the extension `.hf.yaml`.
name: single-request
# We must define at least one HTTP target, in this case it becomes a default
# for all HTTP requests.
http:
  host: http://hyperfoil.io
# Simulation consists of phases - potentially independent workloads.
# We'll discuss phases in more detail in the next quickstarts.
phases:
# `example` is the name of the single phase in this benchmark.
- example:
    # `atOnce` with `users: 1` results in running the scenario below just once
    atOnce:
      users: 1
      scenario:
      # The only sequence in this scenario is called `test`.
      - test:
        # In the only step in this sequence we'll do an HTTP GET request
        # to `http://hyperfoil.io/`
        - httpRequest:
            GET: /
            # Inject helpers to make this request synchronous, i.e. keep
            # the sequence blocked until Hyperfoil processes the response.
            sync: true

From the second terminal, the one running the Hyperfoil CLI, issue the following commands:

[hyperfoil@localhost]$ connect
Connected! Server has these agents connected:
* localhost[REGISTERED]

[hyperfoil@localhost]$ upload .../single-request.hf.yaml
Loaded benchmark single-request, uploading...
... done.

[hyperfoil@localhost]$ run single-request
Started run 0001

When you switch to the first terminal (the one running the controller), you can see in the logs that the benchmark definition was stored on the server, the benchmark was executed and its results were stored to disk. By default Hyperfoil stores benchmarks in the directory /tmp/hyperfoil/benchmark and data about runs in /tmp/hyperfoil/run; check it out:

column -t -s , /tmp/hyperfoil/run/0001/stats/total.csv
Phase    Name  Requests  Responses  Mean       Min        p50.0      p90.0      p99.0      p99.9      p99.99     Max        MeanSendTime  ConnFailure  Reset  Timeouts  2xx  3xx  4xx  5xx  Other  Invalid  BlockedCount  BlockedTime  MinSessions  MaxSessions
example  test  1         1          267911168  267386880  268435455  268435455  268435455  268435455  268435455  268435455  2655879       0            0      0         0    1    0    0    0      0        0             0

Reading CSV/JSON files directly is not too comfortable; you can check the details through the CLI as well:

[hyperfoil@localhost]$ stats
Total stats from run 002D
Phase   Sequence  Requests      Mean       p50       p90       p99     p99.9    p99.99    2xx    3xx    4xx    5xx Timeouts Errors
example:
	test:            1 267.91 ms 268.44 ms 268.44 ms 268.44 ms 268.44 ms 268.44 ms      0      1      0      0        0      0

By the time you type the stats command into the CLI the benchmark is already completed and the CLI shows stats for the whole run. Let’s try running the eshop-scale.hf.yaml benchmark we’ve seen in the previous quickstart; this will give us some time to observe on-line statistics as the benchmark is progressing:

podman run --rm -p 8080:8083 quay.io/hyperfoil/hyperfoil-examples
[hyperfoil@localhost]$ upload .../eshop-scale.hf.yaml
Loaded benchmark eshop-scale, uploading...
... done.
[hyperfoil@localhost]$ run eshop-scale
Started run 0002
Run 0002, benchmark eshop-scale
...

Here the console would automatically jump into the status command, displaying the progress of the benchmark online. Press Ctrl+C to cancel that (it won’t stop the benchmark run) and run the stats command:

[hyperfoil@localhost]$ stats
Recent stats from run 0002
Phase   Sequence  Requests      Mean       p50       p90       p99     p99.9    p99.99    2xx    3xx    4xx    5xx Timeouts Errors
buyingUserSteady/000:
        buy:             8   1.64 ms   1.91 ms   3.05 ms   3.05 ms   3.05 ms   3.05 ms      8      0      0      0        0      0
        browse:          8   2.13 ms   2.65 ms   3.00 ms   3.00 ms   3.00 ms   3.00 ms      8      0      0      0        0      0
browsingUserSteady/000:
        browse:          8   2.74 ms   2.69 ms   2.97 ms   2.97 ms   2.97 ms   2.97 ms      8      0      0      0        0      0
Press Ctr+C to stop watching...

You can go back to the run progress using the status command (hint: use status --all to display all phases, including those not started or already terminated):

[hyperfoil@localhost]$ status
Run 0002, benchmark eshop-scale
Agents: localhost[INITIALIZED]
Started: 2019/04/15 16:27:24.526
NAME                    STATUS   STARTED       REMAINING  FINISHED  TOTAL DURATION
browsingUserRampUp/006  RUNNING  16:28:54.565  2477 ms
buyingUserRampUp/006    RUNNING  16:28:54.565  2477 ms
Press Ctrl+C to stop watching...

Since this quickstart runs the controller and CLI on the same machine, it’s easy to fetch results locally from /tmp/hyperfoil/run/XXXX/.... To save you from SSHing into the controller host and finding the directories in a ’true remote’ case there’s the export command; this fetches statistics to the computer where you’re running the CLI. You can choose between the default JSON format (e.g. export 0002 -f json -d /path/to/dir) and CSV format (export 0002 -f csv -d /path/to/dir) - the latter packs all CSV files into a single ZIP file for your convenience.

When you find out that the benchmark is not going well, you can terminate it prematurely:

[hyperfoil@localhost]$ kill
Kill run 0002, benchmark eshop-scale(phases: 2 running, 0 finished, 40 terminated) [y/N]: y
Killed.

In the next quickstart we will deal with starting clustered Hyperfoil.

7 - Clustered mode

Learn how to start the Hyperfoil server in clustered mode

Previously we’ve learned how to start Hyperfoil in standalone server mode and do some runs through the CLI. In this quickstart we’ll see how to run your benchmark distributed across several agent nodes.

Hyperfoil operates as a Vert.x cluster. When a benchmark is started, the controller deploys agents on other nodes according to the benchmark configuration - these are Vert.x nodes, too. Together the controller and agents form a cluster and communicate over the event bus.

In this quickstart we’ll use the SSH deployer; make sure your machine has an SSH server running on port 22 and that you can log in using your public key ~/.ssh/id_rsa. The SSH deployer copies the necessary JARs to /tmp/hyperfoil/agentlib/ and starts the agent there. For instructions on running Hyperfoil in Kubernetes or OpenShift please consult the Installation docs.

When we were running in standalone or local mode we did not have to define any agents in the benchmark. That changes now, as we need to tell the controller where the agents should be deployed. Let’s look at a benchmark, two-agents.hf.yaml, that has those agents defined.

name: two-agents
# List of agents the Controller should deploy
agents:
  # This defines the agent using SSH connection to localhost, port 22
  agent-one: localhost:22
  # Another agent on localhost, this time defined using properties
  agent-two:
    host: localhost
    port: 22
http:
  host: http://localhost:8080
usersPerSec: 10
duration: 10s
scenario:
- test:
  - httpRequest:
      GET: /

The load the benchmark generates is evenly split among the agents, so if you want to use another agent you don’t need to do any calculations - just add the agent (see the sketch below) and you’re good to go.
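
For instance, a sketch of the agents section extended with a hypothetical third agent - agent-three and its host name are invented for illustration:

agents:
  agent-one: localhost:22
  agent-two:
    host: localhost
    port: 22
  # A third agent on a remote host; the generated load would now be split
  # evenly three ways without any other change to the benchmark.
  agent-three:
    host: agent3.example.com
    port: 22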

Open three terminals; in the first start the controller using bin/controller.sh, in the second open the CLI with bin/cli.sh and in the third start the example workload server:

podman run --rm -p 8080:8083 quay.io/hyperfoil/hyperfoil-examples

Connect, upload, start and check out the benchmark using the CLI exactly the same way as we did in the previous quickstart:

[hyperfoil@localhost]$ connect
Connected!

[hyperfoil@localhost]$ upload .../two-agents.hf.yaml
Loaded benchmark two-agents, uploading...
... done.

[hyperfoil@localhost]$ run two-agents
Started run 004A

[hyperfoil@localhost]$ status
Run 004A, benchmark two-agents
Agents: agent-one[STARTING], agent-two[STARTING]
Started: 2019/04/17 17:08:19.703    Terminated: 2019/04/17 17:08:29.729
NAME  STATUS      STARTED       REMAINING  FINISHED      TOTAL DURATION
main  TERMINATED  17:08:19.708             17:08:29.729  10021 ms (exceeded by 21 ms)

[hyperfoil@localhost]$ stats
Total stats from run 004A
Phase   Sequence  Requests      Mean       p50       p90       p99     p99.9    p99.99    2xx    3xx    4xx    5xx Timeouts Errors
main:
	test:          106   3.12 ms   2.83 ms   3.23 ms  19.53 ms  25.30 ms  25.30 ms    106      0      0      0        0      0

You can see that we did 106 requests, which matches the expectation of running 10 user sessions per second over 10 seconds, with the load split across the 2 agents.

Vert.x clustering uses Infinispan and JGroups; depending on your networking setup it might not work out of the box. If you experience any trouble, check out the FAQ.

The next quickstart will get back to the scenario definition; we’ll show you how to extend Hyperfoil with custom steps and handlers.

8 - Custom components

Hyperfoil offers basic steps to issue HTTP requests, generate data, alter control flow in the scenario etc., but your needs may surpass the features implemented so far. Also, it might simply be easier to express your logic in Java code than to combine steps in the YAML. The downside is reduced reusability and a tighter dependency on Hyperfoil APIs.

This quickstart will show you how to extend Hyperfoil with custom steps and handlers. As we use the standard Java ServiceLoader approach, after you build the module you should drop it into the extensions directory. (Note: if you upload benchmarks through the CLI you need to put it on both the machine where you run the CLI and the controller.)

Each extension consists of two classes:

  • the Builder, which is loaded as a service and creates the immutable extension instance
  • the extension itself (a Step, Action or handler)

Let’s start with an io.hyperfoil.api.config.Step implementation. The interface has a single method, invoke(Session), that should return true if the step was executed and false if its execution has been blocked and should be retried later. If the execution is blocked, the invocation must not have any side effects - e.g. if the step is fetching objects from some pools and one of the pools is depleted, it should release the already acquired objects back to their pools.

We’ll create a step that divides a session variable by a (configurable) constant and stores the result in another variable.

Java

public class DivideStep implements Step {
   // All fields in a step are immutable, any state must be stored in the Session
   private final ReadAccess fromVar;
   private final IntAccess toVar;
   private final int divisor;

   public DivideStep(ReadAccess fromVar, IntAccess toVar, int divisor) {
      // Variables in session are not accessed directly using map lookup but
      // through the Access objects. This is necessary as the scenario can use
      // some simple expressions that are parsed when the scenario is built
      // (in this constructor), not at runtime.
      this.fromVar = fromVar;
      this.toVar = toVar;
      this.divisor = divisor;
   }

   @Override
   public boolean invoke(Session session) {
      // This step will block until the variable is set, rather than
      // throwing an error or defaulting the value.
      if (!fromVar.isSet(session)) {
         return false;
      }
      // Session can store either objects or integers. Using int variables is
      // more efficient as it prevents repeated boxing and unboxing.
      int value = fromVar.getInt(session);
      toVar.setInt(session, value / divisor);
      return true;
   }

  ...

Then we need a builder class that will allow us to configure the step. To keep related classes together we will define it as an inner static class:

Java

public class DivideStep implements Step {
  ...

   // Make this builder loadable as a service
   @MetaInfServices(StepBuilder.class)
   // This is the step name that will be used in the YAML
   @Name("divide")
   public static class Builder extends BaseStepBuilder<Builder> implements InitFromParam<Builder> {
      // Contrary to the step, fields in the builder are mutable
      private String fromVar;
      private String toVar;
      private int divisor;

      // Let's permit a short-form definition that will store the result
      // in the same variable. Note that the javadoc @param is used to generate external documentation.

      /**
       * @param param Use myVar /= constant
       */
      @Override
      public Builder init(String param) {
         int divIndex = param.indexOf("/=");
         if (divIndex < 0) {
            throw new BenchmarkDefinitionException("Invalid inline definition: " + param);
         }
         try {
            divisor(Integer.parseInt(param.substring(divIndex + 2).trim()));
         } catch (NumberFormatException e) {
            throw new BenchmarkDefinitionException("Invalid inline definition: " + param, e);
         }
         String var = param.substring(0, divIndex).trim();
         return fromVar(var).toVar(var);
      }

      // All fields are set in fluent setters - this helps when the scenario
      // is defined through programmatic configuration.
      // When parsing YAML the methods are invoked through reflection;
      // the attribute name is used for the method lookup.
      public Builder fromVar(String fromVar) {
         this.fromVar = fromVar;
         return this;
      }

      public Builder toVar(String toVar) {
         this.toVar = toVar;
         return this;
      }

      // The parser can automatically convert primitive types and enums.
      public Builder divisor(int divisor) {
         this.divisor = divisor;
         return this;
      }

      @Override
      public List<Step> build() {
         // You can ignore the sequence parameter; this is used only in steps
         // that require access to the parent sequence at runtime.
         if (fromVar == null || toVar == null || divisor == 0) {
            // Here is a good place to check that the attributes are sane.
            throw new BenchmarkDefinitionException("Missing one of the required attributes!");
         }
         // The builder has a bit more flexibility and it can create more than
         // one step at once.
         return Collections.singletonList(new DivideStep(
               SessionFactory.readAccess(fromVar), SessionFactory.intAccess(toVar), divisor));
      }
   }

  ...

As the comments say, the builder uses fluent setter syntax to set the attributes. When you want to nest attributes under another builder, you can just add a parameter-less method FooBuilder foo() that returns an instance of FooBuilder; the parser will fill in that instance as well. There are some interfaces your builder can implement to accept lists or different structures, but their description is out of scope for this quickstart.
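
To relate this back to the YAML syntax: nested attributes such as the handler section we used earlier in the random-urls benchmark (restated below) are filled in through builders nested in this way:

- httpRequest:
    GET: /quickstarts/random-urls/${my-random-path}
    # `handler` returns a nested builder; its own attributes (`status`,
    # `onCompletion`, ...) are then filled in by the parser the same way.
    handler:
      status:
        range: 2xx
      onCompletion:
        set: completed <- yes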

The builder class has two annotations: @Name, which specifies the name we’ll use for the step in YAML, and @MetaInfServices with StepBuilder.class as the parameter. If you were to implement another type of extension, this would be Action.Builder.class, Request.ProcessorBuilder.class etc. In order to record the service in the META-INF directory of the jar you must also add this dependency to your module:

<dependency>
    <groupId>org.kohsuke.metainf-services</groupId>
    <artifactId>metainf-services</artifactId>
    <optional>true</optional>
</dependency>

The whole class can be inspected here, and it is already included in the extensions directory. You can try running bin/standalone.sh, then upload and run divide.hf.yaml. You should see about 5 log messages in the server log.

# This benchmark demonstrates custom steps
name: divide
http:
  host: http://localhost:8080
usersPerSec: 1
duration: 5s
scenario:
- test:
  - setInt: foo <- 33
  - divide: foo /= 3
  - log:
      message: Foo is {}
      vars:
      - foo

There are several other integration points besides Step:

  • io.hyperfoil.api.session.Action is very similar to a step, but it does not allow blocking. Implement Action.BuilderFactory to define new actions.

  • StatusHandler, HeaderHandler and BodyHandler in io.hyperfoil.api.http package process different stages of HTTP response parsing. All these have BuilderFactory inner interface for you to implement.

  • io.hyperfoil.api.connection.Processor performs the later, generic stages of response processing. As this interface is generic, there are two factories that you can use: i.h.a.c.Request.ProcessorBuilderFactory and i.h.a.c.HttpRequest.ProcessorBuilderFactory.

There is quite a bit of boilerplate code involved in creating a new component; that’s why you can use the Hyperfoil Codegen Maven plugin to scaffold the basic outline for you. Go to the module where you want the component generated and run:

mvn io.hyperfoil:hyperfoil-codegen-maven-plugin:skeleton

The plugin will ask you for the package name, component name and type, and will write out the source code skeleton. You can also provide the parameters directly on the command line, like:

mvn io.hyperfoil:hyperfoil-codegen-maven-plugin:skeleton \
    -Dskeleton.package=foo.bar -Dskeleton.name=myComponent -Dskeleton.type=step

If you add io.hyperfoil as a plugin group to your $HOME/.m2/settings.xml like this:

<settings>
  <pluginGroups>
    <pluginGroup>io.hyperfoil</pluginGroup>
  </pluginGroups>
  ...
</settings>

you can use the short syntax for the generator:

mvn hyperfoil-codegen:skeleton -Dskeleton.name=....

See also further information about custom extensions development.


This is the last quickstart in this series; if you seek more info check out the documentation or talk to us on GitHub Discussions.