Load Properties

Constant Protocol Workloads

The load properties section defines the type of load, number of concurrent workers, rampup, and rampdown durations.

  • Load: Amount of IOs sent to the System Under Test. Three types of load are available:

    • Max: For the configured workload, generate the maximum load possible with the number of concurrent workers. The “maximum load possible” can vary even for the same exact workload and same exact load settings, depending on Test Bed selected and the state of the System Under Test (including lab network).

    • Throughput: For the configured workload, generate up to the specified amount of throughput with the number of concurrent workers.

    • Actions per Second (IOPS): For the configured workload, generate up to the specified amount of Actions per Second (generally known as and generally equivalent to IOPS) with the number of concurrent workers.

  • Worker: An entity that executes the configured workload from start to finish. Depending on how the workload is configured, a worker can represent an application, an application’s process, a VM, a host, a collection of hosts behind an edge device, or more. Essentially, whatever you are looking to simulate with the configured workload, one worker runs one instance of that workload from start to finish. N concurrent workers means N instances of that workload run concurrently. When a worker finishes executing the configured workload from start to finish, and the workload run duration has not yet elapsed, a new worker runs the configured workload from start to finish. Therefore, by the end of a test run you may see that multiple workers have run the workload, even though you set concurrent workers to 1. It is important to specify enough concurrent workers to access the number of LUNs/Volumes with the number of desired users and IP addresses specified in the Test Bed: each worker accesses only one LUN or Volume and uses only one user or IP address.

    • For IP-based workloads, “start” is usually observable on the wire, marked by a new TCP connection attempt, and “close” is usually marked by the closing of that TCP connection.

      Note

      NFS can have multiple new TCP connections back to back at the start of a workload from each worker. For example, an NFSv3 worker will first open and close new TCP connections for Portmapper and Mount, before opening a new TCP connection for NFSv3.

    • For FC-based workloads, “start” is usually observable on the wire, marked by a Test Unit Ready command, but “close” is not easily observable on the wire since there is no “SCSI layer connection”.

  • Rampup and Rampdown: Rampup specifies the time taken to reach the specified load parameters, and Rampdown specifies the time taken to reduce the specified load to zero. Edit the text boxes to change the Rampup and Rampdown values. To change the time unit from the default of seconds to minutes, hours, or days, click the down arrow and select a new value.

    2019-11-14_14-38-24.png


This example generates the maximum load possible for the configured workload with a single concurrent worker. Rampup and Rampdown have no effect in this example because the load type is “Max” and concurrent workers = 1.

To use a different load type, select the down arrow next to the current value and choose a different load measurement method.

2019-11-14_14-39-28.png

In this example, WorkloadWisdom attempts to generate up to 10,000 new Actions per Second (essentially IOPS) across 10 concurrent workers, which means 1,000 new IOPS per concurrent worker. A total of 10 seconds is used to ramp evenly from 0 to the 10,000 new IOPS across the 10 concurrent workers, which means that every second, an additional 1,000 new IOPS are attempted from the 10 concurrent workers. During the test run, the Results Dashboard shows the following ramp for this configuration:

2019-11-14_14-40-14.png
2019-11-14_14-41-15.png

IOPS and Workers ramp up behaviors for the Load Properties settings example
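The ramp-up arithmetic above can be sketched as follows. This is a hypothetical helper for reasoning about the targets, not a product API: it computes the per-second total and per-worker IOPS targets for an even linear ramp.

```python
# Hypothetical sketch (not a WorkloadWisdom API): per-second IOPS targets
# for an even linear ramp-up from 0 to the configured load.

def rampup_targets(total_iops, workers, rampup_seconds):
    """Return (second, total target IOPS, per-worker target IOPS) tuples
    for an even linear ramp from 0 to total_iops."""
    step = total_iops / rampup_seconds  # IOPS added each second
    return [
        (s, step * s, step * s / workers)
        for s in range(1, rampup_seconds + 1)
    ]

# The example settings from this section: 10,000 IOPS, 10 workers, 10 s ramp-up.
for second, total, per_worker in rampup_targets(10_000, 10, 10):
    print(f"t={second:2d}s  target={total:8,.0f} IOPS  per worker={per_worker:6,.0f}")
# After 1 s the target is 1,000 IOPS (100 per worker); after 10 s it is
# 10,000 IOPS (1,000 per worker).
```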

However, whether a perfect Rampup can be achieved also depends on the SUT’s performance, as well as the Test Bed used. For example, using the same configuration, here is a run showing an SUT that cannot sustain the Rampup, so it is important to set a Rampup value that makes sense for your test environment.

2019-11-14_14-43-33.png

The concept is similar for Throughput. Instead of using Actions per Second as the load target, it uses Throughput as the load target.

Temporal Protocol Workloads

The temporal aspect of the load is automatically generated from the imported production workload data (via the Workload Data Importer). What you can set is a group of related parameters that essentially let you “scale up” and “scale down” the imported production workload data.

2019-11-14_14-48-09.png

In this example:

  • Concurrent workers: Acquired, 6. The value 6 is set automatically based on the analysis of the imported production workload data. To understand where this value comes from, review the Workload Components View tab in the output of your imported production workload data CSV file (from the Workload Data Importer feature). You can change the default value; when you do, the Acquired label changes to Reset to Acquired to indicate that the current value is no longer the default.

  • Simultaneous Reads / Writes: See Asynchronous I/Os section.

  • Scale load value by: This scales the amplitude of the temporal load profile. For example, if the default temporal load profile has 1,000 IOPS in the first minute, 1,500 IOPS in the second minute, and 1,100 IOPS in the third minute, and you set this value to 2.0, then the generated temporal load profile will have 2,000 IOPS in the first minute, 3,000 IOPS in the second minute, and 2,200 IOPS in the third minute.
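The scale factor described above is a straightforward multiplication of each interval’s load target. A minimal sketch (hypothetical helper, not a product API):

```python
# Hypothetical sketch (not a WorkloadWisdom API): applying
# "Scale load value by" to a temporal load profile.

def scale_profile(iops_per_minute, factor):
    """Multiply each minute's IOPS target by the scale factor."""
    return [iops * factor for iops in iops_per_minute]

# The example profile from this section: IOPS in minutes 1..3.
profile = [1_000, 1_500, 1_100]
print(scale_profile(profile, 2.0))  # → [2000.0, 3000.0, 2200.0]
```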

It is important to note that depending on the default temporal load and the test environment you have, it might not be possible to generate the load you specify with just one Workload Generator Port. For example, if the default temporal load profile has 100,000 IOPS and the workload already contains some large block sizes, then it simply might not be possible to generate 1,000,000 IOPS if you set the scale factor to 10.0. Conversely, it is also possible that there are not enough resources to recreate the expected load. For example, if you selected a time slice of a temporal workload that is doing 200 IOPS, and then set concurrent workers = 100, Simultaneous Reads / Writes = 1, and scale load value = 100, each worker would need to send 200 IOPS. However, with this setting, each worker can only have 1 Read and 1 Write outstanding at a time, which may be insufficient.

A general best practice is to allow for a long enough workload run time, and to scale up / down proportionally across concurrent workers, Simultaneous Reads / Writes, and scale load value.