Access Pattern

The access pattern has two configurable parameters: the Command Descriptor Block (CDB) length and the read/write ratio.

The CDB length defines the size of the command header block. You can set the CDB length to 6, 10, 12, or 16 bytes. The default is 10 bytes, which is still observed to be the most common. However, if you are running the workload on a Test Bed that includes a LUN larger than 2 TB, you must use CDB 16, because a 10-byte CDB carries a 32-bit LBA and, with 512-byte blocks, cannot address beyond 2 TB. To change the CDB Length value, click the CDB Length drop-down menu next to the current value and select from the list.
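The 2 TB boundary follows from the LBA field width in each CDB format. The following back-of-the-envelope sketch (the LBA widths are from the SCSI READ/WRITE command formats; the 512-byte block size is an assumption) shows the maximum addressable capacity per CDB length:

```python
# LBA field width per SCSI READ/WRITE CDB length, assuming 512-byte blocks.
LBA_BITS = {6: 21, 10: 32, 12: 32, 16: 64}
BLOCK = 512

for cdb, bits in LBA_BITS.items():
    max_bytes = (2 ** bits) * BLOCK
    print(f"CDB {cdb:2}: max addressable = {max_bytes / 2**40:.2f} TiB")
```

A 10-byte (or 12-byte) CDB tops out at exactly 2 TiB, which is why larger LUNs require CDB 16.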

To change the read/write ratio, drag the slider to the right to increase the read percentage or to the left to increase the write percentage.

Configuration options for writes and reads are described in the Creating a New Workload Test section.

You can specify the I/O region of the LUN using absolute values or a percentage of the LUN for the overall size and offset. To change the method, select the drop-down menu next to the current value.

If you specify an absolute value in bytes, kilobytes (KB), megabytes (MB), or gigabytes (GB), only a region of that size, starting at the specified offset, is used. To change the values, edit the offset and region sizes or percentages in the text boxes.

If you specify a percentage, the starting offset and the overall region are calculated by the workload from the LUN size.
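A minimal sketch of that percentage-to-absolute calculation, assuming a straightforward proportional mapping (the function name and parameters are illustrative, not part of the product):

```python
def io_region(lun_size_bytes, offset_pct, region_pct):
    """Map a percentage-based offset and region size to absolute bytes."""
    offset = lun_size_bytes * offset_pct // 100
    size = lun_size_bytes * region_pct // 100
    return offset, size

# A 10% offset with a 50% region on a 2 TiB LUN:
lun = 2 * 1024 ** 4
print(io_region(lun, 10, 50))
```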

If you want to use the same parameters for reads and writes, check the Use the same parameters as in Writes checkbox in the Reads section.

The fixed number of Asynchronous I/Os parameter specifies how many concurrently outstanding I/O requests can be sent for Reads and for Writes, respectively, for each worker on each Test Bed Link. For example, if you set it to 8 for Writes and 6 for Reads, then each worker on each Test Bed Link can have up to 8 concurrent asynchronous Writes and up to 6 concurrent asynchronous Reads, for a total of up to 14 concurrent asynchronous I/Os. With 2 concurrent workers and 1 Test Bed Link, you can have up to 28 concurrent asynchronous I/Os. With 2 concurrent workers and 2 Test Bed Links, you can have up to 56 in total, but no more than 28 per Link.
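The arithmetic above can be sketched as follows (values are from the example in the text; the function is illustrative):

```python
def max_outstanding(write_aio, read_aio, workers, links):
    """Compute outstanding-I/O limits per worker/link, per link, and total."""
    per_worker_per_link = write_aio + read_aio
    per_link = per_worker_per_link * workers
    total = per_link * links
    return per_worker_per_link, per_link, total

print(max_outstanding(8, 6, 2, 1))  # (14, 28, 28)
print(max_outstanding(8, 6, 2, 2))  # (14, 28, 56)
```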

Data Parameters

Block constant workloads support all WorkloadWisdom data parameters options, including the deduplication option of data reduction.

The data deduplication functionality produces unique data content, plus duplicates of that content, in proportions that should yield the user-configured deduplication ratio. This is intermixed with data content that includes a block of zeros, which most algorithms can compress to yield the user-configured compression ratio. A deduplication ratio of 3:1 indicates that for every 3 blocks written, only 1 is unique. Similarly, a 3:1 compression ratio indicates that for every 3 bytes of data, on average 1 is non-zero; the zeros are typically grouped together within the block. Both ratios are rounded to 1 decimal place. The number of unique duplicates determines the size of the pool of reused blocks making up the duplicated data.

The following deduplication and compression settings control how many duplicate data patterns are used for block workloads:

• Dedup Ratio. Ratio of data patterns to be deduplicated versus data patterns that will not be deduplicated.

• Number of unique duplicates. Number of unique data patterns in a “pool” of data patterns that are used to draw existing data patterns from to generate duplicated data.

• Compression Ratio. Generates a data pattern that causes the system under test to yield the compression ratio that you specify.

When the workload is running, the generated data pattern is either completely unique (no repeatability) or drawn from a pool of unique, but repeatable data patterns (seeded random sequence) that has been “seen” before to yield the specified Dedup Ratio.

For example, if the Number of Unique Duplicates is set to 100, a pool of one hundred unique but repeatable data patterns is generated. Throughout the workload test run, whenever a deduplicable data pattern is needed, it is drawn from this pool alongside completely unique data patterns. If you are not sure how many unique duplicates to use, keep the default value of 100.
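The pool-drawing behavior can be sketched as follows. This is an illustrative model under stated assumptions (1 unique block per `dedup_ratio` blocks, a seeded pool of repeatable patterns), not the product's actual generator:

```python
import random

def make_blocks(n_blocks, dedup_ratio=3, pool_size=100, block_size=16, seed=42):
    """Generate blocks: 1 in every `dedup_ratio` is unique, the rest
    are drawn from a fixed pool of repeatable 'duplicate' patterns."""
    rng = random.Random(seed)
    # Seeded pool: the same duplicate patterns recur across runs.
    pool = [rng.randbytes(block_size) for _ in range(pool_size)]
    blocks = []
    for i in range(n_blocks):
        if i % dedup_ratio == 0:
            blocks.append(random.randbytes(block_size))  # completely unique
        else:
            blocks.append(rng.choice(pool))              # duplicate from pool
    return blocks, pool

blocks, pool = make_blocks(300)
dup_count = sum(1 for b in blocks if b in pool)
print(f"{len(blocks)} blocks, {dup_count} drawn from the duplicate pool")
```

With a 3:1 ratio, roughly two-thirds of the generated blocks come from the duplicate pool.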

Pre-test parameters

A pre-test sets up the testing environment and normally includes preparing the file or object system and LUN so the workload can be run against it. The pre-test section specifies when or if the pre-test runs. For block workloads, it is recommended to set Do not run pre-test and use a preconditioning workload instead. See Preconditioning.

The pre-test setting is different for block workloads. Specify the LUN region offset and I/O region; these operate the same way as described in Writes and Reads. You can also specify a block size that is used only during the pre-test, which may help speed up the preconditioning process. An additional parameter lets you repeat the action for each LUN in the test bed.

Fibre Channel

The workload for Fibre Channel supports Multi-Path IO (MPIO). This enables you to test workload load balancing or recovery from a failure scenario.

MPIO

MPIO is only available when enabled on the test bed on which the workload is running. There are three MPIO algorithms currently supported:

• Fail over only. Redirects traffic to an operational port when the active port fails

• Round robin. Distributes the load across all ports participating in MPIO

• Least queue depth. Uses the least busy port to send traffic
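The three policies above can be sketched as simple path-selection functions. This is a toy model; the `Port` class and function names are illustrative assumptions, not product internals:

```python
from itertools import count

class Port:
    def __init__(self, name):
        self.name, self.up, self.queue_depth = name, True, 0

def failover_only(ports):
    # Stay on the first operational port; move only when it fails.
    return next(p for p in ports if p.up)

_rr = count()
def round_robin(ports):
    # Rotate across all operational ports participating in MPIO.
    live = [p for p in ports if p.up]
    return live[next(_rr) % len(live)]

def least_queue_depth(ports):
    # Pick the operational port with the fewest outstanding commands.
    return min((p for p in ports if p.up), key=lambda p: p.queue_depth)

ports = [Port("fc0"), Port("fc1"), Port("fc2")]
ports[0].up = False          # fc0 has failed
ports[1].queue_depth = 5     # fc1 is the busiest surviving port
print(failover_only(ports).name)       # fc1: first port still operational
print(least_queue_depth(ports).name)   # fc2: lowest queue depth
```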

To change the MPIO algorithm, select from the drop-down menu next to the current value.

You can enable Asymmetric Logical Unit Access (ALUA) reconfiguration by selecting the Enable ALUA Reconfiguration checkbox.

Run it on

Run it on is slightly different for FC workloads than for other protocol workloads. It provides the ability to set the client port queue depth. You can use this to determine the maximum number of commands that should ever be outstanding on a storage port at any one time, or to find the ideal port queue depth setting for your OS.

The FC Client Port Max Queue Depth is a per-port limit on the maximum number of commands allowed onto the network at any point in time. This limit is applied at the Workload Generator port and holds regardless of MPIO configuration and the number of links in the test bed from that port.

Latency is tracked only for commands that have been released onto the network, so the storage environment is not artificially penalized for commands it has not yet been given to work on. This means you do not have to worry about setting the number of concurrent workers or asynchronous commands too high; you can use the FC Client Port Max Queue Depth to control the queueing.
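One way to picture this behavior is a per-port counting semaphore: commands beyond the cap wait at the generator port, and the latency clock starts only once a command is released. This is an assumed model of the setting, not the product's implementation:

```python
import threading

class PortQueueLimiter:
    """Cap the number of commands outstanding on a port at once."""

    def __init__(self, max_queue_depth):
        self._slots = threading.Semaphore(max_queue_depth)

    def send(self, command, submit):
        self._slots.acquire()          # queue at the port; this wait is not timed
        try:
            return submit(command)     # latency clock would start here
        finally:
            self._slots.release()      # free the slot for the next command

limiter = PortQueueLimiter(max_queue_depth=2)
sent = []
for cmd in ["READ", "WRITE", "READ"]:
    limiter.send(cmd, sent.append)
print(sent)  # all three commands go out, at most 2 outstanding at a time
```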

iSCSI

The iSCSI workload model is identical to the Fibre Channel model, with two exceptions:

• iSCSI workload model does not support MPIO

• iSCSI workload model does not support the Max Queue Depth setting