
Common to All File Protocol Workloads

The following sections are common to all file protocol workloads:

Access Pattern

The access pattern can be specified in two ways: a simple configuration (below) that specifies the read/write ratio and the data/metadata ratio, or a specific command distribution. The simple configuration is common to all file workloads; the command distribution is specific to each protocol.

2019-11-14_15-50-08.png

To change a ratio, move the slider to the desired percentage value. If you move the read/write slider to the left, the read percentage decreases.

2019-11-14_15-52-14.png
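As a rough illustration of what the two sliders define, the sketch below represents the simple configuration as a read/write percentage and a data/metadata percentage and combines them into an operation mix. The multiplicative combination is an assumption made for illustration only, not a statement of how WorkloadWisdom applies the two ratios internally.

```python
# Minimal sketch of the simple access-pattern configuration.
# The multiplicative combination below is an illustrative assumption.

read_pct = 70        # read/write slider: 70% reads, 30% writes
data_pct = 80        # data/metadata slider: 80% data ops, 20% metadata ops

write_pct = 100 - read_pct
metadata_pct = 100 - data_pct

# Assuming the two ratios are applied independently, the resulting mix would be:
mix = {
    "data reads":      read_pct  * data_pct     / 100,   # 56%
    "data writes":     write_pct * data_pct     / 100,   # 24%
    "metadata reads":  read_pct  * metadata_pct / 100,   # 14%
    "metadata writes": write_pct * metadata_pct / 100,   #  6%
}

for op, pct in mix.items():
    print(f"{op}: {pct:.0f}%")
```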

Simple Configuration is the default. To change to a specific command distribution, which lets you individually control each supported command for the protocol, select the down arrow and then select Commands Distribution. The sliders change to:

2019-11-14_15-54-22.png

File System

A file protocol workload specifies the file system that it runs against.

Use the file system section to create a file system based on a flat or tree hierarchy. To change the value, select the down arrow next to the current value.

2019-11-14_15-55-17.png

Use flat hierarchy to specify the number of files to be created in a single top-level root folder, defined by the Root folder(s) name. If you leave the name blank, the files are created on the default share specified in the test bed, without any folders.

2019-11-14_15-56-55.png

If you want multiple top-level root folders on the share, select the I want to create multiple root folders on the share location checkbox. Additional options appear: the Number of root folders (all at the same top level on the share, not nested) and a Postfix that makes it easier to identify the folders and files created by this Workload Test.

2019-11-14_16-00-53.png

Note

The multiple top-level root folders option is available only for SMB protocol workloads.

Finally, the Sample path to file field provides a read-only preview of how your files will be created based on the current configuration.

2019-11-14_16-04-26.png
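The sketch below is a hypothetical illustration of how multiple root folders with a postfix might be laid out on a share. The share path, folder-naming scheme, and file name are assumptions made for illustration; the Sample path to file field in the UI shows the actual format used.

```python
# Hypothetical layout of flat-hierarchy root folders on a share.
# Names and path format are illustrative assumptions only.

share = "//filer01/testshare"   # hypothetical share from the test bed
root_name = "wlw_root"          # Root folder(s) name
num_roots = 3                   # Number of root folders (top level, not nested)
postfix = "_run1"               # Postfix to help identify this workload test

for i in range(1, num_roots + 1):
    folder = f"{root_name}{i}{postfix}"
    print(f"{share}/{folder}/file0001.dat")
```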

Use tree hierarchy to specify the depth (levels) and breadth (subfolders per folder) of the file system, and the number of files per folder. The total number of folders and files is displayed automatically as you modify any of the values. As with the flat hierarchy, you can specify a specific share or leave it blank, in which case the workload runs against the default share specified in the test bed.

2019-11-14_16-02-13.png
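The totals the UI displays can be reasoned about as in the following sketch, which assumes every folder contains the configured number of subfolders and files. The exact counting rules (for example, whether files are also placed at intermediate levels or at the root) may differ in WorkloadWisdom.

```python
# Minimal sketch of the tree-hierarchy totals, under the stated assumptions.

depth = 3             # levels of subfolders
breadth = 4           # subfolders per folder
files_per_folder = 10

# Folders per level: breadth, breadth**2, ..., breadth**depth
total_folders = sum(breadth ** level for level in range(1, depth + 1))
total_files = total_folders * files_per_folder

print(f"Total folders: {total_folders}")   # 4 + 16 + 64 = 84
print(f"Total files:   {total_files}")     # 84 * 10 = 840
```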

In addition to specifying the file system structure, you can specify the size of the files in the file system using a constant (fixed) size, a random distribution of file sizes, or a distribution based on a set of file size bins. The expected size of the file system, or the range of possible sizes, is shown based on the file system structure above and the file size settings.

You can specify constant file sizes in bytes, kilobytes (KB), megabytes (MB) or gigabytes (GB).

2019-11-14_16-07-52.png

Constant file size is the default. You can select the file size type from the drop-down list.

Use random file size distribution to create files with sizes randomly distributed between the first value (minimum size) and the second value (maximum size). You can specify the random file size in bytes, kilobytes (KB), megabytes (MB) or gigabytes (GB).

2019-11-14_16-21-08.png
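The following sketch illustrates the difference between a constant size and a random size drawn between a minimum and a maximum. The uniform distribution and the unit handling are assumptions for illustration, not a description of WorkloadWisdom's internal sampling.

```python
import random

# Minimal sketch of constant vs. random file sizing (illustrative assumptions).

KB, MB, GB = 1024, 1024**2, 1024**3

# Constant (fixed) size: every file is exactly this many bytes.
constant_size = 4 * MB

# Random size: drawn uniformly between a minimum and a maximum.
min_size, max_size = 64 * KB, 2 * MB
random_size = random.randint(min_size, max_size)

print(f"constant file size: {constant_size} bytes")
print(f"random file size:   {random_size} bytes")
```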

You can specify up to eight custom bins. You can change the ratio of size distributions by moving the sliders up to increase a value or down to decrease it. You can set the slider maximum by selecting the maximum percentage value the sliders can have, in the top right-hand corner of the bin distribution section.

2019-11-14_16-23-04.png

You can remove the default bins and replace them with your own custom bins by clicking +Add Bin. You can specify each size range by filling in the start (From) and end (To) sizes in bytes, kilobytes (KB), megabytes (MB) or gigabytes (GB). However, you cannot specify bins with overlapping file size ranges. When this occurs, the Add Bin button is grayed out and an error message displays when you hover over it.

2019-11-14_16-24-05.png
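The sketch below illustrates the idea behind the bin distribution: each bin is a size range with a percentage weight, the ranges must not overlap, and the percentages determine how often each range is used. The overlap check, the requirement that percentages total 100, and the uniform draw within a bin are illustrative assumptions rather than WorkloadWisdom's exact validation rules.

```python
import random

# Minimal sketch of a file-size bin distribution (illustrative assumptions).

KB, MB = 1024, 1024**2

bins = [  # (from_bytes, to_bytes, percent)
    (4 * KB,   64 * KB,  50),
    (128 * KB, 1 * MB,   30),
    (2 * MB,   16 * MB,  20),
]

def overlaps(bins):
    """Return True if any two bins overlap in file size."""
    ordered = sorted(bins)
    return any(a_to > b_from
               for (_, a_to, _), (b_from, _, _) in zip(ordered, ordered[1:]))

assert not overlaps(bins), "bins must not overlap"
assert sum(p for _, _, p in bins) == 100, "percentages should total 100"

# Pick a bin according to its weight, then a size within that bin.
frm, to, _ = random.choices(bins, weights=[p for _, _, p in bins])[0]
print(f"sampled file size: {random.randint(frm, to)} bytes")
```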

You can also specify the approach to file names by choosing either user-defined names or sequentially generated names for files and directories.

Use user-defined names to specify a prefix text string, concatenated with a sequence of automatically generated symbols, and optionally a postfix text string. The values you specify can exceed the length of the text boxes. The generated symbols are not editable.

2019-11-14_16-26-14.png
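As a hypothetical example of user-defined naming, the sketch below builds names from a prefix, a run of generated symbols (here, a zero-padded counter), and a postfix. The actual symbols WorkloadWisdom generates, and their format, may differ.

```python
import itertools

# Hypothetical sketch of user-defined naming: prefix + generated symbols + postfix.
# The zero-padded counter stands in for the non-editable generated symbols.

prefix, postfix = "proj_", "_v1"

def names(prefix, postfix, width=6):
    for n in itertools.count(1):
        yield f"{prefix}{n:0{width}d}{postfix}"

gen = names(prefix, postfix)
print([next(gen) for _ in range(3)])
# ['proj_000001_v1', 'proj_000002_v1', 'proj_000003_v1']
```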

Use sequentially-generated names if you do not want to specify a specific text string for file and directory names.

2019-11-14_16-33-20.png

Writes and Reads

Use the writes and reads section to configure the block sizes, the percentage of random versus sequential behavior, and the number of asynchronous I/Os.
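A minimal sketch of the parameters this section controls is shown below; the field names are illustrative placeholders, not WorkloadWisdom's exact setting names.

```python
# Illustrative placeholder names for the writes-and-reads parameters.
writes_and_reads = {
    "block_size_bytes": 64 * 1024,   # I/O block size
    "random_percent": 70,            # 70% random, 30% sequential
    "async_ios": 8,                  # number of outstanding asynchronous I/Os
}
```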

Data Parameters

File workloads support both the data deduplication and data compression features that are available under Data Parameters.

If you are testing with data deduplication, WorkloadWisdom generates the unique files and the copies of those files that are required to emulate data deduplication. The number and types of files generated depend on the deduplication ratio you select. For example, a deduplication ratio of 3:1 indicates that for every 3 files written, only 1 is unique.
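The arithmetic behind the 3:1 example can be sketched as follows; how WorkloadWisdom actually interleaves unique files and copies during the run is not described here.

```python
# Minimal sketch of the 3:1 deduplication arithmetic: for every 3 files
# written, only 1 is unique.

files_written = 300
dedup_ratio = 3.0            # 3:1

unique_files = round(files_written / dedup_ratio)   # 100 unique files
duplicate_files = files_written - unique_files      # 200 copies of those files

print(unique_files, duplicate_files)   # 100 200
```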

For data compression, WorkloadWisdom generates data content that includes a block of 0s. Each file uses a specific data pattern to emulate data compression and yield the user-configured compression ratio. For example, a compression ratio of 3:1 indicates that for every 3 bytes of data, on average 1 is non-zero. The zeros are typically grouped together within the block.
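The sketch below illustrates one way such a pattern could be built for a 3:1 ratio, with roughly one third of each block non-zero and the zeros grouped together. It is an illustration of the described behavior, not the exact pattern WorkloadWisdom writes.

```python
import os

# Illustrative compressible block for a 3:1 ratio: ~1 of every 3 bytes is
# non-zero, with the zeros grouped together at the end of the block.

def compressible_block(block_size=4096, compression_ratio=3.0):
    nonzero = int(block_size / compression_ratio)              # ~1/3 of the block
    return os.urandom(nonzero) + bytes(block_size - nonzero)   # zeros grouped together

block = compressible_block()
print(len(block), sum(1 for b in block if b != 0))   # 4096, roughly 1365 non-zero bytes
```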

The data deduplication and compression ratios are both rounded to 1 decimal place.

The following settings are available for deduplication and compression:

  • Dedup Ratio. Generates unique files and copies of each unique file over the course of the workload test run, causing the system under test to yield the Dedup Ratio that you specify.

  • Compression Ratio. Generates a data pattern for each file. The generated data pattern causes the system under test to yield the compression ratio that you specify.

2019-11-14_16-52-15.png

It is recommended that you use at least one hundred files in the workload test, because with a small number of files some deduplication ratios are not possible. For example, it is not possible to achieve a deduplication ratio of 50% with three files.

Pre-Test

For file workloads, the Recommended concurrent workers value is automatically set for you based on your workload’s file system configuration, to optimize the file system creation time. You can change the recommended value to a lower number, but you cannot change it to a higher number. This is because there will be one worker dedicated to each folder.

2019-11-14_16-53-13.png

Therefore, if you are creating a file system with only one folder, or no folder at all (for example, a flat file system with no root folders), this value must be 1. If you are creating a file system with 50 folders, this value can be 1 to 50. If you are creating a file system with 200 folders, this value can be 1 to 100, because 100 is the current maximum value.
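The rule described above reduces to the following sketch: one worker per folder, at least 1, capped at the current maximum of 100.

```python
# Recommended concurrent workers: one worker per folder, minimum 1,
# capped at the current maximum of 100.

MAX_WORKERS = 100

def recommended_workers(num_folders: int) -> int:
    return max(1, min(num_folders, MAX_WORKERS))

print(recommended_workers(0))     # 1   (flat file system with no root folders)
print(recommended_workers(50))    # 50
print(recommended_workers(200))   # 100 (capped at the maximum)
```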