Test Specification

This file lists all test cases’ specifications.

class Nested
Description:
This test suite is used to run nested vm related tests.
Platform:

Azure, Ready

Area:

nested

Category:

functional

verify_nested_kvm_basic()
Description:
This test case runs basic tests on the provisioned L2 VM.
Steps:
1. Create an L2 VM with QEMU.
2. Verify that files can be copied from the L1 VM to the L2 VM.
3. Verify that files from the internet can be downloaded to the L2 VM.
Priority:

1

Requirement:

supported_features[NestedVirtualization]

class VirtualClient
Description:
This test suite runs the performance test cases with Virtual Client.
Platform:

Azure, Ready

Area:

virtual_client

Category:

performance

perf_vc_postgresql()
Description:
This test is to run PostgreSQL workload testing with Virtual Client.
Priority:

3

Requirement:

disk

perf_vc_redis()
Description:
This test is to run Redis workload testing with Virtual Client.
Priority:

2

Requirement:

min_count=2

class Drm
Description:
This test suite is used to verify DRM driver sanity.
Platform:

Azure, Ready

Area:

drm

Category:

functional

verify_no_error_output()
Description:
This case checks that the error fixed by a kernel patch does not reappear.
Steps:
1. Get dmesg output.
2. Check that no 'Unable to send packet via vmbus' message shows up in the
dmesg output.
Priority:

2

Requirement:

supported_features
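
For reference, a minimal sketch of this check (not the suite's actual
implementation), assuming the test user can read the kernel log:

    import subprocess

    # Read the kernel log and fail if the known vmbus send error appears.
    dmesg = subprocess.run(["dmesg"], capture_output=True, text=True, check=True).stdout
    assert "Unable to send packet via vmbus" not in dmesg, "vmbus send error found in dmesg"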

verify_connection_status()
Description:
This case is to check the connector status using the modetest utility for drm.
Steps:
1. Install the modetest tool.
2. Verify the status returned from modetest is connected.
Priority:

2

Requirement:

supported_features

verify_drm_driver()
Description:
This case is to check whether the hyperv_drm driver is registered successfully.
Once the driver is registered successfully, it should appear in lsmod output.
Steps:
1. Run lsmod.
2. Check if hyperv_drm exists in the list.
Priority:

2

Requirement:

supported_features

verify_dri_node()
Description:
This case is to check whether the dri node is populated correctly.
If the hyperv_drm driver is bound correctly, it should populate the dri node.
This dri node can be found at the following sysfs entry: /sys/kernel/debug/dri/0.
The dri node name (/sys/kernel/debug/dri/0/name) should contain hyperv_drm.
Steps:
1. Cat /sys/kernel/debug/dri/0/name.
2. Verify it contains the hyperv_drm string.
Priority:

2

Requirement:

supported_features
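
For reference, a minimal sketch of the check above; reading this debugfs
path typically requires root:

    from pathlib import Path

    # The dri node name should contain the hyperv_drm driver name.
    name = Path("/sys/kernel/debug/dri/0/name").read_text()
    assert "hyperv_drm" in name, f"unexpected dri node name: {name!r}"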

class Fips
Description:
Tests the functionality of enabling FIPS.
Platform:

Azure, Ready

Area:

security

Category:

functional

verify_fips_enable()
Description:
This test case will
1. Check whether FIPS can be enabled on the VM
2. Enable FIPS
3. Restart the VM for the changes to take effect
4. Verify that FIPS was enabled properly
Priority:

3
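
One common way to verify step 4 on Linux is reading the kernel's FIPS flag;
this is a hedged sketch, as the spec does not state which check the test uses:

    from pathlib import Path

    # 1 means the kernel is running in FIPS mode.
    fips = Path("/proc/sys/crypto/fips_enabled").read_text().strip()
    assert fips == "1", "FIPS mode is not enabled"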

class MshvHostTestSuite
Description:
This test suite contains tests that should be run on the
Microsoft Hypervisor (MSHV) root partition, and checks the health of the
mshv root node.
Platform:

Azure, Ready

Area:

mshv

Category:

functional

verify_mshvlog_is_active()
Description:
With mshv_diag module loaded, ensure mshvlog.service starts and runs
successfully on MSHV root partitions. Also confirm there are no errors
reported by mshv_diag module in dmesg.
Priority:

4

class MshvHostInstallSuite
Description:
This test suite tests that the VM works well after updating Microsoft
Hypervisor (MSHV) components on the VM and rebooting.
Platform:

Azure, Ready

Area:

mshv

Category:

functional

verify_mshv_install_succeeds()
Description:
This test case will
1. Update to new MSHV components over old ones in a
pre-configured MSHV image
2. Reboot VM, check that mshv comes up
The test expects the directory containing MSHV binaries to be passed in
the mshv_binpath variable.
Priority:

2

class MshvHostStressTestSuite
Description:
This test suite contains tests that are meant to be run on the
Microsoft Hypervisor (MSHV) root partition.
Platform:

Azure, Ready

Area:

mshv

Category:

stress

stress_mshv_vm_create()
Description:
Stress the MSHV virt stack by repeatedly creating and destroying
multiple VMs in parallel. By default creates VMs with 1 vCPU and
1 GiB of RAM each. The number of VMs created is equal to the number of
CPUs available on the host. By default, the test is repeated 25
times. All of these can be configured via the variable
"mshv_vm_create_stress_configs" in the runbook.
Priority:

4

class TvmTest
Description:
This test suite is to validate secureboot in Linux VM.
Platform:

Azure, Ready

Area:

tvm

Category:

functional

verify_secureboot_compatibility()
Description:
This case tests that the image is compatible with Secure Boot.
Steps:
1. Enable repository azurecore from https://packages.microsoft.com/repos.
2. Install package azure-security.
3. Check image Secure Boot compatibility from output of sbinfo.
Priority:

2

Requirement:

supported_features

verify_measuredboot_compatibility()
Description:
This case tests that the image is compatible with Measured Boot.
Steps:
1. Enable repository azurecore from https://packages.microsoft.com/repos.
2. Install package azure-compatscanner.
3. Check image Measured Boot compatibility from output of mbinfo.
Priority:

2

Requirement:

supported_features

class LinuxPatchExtensionBVT
Description:
Test for Linux Patch Extension
Platform:

Azure, Ready

Area:

vm_extension

Category:

functional

verify_vm_install_patches()
Description:
Verify the walinuxagent or waagent service is running on the VM. Perform
install patches to trigger Microsoft.CPlat.Core.LinuxPatchExtension
creation in the VM.
Verify the status file response for validity.
Priority:

3

verify_vm_assess_patches()
Description:
Verify the walinuxagent or waagent service is running on the VM. Perform
assess patches to trigger Microsoft.CPlat.Core.LinuxPatchExtension creation
in the VM. Verify the status file response for validity.
Priority:

1

class AzSecPack
Description:
BVT for Azure Security Pack.
Azure Security Pack includes core security features that provide security logging
and monitoring coverage for the service.
This test suite validates that the Azure security pack can be installed and
uninstalled successfully, and checks if the autoconfig is configured successfully.
This test requires that your subscription is within AutoConfig scope. It manually
enables the AzSecPack AutoConfig on the Linux VM. The steps are:
1. Add a resource tag for AzSecPack
2. Create and assign a user-assigned managed identity AzSecPack to the VM
3. Add Azure VM extensions for AMA and ASA to the VM
If the subscription is within AutoConfig scope, the AutoConfig onboarding method
is recommended. It requires adding a resource tag for AzSecPack and creating and
assigning the UserAssigned Managed Identity AzSecPack AutoConfig to the ARM
resources.
Platform:

Azure, Ready

Area:

vm_extension

Category:

functional

verify_azsecpack()
Description:
Verify whether the Azure security pack can be installed and uninstalled
successfully, and check if the autoconfig is configured successfully.
Priority:

1

Requirement:

unsupported_os[BSD]

class AzureMonitorAgentLinuxExtension
Description:
Tests for the Azure Monitor Agent Linux VM Extension
Platform:

Azure, Ready

Area:

vm_extension

Category:

functional

verify_azuremonitoragent_linux()
Description:
Installs and runs the Azure Monitor Agent Linux VM Extension.
Deletes the VM Extension.
Priority:

1

Requirement:

supported_features[AzureExtension]
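
As a hedged sketch (not this suite's code), installing such an extension with
the Azure SDK for Python might look like the following; the subscription id,
resource group, region, and VM name are placeholders:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
    # Deploy the Azure Monitor Agent extension and wait for provisioning.
    poller = client.virtual_machine_extensions.begin_create_or_update(
        "<resource-group>", "<vm-name>", "AzureMonitorLinuxAgent",
        {
            "location": "<region>",
            "publisher": "Microsoft.Azure.Monitor",
            "type_properties_type": "AzureMonitorLinuxAgent",
            "type_handler_version": "1.0",
            "auto_upgrade_minor_version": True,
        },
    )
    poller.result()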

class AzureKeyVaultExtensionBvt
Description:
BVT for Azure Key Vault Extension
Platform:

Azure, Ready

Area:

vm_extension

Category:

functional

verify_key_vault_extension()
Description:
The following test case validates the Azure Key Vault Linux Extension
while creating the following resources:
* A Key Vault
* Two certificates in the Key Vault
* Retrieval of the certificates' secrets through the SecretClient class
from the Azure SDK
* Installation of the Azure Key Vault Linux Extension on the VM
* Installation of the certs through the AKV extension
* Rotation of the certificates
* Printing the cert after rotation from the VM
* Deletion of the resources
Priority:

1

Requirement:

unsupported_os[BSD]

class NetworkWatcherExtension
Description:
Tests for the Azure Network Watcher VM Extension
Platform:

Azure, Ready

Area:

vm_extension

Category:

functional

verify_azure_network_watcher()
Description:
Installs and runs the Azure Network Watcher VM Extension.
Deletes the VM Extension.
Priority:

1

Requirement:

supported_features[AzureExtension]

class VmSnapsotLinuxBVTExtension
Description:
Test for VMSnapshot extension
Platform:

Azure, Ready

Area:

vm_extension

Category:

functional

verify_exclude_disk_support_restore_point()
Description:
Runs a script on the VM.
The script takes responsibility for classifying the various distros as
supported or unsupported for the selective billing feature.
The test passes in both cases; the information simply helps in clearly
classifying the distro when the test runs on various distros.
Priority:

2

Requirement:

unsupported_os[BSD, Windows]

verify_vmsnapshot_extension()
Description:
Create a restore point collection for the virtual machine.
Create an application-consistent restore point on the restore point
collection.
Validate the restore point response for validity.
Attempt it a few times to rule out cases when the VM is undergoing changes.
Priority:

1

Requirement:

supported_features[AzureExtension]

class MetricsExtension
Description:
This test is a BVT for MDM MetricsExtension
Platform:

Azure, Ready

Area:

MetricsExtension

Category:

functional

verify_metricsextension()
Description:
Verify whether MetricsExtension is installed, running,
and uninstalled successfully
Priority:

1

Requirement:

supported_platform_type[AZURE, READY]

class CVTTest
Description:
This test is used to validate the functionality of the ASR driver.
Platform:

Azure, Ready

Area:

cvt

Category:

functional

verify_asr_by_cvt()
Description:
This test validates the functionality of the ASR driver by verifying the
integrity of a source disk with respect to a target disk.
The case priority is downgraded from 3 to 5 due to its instability.
Priority:

5

Requirement:

disk

class AzureDiskEncryption
Description:
Tests for the Azure Disk Encryption (ADE) extension
Platform:

Azure, Ready

Area:

vm_extension

Category:

functional

verify_azure_disk_encryption_provisioned()
Description:
Runs the ADE extension and verifies the extension
provisioned successfully on the remote machine.
Priority:

1

verify_azure_disk_encryption_enabled()
Description:
Runs the ADE extension and verifies it
fully encrypted the remote machine successfully.
Priority:

3

Requirement:

min_core_count=4

class WaAgentBvt
Description:
BVT for VM Agent
Platform:

Azure, Ready

Area:

vm_extension

Category:

functional

verify_vm_agent()
Description:
Runs the custom script extension and verifies it executed on the
remote machine.
Priority:

1

Requirement:

supported_features[AzureExtension]

class AzurePerformanceDiagnostics
Description:
Tests for the Azure Performance Diagnostics VM Extension
Platform:

Azure, Ready

Area:

vm_extension

Category:

functional

verify_azure_performance_diagnostics()
Description:
Installs and runs the Azure Performance Diagnostics VM Extension.
Verifies a report was created and uploaded to the storage account.
Deletes the VM Extension.
Downgrading priority from 1 to 5. The extension relies on the
storage account key, which we cannot use currently.
Will change it back once the extension works with MSI.
Priority:

5

Requirement:

supported_features[AzureExtension]

class RunCommandV2Tests
Description:
This test suite tests the functionality of the Run Command v2 VM extension.
It has 12 test cases to verify if RCv2 runs successfully when provided:
1. Pre-existing available script hardcoded in CRP
2. Custom shell script
3. Script with a named parameter
4. Script with an unnamed parameter
5. Script with a named protected parameter
6. Public storage blob uri that points to the script
7. Storage uri pointing to script without a sas token (should fail)
8. Storage sas uri that points to script
9. Command with a timeout of 1 second (should pass)
10. Command that should take longer than 1 second, but with a
timeout of 1 second (should fail)
11. Provided a different valid user to run a command with
12. Provided a different invalid user to run a command with (should fail)
Platform:

Azure, Ready

Area:

vm_extension

Category:

functional

verify_private_uri_script_run_failed()
Description:
Runs the Run Command v2 VM extension with a private storage uri pointing
to the script in blob storage. No sas token provided, should fail.
Priority:

3

verify_script_run_with_timeout()
Description:
Runs the Run Command v2 VM extension with a timeout of 0.1 seconds.
Priority:

3

verify_script_run_with_named_parameter()
Description:
Runs the Run Command v2 VM extension with a named public parameter
passed to a custom shell script.
Priority:

3
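
A hedged sketch of what this scenario looks like through the Azure SDK for
Python; all names are placeholders and the script is illustrative:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
    # Create a Run Command v2 resource that passes a named public parameter.
    poller = client.virtual_machine_run_commands.begin_create_or_update(
        "<resource-group>", "<vm-name>", "TestRunCommand",
        {
            "location": "<region>",
            "source": {"script": "echo Hello $param1"},
            "parameters": [{"name": "param1", "value": "world"}],
        },
    )
    print(poller.result().provisioning_state)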

verify_existing_script_run()
Description:
Runs the Run Command v2 VM extension with a pre-existing ifconfig script.
Priority:

1

verify_script_run_with_invalid_user()
Description:
Runs the Run Command v2 VM extension with a different invalid user on the VM.
Priority:

3

verify_script_run_with_protected_parameter()
Description:
Runs the Run Command v2 VM extension with a named protected parameter
passed to a custom shell script.
Priority:

3

verify_sas_uri_script_run()
Description:
Runs the Run Command v2 VM extension with a storage sas uri pointing
to the script in blob storage.
Priority:

3

verify_script_run_with_timeout_failed()
Description:
Runs the Run Command v2 VM extension with a timeout of 1 second.
Priority:

3

verify_public_uri_script_run()
Description:
Runs the Run Command v2 VM extension with a public uri pointing to the
script in blob storage.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_script_run_with_unnamed_parameter()
Description:
Runs the Run Command v2 VM extension with an unnamed public parameter
passed to a custom shell script.
Priority:

3

verify_script_run_with_valid_user()
Description:
Runs the Run Command v2 VM extension with a different valid user on the VM.
Priority:

3

verify_custom_script_run()
Description:
Runs the Run Command v2 VM extension with a custom shell script.
Priority:

3

class CustomScriptTests
Description:
This test suite tests the functionality of the Custom Script VM extension.
File uri is a public Azure storage blob uri unless mentioned otherwise.
File uri points to a linux shell script unless mentioned otherwise.
It has 12 test cases to verify if CSE runs as intended when provided:
1. File uri and command in public settings
2. Two file uris and command for downloading second script in public settings
3. File uri and command in both public and protected settings (should fail)
4. File uri without a command or base64 script (should fail)
5. Both base64 script and command in public settings (should fail)
6. File uri and base64 script in public settings
7. File uri and gzip’ed base64 script in public settings
8. File uri and command in protected settings
9. Private file uri without sas token or credentials (should fail)
10. Private file uri with storage account credentials
11. Private sas file uri and command in public settings
12. File uri (pointing to python script) and command in public settings
Platform:

Azure, Ready

Area:

vm_extension

Category:

functional

verify_private_script_with_storage_credentials_run()
Description:
Runs the Custom Script VM extension with private Azure storage file uri
without a sas token but with storage account credentials.
Downgrading priority from 3 to 5. The extension relies on the
storage account key, which we cannot use currently.
Priority:

5

verify_public_script_with_base64_script_run()
Description:
Runs the Custom Script VM extension with a base64 script.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_private_sas_script_run()
Description:
Runs the Custom Script VM extension with private Azure storage file uri
with a sas token.
Priority:

3

verify_public_script_protected_settings_run()
Description:
Runs the Custom Script VM extension with public file uri and command in
protected settings.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_public_script_without_command_run()
Description:
Runs the Custom Script VM extension without a command or a script.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_public_python_script_run()
Description:
Runs the Custom Script VM extension with a public Azure storage file uri
pointing to a python script.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_public_script_run()
Description:
Runs the Custom Script VM extension with a public Azure storage file uri.
Downgrading priority from 1 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5
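
As a hedged sketch, the public settings shape for this scenario is typically
the following; the blob uri is a placeholder:

    # Public settings for the Custom Script extension
    # (publisher Microsoft.Azure.Extensions, type CustomScript).
    settings = {
        "fileUris": ["https://<storage-account>.blob.core.windows.net/scripts/script.sh"],
        "commandToExecute": "sh script.sh",
    }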

verify_script_in_both_settings_failed()
Description:
Runs the Custom Script VM extension with public file uri and command
in both public and protected settings.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_public_script_with_gzip_base64_script_run()
Description:
Runs the Custom Script VM extension with a gzip’ed base64 script.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_second_public_script_run()
Description:
Runs the Custom Script VM extension with 2 public file uris passed in
and the second script being run. Verifies the second script was created.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_private_script_without_sas_run_failed()
Description:
Runs the Custom Script VM extension with private Azure storage file uri
without a sas token.
Priority:

3

verify_base64_script_with_command_run()
Description:
Runs the Custom Script VM extension with a base64 script
and command with no file uris.
Priority:

3

class VMAccessTests
Description:
This test suite tests the functionality of the VMAccess VM extension.
Settings are protected unless otherwise mentioned.
OpenSSH format public keys correspond to ssh-rsa keys.
It has 8 test cases to verify if VMAccess runs successfully when provided:
1. Username and password
2. Username and OpenSSH format public key
3. Username with both a password and OpenSSH format public key
4. Username with no password or ssh key (should fail)
5. Username and certificate containing public ssh key in pem format
6. Username and SSH2 format public key
7. Username to remove
8. Username, OpenSSH format public key, and valid expiration date
Platform:

Azure, Ready

Area:

vm_extension

Category:

functional

verify_no_password_and_ssh_key_run_failed()
Description:
Runs the VMAccess VM extension without a password or an OpenSSH public key.
Priority:

3

verify_valid_password_run()
Description:
Runs the VMAccess VM extension with a valid username and password.
Priority:

1
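
A hedged sketch of the protected settings shape for this scenario (publisher
Microsoft.OSTCExtensions, type VMAccessForLinux); values are placeholders:

    # Only the fields used by a given case are set; other supported fields
    # include "ssh_key", "remove_user", and "expiration".
    protected_settings = {
        "username": "testuser",
        "password": "<password>",
    }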

verify_openssh_key_run()
Description:
Runs the VMAccess VM extension with an OpenSSH public key.
Priority:

3

verify_valid_expiration_run()
Description:
Runs the VMAccess VM extension with an OpenSSH public key
and valid expiration date.
Priority:

3

verify_remove_username_run()
Description:
Runs the VMAccess VM extension with a username to remove.
Priority:

3

verify_password_and_ssh_key_run()
Description:
Runs the VMAccess VM extension with both a password and OpenSSH public key.
Priority:

3

verify_ssh2_key_run()
Description:
Runs the VMAccess VM extension with an SSH2 public key.
Priority:

3

verify_pem_certificate_ssh_key_run()
Description:
Runs the VMAccess VM extension with a certificate containing a public ssh key
in pem format.
Priority:

3

class RunCommandV1Tests
Description:
This test suite tests the functionality of the Run Command v1 VM extension.
** Same set of tests as CSE **
It has 12 test cases to verify if RCv1 runs successfully when provided:
1. File uri and command in public settings
2. Two file uris and command for downloading second script in settings
3. File uri and command in both public and protected settings (should fail)
4. File uri without a command or base64 script (should fail)
5. Both base64 script and command in public settings (should fail)
6. File uri and base64 script in public settings
7. File uri and gzip’ed base64 script in public settings
8. File uri and command in protected settings
9. Private file uri without sas token or credentials (should fail)
10. Private file uri with storage account credentials
11. Private sas file uri and command in public settings
12. File uri (pointing to python script) and command in public settings
Platform:

Azure, Ready

Area:

vm_extension

Category:

functional

verify_private_script_without_sas_run_failed()
Description:
Runs the Run Command v1 VM extension with private Azure storage file uri
without a sas token.
Priority:

3

verify_private_sas_script_run()
Description:
Runs the Run Command v1 VM extension with private Azure storage file uri
with a sas token.
Priority:

3

verify_public_script_run()
Description:
Runs the Run Command v1 VM extension with a public Azure storage file uri.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_second_public_script_run()
Description:
Runs the Run Command v1 VM extension with 2 public file uris passed in
and the second script being run. Verifies the second script was created.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_public_script_without_command_run_failed()
Description:
Runs the Run Command v1 VM extension without a command or a script.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_base64_script_with_command_run_failed()
Description:
Runs the Run Command v1 VM extension with a base64 script
and command with no file uris.
Priority:

3

verify_public_script_with_base64_script_run()
Description:
Runs the Run Command v1 VM extension with a base64 script.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_script_in_both_settings_failed()
Description:
Runs the Run Command v1 VM extension with public file uri and command
in both public and protected settings.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_public_script_with_gzip_base64_script_run()
Description:
Runs the Run Command v1 VM extension with a gzip’ed base64 script.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_public_script_protected_settings_run()
Description:
Runs the Run Command v1 VM extension with public file uri and command in
protected settings.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_public_python_script_run()
Description:
Runs the Run Command v1 VM extension with a public Azure storage file uri
pointing to a python script.
Downgrading priority from 3 to 5 due to the requirement for blob public
access, which is restricted for security reasons.
Priority:

5

verify_private_script_with_storage_credentials_run()
Description:
Runs the Run Command v1 VM extension with private Azure storage file uri
without a sas token but with storage account credentials.
Downgrading priority from 3 to 5. The extension relies on the
storage account key, which we cannot use currently.
Priority:

5

class CloudHypervisorTestSuite
Description:
This test suite is for executing the tests maintained in the
upstream cloud-hypervisor repo.
Platform:

Azure, Ready

Area:

cloud-hypervisor

Category:

community

verify_cloud_hypervisor_live_migration_tests()
Description:
Runs cloud-hypervisor live migration tests.
Priority:

3

Requirement:

node

verify_cloud_hypervisor_integration_tests()
Description:
Runs cloud-hypervisor integration tests.
Priority:

3

Requirement:

node

verify_cloud_hypervisor_performance_metrics_tests()
Description:
Runs cloud-hypervisor performance metrics tests.
Priority:

3

class LibvirtTckSuite
Description:
Runs the libvirt TCK (Technology Compatibility Kit) tests. It is a suite
of functional/integration tests designed to test a libvirt driver's compliance
with API semantics, distro configuration, etc.
Platform:

Azure, Ready

Area:

libvirt

Category:

community

verify_libvirt_tck()
Description:
Runs the Libvirt TCK (Technology Compatibility Kit) tests with the default
configuration, i.e. the tests will exercise the qemu driver in libvirt.
Priority:

3

class KselftestTestsuite
Description:
This test suite is used to run kselftests.
Platform:

Azure, Ready

Area:

kselftest

Category:

community

verify_kselftest()
Description:
This test case runs Linux kernel selftests on Mariner VMs.
Cases:
1. When a tarball is specified in the .yml file, extract the tar and run
kselftests. Example:
    - name: kselftest_file_path
      value: <path_to_kselftests.tar.xz>
      is_case_visible: true
2. When a tarball is not specified in the .yml file, clone the Mariner kernel,
copy the current config to .config, build kselftests and generate a tar.
For both cases, verify that the kselftest tool extracts the tar, runs the script
run_kselftest.sh and redirects test results to the file kselftest-results.txt.
Priority:

3

Requirement:

min_core_count=16
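
A minimal sketch of case 1 above, assuming a prebuilt tarball path; the file
names mirror those mentioned in the description:

    import os
    import subprocess

    # Extract the provided kselftest tarball.
    os.makedirs("kselftest", exist_ok=True)
    subprocess.run(["tar", "-xf", "kselftests.tar.xz", "-C", "kselftest"], check=True)
    # Run the generated runner and redirect results to kselftest-results.txt.
    with open("kselftest-results.txt", "w") as results:
        subprocess.run(["bash", "kselftest/run_kselftest.sh"],
                       stdout=results, stderr=subprocess.STDOUT, check=False)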

class InfinibandSuite
Description:
Tests the functionality of infiniband.
Platform:

Azure, Ready

Area:

hpc

Category:

functional

verify_hpc_over_nd()
Description:
This test case will
1. Determine whether the VM has Infiniband over Network Direct
2. Ensure waagent is configured with OS.EnableRDMA=y
3. Check that appropriate drivers are present
Priority:

2

Requirement:

unsupported_os[BSD, Windows]
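
A minimal sketch of step 2, checking the waagent configuration file for the
RDMA setting:

    # Ignore comments and whitespace when scanning /etc/waagent.conf.
    with open("/etc/waagent.conf") as conf:
        lines = [line.split("#")[0].strip() for line in conf]
    assert "OS.EnableRDMA=y" in lines, "OS.EnableRDMA=y is not set in /etc/waagent.conf"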

verify_mvapich_mpi()
Description:
This test case will
1. Ensure RDMA is setup
2. Install MVAPICH MPI
3. Set up ssh keys of server/client connection
4. Run MPI pingpong tests
5. Run other MPI tests
Priority:

4

Requirement:

unsupported_os[BSD, Windows]

verify_ib_naming()
Description:
This test case will
1. List all available network interfaces
2. Check if InfiniBand cards are present
3. Ensure the first InfiniBand card is named starting with “ib0”
Priority:

2

Requirement:

unsupported_os[BSD, Windows]

verify_ping_pong()
Description:
This test case will
1. Identify the infiniband devices and their corresponding network interfaces
2. Run several ping-pong tests to check RDMA / Infiniband functionality
Priority:

1

Requirement:

unsupported_os[BSD, Windows]

verify_open_mpi()
Description:
This test case will
1. Ensure RDMA is setup
2. Install Open MPI
3. Set up ssh keys of server/client connection
4. Run MPI pingpong tests
5. Run other MPI tests
Priority:

4

Requirement:

unsupported_os[BSD, Windows]

verify_ibm_mpi()
Description:
This test case will
1. Ensure RDMA is setup
2. Install IBM MPI
3. Set up ssh keys of server/client connection
4. Run MPI pingpong tests
Priority:

4

Requirement:

unsupported_os[BSD, Windows]

verify_hpc_over_sriov()
Description:
This test case will
1. Determine whether the VM has Infiniband over SR-IOV
2. Ensure waagent is configured with OS.EnableRDMA=y
3. Check that appropriate drivers are present
Priority:

2

Requirement:

unsupported_os[BSD, Windows]

verify_intel_mpi()
Description:
This test case will
1. Ensure RDMA is setup
2. Install Intel MPI
3. Set up ssh keys of server/client connection
4. Run MPI pingpong tests
5. Run other MPI tests
Priority:

4

Requirement:

unsupported_os[BSD, Windows]

class StorageTest
Description:
This test suite is to validate storage function in Linux VM.
Platform:

Azure, Ready

Area:

storage

Category:

functional

verify_disk_with_nobarrier()
Description:
This test case is to
1. Make raid0 based on StandardHDDLRS disks.
2. Mount raid0 with the nobarrier option.
Priority:

3

Requirement:

disk
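
A hedged sketch of the two steps; device names are placeholders, and the
nobarrier mount option is only accepted by filesystems/kernels that still
support it:

    import subprocess

    # Build a raid0 array from two data disks, then mount with nobarrier.
    subprocess.run(["mdadm", "--create", "/dev/md0", "--level=0",
                    "--raid-devices=2", "/dev/sdc", "/dev/sdd"], check=True)
    subprocess.run(["mkfs.xfs", "/dev/md0"], check=True)
    subprocess.run(["mount", "-o", "nobarrier", "/dev/md0", "/mnt"], check=True)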

verify_disk_with_fio_verify_option()
Description:
This test case is to
1. Attach 2 512GB premium SSD disks
2. Create a 100GB partition for each disk using fdisk
3. Create a RAID type 1 device using partitions created in step 2
4. Run fio against raid0 with the verify option 100 times
Priority:

1

Requirement:

unsupported_os[Windows, BSD]

class ACCBasicTest
Description:
This Basic Validation Test (BVT) suite validates the availability of Software
Guard Extensions (SGX) on a given platform.
Platform:

Azure, Ready

Area:

ACC_BVT

Category:

functional

verify_sgx()
Description:
This case verifies if the VM is SGX Enabled.
Steps:
1. Add keys and tool chain from Intel-SGX, LLVM and Microsoft repositories.
2. Install DCAP driver if missing.
3. Install required package.
4. Run Helloworld and Remote Attestation tests.
Priority:

1

Requirement:

supported_features

class CVMSuite
Description:
This test suite ensures correct configuration and allowed devices for CVM
Platform:

Azure, Ready

Area:

ACC_CVM

Category:

functional

verify_lsvmbus()
Description:
This case verifies that lsvmbus only shows devices
that are allowed in a CVM guest
Steps:
1. Call lsvmbus
2. Iterate through list returned by lsvmbus to ensure all devices
listed are included in valid_class_ids
Priority:

1

Requirement:

supported_features
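
A hedged sketch of the check; valid_class_ids below is a placeholder for the
real allowlist used by the suite:

    import re
    import subprocess

    valid_class_ids = {"<allowed-class-id>"}  # placeholder allowlist
    out = subprocess.run(["lsvmbus"], capture_output=True, text=True, check=True).stdout
    # Each lsvmbus line reports a device as: Class_ID = {...}
    for class_id in re.findall(r"Class_ID = \{([0-9a-fA-F-]+)\}", out):
        assert class_id in valid_class_ids, f"unexpected vmbus device: {class_id}"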

verify_isolation_config()
Description:
This case verifies the isolation config on guest
Steps:
1. Call dmesg to get output
2. Find isolation config in output
3. Check to ensure config a is 0x1
4. Check to ensure config b is 0xba2
Priority:

1

Requirement:

supported_features

class XdpPerformance
Description:
This test suite is to validate XDP performance.
Platform:

Azure, Ready

Area:

xdp

Category:

performance

perf_xdp_ntttcp_latency()
Description:
This case compares and records the latency impact of the XDP component.
The test uses ntttcp to send tcp packets, and then compares the latency
with/without the XDP component. If the gap is more than 40%, the test case
fails.
Priority:

3

Requirement:

network_interface

perf_xdp_rx_drop_singlethread_sriov()
Description:
This case tests the XDP drop performance by measuring Packets Per Second
(PPS) and received rate with a single send thread.
See details in perf_xdp_rx_drop_multithread_sriov.
Priority:

3

Requirement:

network_interface

perf_xdp_tx_forward_singlecore_sriov()
Description:
This case tests the packet forwarded rate of XDP TX forwarding on the
single core SRIOV networking.
Refer to perf_xdp_tx_forward_singlecore_synthetic for more details.
Priority:

3

Requirement:

network_interface

perf_xdp_rx_drop_multithread_sriov()
Description:
This case tests the XDP drop performance by measuring Packets Per Second
(PPS) and received rate with multiple send threads.
* If the received packets rate is lower than 90% the test case fails.
* If the PPS is lower than 1M, the test case fails.
Priority:

3

Requirement:

network_interface

perf_xdp_tx_forward_singlecore_synthetic()
Description:
This case tests the packet forwarded rate of the XDP TX forwarding on
the single core Synthetic networking. The pktgen samples in the Linux code
base are used to generate packets.
The minimum cpu count is 8, which makes sure the performance isn't too
low.
There are three roles in this test environment: 1) the sender sends
packets, 2) the forwarder forwards packets to the receiver, 3) and the
receiver receives and drops packets.
Finally, it checks how many packets arrive at the forwarder or
receiver. If it is lower than 90%, the test fails. Note, it counts the
rx_xdp_tx_xmit (mlx5), rx_xdp_tx (mlx4), or dropped count for the synthetic
nic.
Priority:

3

Requirement:

network_interface

perf_xdp_tx_forward_multicore_sriov()
Description:
This case tests the packet forwarded rate of XDP TX forwarding on the
multi core SRIOV networking.
Refer to perf_xdp_tx_forward_singlecore_synthetic for more details.
The threshold of this test is lower than standard (85%), because the UDP
packet count is large in this test scenario and packets are easily lost.
Priority:

3

Requirement:

network_interface

perf_xdp_tx_forward_multicore_synthetic()
Description:
This case tests the packet forwarded rate of XDP TX forwarding on the
multi core Synthetic networking.
Refer to perf_xdp_tx_forward_singlecore_synthetic for more details.
Priority:

3

Requirement:

network_interface

perf_xdp_lagscope_latency()
Description:
This case compares and records the latency impact of the XDP component.
The test uses lagscope to send tcp packets, and then compares the latency
with/without the XDP component. If the gap is more than 40%, the test case
fails.
Priority:

3

Requirement:

network_interface

class XdpFunctional
Description:
This test suite is to validate XDP functionality.
Platform:

Azure, Ready

Area:

xdp

Category:

functional

verify_xdp_with_different_mtu()
Description:
It validates XDP with different MTUs.
1. Check whether the current image supports XDP.
2. Change the MTU to 1500, 2000, and 3506 to test XDP.
Priority:

3

Requirement:

min_count=2
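
A minimal sketch of step 2; "eth0" is a placeholder interface name:

    import subprocess

    # Cycle through the listed MTUs, re-running the XDP validation each time.
    for mtu in (1500, 2000, 3506):
        subprocess.run(["ip", "link", "set", "dev", "eth0", "mtu", str(mtu)], check=True)
        # ... re-run the xdpdump-based validation here ...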

verify_xdp_action_aborted()
Description:
It validates the XDP with action ABORT.
1. start tcpdump with icmp filter.
2. start xdpdump.
3. run ping 5 times.
4. check tcpdump with 5 packets.
Priority:

3

Requirement:

min_count=2

verify_xdp_action_tx()
Description:
It validates the XDP with action TX.
1. start tcpdump with icmp filter.
2. start xdpdump.
3. run ping 5 times.
4. check tcpdump with 5 packets, because the icmp is replied to at the xdp
level.
Priority:

2

Requirement:

min_count=2

verify_xdp_action_drop()
Description:
It validates the XDP with action DROP.
1. start tcpdump with icmp filter.
2. start xdpdump.
3. run ping 5 times.
4. check tcpdump with 5 packets.
Priority:

2

Requirement:

min_count=2

verify_xdp_sriov_failsafe()
Description:
It validates that XDP works with the Synthetic network when SRIOV is
disabled.
1. Test in SRIOV mode.
2. Disable the SRIOV.
3. Test in Synthetic mode.
Priority:

2

Requirement:

network_interface

verify_xdp_basic()
Description:
It validates the basic functionality of XDP. It runs multiple times to
test the load/unload. It includes the steps below:
1. Check current image supports XDP or not.
2. Install and validate xdpdump.
Priority:

1

verify_xdp_remove_add_vf()
Description:
It validates that XDP works with VF hot add/remove from the API.
1. Run xdp dump to drop and count packets.
2. Remove the VF from the API.
3. Run xdp dump to drop and count packets.
4. Add the VF back from the API.
5. Run xdp dump to drop and count packets.
Priority:

3

Requirement:

network_interface

verify_xdp_multiple_nics()
Description:
It validates XDP with multiple nics.
1. Check current image supports XDP or not.
2. Install and validate xdpdump.
Priority:

3

Requirement:

min_nic_count=3

verify_xdp_synthetic()
Description:
It validates that XDP works with the Synthetic network.
The test steps are the same as verify_xdp_basic, but it runs only once.
Priority:

2

Requirement:

network_interface

verify_xdp_community_test()
Description:
It runs all tests of xdp-tools. Check the official site for more
details.
Priority:

3

class Gpu
Description:
This test suite runs the gpu test cases.
Platform:

Azure, Ready

Area:

gpu

Category:

functional

verify_load_gpu_driver()
Description:
This test case verifies if gpu drivers are loaded fine.
Steps:
1. Validate if the VM SKU is supported for GPU.
2. Install LIS drivers if not already installed for Fedora and its
derived distros. Reboot the node
3. Install required gpu drivers on the VM and reboot the node. Validate gpu
drivers can be loaded successfully.
Priority:

1

Requirement:

supported_features[SerialConsole, AzureExtension]

verify_gpu_provision()
Description:
This test case verifies if gpu is detected as PCI device
Steps:
1. Boot VM with at least 1 GPU
2. Verify if GPU is detected as PCI Device
3. Stop-Start VM
4. Verify if PCI GPU device count is same as earlier
Priority:

1

Requirement:

min_gpu_count=1

verify_gpu_rescind_validation()
Description:
This test case will
1. Validate disabling GPU devices.
2. Validate enable back the disabled GPU devices.
Priority:

2

Requirement:

supported_features

verify_gpu_adapter_count()
Description:
This test case verifies the gpu adapter count.
Steps:
1. Assert that node supports GPU.
2. If GPU modules are not loaded, install and load the module first.
3. Find the expected gpu count for the node.
4. Validate expected and actual gpu count using lsvmbus output.
5. Validate expected and actual gpu count using lspci output.
6. Validate expected and actual gpu count using gpu vendor commands
example - nvidia-smi
Priority:

2

Requirement:

supported_features

verify_gpu_cuda_with_pytorch()
Description:
This test case will run PyTorch to check the CUDA driver is installed correctly.
1. Install PyTorch.
2. Check GPU count by torch.cuda.device_count()
3. Compare with PCI result
Priority:

3

Requirement:

supported_features
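
A minimal sketch of steps 2-3, comparing PyTorch's CUDA device count with the
NVIDIA device count on the PCI bus:

    import subprocess

    import torch

    cuda_count = torch.cuda.device_count()
    lspci = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    pci_count = sum("NVIDIA" in line for line in lspci.splitlines())
    assert cuda_count == pci_count, f"torch sees {cuda_count} GPUs, lspci sees {pci_count}"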

verify_max_gpu_provision()
Description:
This test case verifies if multiple gpus are detected as PCI devices
Steps:
1. Boot VM with multiple GPUs
2. Verify if GPUs are detected as PCI Devices
3. Stop-Start VM
4. Verify if PCI GPU device count is same as earlier
Priority:

3

Requirement:

min_gpu_count=8

verify_gpu_extension_installation()
Description:
This test case verifies if gpu drivers are installed using extension.
Steps:
1. Install the GPU Driver using Extension.
2. Reboot and check for kernel panic
3. Validate gpu drivers can be loaded successfully.
Priority:

2

Requirement:

supported_features[SerialConsole, AzureExtension]

class NetInterface
Description:
This test suite validates basic functionalities of Network Interfaces.
Platform:

Azure, Ready

Area:

network

Category:

functional

validate_netvsc_reload()
Description:
This test case verifies if synthetic network module - netvsc
can be reloaded gracefully when done multiple times.
Steps:
1. Validate netvsc isn’t built-in already. If it is then skip the test.
2. Unload and load netvsc module multiple times in a loop.
Priority:

1

Requirement:

network_interface

class Sriov
Description:
This test suite is used to verify accelerated network functionality.
Platform:

Azure, Ready

Area:

sriov

Category:

functional

verify_sriov_provision_with_max_nics_stop_start_from_platform()
Description:
This case verifies the VM works well when provisioned with the max (8) sriov nics.
Steps:
1. Provision VM with max network interfaces with enabling accelerated network.
2. Do the basic sriov testing.
3. Stop and Start VM from API.
4. Do the basic sriov testing.
Priority:

2

Requirement:

network_interface

verify_sriov_disable_enable_pci()
Description:
This case verifies the VM works well after disabling and enabling the PCI
device inside the VM.
Steps:
1. Disable sriov PCI device inside the VM.
2. Enable sriov PCI device inside the VM.
3. Do the basic sriov check.
4. Do VF connection test.
Priority:

2

Requirement:

network_interface

verify_sriov_interrupts_change()
Description:
This case is to verify the interrupt count increased after network traffic
went through the VF. If the CPU count is less than 8, it cannot verify that
the interrupts spread evenly across CPUs; when the CPU count is more than 16,
the traffic is too light to make sure interrupts distribute to every CPU.
Steps:
1. Start iperf3 on server node.
2. Get initial interrupts sum per irq and cpu number on client node.
3. Start iperf3 for 120 seconds with 128 threads on client node.
4. Get final interrupts sum per irq number on client node.
5. Compare interrupts changes, expected to see interrupts increased.
6. Get final interrupts sum per cpu on client node.
7. Collect the cpus which don't have an increased interrupt count.
8. Compare interrupt count changes; expect half of the cpus' interrupts to
have increased.
Priority:

2

Requirement:

node

verify_sriov_provision_with_max_nics_reboot_from_platform()
Description:
This case verifies the VM works well when provisioned with the max (8) sriov nics.
Steps:
1. Provision VM with max network interfaces with enabling accelerated network.
2. Do the basic sriov testing.
3. Reboot VM from API.
4. Do the basic sriov testing.
Priority:

2

Requirement:

network_interface

verify_sriov_basic()
Description:
This case verifies the module of the sriov network interface is loaded and
each synthetic nic is paired with one VF.
Steps:
1. Check the VF of each synthetic nic is paired.
2. Check the module of the sriov network device is loaded.
3. Check the VF count listed by lspci is as expected.
Priority:

1

Requirement:

network_interface
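
A hedged sketch of step 3; on Azure the VF usually shows up as a Mellanox
device, so counting those lines is one way to approximate the check:

    import subprocess

    lspci = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    vf_count = sum("Mellanox" in line for line in lspci.splitlines())
    assert vf_count > 0, "no SR-IOV VF found on the PCI bus"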

verify_sriov_single_vf_connection_max_cpu()
Description:
This case needs 2 nodes and 64 vCPUs. It verifies the module of the sriov
network interface is loaded and each synthetic nic is paired with one VF, and
checks that the rx statistics of the source and tx statistics of the dest
increase after sending a 200 Mb file from source to dest.
Steps:
1. Check the VF of each synthetic nic is paired.
2. Check the module of the sriov network device is loaded.
3. Check the VF count listed by lspci is as expected.
4. Set up an SSH connection between source and dest with key authentication.
5. Ping the dest IP from the source machine to check connectivity.
6. Generate a 200Mb file and copy it from source to dest.
7. Check the rx statistics of the source VF and tx statistics of the dest VF
increased.
Priority:

2

Requirement:

network_interface

verify_sriov_provision_with_max_nics()
Description:
This case verifies the VM works well when provisioned with the max (8) sriov nics.
Steps:
1. Provision VM with max network interfaces with enabling accelerated network.
2. Do the basic sriov testing.
Priority:

2

Requirement:

network_interface

verify_sriov_disable_enable_on_guest()
Description:
This case verifies the VM works well after bringing the VF nic down and back
up inside the VM.
Steps:
1. Do the basic sriov check.
2. Do a network connection test with the VF nic brought down.
3. Copy a 200Mb file from source to dest.
4. Check the rx statistics of the source synthetic nic and tx statistics of
the dest synthetic nic increased.
5. Bring the VF nic back up.
Priority:

2

Requirement:

network_interface

verify_sriov_reload_modules()
Description:
This case verifies the VM works well while removing and loading sriov modules.
Steps:
1. Provision VM with max network interfaces with enabling accelerated network.
2. Do the basic sriov testing.
3. Remove sriov module, check network traffic through synthetic nic.
4. Load sriov module, check network traffic through VF.
Priority:

1

Requirement:

network_interface

verify_sriov_disable_enable()
Description:
This case verifies the VM works well after disabling and enabling accelerated
networking on the network interface through the SDK.
Steps:
1. Do the basic sriov check.
2. Set enable_accelerated_networking as False to disable sriov.
3. Set enable_accelerated_networking as True to enable sriov.
4. Do the basic sriov check.
5. Do step 2 ~ step 4 for 2 times.
Priority:

1

Requirement:

supported_platform_type[AZURE]

verify_sriov_single_vf_connection()
Description:
This case verifies the module of the sriov network interface is loaded and
each synthetic nic is paired with one VF, and checks that the rx statistics
of the source and tx statistics of the dest increase after sending a 200 Mb
file from source to dest.
Steps:
1. Check the VF of each synthetic nic is paired.
2. Check the module of the sriov network device is loaded.
3. Check the VF count listed by lspci is as expected.
4. Set up an SSH connection between source and dest with key authentication.
5. Ping the dest IP from the source machine to check connectivity.
6. Generate a 200Mb file and copy it from source to dest.
7. Check the rx statistics of the source VF and tx statistics of the dest VF
increased.
Priority:

1

Requirement:

network_interface

verify_sriov_provision_with_max_nics_reboot()
Description:
This case verifies the VM works well when provisioned with the max (8) sriov nics.
Steps:
1. Provision VM with max network interfaces with enabling accelerated network.
2. Do the basic sriov testing.
3. Reboot VM from guest.
4. Do the basic sriov testing.
Priority:

2

Requirement:

network_interface

verify_sriov_ethtool_offload_setting()
Description:
This case verifies the below two kernel patches.
1. hv_netvsc: Sync offloading features to VF NIC
2. hv_netvsc: Allow scatter-gather feature to be tunable
Steps:
1. Change the scatter-gather feature on the synthetic nic,
verify the feature status syncs to the VF dynamically.
2. Disable and enable sriov,
check the scatter-gather feature status stays consistent on the VF.
Priority:

2

Requirement:

unsupported_os[BSD, Windows]

verify_irqbalance()
Description:
This test case verifies that irq rebalance is running.
When irqbalance is in debug mode, it will log “Selecting irq xxx for
rebalancing” when it selects an irq for rebalancing. We expect to see
this irq rebalancing when VM is under heavy network load.
An issue was previously seen in irqbalance 1.8.0-1build1 on Ubuntu.
When IRQ rebalancing is not running, we expect to see poor network
performance and high packet loss. Contact the distro publisher if
this is the case.
Steps:
1. Stop irqbalance service.
2. Start irqbalance as a background process with debug mode.
3. Generate some network traffic.
4. Check irqbalance output for “Selecting irq xxx for rebalancing”.
Priority:

2

Requirement:

network_interface
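
A hedged sketch of the check; flags vary between irqbalance versions, and the
traffic-generation step is elided:

    import subprocess

    # Stop the service, then run irqbalance in the foreground in debug mode
    # for a bounded time while traffic is generated elsewhere.
    subprocess.run(["systemctl", "stop", "irqbalance"], check=True)
    out = subprocess.run(["timeout", "60", "irqbalance", "--debug"],
                         capture_output=True, text=True).stdout
    assert "Selecting irq" in out, "no irq was selected for rebalancing"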

verify_sriov_max_vf_connection()
Description:
This case needs 2 nodes and 8 nics. It verifies the module of the sriov
network interface is loaded and each synthetic nic is paired with one VF, and
checks that the rx statistics of the source and tx statistics of the dest
increase after sending a 200 Mb file from source to dest.
Steps:
1. Check the VF of each synthetic nic is paired.
2. Check the module of the sriov network device is loaded.
3. Check the VF count listed by lspci is as expected.
4. Set up an SSH connection between source and dest with key authentication.
5. Ping the dest IP from the source machine to check connectivity.
6. Generate a 200Mb file and copy it from source to dest.
7. Check the rx statistics of the source VF and tx statistics of the dest VF
increased.
Priority:

2

Requirement:

network_interface

verify_sriov_add_max_nics()
Description:
This case verifies the VM works well after attaching the max number of sriov
nics after provisioning.
Steps:
1. Attach 7 extra sriov nic into the VM.
2. Do the basic sriov testing.
Priority:

2

Requirement:

network_interface

verify_services_state()
Description:
This case verifies the state of all services with Sriov enabled.
Steps:
1. Get the overall state from systemctl status; if there is no systemctl
command, skip the test.
2. The expected state should be running.
Priority:

1

Requirement:

network_interface

verify_sriov_max_vf_connection_max_cpu()
Description:
This case needs 2 nodes, 8 nics and 64 vCPUs. It verifies the module of the
sriov network interface is loaded and each synthetic nic is paired with one
VF, and checks that the rx statistics of the source and tx statistics of the
dest increase after sending a 200 Mb file from source to dest.
Steps:
1. Check the VF of each synthetic nic is paired.
2. Check the module of the sriov network device is loaded.
3. Check the VF count listed by lspci is as expected.
4. Setup SSH connection between source and dest with key authentication.
5. Ping the dest IP from the source machine to check connectivity.
6. Generate a 200Mb file and copy it from source to dest.
7. Check the rx statistics of the source VF and tx statistics of the dest VF
increased.
Priority:

2

Requirement:

network_interface

class NetworkSettings
Description:
This test suite runs the ethtool related network test cases.
Platform:

Azure, Ready

Area:

network

Category:

functional

verify_device_statistics()
Description:
This test case requires 4 or more cpu cores, so as to validate that
among 4 or more channels (queues), no particular queue is continuously
starving (not sending/receiving any packets).
Steps:
1. Get all the device's statistics.
2. Validate the device statistics list per-queue statistics as well.
3. Run traffic using iperf3 and check stats for each device.
4. If the same queue (say queue #0) is repeatedly inactive,
and the count of channels is >= 4 (total #queues), the test should fail
and require further investigation.
Priority:

2

Requirement:

min_core_count=4

verify_ringbuffer_settings_change()
Description:
This test case verifies if ring buffer settings can be changed with ethtool.
Steps:
1. Get the current ring buffer settings.
2. Change the rx and tx value to new_values using ethtool.
3. Get the settings again and validate the current rx and tx
values are equal to the new_values assigned.
4. Revert the rx and tx values to their original values.
Priority:

1

Requirement:

unsupported_os[BSD, Windows]
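
A minimal sketch of steps 2-3; "eth0" and the ring sizes are placeholders:

    import subprocess

    # Set new rx/tx ring sizes, then read the settings back to validate.
    subprocess.run(["ethtool", "-G", "eth0", "rx", "1024", "tx", "1024"], check=True)
    out = subprocess.run(["ethtool", "-g", "eth0"],
                         capture_output=True, text=True, check=True).stdout
    assert "1024" in out, "ring buffer sizes were not applied"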

verify_device_enabled_features()
Description:
This test case verifies required device features are enabled.
Steps:
1. Get the device’s enabled features.
2. Validate below features are in the list of enabled features-
rx-checksumming
tx-checksumming
tcp-segmentation-offload
scatter-gather
Priority:

1

verify_device_channels_change()
Description:
This test case verifies changing device channels count with ethtool.
Steps:
1. Get the current device channels info.
2. a. Keep changing the channel count from min to max value using ethtool.
b. Get the channel count info and validate the channel count
value is equal to the new value assigned.
3. Revert the channel count to its original value.
Priority:

1

Requirement:

unsupported_os[BSD, Windows]

verify_device_rss_hash_key_change()
Description:
This test case verifies that changing the device's RSS hash key takes
effect.
Steps:
1. Skip the test if the kernel version is any less than LTS 5.
2. Get all the device’s RSS hash key values.
3. Swap the last 2 characters of original hash key to make a new hash key.
4. Validate changing the hash key setting using the new hash key.
5. Revert the settings to their original values.
Priority:

2

Requirement:

unsupported_os[BSD, Windows]

verify_device_msg_level_change()
Description:
This test case verifies whether setting/unsetting the device's
message level flag takes effect.
Steps:
1. Verify Get/Set message level is supported on the kernel version.
2. Get all the device's message level number and name settings.
3. Depending on the current setting, set/unset a message flag by number
and name.
4. Validate changing the message level flag setting.
5. Revert the setting to its original value.
Note: BSD does not support the feature tested here and
lacks the hv_netvsc module used to support it.
Priority:

2

Requirement:

unsupported_os[BSD, Windows]

verify_device_rx_hash_level_change()
Description:
This test case verifies whether changing the device's RX hash level
for tcp and udp takes effect.
Steps:
Note: Same steps are used for both TCP and UDP.
1. Get all the device’s RX hash level status.
2. Depending on current setting, change to enabled/disabled.
3. Validate changing the hash level setting.
4. Revert the settings to their original values.
Priority:

2

verify_device_gro_lro_settings_change()
Description:
This test case verifies that changing the device's GRO and LRO settings
takes effect.
Steps:
1. Get all the device’s generic-receive-offload and large-receive-offload
settings.
2. If both GRO and LRO settings are “[fixed]” then skip testing specific
device.
3. Try flipping the GRO and LRO settings and validate the change takes effect.
4. Revert the settings to their original values.
Priority:

1

Requirement:

unsupported_os[BSD, Windows]

class Stress
Description:
This test suite is used to verify accelerated network functionality under stress.
Platform:

Azure, Ready

Area:

sriov

Category:

stress

stress_sriov_with_max_nics_stop_start_from_platform()
Description:
This case verifies the VM works well when provisioned with the max (8) sriov nics.
Steps:
1. Provision VM with max network interfaces with enabling accelerated network.
2. Do the basic sriov testing.
3. Stop and Start VM from API.
4. Do the basic sriov testing.
5. Repeat step 3 and 4 for 10 times.
Priority:

2

Requirement:

network_interface

stress_sriov_iperf()
Description:
This case is to check whether the network connectivity is lost after running
iperf3 for 30 mins.
Steps:
1. Start iperf3 on server node.
2. Start iperf3 for 30 minutes on client node.
3. Do VF connection test.
Priority:

4

Requirement:

node

stress_synthetic_with_max_nics_reboot_from_platform()
Description:
This case verifies the VM works well when provisioned with the max (8)
synthetic nics.
Steps:
1. Provision VM with max network interfaces with synthetic network.
2. Check each nic has an ip address.
3. Reboot VM from API.
4. Check each nic has an ip address.
5. Repeat step 3 and 4 for 10 times.
Priority:

2

Requirement:

network_interface

stress_sriov_with_max_nics_reboot()
Description:
This case verifies the VM works well when provisioned with the max (8) sriov nics.
Steps:
1. Provision VM with max network interfaces with enabling accelerated network.
2. Do the basic sriov testing.
3. Reboot VM from guest.
4. Do the basic sriov testing.
5. Repeat step 3 and 4 for 10 times.
Priority:

2

Requirement:

network_interface

stress_synthetic_provision_with_max_nics_reboot()
Description:
This case verifies the VM works well when provisioned with the max (8)
synthetic nics.
Steps:
1. Provision VM with max network interfaces with synthetic network.
2. Check each nic has an ip address.
3. Reboot VM from guest.
4. Check each nic has an ip address.
5. Repeat step 3 and 4 for 10 times.
Priority:

2

Requirement:

network_interface

stress_synthetic_with_max_nics_stop_start_from_platform()
Description:
This case verifies the VM works well when provisioned with the max (8)
synthetic nics.
Steps:
1. Provision VM with max network interfaces with synthetic network.
2. Check each nic has an ip address.
3. Stop and Start VM from API.
4. Check each nic has an ip address.
5. Repeat step 3 and 4 for 10 times.
Priority:

2

Requirement:

network_interface

stress_sriov_with_max_nics_reboot_from_platform()
Description:
This case verifies the VM works well when provisioned with the max (8) sriov nics.
Steps:
1. Provision VM with max network interfaces with enabling accelerated network.
2. Do the basic sriov testing.
3. Reboot VM from API.
4. Do the basic sriov testing.
5. Repeat step 3 and 4 for 10 times.
Priority:

2

Requirement:

network_interface

stress_sriov_disable_enable()
Description:
This case verifies the VM works well after disabling and enabling accelerated
networking on the network interface through the SDK under stress.
It is a regression test case to check the bug in
bionic/commit/id=16a3c750a78d8, which misses the second hunk of the upstream
commit/?id=877b911a5ba0. For details, please check
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1965618
Steps:
1. Do the basic sriov check.
2. Set enable_accelerated_networking as False to disable sriov.
3. Set enable_accelerated_networking as True to enable sriov.
4. Do the basic sriov check.
5. Do step 2 ~ step 4 for 25 times.
Priority:

3

Requirement:

supported_platform_type[AZURE]

class Synthetic
Description:
This test suite is used to verify synthetic network functionality.
Platform:

Azure, Ready

Area:

network

Category:

functional

verify_synthetic_add_max_nics_one_time_after_provision()
Description:
This case verifies the VM works well after attaching 7 extra synthetic nics
at one time.
Steps:
1. Provision VM with 1 network interface with synthetic network.
2. Add 7 extra network interfaces in one time.
3. Check each nic has an ip address.
Priority:

2

Requirement:

network_interface

verify_synthetic_provision_with_max_nics_reboot()
Description:
This case verifies the VM works well when provisioned with the max (8)
synthetic nics.
Steps:
1. Provision VM with max network interfaces with synthetic network.
2. Check each nic has an ip address.
3. Reboot VM from guest.
4. Check each nic has an ip address.
Priority:

2

Requirement:

network_interface

verify_synthetic_add_max_nics_one_by_one_after_provision()
Description:
This case verifies the VM works well after attaching 7 extra synthetic nics
one by one.
Steps:
1. Provision VM with 1 network interface with synthetic network.
2. Add 7 extra network interfaces one by one.
3. Check each nic has an ip address.
Priority:

2

Requirement:

network_interface

verify_synthetic_provision_with_max_nics_stop_start_from_platform()
Description:
This case verifies the VM works well when provisioned with the max (8)
synthetic nics.
Steps:
1. Provision VM with max network interfaces with synthetic network.
2. Check each nic has an ip address.
3. Stop and Start VM from API.
4. Check each nic has an ip address.
Priority:

2

Requirement:

network_interface

verify_synthetic_provision_with_max_nics()
Description:
This case verifies the VM works well when provisioned with the max (8)
synthetic nics.
Steps:
1. Provision VM with max network interfaces with synthetic network.
2. Check each nic has an ip address.
Priority:

2

Requirement:

network_interface

verify_synthetic_provision_with_max_nics_reboot_from_platform()
Description:
This case verifies the VM works well when provisioned with the maximum (8) synthetic nics.
Steps:
1. Provision VM with the maximum number of network interfaces using synthetic network.
2. Check that each nic has an ip address.
3. Reboot VM from the API.
4. Check that each nic has an ip address.
Priority:

2

Requirement:

network_interface

class KdumpCrash
Description:
This test suite is used to verify whether kernel crash dump is in effect, judged
by whether a vmcore file is generated after triggering kdump via sysrq.
It has 7 test cases. They verify whether kdump works when:
1. VM has 1 cpu
2. VM has 2-8 cpus and kdump is triggered on cpu 1
3. VM has 33-192 cpus and kdump is triggered on cpu 32
4. VM has 193-415 cpus and kdump is triggered on cpu 192
5. VM has more than 415 cpus and kdump is triggered on cpu 415
6. crashkernel is set to “auto”
7. crashkernel is set to “auto” and VM has more than 2T memory
Platform:

Azure, Ready

Area:

kdump

Category:

functional

verify_kdumpcrash_on_cpu192()
Description:
This test case verifies whether kdump works when the VM has 193~415 cpus and
kdump is triggered on the 193rd cpu (cpu192), a case designed around a known issue.
The test steps are the same as verify_kdumpcrash_single_core.
Priority:

2

Requirement:

supported_features

verify_kdumpcrash_on_cpu32()
Description:
This test case verifies whether kdump works when the VM has 33~192 cpus and
kdump is triggered on the 33rd cpu (cpu32), a case designed around a known issue.
The test steps are the same as verify_kdumpcrash_single_core.
Priority:

2

Requirement:

supported_features

verify_kdumpcrash_large_memory_auto_size()
Description:
This test case verifies whether kdump works when crashkernel is set to auto and
the memory is more than 2T. With the crashkernel=auto parameter, the system
reserves a suitably sized memory region for the crash kernel. We want to see
whether crashkernel=auto can also handle this scenario when the system memory is large.
The test steps are the same as verify_kdumpcrash_single_core.
Priority:

3

Requirement:

supported_features

verify_kdumpcrash_auto_size()
Description:
This test case verifies whether kdump works when crashkernel is set to auto.
The test steps are the same as verify_kdumpcrash_single_core.
Priority:

3

Requirement:

supported_features

verify_kdumpcrash_single_core()
Description:
This test case verifies whether kdump works when the VM has 1 cpu.
The VM needs at least 2G memory to make sure it has enough memory to load the
crash kernel.
Steps:
1. Check if the vmbus version and kernel configurations support crash dump.
2. Specify the memory reserved for the crash kernel in the kernel cmdline by
setting the “crashkernel” option to the required value.
a. Modify the grub config file to add the crashkernel option or change the
value to the required one. (For Redhat 8, there is no need to modify the grub
config file; crashkernel can be specified using the grubby command directly.)
b. Update the grub config.
3. If needed, configure the dump path.
4. Reboot the system to make kdump take effect.
5. Check if the crash kernel is loaded (see the sketch after these steps).
a. Check if the kernel cmdline has the crashkernel option and the value is expected.
b. Check if the /sys/kernel/kexec_crash_loaded file exists and the value is ‘1’.
c. Check if /proc/iomem has memory reserved for the crash kernel.
6. Trigger kdump through ‘echo c > /proc/sysrq-trigger’, or trigger on a
specified cpu by using the command “taskset -c”.
7. Check if vmcore is generated under the configured dump path after the system
boots up.
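For illustration only, a minimal Python sketch of the step 5 checks (an assumed
helper, not the suite's implementation; the procfs/sysfs paths are standard
kernel interfaces):

    # check_crash_kernel_loaded.py - the three step-5 checks in one place.
    from pathlib import Path

    def crash_kernel_loaded() -> bool:
        # 5a. The kernel cmdline must carry a crashkernel= option.
        if "crashkernel=" not in Path("/proc/cmdline").read_text():
            return False
        # 5b. /sys/kernel/kexec_crash_loaded must exist and read '1'.
        loaded = Path("/sys/kernel/kexec_crash_loaded")
        if not loaded.exists() or loaded.read_text().strip() != "1":
            return False
        # 5c. /proc/iomem must show a region reserved for the crash kernel.
        return "Crash kernel" in Path("/proc/iomem").read_text()

    if __name__ == "__main__":
        print("crash kernel loaded:", crash_kernel_loaded())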
Priority:

2

Requirement:

supported_features

verify_kdumpcrash_on_random_cpu()
Description:
This test case verifies whether kdump works when the VM has any number of cores,
with kdump triggered on a random cpu.
The test steps are the same as verify_kdumpcrash_single_core.
Priority:

1

Requirement:

supported_features

verify_kdumpcrash_smp()
Description:
This test case verifies whether kdump works when the VM has 2~8 cpus and kdump
is triggered on the second cpu (cpu1), a case designed around a known issue.
The test steps are the same as verify_kdumpcrash_single_core.
Priority:

2

Requirement:

supported_features

verify_kdumpcrash_on_cpu415()
Description:
This test case verifies whether kdump works when the VM has more than 415 cpus
and kdump is triggered on the 416th cpu (cpu415), a case designed around a known issue.
The test steps are the same as verify_kdumpcrash_single_core.
Priority:

4

Requirement:

supported_features

OSU Bench
Description:
This test suite runs the OSU Micro-Benchmarks MPI test cases.
Platform:

Azure, Ready

Area:

hpc

Category:

performance

perf_mpi_operations()
Description:
This test case runs GPU/CPU MPI latency tests.
Steps:
1. Install MVAPICH;
2. Install OSU Micro-Benchmarks;
3. Run GPU/CPU collective/latency tests on a single node.
Priority:

2

Requirement:

supported_features[SerialConsole]

class HvModule
Description:
This test suite covers test cases previously handled by LISAv2:
LIS-MODULES-CHECK, VERIFY-LIS-MODULES-VERSION,
INITRD-MODULES-CHECK, RELOAD-MODULES-SMP.
It is responsible for ensuring the Hyper-V drivers are all present,
are included in initrd, and are all the same version.
Platform:

Azure, Ready

Area:

core

Category:

functional

verify_lis_modules_version()
Description:
This test case will
1. Verify the list of given LIS kernel modules and verify that the version
matches the Linux kernel release number. (Drivers loaded directly into
the kernel are skipped.)
Priority:

2

verify_hyperv_modules()
Description:
This test case will
1. Verify the presence of all Hyper-V drivers, using lsmod
to look for the drivers not directly loaded into the kernel.
Priority:

1

verify_reload_hyperv_modules()
Description:
This test case will reload Hyper-V modules 100 times.
Priority:

1

Requirement:

min_core_count=4

verify_initrd_modules()
Description:
This test case will ensure all necessary hv_modules are present in
initrd. This is achieved by
1. Skipping any modules that are loaded directly into the kernel
2. Using the lsinitrd tool to check whether a necessary module is missing
Priority:

1

Requirement:

unsupported_os[BSD]

class Dns
Description:
This test suite covers DNS name resolution functionality.
Platform:

Azure, Ready

Area:

core

Category:

functional

verify_dns_name_resolution_after_upgrade()
Description:
This test case checks DNS name resolution by pinging bing.com after upgrading the system.
Priority:

1

verify_dns_name_resolution()
Description:
This test case checks DNS name resolution by pinging bing.com.
Priority:

1

class Provisioning
Description:
This test suite is used to verify whether an environment can be provisioned correctly or not.
- The basic smoke test can run on all images to determine whether an image can boot
and reboot.
- Other provisioning tests verify whether an environment can be provisioned with special
hardware configurations.
Platform:

Azure, Ready

Area:

provisioning

Category:

functional

verify_deployment_provision_standard_ssd_disk()
Description:
This case runs smoke test on a node provisioned with standard ssd disk.
The test steps are same as smoke_test.
Priority:

1

Requirement:

supported_features[SerialConsole]

verify_deployment_provision_premium_disk()
Description:
This case runs smoke test on a node provisioned with premium disk.
The test steps are same as smoke_test.
Priority:

1

Requirement:

supported_features[SerialConsole]

verify_deployment_provision_sriov()
Description:
This case runs smoke test on a node provisioned with sriov.
The test steps are same as smoke_test.
Priority:

1

Requirement:

supported_features[SerialConsole]

verify_deployment_provision_ultra_datadisk()
Description:
This case runs smoke test on a node provisioned with an ultra datadisk.
The test steps are same as smoke_test.
Priority:

1

Requirement:

supported_features[SerialConsole]

verify_deployment_provision_synthetic_nic()
Description:
This case runs smoke test on a node provisioned with synthetic nic.
The test steps are same as smoke_test.
Priority:

1

Requirement:

supported_features[SerialConsole]

smoke_test()
Description:
This case verifies whether a node is operating normally.
Steps,
1. Connect to TCP port 22. If it’s not connectable, fail the case and check whether
there is a kernel panic.
2. Connect to SSH port 22, and reboot the node. If there is an error and a kernel
panic, fail the case. If it’s not connectable, also fail the case.
3. If there is another error, but no kernel panic or TCP connection failure, pass
with a warning.
4. Otherwise, the case fully passes.
Priority:

0

Requirement:

supported_features[SerialConsole]

verify_deployment_provision_ephemeral_managed_disk()
Description:
This case runs smoke test on a node provisioned with ephemeral disk.
The test steps are same as smoke_test.
Priority:

1

Requirement:

supported_features[SerialConsole]

verify_reboot_in_platform()
Description:
This case runs smoke test on a node provisioned.
The test steps are almost the same as smoke_test except for
executing reboot from Azure SDK.
Priority:

2

Requirement:

supported_features[SerialConsole, StartStop]

verify_stop_start_in_platform()
Description:
This case runs smoke test on a node provisioned.
The test steps are almost the same as smoke_test except for
executing stop then start from Azure SDK.
Priority:

1

Requirement:

supported_features[SerialConsole, StartStop]

verify_deployment_provision_premiumv2_disk()
Description:
This case runs smoke test on a node provisioned with a premium v2 disk.
The test steps are same as smoke_test.
Priority:

1

Requirement:

supported_features[SerialConsole]

class CPU
Description:
This test suite is used to run CPU related tests.
Platform:

Azure, Ready

Area:

core

Category:

functional

verify_l3_cache()
Description:
This test case will check that the L3 cache is correctly mapped
to the NUMA node.
Steps:
1. Check if NUMA is disabled in the commandline. If it is disabled
and the kernel version is <= 2.6.37, the test is skipped, as hyper-v
has no support for NUMA : https://t.ly/x8k3
2. Get the mappings using the command:
lscpu --extended=cpu,node,socket,cache
3. Each line in the mapping corresponds to one CPU core. The L3
cache of each core must be mapped to the NUMA node that core
belongs to instead of the core itself (see the sketch after the example).
Example:
Correct mapping:
CPU NODE SOCKET L1d L1i L2 L3
8 0 0 8 8 8 0
9 1 1 9 9 9 1
Incorrect mapping:
CPU NODE SOCKET L1d L1i L2 L3
8 0 0 8 8 8 8
9 1 1 9 9 9 9
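As an aside, the mapping check in step 3 can be sketched in a few lines of
Python (an assumed helper, not the suite's code; it handles both the
space-separated and colon-separated CACHE output formats of lscpu):

    # check_l3_numa.py - assert the L3 column equals the NODE column.
    import subprocess

    rows = subprocess.run(
        ["lscpu", "--extended=cpu,node,socket,cache"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    for row in rows[1:]:  # skip the header line
        fields = row.split()
        if len(fields) < 3:
            continue
        cpu, node = fields[0], fields[1]
        l3 = fields[-1].split(":")[-1]  # L3 is the last cache column
        assert l3 == node, f"cpu {cpu}: L3 cache {l3} != NUMA node {node}"
    print("L3 cache is correctly mapped to NUMA nodes")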
Priority:

1

Requirement:

unsupported_os[Windows, BSD]

verify_cpu_count()
Description:
This test will check vCPU count correctness.
Steps:
1. Get the vCPU count.
2. Calculate the expected vCPU count as core_per_socket_count * socket_count *
thread_per_core_count (see the sketch after these steps).
3. Judge whether the actual vCPU count equals the expected value.
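A minimal sketch of the calculation in step 2 (assumed helper; the field labels
are the ones printed by util-linux lscpu):

    # check_vcpu_count.py - expected vCPUs from lscpu topology fields.
    import os
    import subprocess

    info = subprocess.run(["lscpu"], capture_output=True, text=True,
                          check=True).stdout

    def field(label: str) -> int:
        for line in info.splitlines():
            if line.startswith(label):
                return int(line.split(":")[1])
        raise KeyError(label)

    expected = (field("Socket(s)") * field("Core(s) per socket")
                * field("Thread(s) per core"))
    actual = os.cpu_count()
    assert actual == expected, f"actual {actual} != expected {expected}"
    print(f"vCPU count is correct: {actual}")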
Priority:

1

Requirement:

unsupported_os

verify_vmbus_interrupts()
Description:
This test will verify whether the CPUs inside a Linux VM are processing VMBus
interrupts by checking the /proc/interrupts file.
There are 3 types of Hyper-V interrupts: Hypervisor callback
interrupts, Hyper-V reenlightenment interrupts, and Hyper-V stimer0
interrupts; these types do not show up on the arm64 architecture.
Hyper-V reenlightenment interrupts are 0 unless the VM is doing a migration.
Hypervisor callback interrupts are vmbus events that are generated on all
the vmbus channels, which belong to different vmbus devices. A VM with up to
4 vcpus on Azure/Hyper-V should have a NetVSC NIC, which normally has 4 VMBus
channels and should be bound to all the vCPUs.
Hyper-V synthetic timer interrupts should be received on each CPU if the VM
runs for a long time. We can simulate this process by running a CPU
intensive workload on each vCPU.
Steps:
1. Look for the Hyper-V timer property of each vCPU under /proc/interrupts
(see the sketch after these steps).
2. For the Hyper-V reenlightenment interrupt, verify that the interrupt count
for all vCPUs is zero.
3. For the Hypervisor callback interrupt, verify that at least min(#vCPU, 4)
vCPUs are processing interrupts.
4. For the Hyper-V synthetic timer, run a CPU intensive command on each vCPU and
verify that every vCPU is processing the interrupt.
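A rough sketch of the /proc/interrupts parsing behind these checks (assumed
helper; HYP, HRE, and HVS are the row labels x86 kernels use for the three
Hyper-V interrupt types):

    # summarize_hyperv_interrupts.py - per-row interrupt counts per vCPU.
    from pathlib import Path

    for line in Path("/proc/interrupts").read_text().splitlines():
        if line.lstrip().startswith(("HYP:", "HRE:", "HVS:")):
            parts = line.split()
            counts = [int(p) for p in parts[1:] if p.isdigit()]
            busy = sum(1 for c in counts if c > 0)
            print(f"{parts[0]} {busy}/{len(counts)} vCPUs with interrupts")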
Priority:

2

class Dhcp
Description:
This test suite covers DHCP functionalities.
Platform:

Azure, Ready

Area:

core

Category:

functional

verify_dhcp_client_timeout()
Description:
This test case checks that the DHCP timeout setting on Azure equals or exceeds
300 seconds.
Priority:

1

Requirement:

supported_platform_type[AZURE, READY]

class Storage
Description:
This test suite is used to run storage related tests.
Platform:

Azure, Ready

Area:

storage

Category:

functional

verify_cifs_basic()
Description:
This test case will
1. Check if CONFIG_CIFS is enabled in KCONFIG
2. Create an Azure File Share
3. Mount the Azure File Share on the VM
4. Verify mount is successful
Downgrading priority from 1 to 5. The file share relies on the
storage account key, which we cannot use currently.
Will change it back once file share works with MSI.
Priority:

5

Requirement:

unsupported_os[BSD, Windows]

verify_resource_disk_mounted()
Description:
This test will check that the resource disk is present in the list of mounted
devices. Most VMs contain a resource disk, which is not a managed disk and
provides short-term storage for applications and processes. It is intended to
only store data such as page or swap files.
Steps:
1. Get the mount point for the resource disk. If /var/log/cloud-init.log
file is present, mount location is /mnt, otherwise it is obtained from
ResourceDisk.MountPoint entry in waagent.conf configuration file.
2. Verify that the ‘/dev/<disk> <mount_point>’ entry is present in the
/etc/mtab file and that the disk is not the os disk.
Priority:

1

Requirement:

supported_platform_type[AZURE]

verify_hot_add_disk_parallel_premium_ssd()
Description:
This test case will verify that premium ssd data disks can
be added in parallel while the vm is running. The test steps are the same as
verify_hot_add_disk_parallel.
Priority:

2

Requirement:

disk

verify_hot_add_disk_serial()
Description:
This test case will verify that standard hdd data disks can
be added one after another (serially) while the vm is running.
Steps:
1. Get maximum number of data disk for the current vm_size.
2. Get the number of data disks already added to the vm.
3. Serially add and remove the data disks and verify that the added
disks are present in the vm.
Priority:

2

Requirement:

disk

verify_azure_file_share_nfs()
Description:
This test case will verify that an Azure NFS 4.1 share can be mounted successfully on the guest.
Downgrading priority from 2 to 5. Creating and deleting file shares
with token authentication is unsupported.
Priority:

5

Requirement:

supported_features[Nfs]

verify_hot_add_disk_serial_standard_ssd()
Description:
This test case will verify that standard ssd data disks can
be added serially while the vm is running. The test steps are the same as
verify_hot_add_disk_serial.
Priority:

2

Requirement:

disk

verify_hot_add_disk_serial_random_lun_premium_ssd()
Description:
This test case will verify that premium ssd data disks can
be added serially on random luns while the vm is running.
Steps:
1. Get the maximum number of data disks for the current vm_size.
2. Get the number of data disks already added to the vm.
3. Add 1 premium ssd data disk to the VM on a random free lun.
4. Verify that the added disk is available in the OS.
5. Repeat steps 3 & 4 until the max disks supported by the VM are attached.
6. Remove the disks from the vm from random luns.
7. Verify that 1 disk is removed from the OS.
8. Repeat steps 6 & 7 until all randomly attached disks are removed.
Priority:

2

Requirement:

disk

verify_disks_device_timeout_setting()
Description:
This test will check that VM disks are provisioned
with the correct timeout.
Steps:
1. Find the disks for the VM by listing /sys/block/sd*.
2. Verify that the timeout value in each disk’s
/sys/block/<disk>/device/timeout file is set to 300 (see the sketch after
these steps).
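A sketch of step 2 (assumed helper; the timeout attribute is the standard SCSI
sysfs interface):

    # check_disk_timeouts.py - every sd* disk should expose a 300s timeout.
    from pathlib import Path

    for disk in sorted(Path("/sys/block").glob("sd*")):
        timeout = (disk / "device" / "timeout").read_text().strip()
        assert timeout == "300", f"{disk.name}: timeout {timeout} != 300"
        print(f"{disk.name}: timeout OK")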
Priority:

2

Requirement:

disk

verify_scsi_disk_controller_type()
Description:
This test verifies scsi disk controller type of the VM.
Steps:
1. Get the disk type of the boot partition.
2. Compare it with hardware disk controller type of the VM.
Priority:

1

Requirement:

disk

verify_hot_add_disk_serial_premium_ssd()
Description:
This test case will verify that premium ssd data disks can
be added serially while the vm is running. The test steps are the same as
verify_hot_add_disk_serial.
Priority:

2

Requirement:

disk

verify_hot_add_disk_parallel()
Description:
This test case will verify that the standard HDD data disks can
be added in one go (parallel) while the vm is running.
Steps:
1. Get the maximum number of data disks for the current vm_size.
2. Get the number of data disks already added to the vm.
3. Add the maximum number of data disks to the VM in parallel.
4. Verify that the added disks are available in the OS.
5. Remove the disks from the vm in parallel.
6. Verify that the disks are removed from the OS.
Priority:

2

Requirement:

disk

verify_resource_disk_io()
Description:
This test will check that file IO operations are working correctly.
Steps:
1. Get the mount point for the resource disk. If the /var/log/cloud-init.log
file is present, the mount location is /mnt; otherwise it is obtained from the
ResourceDisk.MountPoint entry in the waagent.conf configuration file.
2. Verify that the resource disk is mounted from the output of the mount command.
3. Write a text file to the resource disk.
4. Read the text file and verify that the content is the same.
Priority:

1

Requirement:

supported_platform_type[AZURE]

verify_nvme_disk_controller_type()
Description:
This test verifies nvme disk controller type of the VM.
Steps:
1. Get the disk type of the boot partition.
2. Compare it with hardware disk controller type of the VM.
Priority:

1

Requirement:

disk

verify_hot_add_disk_parallel_standard_ssd()
Description:
This test case will verify that standard ssd data disks can
be added in parallel while the vm is running. The test steps are the same as
verify_hot_add_disk_parallel.
Priority:

2

Requirement:

disk

verify_os_partition_identifier()
Description:
This test will verify that the identifier of the root partition matches
across different sources.
Steps:
1. Get the partition identifier from blkid command.
2. Verify that the partition identifier from blkid is present in dmesg.
3. Verify that the partition identifier from blkid is present in fstab output.
Priority:

1

Requirement:

unsupported_os[BSD, Windows]

verify_hot_add_disk_serial_random_lun_standard_ssd()
Description:
This test case will verify that standard ssd data disks can
be added serially on random luns while the vm is running.
Steps:
1. Get the maximum number of data disks for the current vm_size.
2. Get the number of data disks already added to the vm.
3. Add 1 standard ssd data disk to the VM on a random free lun.
4. Verify that the added disk is available in the OS.
5. Repeat steps 3 & 4 until the max disks supported by the VM are attached.
6. Remove the disks from the vm from random luns.
7. Verify that 1 disk is removed from the OS.
8. Repeat steps 6 & 7 until all randomly attached disks are removed.
Priority:

2

Requirement:

disk

verify_swap()
Description:
This test will check that the swap is correctly configured on the VM.
Steps:
1. Check if swap file/partition is configured by checking the output of
swapon -s and lsblk.
2. Check swap status in waagent.conf.
3. Verify that the truth values from step 1 and step 2 match.
Priority:

1

Requirement:

unsupported_os[BSD, Windows]

class LsVmBus
Description:
This test suite is used to check vmbus devices and their associated vmbus channels.
Platform:

Azure, Ready

Area:

core

Category:

functional

verify_vmbus_devices_channels()
Description:
This test case will
1. Check expected vmbus device names presented in the lsvmbus output.
- Operating system shutdown
- Time Synchronization
- Heartbeat
- Synthetic network adapter
- Synthetic SCSI Controller
- Synthetic IDE Controller (gen1 only)
It expects three additional vmbus device names for non-cvm:
- Data Exchange
- Synthetic mouse
- Synthetic keyboard
2. Check that each netvsc and storvsc SCSI device has the correct number of vmbus
channels created and associated (see the sketch after these steps).
2.1 Check that the expected channel count of each netvsc is min(num of vcpu, 8).
2.2 Check that the expected channel count of each storvsc SCSI device is
min(num of vcpu/4, 64).
2.2.1 Calculate the channel count of each storvsc SCSI device.
2.2.2 The cap on the channel count of each storvsc SCSI device is decided by the
host storage VSP driver.
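The expected channel counts from steps 2.1 and 2.2 can be computed directly (a
sketch under the stated formulas; the storvsc floor of 1 for very small VMs is
an assumption, and the host VSP driver may cap the value lower):

    # expected_vmbus_channels.py - expected counts per the formulas above.
    import os

    vcpus = os.cpu_count()
    netvsc_expected = min(vcpus, 8)
    storvsc_expected = max(1, min(vcpus // 4, 64))
    print(f"netvsc channels per device:  {netvsc_expected}")
    print(f"storvsc channels per device: {storvsc_expected}")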
Priority:

1

Requirement:

unsupported_os[BSD, Windows]

verify_vmbus_devices_channels_bsd()
Description:
This test case will check expected vmbus device names presented in the lsvmbus
output for FreeBSD.
- Hyper-V Shutdown
- Hyper-V Timesync
- Hyper-V Heartbeat
- Hyper-V KBD
- Hyper-V Network Interface
- Hyper-V SCSI
Priority:

1

Requirement:

supported_os[BSD]

verify_vmbus_heartbeat_properties()
Description:
This test case will
1. Look for the VMBus heartbeat device properties.
2. Check that the properties can be read and that the folder structure exists.
3. Check that the in_* files are equal to the out_* files
when read together.
4. Check that the interrupts and events values are increasing while
reading them.
Priority:

4

Requirement:

unsupported_os[BSD, Windows]

class Floppy
Description:
This test suite ensures the floppy driver is disabled.
The floppy driver is not needed on Azure and
is known to cause problems in some scenarios.
Platform:

Azure, Ready

Area:

core

Category:

functional

verify_floppy_module_is_blacklisted()
Description:
The goal of this test is to ensure the floppy module is not enabled
for images used on the Azure platform.
This test case will
1. Dry-run modprobe to see if the floppy module can be loaded.
2. If “insmod” would be executed, then the module is not already loaded.
3. If the module cannot be found, then it is not loaded.
If the module is loaded, running modprobe will have no output (see the sketch
after this list).
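The dry-run logic can be sketched as follows (assumed helper; modprobe
--dry-run --verbose prints the insmod commands it would run without executing
them):

    # check_floppy_blacklisted.py - interpret a modprobe dry run.
    import subprocess

    result = subprocess.run(
        ["modprobe", "--dry-run", "--verbose", "floppy"],
        capture_output=True, text=True,
    )
    output = result.stdout + result.stderr
    if "insmod" in output:
        print("floppy is not loaded (modprobe would insert it)")
    elif result.returncode != 0:
        print("floppy module cannot be found, so it is not loaded")
    else:
        print("WARNING: floppy module appears to be already loaded")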
Priority:

1

class Msr
Description:
This test suite verifies that the hyper-v platform id is set correctly via a hypercall to the host.
Theoretically, this could work for any guest which uses hypercalls
on Hyper-V or Azure.
Platform:

Azure, Ready

Area:

msr

Category:

functional

verify_hyperv_platform_id()
Description:
Verify that the platform id is accurate in the msr register.
Priority:

1

class AzureImageStandard
Description:
This test suite is used to check azure image configuration.
Platform:

Azure, Ready

Area:

azure_image_standard

Category:

functional

verify_resource_disk_readme_file()
Description:
This test will check that the readme file exists in the resource disk mount point.
Steps:
1. Obtain the mount point for the resource disk.
If the /var/log/cloud-init.log file is present,
attempt to read the customized mount point from
the cloud-init configuration file.
If mount point from the cloud-init configuration is unavailable,
use the default mount location, which is /mnt.
If none of the above sources provide the mount point,
it is retrieved from the ResourceDisk.MountPoint entry
in the waagent.conf configuration file.
2. Verify that resource disk is mounted from the output of mount command.
3. Verify lost+found folder exists.
4. Verify DATALOSS_WARNING_README.txt file exists.
5. Verify ‘WARNING: THIS IS A TEMPORARY DISK’ contained in
DATALOSS_WARNING_README.txt file.
Priority:

2

Requirement:

supported_platform_type[AZURE]

verify_hv_kvp_daemon_installed()
Description:
This test will check that the kvp daemon is installed. This is an optional
requirement for Debian based distros.
Steps:
1. Verify that list of running process matching name of kvp daemon
has length greater than zero.
Priority:

2

Requirement:

supported_platform_type[AZURE, READY]

verify_os_update()
Description:
Verify whether there are any issues during and after ‘os update’.
Steps:
1. Run the os update command.
2. Reboot the VM and see if the VM is still in a good state.
Priority:

2

Requirement:

supported_platform_type[AZURE, READY]

verify_client_active_interval()
Description:
This test will check ClientAliveInterval value in sshd config.
Steps:
1. Find ClientAliveInterval in the sshd config.
2. Pass with a warning if it is not found.
3. Pass with a warning if the value is not between 0 and 180.
Priority:

2

Requirement:

supported_platform_type[AZURE, READY]

verify_grub()
Description:
This test will check the configuration of the grub file and verify that numa
is disabled for Redhat distro version < 6.6.0
Steps:
1. Verify grub configuration depending on the distro type.
2. For Redhat based distros, verify that numa is disabled for versions < 6.6.0
Priority:

1

Requirement:

unsupported_os[BSD]

verify_resource_disk_file_system()
Description:
This test will check that resource disk is formatted correctly.
Steps:
1. Get the mount point for the resource disk. If /var/log/cloud-init.log
file is present, mount location is /mnt, otherwise it is obtained from
ResourceDisk.MountPoint entry in waagent.conf configuration file.
2. Verify that the resource disk file system type is not ‘ntfs’.
Priority:

1

Requirement:

supported_platform_type[AZURE]

verify_default_targetpw()
Description:
This test will verify that Defaults targetpw is not enabled in the
/etc/sudoers file.
If targetpw is set, sudo will prompt for the
password of the user specified by the -u option (defaults to root)
instead of the password of the invoking user when running a command
or editing a file.
Steps:
1. Get the content of /etc/sudoers file.
2. Verify that Defaults targetpw should be disabled, if present.
Priority:

1

Requirement:

supported_platform_type[AZURE, READY]

verify_bash_history_is_empty()
Description:
This test will check that /root/.bash_history does not exist or is empty.
Steps:
1. Check whether .bash_history exists; if not, the image is prepared well.
2. If .bash_history exists, check whether its content is empty; if not, the
image is not prepared well.
Priority:

1

Requirement:

supported_platform_type[AZURE, READY]

verify_serial_console_is_enabled()
Description:
This test will check that the serial console is enabled from the kernel command
line in dmesg.
Steps:
1. Get the kernel command line from /var/log/messages or
/var/log/syslog output.
2. Check expected setting from kernel command line.
2.1. Expected to see ‘console=ttyAMA0’ for aarch64.
2.2. Expected to see ‘console=ttyS0’ for x86_64.
Priority:

1

Requirement:

supported_platform_type[AZURE, READY]

verify_dhcp_file_configuration()
Description:
This test will verify that dhcp file exists at
/etc/sysconfig/network/dhcp and DHCLIENT_SET_HOSTNAME is set
to no.
Steps:
1. Verify that dhcp file exists.
2. Verify that DHCLIENT_SET_HOSTNAME=”no” is present in the file.
Priority:

1

Requirement:

supported_platform_type[AZURE, READY]

verify_network_manager_not_installed()
Description:
This test will verify that network manager doesn’t conflict with the
waagent on Fedora based distros.
Steps:
1. Get the output of command rpm -q NetworkManager and verify that
network manager is not installed.
Priority:

3

Requirement:

supported_platform_type[AZURE, READY]

verify_ifcfg_eth0()
Description:
This test will verify contents of ifcfg-eth0 file on Fedora based distros.
Steps:
1. Read the ifcfg-eth0 file and verify that “DEVICE=eth0”, “BOOTPROTO=dhcp” and
“ONBOOT=yes” is present in network file.
Priority:

1

Requirement:

supported_platform_type[AZURE, READY]

verify_repository_installed()
Description:
This test will check that repositories are correctly installed.
Steps:
1. Verify the repository configuration depending on the distro type.
Priority:

1

Requirement:

supported_platform_type[AZURE, READY]

verify_yum_conf()
Description:
This test will verify content of yum.conf file on Fedora based distros
for version < 6.6.0
Steps:
1. Read the yum.conf file and verify that “http_caching=packages” is
present in the file.
Priority:

2

Requirement:

supported_platform_type[AZURE, READY]

verify_no_pre_exist_users()
Description:
This test will check that no pre-added users exist in the vm.
Steps:
1. Exclude the current user from the list of all users.
2. Fail the case if a password exists for any of the remaining users.
3. Fail the case if a key exists for any of the remaining users.
Priority:

1

Requirement:

supported_platform_type[AZURE, READY]

verify_udev_rules_moved()
Description:
This test will verify that udev rules have been moved out in CoreOS
and Fedora based distros
Steps:
1. Verify that 75-persistent-net-generator.rules and 70-persistent-net.rules
files are not present.
Priority:

1

Requirement:

supported_platform_type[AZURE, READY]

verify_boot_error_fail_warnings()
Description:
This test will check error, failure, and warning messages from dmesg and
the /var/log/syslog or /var/log/messages file.
Steps:
1. Get failure, error, and warning messages from dmesg and the /var/log/syslog
or /var/log/messages file.
2. If any unexpected failure, error, or warning messages (excluding ignorable
ones) exist, fail the case.
Priority:

1

Requirement:

supported_platform_type[AZURE, READY]

verify_network_file_configuration()
Description:
This test will verify that network file exists in /etc/sysconfig and networking
is enabled on Fedora based distros.
Steps:
1. Verify that network file exists.
2. Verify that networking is enabled in the file.
Priority:

1

Requirement:

supported_platform_type[AZURE, READY]

verify_cloud_init_error_status()
Description:
This test will check ERROR and WARNING messages from /var/log/cloud-init.log
and also check the cloud-init exit status.
Steps:
1. Get ERROR and WARNING messages from /var/log/cloud-init.log.
2. If there are any unexpected ERROR or WARNING messages, or a non-zero
cloud-init status, fail the case.
Priority:

2

Requirement:

supported_platform_type[AZURE, READY]

class KernelDebug
Description:
This test suite covers kernel debug functionalities.
Platform:

Azure, Ready

Area:

core

Category:

functional

verify_enable_kprobe()
Description:
This test case checks that kprobes can be enabled on the VM.
Steps:
1. Check if CONFIG_KPROBE_EVENTS is enabled in the kernel config.
2. Check if /sys/kernel/debug/tracing/ is mounted; if not, mount it.
3. Get the original values of /sys/kernel/debug/tracing/kprobe_events and
/sys/kernel/debug/tracing/events/kprobes/my/enable.
4. Write “p:my filp_close” to /sys/kernel/debug/tracing/kprobe_events and
write “1” to /sys/kernel/debug/tracing/events/kprobes/my/enable (see the
sketch after these steps).
5. Check if /sys/kernel/debug/tracing/kprobe_events and
/sys/kernel/debug/tracing/events/kprobes/my/enable are changed.
6. Write the original values back to /sys/kernel/debug/tracing/kprobe_events and
/sys/kernel/debug/tracing/events/kprobes/my/enable.
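Steps 4-6 amount to the following (a sketch, assuming debugfs is mounted at
/sys/kernel/debug and the script runs as root; '-:my' is the kprobe_events
syntax for removing the probe):

    # kprobe_smoke.py - register, verify, and remove a kprobe on filp_close.
    TRACING = "/sys/kernel/debug/tracing"

    def write(path: str, text: str) -> None:
        with open(path, "w") as f:
            f.write(text)

    write(f"{TRACING}/kprobe_events", "p:my filp_close")     # step 4
    write(f"{TRACING}/events/kprobes/my/enable", "1")
    with open(f"{TRACING}/kprobe_events") as f:              # step 5
        assert "filp_close" in f.read()
    write(f"{TRACING}/events/kprobes/my/enable", "0")        # step 6
    write(f"{TRACING}/kprobe_events", "-:my")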
Priority:

1

Requirement:

supported_platform_type[AZURE, READY]

class VmResize
Description:
This test suite tests VM behavior upon resizing
Platform:

Azure, Ready

Area:

vm_resize

Category:

functional

verify_vm_resize_decrease()
Description:
This test case stops the VM, resizes it, starts it, and checks whether it has
the expected capabilities (memory size and core count) after the resize.
Steps:
1. Stop VM
2. Resize VM into smaller VM size
3. Start VM
4. Check the VM’s core count and memory size against their expected values
Priority:

1

Requirement:

supported_features[Resize, StartStop]

verify_vm_resize_increase()
Description:
This test case stops the VM, resizes it, starts it, and checks whether it has
the expected capabilities (memory size and core count) after the resize.
Steps:
1. Stop VM
2. Resize VM into larger VM size
3. Start VM
4. Check the VM’s core count and memory size against their expected values
Priority:

1

Requirement:

supported_features[Resize, StartStop]

verify_vm_hot_resize()
Description:
This test case hot resizes the VM and checks whether it has the expected
capabilities (memory size and core count) after the hot resize.
Steps:
1. Resize VM into larger VM size
2. Check the VM’s core count and memory size after hot resize
against their expected values
Priority:

1

Requirement:

supported_features[Resize]

verify_vm_hot_resize_decrease()
Description:
This test case hot resizes the VM and checks whether it has the expected
capabilities (memory size and core count) after the resize.
Steps:
1. Resize VM into smaller VM size
2. Check the VM’s core count and memory size after hot resize
against their expected values
Priority:

1

Requirement:

supported_features[Resize]

class Vdso
Description:
This test suite is used to test vdso using the vdsotest benchmark.
Platform:

Azure, Ready

Area:

core

Category:

functional

verify_vdso()
Description:
This test checks that the gettime, getres, getcpu, and gettimeofday calls are not
being redirected to system calls, which would lead to a performance bottleneck.
Linux systems have a mechanism called vdso which allows the above methods to be
executed in userspace (no syscall).
The kernel selftest can’t be used here for two reasons:
1. it needs a clone of the entire linux source code
2. it can’t repro the regression issue https://bugs.launchpad.net/bugs/1977753
Steps:
1. Install vdsotest benchmark.
2. Run vdsotest benchmark.
Priority:

1

Requirement:

unsupported_os[BSD, Windows]

class TimeSync
Description:
This test suite is related to time sync.
Platform:

Azure, Ready

Area:

time

Category:

functional

verify_timesync_ptp()
Description:
This test is to check -
1. The PTP time source is available on Azure guests (newer versions of Linux).
2. The PTP device name is hyperv.
3. When accelerated networking is enabled, multiple PTP devices will
be available and the ptp device names can change; create the symlink
/dev/ptp_hyperv to whichever /dev/ptp entry corresponds to the Azure host
(see the sketch after this list).
4. Chrony should be configured to use the symlink /dev/ptp_hyperv
instead of /dev/ptp0 or /dev/ptp1.
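Locating the host-backed PTP device (item 3) can be sketched like this (assumed
helper; it reads the clock_name attribute the kernel exposes for each PTP
device, which is what the usual /dev/ptp_hyperv udev rule matches on):

    # find_ptp_hyperv.py - find which /dev/ptp* is backed by the Hyper-V host.
    from pathlib import Path

    for dev in sorted(Path("/sys/class/ptp").glob("ptp*")):
        if (dev / "clock_name").read_text().strip() == "hyperv":
            print(f"/dev/{dev.name} is the Azure host PTP device")
            break
    else:
        print("no hyperv PTP device found")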
Priority:

2

verify_timesync_unbind_clocksource()
Description:
This test is to check -
1. The clock source name is one of hyperv_clocksource_tsc_page,
lis_hv_clocksource_tsc_page, hyperv_clocksource, tsc,
arch_sys_counter (arm64).
(There is a new feature in the AH2021 host that allows Linux guests to use
the plain “tsc” instead of the “hyperv_clocksource_tsc_page”,
which produces a modest performance benefit when reading the clock.)
2. The CPU flags contain constant_tsc in /proc/cpuinfo.
3. The clocksource name shows up in dmesg.
4. Unbind the current clock source if there are 2+ clock sources, and check that
the current clock source can be switched to a different one (see the sketch
after this list).
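A sketch of items 1 and 4 (assumed helper; current_clocksource,
available_clocksource, and unbind_clocksource are the standard kernel sysfs
attributes, and writing unbind_clocksource requires root):

    # check_clocksource.py - read, then unbind, the current clock source.
    from pathlib import Path

    CS = Path("/sys/devices/system/clocksource/clocksource0")
    current = (CS / "current_clocksource").read_text().strip()
    available = (CS / "available_clocksource").read_text().split()
    print(f"current: {current}, available: {available}")

    if len(available) > 1:
        # Unbinding forces the kernel to fall back to another clock source.
        (CS / "unbind_clocksource").write_text(current)
        print("switched to:",
              (CS / "current_clocksource").read_text().strip())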
Priority:

2

verify_pmu_disabled_for_arm64()
Description:
This test is to check the “initcall_blacklist=arm_pmu_acpi_init”
kernel parameter for ARM64 images.
Since Hyper-V right now doesn’t surface a fully functional PMU,
“initcall_blacklist=arm_pmu_acpi_init” is needed to disable the PMU driver
temporarily and fall back to timer-based sampling instead of PMU-event-based
sampling.
Priority:

1

Requirement:

unsupported_os[BSD, Windows]

verify_timesync_unbind_clockevent()
Description:
This test is to check -
1. The current clock event name is ‘Hyper-V clockevent’ for x86,
‘arch_sys_timer’ for arm64.
2. The number of times ‘Hyper-V clockevent’ or ‘arch_sys_timer’ and
‘hrtimer_interrupt’ show up in /proc/timer_list should equal the cpu count.
3. When the cpu count is 1 and the cpu type is Intel, unbind the current
clock event, and check that the current clock event switches to ‘lapic’.
Priority:

2

verify_timesync_ntp()
Description:
This test is to check that ntp works properly.
1. Stop systemd-timesyncd if this service exists.
2. Set rtc clock to system time.
3. Restart Ntp service.
4. Check and set server setting in config file.
5. Restart Ntp service to reload with new config.
6. Check leap code using ntpq -c rv.
7. Check local time is synchronised with time server using ntpstat.
Priority:

2

verify_timedrift_corrected()
Description:
This test is to verify that timedrift is automatically corrected by chrony
after a time jump.
Steps:
1. Set makestep to 1.0 -1 to allow Chrony to make large adjustments.
2. Manually change the system clock to a time in the past.
3. Verify that Chrony has corrected the time drift.
Priority:

1

verify_timesync_chrony()
Description:
This test is to check that chrony works properly.
1. Restart chrony service.
2. Check and set server setting in config file.
3. Restart chrony service to reload with new config.
4. Check chrony sources and sourcestats.
5. Check chrony tracking.
Priority:

2

class GDB
Description:
This test suite covers gdb functionality.
Platform:

Azure, Ready

Area:

core

Category:

functional

verify_gdb()
Description:
This test case checks that gdb works well by checking its output.
1. compile code with gdb options
2. run gdb with compiled file
3. expect to see ‘Hello World![Inferior 1 (process 1869) exited normally]’
from output
Priority:

2

class SerialConsoleSuite
Description:
This suite tests the functionality of connecting to the serial console.
Platform:

Azure, Ready

Area:

serial_console

Category:

functional

verify_serial_console()
Description:
The test runs an echo-back command on the serial console and verifies
that the command has been successfully written to the
serial console.
Priority:

3

Requirement:

supported_features[SerialConsole]

class Boot
Description:
This test suite is to test that the VM works well after updating and rebooting.
Platform:

Azure, Ready

Area:

core

Category:

functional

verify_boot_with_debug_kernel()
Description:
This test case will
1. Skip testing if the distro is not a redhat/centos type. RHEL added this test
case after encountering an issue where a call trace is seen when booting
with the debug kernel.
2. Install the kernel-debug package and set the VM to boot with this debug kernel.
3. Reboot the VM, and check that the kernel version is the debug type.
Priority:

3

Requirement:

supported_features[SerialConsole]

class Kvp
Description:
This test suite verifies that the KVP service runs well on the Azure and Hyper-V
platforms. KVP is used to communicate between the Windows host and the guest VM.
Platform:

Azure, Ready

Area:

kvp

Category:

functional

verify_kvp()
Description:
Verify that the KVP daemon is installed, running, and has correct permissions.
1. Verify that the KVP daemon is running.
2. Run the KVP client tool and verify that the data pools are created and
accessible.
3. Check that the kvp_pool file permission is 644.
4. Check that the kernel version supports hv_kvp.
5. Check that the KVP pool 3 file has a size greater than zero.
6. Check that at least 11 items are present in pool 3, and verify that the
record count is correct.
Priority:

1

class MdatpSuite
Description:
Test to verify there are no pre-installed copies of mdatp.
Platform:

Azure, Ready

Area:

mdatp

Category:

functional

verify_mdatp_not_preinstalled()
Description:
Check for an mdatp endpoint/cloud install, and dump config info.
Fails if mdatp is installed in the image.
Raises specific error messages depending on the type of info
found.
Priority:

3

Requirement:

supported_platform_type[AZURE]

class Docker
Description:
This test suite runs the docker test cases for java, python, dotnet 3.1,
dotnet 5.0, and wordpress.
Platform:

Azure, Ready

Area:

docker

Category:

functional

verify_docker_python_app()
Description:
This test case creates and runs a python app using docker
Steps:
1. Install Docker
2. Copy python dockerfile and program file to node
3. Create docker image and run docker container
4. Check results of docker run against python string identifier
Priority:

3

verify_docker_dotnet50_app()
Description:
This test case creates and runs a dotnet app using docker
Steps:
1. Install Docker
2. Copy dotnet dockerfile into node
3. Create docker image and run docker container
4. Check results of docker run against dotnet string identifier
Priority:

2

verify_docker_dotnet31_app()
Description:
This test case creates and runs a dotnet app using docker
Steps:
1. Install Docker
2. Copy dotnet dockerfile into node
3. Create docker image and run docker container
4. Check results of docker run against dotnet string identifier
Priority:

1

verify_docker_java_app()
Description:
This test case creates and runs a java app using docker
Steps:
1. Install Docker
2. Copy java dockerfile and program file to node
3. Create docker image and run docker container
4. Check results of docker run against java string identifier
Priority:

2

verify_docker_compose_wordpress_app()
Description:
This test case uses docker-compose to create and run a wordpress mysql app
Steps:
1. Install Docker and Docker-Compose on node
2. Copy docker-compose.yml into node
3. Start docker container with docker-compose
4. Run ps in the docker container and capture output
5. Check that “apache2” can be found in captured output
Priority:

3

class Lis
Description:
This test suite contains tests that are dependent on an LIS driver
Platform:

Azure, Ready

Area:

lis

Category:

functional

verify_lis_preinstall_disk_size_negative()
Description:
This test case verifies that the LIS RPM installation script checks the disk
space before proceeding with the LIS install.
This avoids half-finished / corrupted installations.
Steps:
1. The test leaves a “non installable” size on disk and checks whether the
./install.sh script skips the installation or not.
Priority:

2

verify_lis_preinstall_disk_size_positive()
Description:
This test case verifies that the LIS RPM installation script checks the disk
space before proceeding with the LIS install.
This avoids half-finished / corrupted installations.
Steps:
1. The test leaves the “bare minimum” size available for the LIS install and
checks whether the LIS installation is successful.
Priority:

2

verify_lis_driver_version()
Description:
Downloads header files based on the LIS version and compares the installed
LIS version to the expected one which is found in the header files.
Steps:
1. Check for RPM
2. Capture installed LIS version on the node
3. For each rhel version (5,6,7), it downloads the header file and compares
the LIS version in the header file with the LIS version installed
Priority:

2

class Utilities
Description:
This suite includes utility test cases, not validations.
Platform:

Azure, Ready

Area:

utility

Category:

functional

utility_tools_install()
Description:
This test case will install the tools
passed in the parameter ‘case_tool_install’.
Priority:

5

class Power
Description:
This test suite is to test hibernation in guest VM.
Platform:

Azure, Ready

Area:

power

Category:

functional

verify_hibernation_with_storage_workload()
Description:
This case is to verify hibernation with a storage workload.
Steps:
1. Run the fio benchmark and make sure there are no issues.
2. Hibernate and resume the vm.
3. Run the fio benchmark and make sure there are no issues.
Priority:

3

Requirement:

supported_features

verify_hibernation_synthetic_network()
Description:
This case is to verify vm hibernation with synthetic network.
Steps:
1. Install the HibernationSetup tool to prepare the prerequisites for vm
hibernation.
2. Get nics info before hibernation.
3. Hibernate the vm.
4. Check that the vm is inaccessible.
5. Resume the vm by starting it.
6. Check that the vm hibernated successfully by checking keywords in dmesg.
7. Get nics info after hibernation.
8. Fail the case if the nics count or info changes after the vm resumes.
Priority:

3

Requirement:

supported_features

verify_hibernation_sriov_network()
Description:
This case is to verify vm hibernation with sriov network.
It has the same steps as verify_hibernation_synthetic_network.
Priority:

3

Requirement:

supported_features

verify_hibernation_synthetic_network_max_nics()
Description:
This case is to verify vm hibernation with synthetic network and max nics.
Steps:
1. Install the HibernationSetup tool to prepare the prerequisites for vm
hibernation.
2. Get nics info before hibernation.
3. Hibernate the vm.
4. Check that the vm is inaccessible.
5. Resume the vm by starting it.
6. Check that the vm hibernated successfully by checking keywords in dmesg.
7. Get nics info after hibernation.
8. Fail the case if the nics count or info changes after the vm resumes.
Priority:

3

Requirement:

supported_features

verify_hibernation_max_data_disks()
Description:
This case is to verify vm hibernation with max data disks.
It has the same steps as verify_hibernation_synthetic_network_max_nics.
Priority:

3

Requirement:

min_data_disk_count=32

verify_hibernation_with_memory_workload()
Description:
This case is to verify hibernation with a memory workload.
Steps:
1. Run the stress-ng benchmark and make sure there are no issues.
2. Hibernate and resume the vm.
3. Run the stress-ng benchmark and make sure there are no issues.
Priority:

3

Requirement:

supported_features

verify_hibernation_time_sync()
Description:
This case is to verify that vm time sync works after hibernation.
Steps:
1. Reset the time using hwclock to 1 year after the current date.
2. Hibernate and resume the vm.
3. Check that the vm time syncs correctly.
Priority:

3

Requirement:

supported_features

verify_hibernation_sriov_network_max_nics()
Description:
This case is to verify vm hibernation with sriov network and max nics.
It has the same steps as verify_hibernation_synthetic_network_max_nics.
Priority:

3

Requirement:

supported_features

verify_hibernation_with_network_workload()
Description:
This case is to verify hibernation with a network workload.
Steps:
1. Run the iperf3 network benchmark and make sure there are no issues.
2. Hibernate and resume the vm.
3. Run the iperf3 network benchmark and make sure there are no issues.
Priority:

3

Requirement:

supported_features

class PowerStress
Description:
This test suite is to test hibernation in guest vm under stress.
Platform:

Azure, Ready

Area:

power

Category:

stress

stress_hibernation()
Description:
This case is to verify vm hibernation in a loop.
Priority:

3

Requirement:

supported_features

class PerfToolSuite
Description:
This test suite is to generate performance data with the perf tool.
Platform:

Azure, Ready

Area:

perf_tool

Category:

performance

perf_messaging()
Description:
This test case uses the perf tool to measure messaging performance.
The steps are:
1. Run the perf messaging benchmark 20 times (see the sketch after these steps).
2. Calculate the average, min, and max time of the 20 runs.
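A sketch of the measurement loop (assumed helper; it parses the "Total time:"
line that perf bench sched messaging prints):

    # perf_messaging_stats.py - run the benchmark 20 times, report stats.
    import re
    import subprocess

    times = []
    for _ in range(20):
        out = subprocess.run(
            ["perf", "bench", "sched", "messaging"],
            capture_output=True, text=True, check=True,
        ).stdout
        times.append(float(re.search(r"Total time:\s*([\d.]+)", out).group(1)))

    print(f"avg={sum(times) / len(times):.3f}s "
          f"min={min(times):.3f}s max={max(times):.3f}s")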
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

perf_epoll()
Description:
This test case uses the perf tool to measure epoll performance.
The steps are:
1. Run the perf epoll benchmark 20 times.
2. Calculate the average, min, and max operations of the 20 runs.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

class KVMPerformance
Description:
This test suite is to validate the performance of nested VMs using the FIO tool.
Platform:

Azure, Ready

Area:

nested

Category:

performance

perf_nested_kvm_ntttcp_private_bridge()
Description:
This test case runs the ntttcp test on two nested VMs on the same L1 guest,
connected with a private bridge.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

perf_nested_hyperv_storage_singledisk()
Description:
This test case is to validate the performance of a nested VM in hyper-v
using the fio tool, with a single L1 data disk attached to the L2 VM.
Priority:

3

Requirement:

supported_features[NestedVirtualization]

perf_nested_kvm_netperf_pps_nat()
Description:
This script runs the netperf test on two nested VMs on different L1 guests,
connected with NAT.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

perf_nested_kvm_storage_singledisk()
Description:
This test case is to validate the performance of a nested VM using the fio tool,
with a single L1 data disk attached to the L2 VM.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

perf_nested_hyperv_storage_multidisk()
Description:
This test case is to validate the performance of a nested VM using the fio tool,
with a raid0 configuration of 6 L1 data disks attached to the L2 VM.
Priority:

3

Requirement:

supported_features[NestedVirtualization]

perf_nested_hyperv_ntttcp_different_l1_nat()
Description:
This script runs the ntttcp test on two nested VMs on different L1 guests,
connected with NAT.
Priority:

3

Requirement:

supported_features[NestedVirtualization]

perf_nested_kvm_storage_multidisk()
Description:
This test case is to validate the performance of a nested VM using the fio tool,
with a raid0 configuration of 6 L1 data disks attached to the L2 VM.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

perf_nested_kvm_ntttcp_different_l1_nat()
Description:
This script runs the ntttcp test on two nested VMs on different L1 guests,
connected with NAT.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

class NvmePerformace
Description:
This test suite is to validate NVMe disk performance of Linux VM using fio tool.
Platform:

Azure, Ready

Area:

nvme

Category:

performance

perf_nvme()
Description:
This test case uses fio to test NVMe disk performance
using ‘libaio’ as the ioengine.
Priority:

3

Requirement:

supported_features

perf_nvme_io_uring()
Description:
This test case uses fio to test NVMe disk performance
using ‘io_uring’ as the ioengine.
Priority:

3

Requirement:

supported_features[Nvme]

class NetworkPerformace
Description:
This test suite is to validate linux network performance.
Platform:

Azure, Ready

Area:

network

Category:

performance

perf_tcp_iperf_sriov()
Description:
This test case uses iperf3 to test sriov tcp network throughput.
Priority:

3

Requirement:

network_interface

perf_udp_iperf_synthetic()
Description:
This test case uses iperf to test synthetic udp network throughput.
Priority:

3

Requirement:

network_interface

perf_tcp_latency_synthetic()
Description:
This test case uses lagscope to test synthetic network latency.
Priority:

2

Requirement:

network_interface

perf_udp_1k_ntttcp_synthetic()
Description:
This test case uses ntttcp to test synthetic udp network throughput.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

perf_tcp_max_pps_synthetic()
Description:
This test case uses sar to test synthetic network PPS (Packets Per Second)
when running netperf with multiple ports.
Priority:

3

Requirement:

network_interface

perf_sockperf_latency_tcp_sriov_busy_poll()
Description:
This test case uses sockperf to test sriov network latency.
Priority:

3

Requirement:

network_interface

perf_tcp_iperf_synthetic()
Description:
This test case uses iperf3 to test synthetic tcp network throughput.
Priority:

3

Requirement:

network_interface

perf_sockperf_latency_tcp_synthetic_busy_poll()
Description:
This test case uses sockperf to test synthetic network latency.
Priority:

3

Requirement:

network_interface

perf_udp_1k_ntttcp_sriov()
Description:
This test case uses ntttcp to test sriov udp network throughput.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

perf_tcp_single_pps_synthetic()
Description:
This test case uses sar to test synthetic network PPS (Packets Per Second)
when running netperf with single port.
Priority:

3

Requirement:

network_interface

perf_tcp_single_pps_sriov()
Description:
This test case uses sar to test sriov network PPS (Packets Per Second)
when running netperf with single port.
Priority:

3

Requirement:

network_interface

perf_tcp_ntttcp_sriov()
Description:
This test case uses ntttcp to test sriov tcp network throughput.
Priority:

3

Requirement:

node

perf_sockperf_latency_udp_synthetic()
Description:
This test case uses sockperf to test synthetic network latency.
Priority:

3

Requirement:

network_interface

perf_tcp_ntttcp_128_connections_synthetic()
Description:
This test case uses ntttcp to test synthetic tcp network throughput for
128 connections.
Priority:

2

Requirement:

network_interface

perf_udp_iperf_sriov()
Description:
This test case uses iperf to test sriov udp network throughput.
Priority:

3

Requirement:

network_interface

perf_tcp_ntttcp_synthetic()
Description:
This test case uses ntttcp to test synthetic tcp network throughput.
Priority:

3

Requirement:

node

perf_sockperf_latency_udp_sriov_busy_poll()
Description:
This test case uses sockperf to test sriov network latency.
Priority:

3

Requirement:

network_interface

perf_sockperf_latency_tcp_sriov()
Description:
This test case uses sockperf to test sriov network latency.
Priority:

3

Requirement:

network_interface

perf_sockperf_latency_tcp_synthetic()
Description:
This test case uses sockperf to test synthetic network latency.
Priority:

3

Requirement:

network_interface

perf_tcp_latency_sriov()
Description:
This test case uses lagscope to test sriov network latency.
Priority:

2

Requirement:

network_interface

perf_sockperf_latency_udp_sriov()
Description:
This test case uses sockperf to test sriov network latency.
Priority:

3

Requirement:

network_interface

perf_tcp_max_pps_sriov()
Description:
This test case uses sar to test sriov network PPS (Packets Per Second)
when running netperf with multiple ports.
Priority:

3

Requirement:

network_interface

perf_sockperf_latency_udp_synthetic_busy_poll()
Description:
This test case uses sockperf to test synthetic network latency.
Priority:

3

Requirement:

network_interface

class StoragePerformance
Description:
This test suite is to validate premium SSD data disk performance of a Linux VM
using the fio tool.
Platform:

Azure, Ready

Area:

storage

Category:

performance

perf_storage_generic_fio_test()
Description:
This test case uses fio to test data disk performance.
This gives the flexibility to run FIO via runbook parameters.
If nothing is passed, it will run FIO with default parameters.
There is no system resource information in the FIO man page or FIO readthedocs.
We have faced OOM with 512 MB memory, and
we deploy the host azure VM with 64 GB in the pipeline.
So, the memory requirement is kept at 2 GB.
Priority:

3

Requirement:

node

perf_storage_over_nfs_synthetic_tcp_4k()
Description:
This test case uses fio to test the performance of an nfs server over TCP with
VMs initialized with a synthetic network interface.
Priority:

3

Requirement:

network_interface

perf_storage_over_nfs_sriov_tcp_4k()
Description:
This test case uses fio to test the performance of an nfs server over TCP with
VMs initialized with an SRIOV network interface.
Priority:

3

Requirement:

network_interface

perf_premiumv2_datadisks_4k()
Description:
This test case uses fio to test premiumV2 disk performance with 4K block size.
Priority:

3

Requirement:

supported_features

perf_ultra_datadisks_1024k()
Description:
This test case uses fio to test ultra disk performance using 1024K block size.
Priority:

3

Requirement:

disk

perf_premiumv2_datadisks_1024k()
Description:
This test case uses fio to test premiumV2 disk performance using
1024K block size.
Priority:

3

Requirement:

supported_features

perf_premium_datadisks_io()
Description:
This test case uses fio to test vm with 24 data disks.
Priority:

3

Requirement:

disk

perf_storage_over_nfs_synthetic_udp_4k()
Description:
This test case uses fio to test the performance of an nfs server over UDP with
VMs initialized with a synthetic network interface.
Priority:

3

Requirement:

network_interface

perf_storage_over_nfs_sriov_udp_4k()
Description:
This test case uses fio to test the performance of an nfs server over UDP with
VMs initialized with an SRIOV network interface.
Priority:

3

Requirement:

network_interface

perf_premium_datadisks_4k()
Description:
This test case uses fio to test data disk performance with 4K block size.
Priority:

3

Requirement:

disk

perf_premium_datadisks_1024k()
Description:
This test case uses fio to test data disk performance using 1024K block size.
Priority:

3

Requirement:

disk

perf_ultra_datadisks_4k()
Description:
This test case uses fio to test ultra disk performance with 4K block size.
Priority:

3

Requirement:

disk

class CPUSuite
Description:
This test suite is used to run cpu related tests, setting 16 cpu cores as the
minimal requirement, since the test cases rely on idle cpus to do the testing.
Platform:

Azure, Ready

Area:

cpu

Category:

functional

verify_cpu_offline_channel_add()
Description:
This test will check that channels added to the synthetic network
adapter do not handle interrupts on an offline cpu.
Steps:
1. Get the list of offline CPUs.
2. Add channels to the synthetic network adapter.
3. Verify that the channels were added to the synthetic network adapter.
4. Verify that the added channels do not handle interrupts on an offline cpu.
Priority:

4

Requirement:

min_core_count=16

verify_cpu_offline_storage_workload()
Description:
This test will check cpu hotplug with storage workload.
The cpu hotplug steps are the same as in the verify_cpu_hot_plug test case.
Priority:

4

Requirement:

min_core_count=16

verify_cpu_hot_plug()
Description:
This test will check cpu hotplug.
Steps:
1. Skip the test case when the kernel doesn’t support cpu hotplug.
2. Set all vmbus channels to target cpu 0.
When the kernel version >= 5.8 and the vmbus version >= 4.1, the code supports
changing a vmbus channel’s target cpu by setting the cpu number in the file
/sys/bus/vmbus/devices/<device id>/channels/<channel id>/cpu.
Then all cpus except cpu 0 are in the idle state.
2.1 Save the raw cpu number of each channel for restoring after testing.
2.2 Make all vmbus channel interrupts go to cpu 0.
3. Collect the idle cpus which can be used for hotplug.
If the kernel supports step 2, the only in-use cpu is now cpu 0.
Exclude the in-use cpu from the list of all cpus to get the idle cpu set which
can be set offline and online.
If the kernel doesn’t support step 2,
the set of idle cpus relies heavily on the cpu usage at that time.
4. Skip testing when there is no idle cpu that can be set offline and online.
5. Set an idle cpu offline, then back online (see the sketch after these steps).
6. Restore each vmbus channel’s target cpu back to the original state.
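Steps 2 and 5 boil down to sysfs writes like these (a sketch, assuming root;
the device and channel ids are placeholders, and cpu0 is typically not
hot-pluggable):

    # cpu_offline_online.py - steer a vmbus channel, then hotplug a cpu.
    from pathlib import Path

    def set_channel_cpu(device_id: str, channel_id: str, cpu: int) -> None:
        # Step 2: retarget a vmbus channel's interrupts (kernel >= 5.8).
        Path(f"/sys/bus/vmbus/devices/{device_id}/channels/"
             f"{channel_id}/cpu").write_text(str(cpu))

    def set_online(cpu: int, online: bool) -> None:
        # Step 5: offline an idle cpu, then bring it back online.
        Path(f"/sys/devices/system/cpu/cpu{cpu}/online").write_text(
            "1" if online else "0")

    set_online(1, False)
    set_online(1, True)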
Priority:

3

Requirement:

min_core_count=32
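
A sketch of the sysfs operations behind these steps, assuming a kernel that
supports both cpu hotplug and vmbus channel retargeting:

    from pathlib import Path

    def set_cpu_online(cpu: int, online: bool) -> None:
        """Toggle a CPU via sysfs hotplug (needs root; cpu0 is not hot-pluggable)."""
        Path(f"/sys/devices/system/cpu/cpu{cpu}/online").write_text(
            "1" if online else "0"
        )

    def retarget_channel(device: str, channel: str, cpu: int) -> None:
        """Point a vmbus channel's interrupts at `cpu` (kernel >= 5.8, vmbus >= 4.1)."""
        path = Path(f"/sys/bus/vmbus/devices/{device}/channels/{channel}/cpu")
        path.write_text(str(cpu))

    # Rough shape of the test: retarget every channel to cpu 0, then for each
    # idle cpu: set_cpu_online(cpu, False); set_cpu_online(cpu, True);
    # finally restore the saved channel targets.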

verify_cpu_offline_network_workload()
Description:
This test will check cpu hotplug with a network workload.
The cpu hotplug steps are the same as in the verify_cpu_hot_plug test case.
Priority:

4

Requirement:

min_core_count=16

class CPUStressSuite
Description:
This test suite is used to run cpu related tests under stress.
Platform:

Azure, Ready

Area:

cpu

Category:

stress

stress_cpu_hot_plug()
Description:
This test will check cpu hotplug under stress.
For detailed steps, refer to the verify_cpu_hot_plug case.
Priority:

3

Requirement:

min_core_count=32

class LtpTestsuite
Description:
This test suite is used to run LTP related tests.
Platform:

Azure, Ready

Area:

ltp

Category:

community

verify_ltp_lite()
Description:
This test case will run the LTP lite tests.
1. When ltp_source_file (a downloaded LTP tarball) is specified in the runbook
.yml, the case extracts that tar and runs LTP from it, instead of downloading
at runtime. Example:
    - name: ltp_source_file
      value: <path_to_ltp.tar.xz>
      is_case_visible: true
2. When ltp_source_file is not in the .yml, the case clones LTP from GitHub
at ltp_tests_git_tag.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

class RustVmmTestSuite
Description:
This test suite is for executing the rust-vmm/mshv tests
Platform:

Azure, Ready

Area:

rust-vmm

Category:

community

verify_rust_vmm_mshv_tests()
Description:
Runs rust-vmm/mshv tests
Priority:

3

class AzureCVMAttestationTestSuite
Description:
This test suite is for generating a CVM attestation report, for Azure CVMs only.
Platform:

Azure, Ready

Area:

cvm

Category:

functional

verify_nested_cvm_attestation_report()
Description:
Runs the get-snp-report tool to generate
and verify an attestation report for a nested CVM.
Priority:

3

Requirement:

supported_features

verify_azure_cvm_attestation_report()
Description:
Runs the get-snp-report tool to generate
an attestation report for an Azure CVM.
Priority:

3

Requirement:

supported_platform_type[AZURE]

class NestedCVMAttestationTestSuite
Description:
This test suite is for generating and verifying
CVM attestation reports, for nested CVMs only.
Platform:

Azure, Ready

Area:

cvm

Category:

functional

verify_nested_cvm_attestation_report()
Description:
Runs the get-snp-report tool to generate
and verify an attestation report for a nested CVM.
Priority:

3

Requirement:

supported_features

verify_azure_cvm_attestation_report()
Description:
Runs the get-snp-report tool to generate
an attestation report for an Azure CVM.
Priority:

3

Requirement:

supported_platform_type[AZURE]

class CVMAzureHostTestSuite
Description:
This test suite is for azure host vm pre-checks
for nested-cvm cases.
Platform:

Azure, Ready

Area:

cvm

Category:

functional

verify_azure_vm_snp_enablement()
Description:
Runs the dmesg tool to get kernel logs
and verify that the Azure VM is SNP enabled.
Priority:

3

Requirement:

supported_features

class CdromSuite
Description:
Tests to check the behavior of the virtual cdrom device in VMs.
Platform:

Azure, Ready

Area:

cdrom

Category:

functional

verify_cdrom_device_status_code()
Description:
Test to check the installation ISO is unloaded
after provisioning and rebooting a new VM.
Priority:

2

Requirement:

supported_platform_type[AZURE]

class StressNgTestSuite
Description:
A suite for running the various classes of stressors provided
by stress-ng.
Platform:

Azure, Ready

Area:

stress-ng

Category:

stress

stress_ng_io_stressors()
Description:
Runs stress-ng’s ‘io’ class stressors for 60s each.
Priority:

4

stress_ng_cpu_stressors()
Description:
Runs stress-ng’s ‘cpu’ class stressors for 60s each.
Priority:

4

stress_ng_jobfile()
Description:
Runs a stress-ng jobfile. The path to the jobfile must be specified using a
runbook variable named “stress_ng_jobs”. For more info about jobfiles, refer
to the stress-ng documentation. (A sketch of driving a jobfile follows this
entry.)
Priority:

4
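
A sketch of how such a jobfile can be driven; the jobfile contents here are a
hypothetical example (one stress-ng option per line, no leading dashes):

    import subprocess
    import tempfile

    # Hypothetical jobfile: run stressors one after another for 60s each.
    JOBFILE = """\
    run sequential
    metrics-brief
    timeout 60s
    cpu 4
    cpu-method matrixprod
    """

    with tempfile.NamedTemporaryFile("w", suffix=".job", delete=False) as f:
        f.write(JOBFILE)
        job_path = f.name

    subprocess.run(["stress-ng", "--job", job_path], check=True)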

stress_ng_network_stressors()
Description:
Runs stress-ng’s ‘network’ class stressors for 60s each.
Priority:

4

stress_ng_memory_stressors()
Description:
Runs stress-ng’s ‘memory’ class stressors for 60s each.
Priority:

4

stress_ng_vm_stressors()
Description:
Runs stress-ng’s ‘vm’ class stressors for 60s each.
Priority:

4

class DpdkPerformance
Description:
This test suite is to validate DPDK performance
Platform:

Azure, Ready

Area:

dpdk

Category:

performance

perf_dpdk_minimal_failsafe_pmd()
Description:
DPDK Performance: failsafe mode, minimal core count
Priority:

3

Requirement:

unsupported_features[Gpu, Infiniband]

perf_dpdk_multi_queue_netvsc_pmd()
Description:
DPDK Performance: direct use of VF, multiple tx/rx queues
Priority:

3

Requirement:

unsupported_features[Gpu, Infiniband]

perf_dpdk_multi_queue_failsafe_pmd()
Description:
DPDK Performance: failsafe mode, multiple tx/rx queues
Priority:

3

Requirement:

unsupported_features[Gpu, Infiniband]

perf_dpdk_send_only_failsafe_pmd()
Description:
DPDK Performance: failsafe mode, send only
Priority:

3

Requirement:

unsupported_features[Gpu, Infiniband]

perf_dpdk_send_only_netvsc_pmd()
Description:
DPDK Performance: netvsc mode, send only
Priority:

3

Requirement:

unsupported_features[Gpu, Infiniband]

perf_dpdk_l3fwd_ntttcp_tcp()
Description:
Run the L3 forwarding perf test for DPDK.
This test creates a DPDK port forwarding setup between
two NICs on the same VM. It forwards packets from a sender on
subnet_a to a receiver on subnet_b. Without l3fwd,
packets will not be able to jump the subnets. This imitates
a network virtual appliance setup, firewall, or other data plane
tool for managing network traffic with DPDK.
Priority:

3

Requirement:

unsupported_features[Gpu, Infiniband]

perf_dpdk_minimal_netvsc_pmd()
Description:
DPDK Performance: netvsc mode, minimal core count
Priority:

3

Requirement:

unsupported_features[Gpu, Infiniband]

class Dpdk
Description:
This test suite checks DPDK functionality
Platform:

Azure, Ready

Area:

dpdk

Category:

functional

verify_dpdk_vpp()
Description:
Verify VPP is able to detect Azure network interfaces.
1. Run the fd.io VPP install scripts.
2. Install VPP from their repositories.
3. Start the VPP service.
4. Check that Azure interfaces are detected by VPP.
Priority:

4

Requirement:

unsupported_features[Gpu, Infiniband]

verify_uio_binding()
Description:
UIO basic functionality test.
- Bind the interface to uio_hv_generic.
- Check that the sysfs entry is created.
- Unbind the interface.
- Check that the driver is unloaded.
- Rebind to the original driver.
(A sketch of the sysfs bind/unbind steps follows this entry.)
Priority:

2

Requirement:

supported_features[IsolatedResource]
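
A sketch of the bind/unbind flow, assuming the standard vmbus sysfs interface
and the Hyper-V network device class GUID documented for DPDK’s netvsc PMD:

    from pathlib import Path
    import subprocess

    # Hyper-V synthetic network device class GUID (per DPDK netvsc docs).
    NET_DEVICE_CLASS = "f8615163-df3e-46c5-913f-f2d2f965ed0e"

    def bind_to_uio(dev_uuid: str) -> None:
        """Rebind a vmbus NIC (by device uuid) from hv_netvsc to uio_hv_generic."""
        subprocess.run(["modprobe", "uio_hv_generic"], check=True)
        drivers = Path("/sys/bus/vmbus/drivers")
        (drivers / "uio_hv_generic" / "new_id").write_text(NET_DEVICE_CLASS)
        (drivers / "hv_netvsc" / "unbind").write_text(dev_uuid)
        (drivers / "uio_hv_generic" / "bind").write_text(dev_uuid)
        # The test then checks the device's driver symlink and /dev/uioX node,
        # unbinds, verifies the driver is gone, and rebinds to hv_netvsc.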

verify_dpdk_send_receive_gb_hugepages_failsafe()
Description:
Tests a basic sender/receiver scenario with the default failsafe driver setup.
Sender sends the packets, receiver receives them.
We check both to make sure the received traffic is within the expected
order-of-magnitude.
Test uses 1GiB hugepages.
Priority:

2

Requirement:

unsupported_features[Gpu, Infiniband]

verify_dpdk_ovs()
Description:
Install and run OVS+DPDK functional tests
Priority:

4

Requirement:

disk

verify_dpdk_nff_go()
Description:
Install and run ci test for NFF-Go on ubuntu
Priority:

4

Requirement:

supported_features[IsolatedResource]

verify_dpdk_send_receive_failsafe()
Description:
Tests a basic sender/receiver scenario with the default failsafe driver setup.
Sender sends the packets, receiver receives them.
We check both to make sure the received traffic is within the expected
order-of-magnitude. (A sketch of the testpmd sender/receiver roles follows
this entry.)
Priority:

2

Requirement:

min_count=2
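
A sketch of the two testpmd roles on the sender and receiver VMs; the core
list and stats period are illustrative assumptions:

    import subprocess

    def testpmd_cmd(role: str, cores: str = "0-2") -> list[str]:
        """Build a dpdk-testpmd command for a one-directional traffic test.

        role: 'txonly' on the sender VM, 'rxonly' on the receiver VM.
        """
        assert role in ("txonly", "rxonly")
        return [
            "dpdk-testpmd",
            "-l", cores,              # EAL core list
            "--",                     # separator between EAL and testpmd args
            f"--forward-mode={role}",
            "--stats-period", "1",    # print port stats every second
        ]

    # sender VM:   subprocess.Popen(testpmd_cmd("txonly"))
    # receiver VM: subprocess.Popen(testpmd_cmd("rxonly"))
    # The test parses the per-port stats and compares tx vs rx packet counts.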

verify_dpdk_l3fwd_ntttcp_tcp()
Description:
Run the L3 forwarding test for DPDK.
This test creates a DPDK port forwarding setup between
two NICs on the same VM. It forwards packets from a sender on
subnet_a to a receiver on subnet_b. Without l3fwd,
packets will not be able to jump the subnets. This imitates
a network virtual appliance setup, firewall, or other data plane
tool for managing network traffic with DPDK.
Priority:

3

Requirement:

unsupported_features[Gpu, Infiniband]
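
For illustration, the forwarding half of this setup is the DPDK l3fwd sample
application; a hypothetical invocation (the binary name, core list, and
port/queue layout are assumptions, not the suite’s exact arguments):

    import subprocess

    # Port 0 (subnet_a) handled by lcore 1, port 1 (subnet_b) by lcore 2;
    # -p 0x3 enables both ports.
    cmd = [
        "dpdk-l3fwd",
        "-l", "1-2",                    # EAL cores
        "--",
        "-p", "0x3",                    # port mask
        "--config", "(0,0,1),(1,0,2)",  # (port, queue, lcore) triples
    ]
    subprocess.run(cmd, check=True)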

verify_dpdk_send_receive_netvsc()
Description:
Tests a basic sender/receiver scenario with the direct netvsc pmd setup.
Sender sends the packets, receiver receives them.
We check both to make sure the received traffic is within the expected
order-of-magnitude.
Priority:

2

Requirement:

min_count=2

verify_dpdk_multiprocess()
Description:
Build and run the DPDK multiprocess client/server sample application.
Requires 3 NICs, since the client/server needs two ports plus 1 NIC for LISA.
Priority:

4

Requirement:

supported_features[IsolatedResource]

verify_dpdk_build_gb_hugepages_failsafe()
Description:
failsafe version with 1GiB hugepages.
This test case checks DPDK can be built and installed correctly.
Prerequisites: accelerated networking must be enabled.
The VM should have at least two network interfaces,
with one interface for management.
Priority:

2

Requirement:

unsupported_features[Gpu, Infiniband]

verify_dpdk_sriov_rescind_failover_receiver()
Description:
test sriov failsafe during vf revoke (receive side)
Priority:

2

Requirement:

supported_features[IsolatedResource]

verify_dpdk_ring_ping()
Description:
This test runs the dpdk ring ping utility to measure the maximum latency
for 99.999 percent of packets during the test run. The maximum should be
under 200000 nanoseconds (.2 milliseconds).
Not dependent on any specific PMD.
Priority:

4

Requirement:

supported_features[IsolatedResource]

verify_dpdk_build_failsafe()
Description:
failsafe version with 2MB hugepages.
This test case checks DPDK can be built and installed correctly.
Prerequisites: accelerated networking must be enabled.
The VM should have at least two network interfaces,
with one interface for management.
Priority:

2

Requirement:

unsupported_features[Gpu, Infiniband]

verify_dpdk_l3fwd_ntttcp_tcp_gb_hugepages()
Description:
Run the l3fwd test using GiB hugepages.
This test creates a DPDK port forwarding setup between
two NICs on the same VM. It forwards packets from a sender on
subnet_a to a receiver on subnet_b. Without l3fwd,
packets will not be able to jump the subnets. This imitates
a network virtual appliance setup, firewall, or other data plane
tool for managing network traffic with DPDK.
Priority:

3

Requirement:

unsupported_features[Gpu, Infiniband]

verify_dpdk_sriov_rescind_failover_send_only()
Description:
test sriov failsafe during vf revoke (send only version)
Priority:

2

Requirement:

supported_features[IsolatedResource]

verify_dpdk_send_receive_gb_hugepages_netvsc()
Description:
Tests a basic sender/receiver scenario with the direct netvsc pmd setup.
Sender sends the packets, receiver receives them.
We check both to make sure the received traffic is within the expected
order-of-magnitude.
Test uses 1GiB hugepages.
Priority:

2

Requirement:

unsupported_features[Gpu, Infiniband]

verify_dpdk_send_receive_multi_txrx_queue_failsafe()
Description:
Tests a basic sender/receiver scenario with the default failsafe driver setup.
Sender sends the packets, receiver receives them.
We check both to make sure the received traffic is within the
expected order-of-magnitude.
Priority:

2

Requirement:

min_count=2

verify_dpdk_build_gb_hugepages_netvsc()
Description:
netvsc pmd version with 1GiB hugepages.
This test case checks DPDK can be built and installed correctly.
Prerequisites: accelerated networking must be enabled.
The VM should have at least two network interfaces,
with one interface for management.
Priority:

2

Requirement:

unsupported_features[Gpu, Infiniband]

verify_dpdk_build_netvsc()
Description:
netvsc pmd version.
This test case checks DPDK can be built and installed correctly.
Prerequisites: accelerated networking must be enabled.
The VM should have at least two network interfaces,
with one interface for management.
Priority:

2

Requirement:

unsupported_features[Gpu, Infiniband]

verify_dpdk_send_receive_multi_txrx_queue_netvsc()
Description:
Tests a basic sender/receiver scenario with the direct netvsc pmd setup.
Sender sends the packets, receiver receives them.
We check both to make sure the received traffic is within the expected
order-of-magnitude.
Priority:

2

Requirement:

min_count=2

class KvmUnitTestSuite
Description:
This test suite is for executing the community maintained KVM tests.
Platform:

Azure, Ready

Area:

kvm

Category:

community

verify_kvm_unit_tests()
Description:
Runs kvm-unit-tests.
Priority:

3

class Xfstesting
Description:
This test suite is to validate different types of data disks on a Linux VM
using xfstests.
Platform:

Azure, Ready

Area:

storage

Category:

community

verify_azure_file_share()
Description:
This test case will run cifs xfstests testing against
an Azure file share.
Downgrading priority from 3 to 5. The file share relies on the
storage account key, which we cannot use currently.
Will change it back once file share works with MSI.
Priority:

5

Requirement:

unsupported_os[BSD, Windows]

verify_xfs_standard_datadisk()
Description:
This test case will run xfs xfstests testing against
standard data disk with xfs type system.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]
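
All of these datadisk cases follow the same xfstests pattern: write a
local.config pointing at the disk under test, then run the check script with
the relevant group. A minimal sketch; the device paths, mountpoints, install
location, and test group are hypothetical assumptions:

    import subprocess
    from pathlib import Path

    # The suite derives these from the provisioned data disk and the
    # filesystem under test (xfs/ext4/btrfs/cifs).
    LOCAL_CONFIG = """\
    export TEST_DEV=/dev/sdc1
    export TEST_DIR=/mnt/test
    export SCRATCH_DEV=/dev/sdc2
    export SCRATCH_MNT=/mnt/scratch
    """

    xfstests = Path("/opt/xfstests")
    (xfstests / "local.config").write_text(LOCAL_CONFIG)
    # Run the quick group; each case selects its own group of tests.
    subprocess.run(["./check", "-g", "quick"], cwd=xfstests, check=True)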

verify_generic_standard_datadisk()
Description:
This test case will run generic xfstests testing against
standard data disk with xfs type system.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

verify_btrfs_standard_datadisk()
Description:
This test case will run btrfs xfstests testing against
standard data disk with btrfs type system.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

verify_btrfs_nvme_datadisk()
Description:
This test case will run btrfs xfstests testing against
nvme data disk with btrfs type system.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

verify_generic_ext4_standard_datadisk()
Description:
This test case will run generic xfstests testing against
standard data disk with ext4 type system.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

verify_xfs_nvme_datadisk()
Description:
This test case will run xfs xfstests testing against
nvme data disk with xfs type system.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

verify_ext4_standard_datadisk()
Description:
This test case will run ext4 xfstests testing against
standard data disk with ext4 type system.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

verify_ext4_nvme_datadisk()
Description:
This test case will run ext4 xfstests testing against
nvme data disk with ext4 type system.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

verify_generic_ext4_nvme_datadisk()
Description:
This test case will run generic xfstests testing against
nvme data disk with ext4 type system.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

verify_generic_nvme_datadisk()
Description:
This test case will run generic xfstests testing against
nvme data disk with xfs type system.
Priority:

3

Requirement:

unsupported_os[BSD, Windows]

class Nvme
Description:
This test suite is to validate NVMe disk on Linux VM.
Platform:

Azure, Ready

Area:

nvme

Category:

functional

verify_nvme_manage_ns()
Description:
This test case will run commands 2-5; each command is expected to fail or
succeed based on the capabilities of the device.
1. Use the ‘nvme id-ctrl device’ command to list the capabilities of the device.
1.1 When ‘Format NVM Supported’ shows up in the output of ‘nvme id-ctrl device’,
the nvme disk can be formatted; otherwise, it can’t be formatted.
1.2 When ‘NS Management and Attachment Supported’ shows up in the output of
‘nvme id-ctrl device’, nvme namespaces can be created, deleted and detached;
otherwise they can’t be managed.
2. nvme format namespace - format a namespace.
3. nvme create-ns namespace - create a namespace.
4. nvme delete-ns -n 1 namespace - delete a namespace.
5. nvme detach-ns -n 1 namespace - detach a namespace.
(A sketch of the capability check follows this entry.)
Priority:

3

Requirement:

supported_features[Nvme]
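
A sketch of the capability check in step 1, using nvme-cli’s -H
(human-readable) output; the device path is a placeholder:

    import subprocess

    def device_capabilities(dev: str = "/dev/nvme0") -> str:
        """Return nvme-cli's decoded controller capabilities (-H output)."""
        return subprocess.run(
            ["nvme", "id-ctrl", dev, "-H"],
            capture_output=True, text=True, check=True,
        ).stdout

    caps = device_capabilities()
    can_format = "Format NVM Supported" in caps
    can_manage_ns = "NS Management and Attachment Supported" in caps
    # Commands 2-5 are only expected to succeed when the matching
    # capability bit is reported.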

verify_nvme_fstrim()
Description:
This test case will
1. Create a partition, xfs filesystem and mount it.
2. Check how much the mountpoint is trimmed before operations.
3. Create a 300 GB file ‘data’ using the dd command in the partition.
4. Check how much the mountpoint is trimmed after creating the file.
5. Delete the file ‘data’.
6. Check how much the mountpoint is trimmed after deleting the file,
and compare the final fstrim status with initial fstrim status.
Priority:

3

Requirement:

supported_features[Nvme]
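
A sketch of the fstrim measurement taken before and after each operation;
the mountpoint is a placeholder:

    import subprocess

    def trimmed_report(mountpoint: str) -> str:
        """Run fstrim verbosely and return its report of how much was trimmed."""
        out = subprocess.run(
            ["fstrim", "-v", mountpoint],
            capture_output=True, text=True, check=True,
        ).stdout
        return out.strip()  # e.g. "/mnt: 299.9 GiB (...) trimmed"

    before = trimmed_report("/mnt")
    # ... dd the large file, re-check, delete it, re-check, then compare
    # the final report with the initial one.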

verify_nvme_function()
Description:
This test case will do the following things for each NVMe device.
1. Get the number of errors from nvme-cli before operations.
2. Create a partition, filesystem and mount it.
3. Create a txt file on the partition, content is ‘TestContent’.
4. Create a file ‘data’ on the partition, get the md5sum value.
5. Umount and remount the partition.
6. Get the txt file content, compare the value.
7. Compare the number of errors from nvme-cli after operations.
Priority:

2

Requirement:

supported_features[Nvme]
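
A minimal sketch of the remount-and-compare check (steps 3-6); the device
and mountpoint names are hypothetical:

    import hashlib
    import subprocess

    def md5_of(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    subprocess.run(["mount", "/dev/nvme0n1p1", "/mnt"], check=True)
    with open("/mnt/hello.txt", "w") as f:
        f.write("TestContent")
    before = md5_of("/mnt/hello.txt")
    subprocess.run(["umount", "/mnt"], check=True)
    subprocess.run(["mount", "/dev/nvme0n1p1", "/mnt"], check=True)
    # Content must survive the remount unchanged.
    assert md5_of("/mnt/hello.txt") == before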

verify_nvme_rescind()
Description:
This test case will
1. Disable NVMe devices.
2. Enable PCI devices.
3. Get NVMe device slots.
4. Check that the NVMe devices are back after rescan.
(A sketch of the remove/rescan mechanics follows this entry.)
Priority:

2

Requirement:

supported_features[Nvme]
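
A sketch of the sysfs remove/rescan mechanics these rescind cases rely on;
the PCI address is a placeholder:

    from pathlib import Path

    def remove_pci_device(addr: str) -> None:
        """Hot-remove a PCI device (e.g. addr='0001:00:00.0') via sysfs."""
        Path(f"/sys/bus/pci/devices/{addr}/remove").write_text("1")

    def rescan_pci_bus() -> None:
        """Ask the kernel to rediscover removed devices."""
        Path("/sys/bus/pci/rescan").write_text("1")

    # Shape of the test: record the NVMe PCI slots, remove them, rescan,
    # and verify the same slots (and /dev/nvme* nodes) come back.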

verify_nvme_max_disk()
Description:
This case runs the nvme_basic_validation test against 10 NVMe disks.
The test steps are the same as in nvme_basic_validation.
Priority:

2

Requirement:

supported_features

verify_nvme_sriov_rescind()
Description:
This test case performs the following steps to verify that the VM works
normally while NVMe and SRIOV devices are disabled and re-enabled.
1. Disable PCI devices.
2. Enable PCI devices.
3. Get PCI devices slots.
4. Check PCI devices are back after rescan.
Priority:

2

Requirement:

supported_features[Nvme]

verify_nvme_function_unpartitioned()
Description:
This test case is the same as verify_nvme_function, except that it uses
unpartitioned disks.
This test case will do the following things for each NVMe device.
1. Get the number of errors from nvme-cli before operations.
2. Create a filesystem and mount it.
3. Create a txt file on the partition, content is ‘TestContent’.
4. Create a file ‘data’ on the partition, get the md5sum value.
5. Umount and remount the partition.
6. Get the txt file content, compare the value.
7. Compare the number of errors from nvme-cli after operations.
Priority:

2

Requirement:

supported_features[Nvme]

verify_nvme_basic()
Description:
This test case will
1. Get nvme devices and nvme namespaces from the /dev/ folder and
compare the count of nvme namespaces with the count of nvme devices.
2. Compare the count of nvme namespaces returned from ‘nvme list’
with the nvme namespaces listed under /dev/.
3. Compare the nvme device count returned from lspci
with the nvme devices listed under /dev/.
4. Azure platform only: the nvme device count should equal
the actual vCPU count / 8.
(A sketch of the /dev/-based counting follows this entry.)
Priority:

1

Requirement:

supported_features[Nvme]
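
A sketch of the /dev/-based counting in steps 1-4:

    import glob
    import re

    # /dev/nvme0 is a controller (device); /dev/nvme0n1 is a namespace.
    devices = [d for d in glob.glob("/dev/nvme*")
               if re.fullmatch(r"/dev/nvme\d+", d)]
    namespaces = [n for n in glob.glob("/dev/nvme*")
                  if re.fullmatch(r"/dev/nvme\d+n\d+", n)]

    # The test cross-checks these counts against `nvme list` and `lspci`,
    # and on Azure expects len(devices) == vcpu_count // 8.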

verify_nvme_blkdiscard()
Description:
This test case will
1. Create a partition, an xfs filesystem, and mount it.
2. Umount the mountpoint.
3. Run the blkdiscard command on the partition.
4. The remount command should fail after running blkdiscard.
(A sketch follows this entry.)
Priority:

3

Requirement:

supported_features[Nvme]
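
A minimal sketch of the whole case; the partition and mountpoint names are
hypothetical:

    import subprocess

    part = "/dev/nvme0n1p1"
    subprocess.run(["umount", "/mnt"], check=True)
    # Discard the whole partition, wiping the filesystem superblock.
    subprocess.run(["blkdiscard", part], check=True)
    # Remount is expected to fail after the discard.
    rc = subprocess.run(["mount", part, "/mnt"]).returncode
    assert rc != 0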