Cluster Testing

This guide outlines how to validate the resilience, performance, and quota enforcement of your Ceph cluster.

1. Health & Benchmarks

Before proceeding, ensure the cluster is healthy and establish a performance baseline.

1.1 Status Check

```bash
# Run on the admin node
sudo ceph -s
```

Expected: HEALTH_OK, 3 monitors, 4 OSDs (up/in).
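If you want to script this check rather than eyeball it, a minimal sketch is below. It assumes you feed it the one-line output of `sudo ceph health`; the sample strings are illustrative, not captured from a live cluster.

```bash
# Classify the one-line output of `sudo ceph health`.
# (Sample inputs below are illustrative, not from a real cluster.)
check_health() {
  case "$1" in
    HEALTH_OK*)   echo "healthy" ;;
    HEALTH_WARN*) echo "degraded: $1" ;;
    *)            echo "error: $1" ;;
  esac
}

check_health "HEALTH_OK"                  # healthy
check_health "HEALTH_WARN 1 osds down"    # degraded: ...
```

In practice you would call it as `check_health "$(sudo ceph health)"` and gate the rest of the test run on the result.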

1.2 RADOS Benchmark

Test the raw speed of the storage backend.

```bash
# Write for 10 seconds (default: 4 MB objects, 16 concurrent operations).
# --no-cleanup keeps the benchmark objects so the read test below has data.
sudo rados bench -p cephfs_data 10 write --no-cleanup

# Sequential read test (reads the objects left behind by the write above)
sudo rados bench -p cephfs_data 10 seq
```
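To compare runs, it helps to record the average bandwidth rather than the full report. A hedged helper for post-processing saved bench output is sketched below; the sample line mimics the summary `rados bench` prints, though exact spacing can vary between Ceph versions.

```bash
# Pull the average bandwidth figure out of saved `rados bench` output.
# The echoed sample line below is illustrative, not a real benchmark result.
avg_bw() {
  grep -i 'Bandwidth (MB/sec)' | awk -F: '{gsub(/ /, "", $2); print $2}'
}

echo "Bandwidth (MB/sec):     512.34" | avg_bw   # 512.34
```

Typical usage would be `sudo rados bench -p cephfs_data 10 write --no-cleanup | tee bench.log` followed by `avg_bw < bench.log`.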

2. Quota Enforcement

We will simulate a user trying to exceed their 10GB limit.

Setup

  1. Mount student_001's subvolume.
  2. Assume mount point: /mnt/student_001.
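The setup steps above can be sketched as a small helper that assembles the mount command for review before running it. The monitor IP, subvolume path, and secretfile location below are assumptions for illustration; in a real deployment the path comes from `sudo ceph fs subvolume getpath <fs_name> student_001`.

```bash
# Assemble (but do not run) the kernel mount command for a subvolume.
# All concrete values passed in are placeholders, not real cluster data.
build_mount_cmd() {
  local mon_ip="$1" subvol_path="$2" user="$3" mountpoint="$4"
  printf 'mount -t ceph %s:%s %s -o name=%s,secretfile=/etc/ceph/%s.secret\n' \
    "$mon_ip" "$subvol_path" "$mountpoint" "$user" "$user"
}

build_mount_cmd 192.168.1.10 /volumes/_nogroup/student_001 \
  student_001 /mnt/student_001
```

Printing the command first makes it easy to sanity-check the path and credentials before handing it to `sudo`.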

The Stress Test

Try to write an 11GB file into the 10GB quota space.

```bash
# Attempt to write 11 blocks of 1GB each
sudo dd if=/dev/zero of=/mnt/student_001/quota_test.img bs=1G count=11
```

Expected Result: The write must fail once the quota is reached. Note that CephFS quotas are enforced cooperatively by the client and are approximate, so dd may overshoot the limit slightly before reporting:

```
dd: error writing '/mnt/student_001/quota_test.img': Disk quota exceeded
```
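As a follow-up check, you can confirm that whatever dd managed to write stays at or near the 10GB quota. A sketch is below, demonstrated on a small scratch file in /tmp rather than the real mount point.

```bash
# Verify a file's size against the 10 GiB quota.
# Demonstrated on a scratch file; on the cluster you would point it at
# /mnt/student_001/quota_test.img instead.
QUOTA_BYTES=$((10 * 1024 * 1024 * 1024))   # 10 GiB

within_quota() {
  local size
  size=$(stat -c %s "$1")
  if [ "$size" -le "$QUOTA_BYTES" ]; then
    echo "OK: $size bytes <= quota"
  else
    echo "FAIL: $size bytes over quota"
  fi
}

truncate -s 1M /tmp/quota_demo.img     # stand-in for quota_test.img
within_quota /tmp/quota_demo.img       # OK: 1048576 bytes <= quota
```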

3. User Isolation (Security)

Verify that student_001 cannot access student_002's volume.

Access Control Test

Try to mount student_002's path using student_001's secret key.

```bash
# Attempt mount with wrong credentials
sudo mount -t ceph <MON_IP>:<PATH_STUDENT_002> \
  /mnt/test_hack \
  -o name=student_001,secret=<KEY_STUDENT_001>
```

Expected Result: The command should fail with:

```
mount error: Operation not permitted
```
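The access-control test above can be wrapped in a small helper that passes only when the command it runs fails, which makes it easy to add to a scripted test suite. This is a sketch; it is demonstrated with `false` standing in for the real mount invocation.

```bash
# Negative-test wrapper: the command under test must FAIL for the
# check to pass. `false` below stands in for the cross-tenant mount.
expect_denied() {
  if "$@" 2>/dev/null; then
    echo "SECURITY FAIL: access was allowed"
    return 1
  else
    echo "OK: access denied as expected"
  fi
}

expect_denied false   # OK: access denied as expected
```

On the cluster you would invoke it as `expect_denied sudo mount -t ceph ...` with student_001's key against student_002's path.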