Cluster Testing
This guide outlines how to validate the resilience, performance, and quota enforcement of your Ceph cluster.
1. Health & Benchmarks
Before proceeding, ensure the cluster is healthy and establish a performance baseline.
1.1 Status Check
```bash
# Run on Admin Node
sudo ceph -s
```

Expected: HEALTH_OK, 3 monitors, 4 OSDs (up/in).
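If you script this check, you can poll until the cluster settles instead of eyeballing the output. A minimal sketch (the 5-second interval and 12-attempt limit are arbitrary choices, not Ceph defaults):

```bash
# Poll 'ceph health' until the cluster reports HEALTH_OK (up to ~60s)
for i in $(seq 1 12); do
    status=$(sudo ceph health | awk '{print $1}')
    if [ "$status" = "HEALTH_OK" ]; then
        echo "Cluster healthy"
        break
    fi
    echo "Current status: $status - retrying in 5s"
    sleep 5
done
```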
1.2 RADOS Benchmark
Test the raw speed of the storage backend.
```bash
# Write for 10 seconds; --no-cleanup keeps the objects for the read test
sudo rados bench -p cephfs_data 10 write --no-cleanup

# Sequential read test
sudo rados bench -p cephfs_data 10 seq
```

2. Quota Enforcement
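Because `--no-cleanup` leaves the benchmark objects in the pool, remove them once all bench runs are finished so they don't skew later capacity numbers:

```bash
# Delete the objects left behind by 'rados bench ... --no-cleanup'
sudo rados -p cephfs_data cleanup
```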
We will simulate a user trying to exceed their 10GB limit.
Setup
- Mount student_001's subvolume.
- Assume the mount point is /mnt/student_001.
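One way to set this up, sketched with the same placeholders used later in this guide; the filesystem name `cephfs` and subvolume name `student_001` are assumptions about your deployment:

```bash
# Look up the subvolume's path inside the filesystem
# (assumes the filesystem is named 'cephfs' and the subvolume 'student_001')
SUBVOL_PATH=$(sudo ceph fs subvolume getpath cephfs student_001)

# Mount it with student_001's own credentials
sudo mkdir -p /mnt/student_001
sudo mount -t ceph <MON_IP>:${SUBVOL_PATH} /mnt/student_001 \
     -o name=student_001,secret=<KEY_STUDENT_001>
```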
The Stress Test
Try to write an 11GB file into the 10GB quota space.
```bash
# Attempt to write 11 blocks of 1GB each
sudo dd if=/dev/zero of=/mnt/student_001/quota_test.img bs=1G count=11
```

Expected result: the command must fail with:

```
dd: error writing '/mnt/student_001/quota_test.img': Disk quota exceeded
```
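To confirm the limit itself, you can read CephFS's quota and usage extended attributes on the mount point (assumes the `getfattr` tool is installed and the quota was set on the subvolume root):

```bash
# Show the configured byte quota (10GB = 10737418240 bytes)
getfattr -n ceph.quota.max_bytes /mnt/student_001

# Show how many bytes the directory tree currently holds
getfattr -n ceph.dir.rbytes /mnt/student_001

# Remove the test file afterwards
sudo rm /mnt/student_001/quota_test.img
```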
3. User Isolation (Security)
Verify that student_001 cannot access student_002's volume.
Access Control Test
Try to mount student_002's path using student_001's secret key.
```bash
# Attempt mount with wrong credentials
sudo mount -t ceph <MON_IP>:<PATH_STUDENT_002> \
     /mnt/test_hack \
     -o name=student_001,secret=<KEY_STUDENT_001>
```

Expected result: the command should fail with:

```
mount error: Operation not permitted
```
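As a positive control, the same mount with matching credentials should succeed, confirming the failure above is caused by the wrong key rather than the mount syntax. A sketch, reusing this guide's placeholder convention (`<KEY_STUDENT_002>` is student_002's own secret):

```bash
# Same mount, but with student_002's own key: this should succeed
sudo mkdir -p /mnt/test_ok
sudo mount -t ceph <MON_IP>:<PATH_STUDENT_002> \
     /mnt/test_ok \
     -o name=student_002,secret=<KEY_STUDENT_002>

# Clean up
sudo umount /mnt/test_ok
```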