Part 2: Distributed Lock
In this part, you will implement a distributed lock using your KV server as the coordination backend. The lock is defined in src/lock.rs.
Lock semantics
The Lock struct takes an Arc<dyn KvClient> and a lock name. It supports the two standard operations, acquire and release.
Lock state is stored as a key in the KV server. It is up to you to decide how to store the locked and unlocked states.
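One possible encoding (an assumption, not the required one): use the lock name as the key, and store the holder's unique client id when the lock is held and the empty string when it is free. A minimal sketch of that convention:

```rust
// Hypothetical encoding of lock state in a single KV entry.
// Key: the lock name. Value: "" when free, the holder's client id when held.
fn encode(owner: Option<&str>) -> String {
    owner.unwrap_or("").to_string()
}

fn decode(value: &str) -> Option<&str> {
    if value.is_empty() {
        None
    } else {
        Some(value)
    }
}
```

Any encoding works as long as a client can tell "free" apart from "held by me" and "held by someone else".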
Lock
Implement acquire and release in src/lock.rs.
Hints:
- You may uniquely identify lock clients by generating a random value using rand.
- You should consider the case when a put fails with Err(KVError::Version), and what this means for the lock acquisition.
- Like in Part 1, you may use tokio::time::sleep(Duration::from_millis(10)) to add a short delay between retry requests. Keep retry sleeps in the 10-50ms range.
- You are not required to handle the case when a client holding the lock crashes. Proper handling would use leases with timeouts.
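The hints above can be sketched as a version-conditioned acquire/release loop. This is not the required implementation: the mock store, error type, and method signatures below are stand-ins for your KvClient, and it is written synchronously for brevity (your real code is async and sleeps with tokio::time::sleep).

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::Duration;

// Hypothetical error type mirroring the lab's KVError.
#[derive(Debug)]
enum KvError {
    Version,
    NoKey,
}

// Minimal in-memory stand-in for the KV server: a put succeeds only when
// the caller supplies the key's current version (version 0 creates the key).
struct MockKv {
    data: Mutex<HashMap<String, (String, u64)>>,
}

impl MockKv {
    fn new() -> Self {
        MockKv { data: Mutex::new(HashMap::new()) }
    }

    fn get(&self, key: &str) -> Result<(String, u64), KvError> {
        self.data.lock().unwrap().get(key).cloned().ok_or(KvError::NoKey)
    }

    fn put(&self, key: &str, value: &str, version: u64) -> Result<(), KvError> {
        let mut data = self.data.lock().unwrap();
        let cur = data.get(key).map(|&(_, v)| v).unwrap_or(0);
        if version != cur {
            return Err(KvError::Version);
        }
        data.insert(key.to_string(), (value.to_string(), cur + 1));
        Ok(())
    }
}

// Sketch of the lock: the key holds the owner's id when held and the
// empty string when free. `id` stands in for a value generated with rand.
struct Lock<'a> {
    kv: &'a MockKv,
    name: String,
    id: String,
}

impl<'a> Lock<'a> {
    fn new(kv: &'a MockKv, name: &str, id: &str) -> Self {
        Lock { kv, name: name.to_string(), id: id.to_string() }
    }

    fn acquire(&self) {
        loop {
            match self.kv.get(&self.name) {
                Err(KvError::NoKey) => {
                    // Key absent: try to create it with our id at version 0.
                    if self.kv.put(&self.name, &self.id, 0).is_ok() {
                        return;
                    }
                }
                Ok((owner, version)) if owner.is_empty() => {
                    // Lock free: try to claim it at the observed version.
                    // Err(KVError::Version) here means someone raced us.
                    if self.kv.put(&self.name, &self.id, version).is_ok() {
                        return;
                    }
                }
                _ => {} // held by another client: wait and retry
            }
            std::thread::sleep(Duration::from_millis(10));
        }
    }

    fn release(&self) {
        // Only clear the value if we are still the owner.
        if let Ok((owner, version)) = self.kv.get(&self.name) {
            if owner == self.id {
                let _ = self.kv.put(&self.name, "", version);
            }
        }
    }
}
```

The key design point is that the versioned put acts as a compare-and-swap: two clients that both observe the lock as free will supply the same version, and the server accepts at most one of the writes.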
Testing
cargo test --test kvsrv_test test_lock_basic -- --test-threads=1
cargo test --test kvsrv_test test_lock_reacquire -- --test-threads=1
cargo test --test kvsrv_test test_lock_nested -- --test-threads=1
cargo test --test kvsrv_test test_lock_1_client_reliable -- --test-threads=1
cargo test --test kvsrv_test test_lock_2_clients_reliable -- --test-threads=1
cargo test --test kvsrv_test test_lock_5_clients_reliable -- --test-threads=1
Reliability
Your lock must also work when the client operates over an unreliable network.
Think about what happens when acquire receives Err(KVError::Maybe), and recall under which circumstances your Part 1 client returns Err(KVError::Maybe). Since the put may or may not have been applied, your lock should re-read the key to check whether it now holds the lock.
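The re-read idea can be sketched as follows. The error type, mock store, and try_claim helper are all hypothetical stand-ins; the mock applies every put but reports Maybe on the first call, simulating a reply lost on an unreliable network.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical error type mirroring the lab's KVError.
#[derive(Debug)]
enum KvError {
    Version,
    NoKey,
    Maybe,
}

// Mock store that applies every put but reports Maybe the first time,
// as if the server executed the request and the reply was dropped.
struct FlakyKv {
    data: Mutex<HashMap<String, (String, u64)>>,
    dropped_reply: Mutex<bool>,
}

impl FlakyKv {
    fn new() -> Self {
        FlakyKv { data: Mutex::new(HashMap::new()), dropped_reply: Mutex::new(false) }
    }

    fn get(&self, key: &str) -> Result<(String, u64), KvError> {
        self.data.lock().unwrap().get(key).cloned().ok_or(KvError::NoKey)
    }

    fn put(&self, key: &str, value: &str, version: u64) -> Result<(), KvError> {
        let mut data = self.data.lock().unwrap();
        let cur = data.get(key).map(|&(_, v)| v).unwrap_or(0);
        if version != cur {
            return Err(KvError::Version);
        }
        data.insert(key.to_string(), (value.to_string(), cur + 1));
        let mut dropped = self.dropped_reply.lock().unwrap();
        if !*dropped {
            *dropped = true;
            return Err(KvError::Maybe); // write applied, but reply "lost"
        }
        Ok(())
    }
}

// On Maybe, the put may or may not have landed: re-read the key and
// check whether our id is now the owner, instead of blindly retrying.
fn try_claim(kv: &FlakyKv, name: &str, id: &str, version: u64) -> bool {
    match kv.put(name, id, version) {
        Ok(()) => true,
        Err(KvError::Maybe) => matches!(kv.get(name), Ok((owner, _)) if owner == id),
        Err(_) => false,
    }
}
```

This is why a unique per-client id matters: after a Maybe, the only way to tell "my write landed" from "someone else's write landed" is to compare the stored value against your own id.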
Testing
Run all lock tests:
cargo test --test kvsrv_test test_lock -- --test-threads=1