
Part 2: Distributed Lock


In this part, you will implement a distributed lock using your KV server as the coordination backend. The lock is defined in src/lock.c.

Lock semantics

The lock_t struct is constructed from a kv_client_t * and a lock name. It supports the two standard operations: lock_acquire and lock_release.

Lock state is stored under a key in the KV server. It is up to you to decide how to represent the locked and unlocked states; one possible encoding is sketched below.
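
For concreteness, here is a minimal sketch of one way the struct and state encoding could look. Only kv_client_t and the lock name come from this handout; the id field and the empty-string/owner-id encoding are assumptions, not requirements.

typedef struct lock {
    kv_client_t *client;  /* KV server used as the coordination backend */
    const char  *name;    /* key under which the lock state is stored   */
    char         id[32];  /* random value identifying this lock client  */
} lock_t;

/* One convention: the key's value is "" when unlocked and holds the
 * owner's id when locked. */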

Lock

Implement lock_acquire and lock_release in src/lock.c.

Hints:

  • You may uniquely identify lock clients by generating a random value (e.g. with rand()).
  • Consider what it means for lock acquisition when a put fails with KV_VERSION (a sketch follows this list).
  • As in Part 1, you may use usleep(10 * 1000) to add a short delay between retry requests. Keep retry sleeps in the 10-50 ms range.
  • You are not required to handle the case where a client that holds the lock crashes; proper handling would use leases with timeouts.
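
Putting the hints together, here is a minimal sketch of acquire and release using the struct layout sketched earlier. The kv_client_get/kv_client_put names and signatures, KV_OK, and the treatment of a missing key are assumptions about your Part 1 client, not the handout's API; adapt them to match your actual headers.

#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Assumed Part 1 client API (illustrative only):
 *
 *   int kv_client_get(kv_client_t *c, const char *key,
 *                     char *value, uint64_t *version);
 *   int kv_client_put(kv_client_t *c, const char *key,
 *                     const char *value, uint64_t version);
 *
 * put is assumed to fail with KV_VERSION when the key's version has
 * changed since the get, which makes get-then-put a compare-and-swap.
 * The sketch also assumes the lock key already exists, or that a get
 * on a missing key reports version 0, which put accepts. */

void lock_acquire(lock_t *l) {
    char value[64];
    uint64_t version;
    for (;;) {
        /* get is assumed to copy the current value into the caller's
         * buffer and report the key's version. */
        kv_client_get(l->client, l->name, value, &version);
        if (value[0] == '\0') {  /* "" means unlocked */
            /* Claim the lock by writing our id at the version we just
             * read. KV_VERSION means another client won the race, so
             * fall through and retry. */
            if (kv_client_put(l->client, l->name, l->id, version) == KV_OK)
                return;
        }
        usleep(10 * 1000);  /* back off ~10 ms before retrying */
    }
}

void lock_release(lock_t *l) {
    char value[64];
    uint64_t version;
    kv_client_get(l->client, l->name, value, &version);
    /* Only release if we actually hold the lock; writing "" returns the
     * key to the unlocked state. */
    if (strcmp(value, l->id) == 0)
        kv_client_put(l->client, l->name, "", version);
}

The random id matters because release must not unlock a lock held by another client, and (in the unreliable case below) acquire must be able to tell whether its own put took effect.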

Testing

Run the reliable-network lock tests:

./test_kvsrv test_lock_basic
./test_kvsrv test_lock_reacquire
./test_kvsrv test_lock_nested
./test_kvsrv test_lock_1_client_reliable
./test_kvsrv test_lock_2_clients_reliable
./test_kvsrv test_lock_5_clients_reliable

Reliability

Your lock must also work when the client operates over an unreliable network.

Think about what happens when lock_acquire receives KV_MAYBE, and recall the situations in which Part 1 returned KV_MAYBE. Your lock should re-read the key to check whether it now holds the lock.
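
For instance, under the same assumed API as the sketch above, the claim step inside the acquire loop might become the following (KV_MAYBE handling is the only change):

int err = kv_client_put(l->client, l->name, l->id, version);
if (err == KV_OK)
    return;  /* definitely acquired */
if (err == KV_MAYBE) {
    /* The put may or may not have been applied. Re-read the key: if it
     * now holds our id, the put landed and we own the lock. */
    kv_client_get(l->client, l->name, value, &version);
    if (strcmp(value, l->id) == 0)
        return;
    /* Otherwise the put was lost or another client got there first;
     * fall through and retry. */
}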

Testing

Run all lock tests:

./test_kvsrv test_lock