
Part 1: Key-Value Server


In this part, you will implement a single-server key-value store and a client that communicates with it via RPC. The server stores keys with associated values and version numbers, supporting optimistic concurrency control through version checking on puts.

KV server semantics

The server maintains a map of keys to (value, version) pairs. It supports two operations:

  1. Get(key): Returns the current value and version for the key. If the key does not exist, returns KV_NO_KEY.
  2. Put(key, value, version): Installs or replaces the value for the key, but only if the provided version matches the server’s current version for that key. On success, the server increments the version and returns KV_OK. On version mismatch, returns KV_VERSION.

For a key that does not yet exist, the server’s version is 0, so a client creates a new key by calling Put with version = 0. If the key does not exist and the provided version is nonzero, the server returns KV_NO_KEY.

Subsequent puts must supply the key’s current version. This is the core of optimistic concurrency control: a client reads the current version and writes back with the version it read. If another client modifies the key between the read and the write, the versions no longer match and the put fails.
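The version-check rule above can be sketched as follows. The entry struct and kv_put helper are illustrative (the real server stores many keys, not one entry), but the result codes match the handout:

```c
#include <stdint.h>
#include <string.h>

enum kv_result { KV_OK, KV_NO_KEY, KV_VERSION };

/* Illustrative single entry; the real server maps many keys to entries. */
struct entry {
    char value[64];
    uint64_t version;   /* matches the "server's version" in the handout */
    int exists;
};

static enum kv_result kv_put(struct entry *e, const char *value,
                             uint64_t version) {
    if (!e->exists) {
        if (version != 0)
            return KV_NO_KEY;    /* nonexistent key: only version 0 creates it */
        e->exists = 1;
        e->version = 0;          /* a key that doesn't exist has version 0 */
    }
    if (version != e->version)
        return KV_VERSION;       /* stale version: another put got there first */
    strncpy(e->value, value, sizeof e->value - 1);
    e->value[sizeof e->value - 1] = '\0';
    e->version++;                /* successful put increments the version */
    return KV_OK;
}
```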

KV server

Implement the KV server in src/kv_single/server.c. We outline a few tasks for you, though you are not required to follow them step by step.

  1. Define RPCs. The starter provides sample JSON helpers (parse_get_args, parse_put_args, make_get_reply, make_put_reply) and a routing dispatch. Add the fields each RPC needs and parse them.
  2. Add state to kv_server. You will need a data structure to store key/value/version triples. A natural fit is the provided Pintos hash table (lib/hash.h) with an embedded struct hash_elem in your KV entry type.
  3. Implement handle_get and handle_put. You’ll need to consider synchronization here, e.g. with a pthread_mutex_t on the kv_server struct.
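As a sketch of step 3, here is one way the locking might look. A fixed array stands in for the Pintos hash table so the example is self-contained; the struct layout and the integer return convention are assumptions, not the starter’s actual types:

```c
#include <pthread.h>
#include <stdint.h>
#include <string.h>

#define MAX_ENTRIES 128

struct kv_entry {
    char key[64];
    char value[64];
    uint64_t version;
};

struct kv_server {
    struct kv_entry entries[MAX_ENTRIES]; /* stand-in for a struct hash */
    size_t n;
    pthread_mutex_t lock;                 /* guards entries and n */
};

static struct kv_entry *find_entry(struct kv_server *s, const char *key) {
    for (size_t i = 0; i < s->n; i++)
        if (strcmp(s->entries[i].key, key) == 0)
            return &s->entries[i];
    return NULL;
}

/* Hold the lock across the whole lookup-and-copy so a concurrent put
 * can't hand back a torn value/version pair. Returns 0 on hit, -1 on
 * miss (which the RPC layer would translate into KV_NO_KEY). */
static int handle_get(struct kv_server *s, const char *key,
                      char *value_out, uint64_t *version_out) {
    pthread_mutex_lock(&s->lock);
    struct kv_entry *e = find_entry(s, key);
    if (e == NULL) {
        pthread_mutex_unlock(&s->lock);
        return -1;
    }
    strcpy(value_out, e->value);
    *version_out = e->version;
    pthread_mutex_unlock(&s->lock);
    return 0;
}
```

handle_put would follow the same pattern: take the lock, look up the key, apply the version check, and release the lock before replying.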

Client

Implement the client in src/kv_single/client.c.

The client wraps an rpc_client_t endpoint and exposes itself through the kv_client_t vtable. For now, implement client_get and client_put assuming a reliable network (no drops).

You should use rpc_call(c->endpoint, "get", body) to send an RPC. On a reliable network, rpc_call always returns a non-NULL reply. Remember to free() the reply string when you’re done.
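A minimal sketch of this call pattern, with rpc_call stubbed to return a canned reply. The real endpoint type, request body, and reply format come from the starter code and may differ; the point is the build-call-copy-free sequence:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct rpc_client rpc_client_t;  /* opaque endpoint; stubbed below */

/* Stub standing in for the RPC library: pretend the server replied with
 * value "hello" at version 3. */
static char *rpc_call(rpc_client_t *endpoint, const char *method,
                      const char *body) {
    (void)endpoint; (void)method; (void)body;
    const char *canned = "{\"status\":\"KV_OK\",\"value\":\"hello\",\"version\":3}";
    char *r = malloc(strlen(canned) + 1);
    strcpy(r, canned);
    return r;
}

/* Build a get request, send it, copy the reply out, and free it.
 * Returns 0 on success, -1 if the RPC was lost (can't happen on a
 * reliable network). */
static int client_get(rpc_client_t *endpoint, const char *key,
                      char *reply_out, size_t cap) {
    char body[128];
    snprintf(body, sizeof body, "{\"key\":\"%s\"}", key);

    char *reply = rpc_call(endpoint, "get", body);
    if (reply == NULL)
        return -1;
    snprintf(reply_out, cap, "%s", reply);
    free(reply);                /* rpc_call's reply is heap-allocated */
    return 0;
}
```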

Testing

Run the reliable-network tests:

./test_kvsrv test_reliable

You should pass all tests whose names start with test_reliable.

Reliability

Now make your client handle an unreliable network. When rpc_call returns NULL, the RPC was lost: either the request never reached the server, or the server executed it but the reply was dropped. The client cannot tell these two cases apart, and they have different consequences:

  1. If the request was dropped, the put did not happen, so retrying is safe.
  2. If the reply was dropped, the put did happen, and retrying will fail with KV_VERSION (since the server already incremented the version).

When rpc_call returns NULL, retry the RPC; you may use usleep(10 * 1000) (10 ms) to sleep between retries. If a retry then returns KV_VERSION, return KV_MAYBE, which tells the caller that the request may have been executed. If the initial put RPC (before any retry) returns KV_VERSION, your client should still return KV_VERSION, since in that case you know the server definitely did not execute the put.
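The retry logic might look like the sketch below. Here rpc_call is stubbed to drop the first reply so the retry path runs; real code would parse a JSON reply with the provided helpers rather than compare raw strings, and the signatures are illustrative:

```c
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

enum kv_result { KV_OK, KV_NO_KEY, KV_VERSION, KV_MAYBE };

static int calls;

/* Stub: the first call's reply is "dropped" (NULL), but the server did
 * apply the put, so the retry sees a version mismatch. */
static char *rpc_call(void *endpoint, const char *method, const char *body) {
    (void)endpoint; (void)method; (void)body;
    if (calls++ == 0)
        return NULL;
    char *r = malloc(16);
    strcpy(r, "KV_VERSION");
    return r;
}

static enum kv_result client_put(void *endpoint, const char *body) {
    int retried = 0;
    for (;;) {
        char *reply = rpc_call(endpoint, "put", body);
        if (reply == NULL) {        /* request or reply lost: retry */
            retried = 1;
            usleep(10 * 1000);      /* 10 ms between retries */
            continue;
        }
        int mismatch = strcmp(reply, "KV_VERSION") == 0;
        free(reply);
        if (mismatch)
            /* After a retry we can't tell whether our own earlier put
             * already succeeded, so report KV_MAYBE; a first-try
             * mismatch is a definite KV_VERSION. */
            return retried ? KV_MAYBE : KV_VERSION;
        return KV_OK;
    }
}
```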

Testing

Run all single-server KV tests:

./test_kvsrv test_reliable
./test_kvsrv test_concurrent
./test_kvsrv test_unreliable