C++ Network Programming
Concurrent OO network programming in C++: server architecture patterns (iterative, thread-per-request, thread-per-connection, thread-pool, reactive select-based), process vs thread tradeoffs, synchronization primitives (mutex, RW-lock, condvar, semaphore), socket IPC, non-blocking I/O, and the ACE toolkit wrapper facade philosophy.
- Choose server architecture (iterative/concurrent/reactive) based on service duration and load
- Implement thread-per-request, thread-per-connection, and thread-pool server patterns
- Build reactive select()-based event-loop server for high connection counts
- Apply mutex, reader-writer lock, condition variable, and semaphore correctly
- Write TCP client and server socket code with non-blocking I/O
- Apply Half-Sync/Half-Async pattern to combine reactive receiving with synchronous processing
Install this skill and Claude can select and implement the right server architecture — iterative, thread-pool, or reactive select-based — for a given connection volume, write complete TCP socket code, and design correct synchronization using RAII-guarded mutexes and condition variables
Codifies the concurrency patterns that underpin every production network server, enabling Claude to reason about scalability bottlenecks and race conditions rather than just producing boilerplate socket code
- Recommending a reactive select-based server for 10,000 concurrent idle WebSocket connections instead of a thread-per-connection design and sketching the event loop
- Implementing a bounded BlockingQueue with a fixed-size worker thread pool including graceful shutdown signaling via a sentinel value
- Auditing a multi-threaded request handler to identify all shared mutable state and adding the appropriate std::shared_mutex reader-writer lock pattern
C++ Network Programming Skill
Network Application Design Dimensions
Communication Model Selection
| Dimension | Option A | Option B | When to choose A | When to choose B |
|---|---|---|---|---|
| Protocol | Connectionless (UDP) | Connection-oriented (TCP) | Low overhead, loss-tolerant (DNS, streaming) | Reliable delivery required |
| Synchrony | Synchronous | Asynchronous | Simple control flow | High throughput, non-blocking |
| IPC | Message-passing | Shared memory | Cross-host communication | Intra-host, max performance |
Server Architecture Patterns
1. Iterative Server
// Single-threaded, processes one request at a time
void iterative_server() {
    init_listener();
    for (;;) {
        accept_connection(); // blocks
        receive_request();
        perform_service();
        send_response();
        // next client only served after this one completes
    }
}
// Use when: short-duration services, infrequently run, no blocking ops
// Problem: one slow client starves all others
2. Concurrent: Thread-per-Request
void master_thread() {
    init_listener();
    for (;;) {
        auto req = recv_request();
        std::thread worker([req]{ perform(req); send_response(req); });
        worker.detach(); // or store for join
    }
}
// Use when: requests are independent, variable duration, I/O-bound
// Problem: unlimited thread creation → resource exhaustion
3. Concurrent: Thread-per-Connection
void master_thread() {
    init_listener();
    for (;;) {
        auto conn = accept_connection();
        std::thread([conn]{
            while (auto req = recv(conn))
                send(conn, perform(req));
        }).detach();
    }
}
// Supports priority: high-priority clients → high-priority threads
// Better than thread-per-request for multi-request connections
4. Thread Pool
// Pre-spawn N threads, queue incoming requests
BlockingQueue<Request> work_queue;
void master_thread() {
    for (int i = 0; i < THREAD_POOL_SIZE; ++i)
        std::thread(worker_thread).detach();
    for (;;)
        work_queue.push(accept_and_recv());
}
void worker_thread() {
    for (;;) {
        auto req = work_queue.pop(); // blocks
        perform_and_reply(req);
    }
}
// Best for: bounded resource usage, predictable latency
// THREAD_POOL_SIZE: typically 2x CPU cores for I/O-bound work
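The pseudocode above leaves BlockingQueue undefined. A minimal bounded sketch using one mutex and two condition variables (the names BlockingQueue and capacity_ are illustrative, not a standard library type):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Bounded blocking queue: push blocks when full, pop blocks when empty.
template <typename T>
class BlockingQueue {
public:
    explicit BlockingQueue(std::size_t capacity) : capacity_(capacity) {}

    void push(T item) {
        std::unique_lock<std::mutex> lock{mtx_};
        not_full_.wait(lock, [this] { return queue_.size() < capacity_; });
        queue_.push(std::move(item));
        not_empty_.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lock{mtx_};
        not_empty_.wait(lock, [this] { return !queue_.empty(); });
        T item = std::move(queue_.front());
        queue_.pop();
        not_full_.notify_one();
        return item;
    }

private:
    std::size_t capacity_;
    std::queue<T> queue_;
    std::mutex mtx_;
    std::condition_variable not_empty_, not_full_;
};
```

For graceful shutdown, the master can push one sentinel request (e.g. an invalid id) per worker; a worker that pops the sentinel exits its loop instead of calling perform_and_reply.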
5. Reactive Server (select-based)
// Single thread, event-driven — no OS threading required
void reactive_server() {
    fd_set read_fds;
    init_listener(listen_fd);
    for (;;) {
        FD_ZERO(&read_fds);
        FD_SET(listen_fd, &read_fds);
        int max_fd = listen_fd;
        for (auto fd : active_connections) {
            FD_SET(fd, &read_fds);
            if (fd > max_fd) max_fd = fd;   // select needs the highest fd + 1
        }
        select(max_fd + 1, &read_fds, nullptr, nullptr, nullptr);
        if (FD_ISSET(listen_fd, &read_fds))
            accept_new_connection();
        for (auto fd : active_connections)
            if (FD_ISSET(fd, &read_fds))
                handle_request(fd); // must NOT block
    }
}
// Use when: many connections, mostly idle, short service time
// Problem: one blocking call freezes all clients; hard to implement long ops
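The event-loop core can be exercised without sockets: select() works on any descriptors, so a pipe makes a self-contained demo. first_readable is a hypothetical helper, and the 1-second timeout exists only so the sketch cannot hang:

```cpp
#include <sys/select.h>
#include <unistd.h>

// Watch two descriptors and report which one is readable,
// without blocking on either individually.
int first_readable(int fd_a, int fd_b) {
    fd_set read_fds;
    FD_ZERO(&read_fds);
    FD_SET(fd_a, &read_fds);
    FD_SET(fd_b, &read_fds);
    int max_fd = fd_a > fd_b ? fd_a : fd_b;
    timeval tv{1, 0};   // 1-second timeout so the demo cannot hang
    if (select(max_fd + 1, &read_fds, nullptr, nullptr, &tv) <= 0)
        return -1;      // timeout or error: nothing readable
    if (FD_ISSET(fd_a, &read_fds)) return fd_a;
    return fd_b;
}
```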
Server Architecture Selection Guide
| Pattern | Concurrency | Memory | Best For |
|---|---|---|---|
| Iterative | None | Minimal | Short ops, low load |
| Thread-per-request | Per-request thread | High | Independent burst requests |
| Thread-per-connection | Per-connection thread | Medium | Stateful long-lived connections |
| Thread pool | Fixed N threads | Bounded | Predictable load, production servers |
| Reactive (select) | None (event loop) | Minimal | Many idle connections (C10K-style) |
Processes vs Threads Tradeoffs
Processes:
+ Strong isolation (MMU protection) — crash of one doesn't affect others
+ Suitable for security-sensitive separation
- Higher context-switch overhead
- IPC required for sharing data (pipes, shared memory, sockets)
- Harder to implement fine-grained scheduling
Threads:
+ Lower creation + context-switch cost
+ Shared address space — easy data sharing
+ Fine-grained priority control
- Shared address space = bugs can corrupt everything
- Synchronization required for shared data
- One blocked syscall can block whole process (user-space threading)
Synchronization Primitives
Mutex (Mutual Exclusion)
std::mutex mtx;
// RAII guard — unlocks when scope exits (exception-safe)
{
    std::lock_guard<std::mutex> guard{mtx};
    shared_data.modify();
} // ← unlocked here
// For condition_variable compatibility:
std::unique_lock<std::mutex> lock{mtx};
cv.wait(lock, []{ return ready; });
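A quick sanity check that the RAII guard actually serializes writers: a hypothetical guarded_count helper increments a shared counter from several threads, which without the lock_guard would be a data race with an unpredictable result.

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Several RAII-guarded writers incrementing a shared counter.
int guarded_count(int threads, int increments_per_thread) {
    int count = 0;
    std::mutex mtx;
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t)
        workers.emplace_back([&] {
            for (int i = 0; i < increments_per_thread; ++i) {
                std::lock_guard<std::mutex> guard{mtx}; // unlocked each iteration
                ++count;
            }
        });
    for (auto& w : workers) w.join();
    return count;
}
```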
Reader-Writer Lock (Multiple readers, one writer)
std::shared_mutex rw_mutex;
// Multiple readers simultaneously
std::shared_lock<std::shared_mutex> read_lock{rw_mutex};
read_data();
// Exclusive writer
std::unique_lock<std::shared_mutex> write_lock{rw_mutex};
write_data();
// Use when: read-heavy workloads (caches, config data)
Condition Variable
std::mutex mtx;
std::condition_variable cv;
bool ready = false;
// Producer
{
    std::lock_guard<std::mutex> lock{mtx};
    ready = true;
}
cv.notify_one(); // or notify_all()
// Consumer
std::unique_lock<std::mutex> lock{mtx};
cv.wait(lock, []{ return ready; }); // handles spurious wakeups
process();
Semaphore (counting)
// C++20
#include <semaphore>
std::counting_semaphore<10> sem{5}; // max 10, initial 5
sem.acquire(); // decrement; blocks if 0
// critical section
sem.release(); // increment
// Use for: bounded resource pools (DB connections, thread pool slots)
Socket Communication Patterns
TCP Client
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
int sock = socket(AF_INET, SOCK_STREAM, 0);
sockaddr_in addr{};                             // zero all fields portably
addr.sin_family = AF_INET;
addr.sin_port = htons(PORT);
inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
connect(sock, (sockaddr*)&addr, sizeof(addr));
write(sock, msg, len);
read(sock, buf, sizeof(buf));
close(sock);
TCP Server
int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
int opt = 1;
setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
sockaddr_in addr{};
addr.sin_family = AF_INET;
addr.sin_port = htons(PORT);
addr.sin_addr.s_addr = htonl(INADDR_ANY);
bind(listen_fd, (sockaddr*)&addr, sizeof(addr));
listen(listen_fd, SOMAXCONN);
while (true) {
    int client_fd = accept(listen_fd, nullptr, nullptr);
    // handle client_fd (iterative, thread, or pool)
}
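The client and server snippets can be joined into one self-contained round trip over loopback. Binding to port 0 lets the kernel pick an ephemeral port, which getsockname() then reports; echo_round_trip is an illustrative helper with error handling omitted:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>
#include <thread>

// Iterative one-client echo server plus client, in a single process.
std::string echo_round_trip(const std::string& msg) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    int opt = 1;
    setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = 0;                              // kernel picks a port
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(listen_fd, (sockaddr*)&addr, sizeof(addr));
    listen(listen_fd, 1);
    socklen_t len = sizeof(addr);
    getsockname(listen_fd, (sockaddr*)&addr, &len); // learn the chosen port

    std::thread server([listen_fd] {                // serve exactly one client
        int client = accept(listen_fd, nullptr, nullptr);
        char buf[256];
        ssize_t n = read(client, buf, sizeof(buf));
        if (n > 0) write(client, buf, n);           // echo back
        close(client);
    });

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    connect(sock, (sockaddr*)&addr, sizeof(addr));
    write(sock, msg.data(), msg.size());
    char buf[256];
    ssize_t n = read(sock, buf, sizeof(buf));
    close(sock);
    server.join();
    close(listen_fd);
    return std::string(buf, n > 0 ? static_cast<std::size_t>(n) : 0);
}
```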
Non-Blocking I/O
// Set socket non-blocking (preserve existing flags)
fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
// read() returns -1 with errno == EAGAIN (or EWOULDBLOCK) when no data
ssize_t n = read(fd, buf, sizeof(buf));
if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
    // no data yet — register with select/poll/epoll and return
}
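EAGAIN can be observed without a network peer: an empty pipe behaves like a socket with no data. reads_would_block is a hypothetical helper:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cerrno>

// Set O_NONBLOCK (preserving existing flags), then attempt a read:
// returns true if the read would have blocked on an empty descriptor.
bool reads_would_block(int fd) {
    int flags = fcntl(fd, F_GETFL);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    char buf[16];
    ssize_t n = read(fd, buf, sizeof(buf));
    return n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK);
}
```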
Key Design Principles (ACE Pattern Language)
1. Wrapper Facade: encapsulate raw OS APIs in type-safe C++ classes
→ eliminates weakly-typed handle errors caught only at runtime
2. Separation of Concerns: separate
- Connection establishment (Acceptor/Connector)
- Service handler (what to do with connection)
- Event loop (when to invoke handlers)
3. Component Configurator: configure services at runtime without recompiling
→ load service implementations as shared libraries
4. Half-Sync/Half-Async:
- Async layer: receive requests (non-blocking)
- Sync layer: process requests (blocking OK, in thread pool)
- Queue in between
→ combines reactivity with ease of synchronous programming
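Principle 1 (Wrapper Facade) in miniature: an RAII class that owns a socket descriptor, closing it on scope exit and forbidding accidental copies. This Socket class is a sketch of the idea, not the actual ACE wrapper API:

```cpp
#include <sys/socket.h>
#include <unistd.h>
#include <utility>

// Type-safe, owning handle around a raw socket fd. The destructor closes
// the descriptor, turning leaks and double-closes into ownership questions
// the compiler can see.
class Socket {
public:
    Socket() : fd_(socket(AF_INET, SOCK_STREAM, 0)) {}
    explicit Socket(int fd) : fd_(fd) {}
    ~Socket() { if (fd_ >= 0) close(fd_); }

    Socket(const Socket&) = delete;             // no accidental fd sharing
    Socket& operator=(const Socket&) = delete;
    Socket(Socket&& other) noexcept : fd_(std::exchange(other.fd_, -1)) {}

    int native_handle() const { return fd_; }
    bool valid() const { return fd_ >= 0; }

private:
    int fd_;
};
```

Move-only ownership mirrors how a handle actually flows through a server: the acceptor creates it, then hands it off to exactly one service handler.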
Inherent vs Accidental Complexity
Inherent complexity (domain-level — must understand):
- Selecting communication mechanism and designing protocols
- Utilizing computing resources efficiently
- Using concurrency for predictable high performance
- Configuring services for availability and flexibility
Accidental complexity (tooling-level — can be eliminated with good abstractions):
- Lack of type-safe, portable OS APIs (solve with wrapper facades)
- Algorithmic decomposition instead of OO design
- Reinventing common patterns in every application