While performance for a simple program can be measured as execution speed (e.g., with the time command), this is insufficient for complex systems like servers, where two metrics matter: latency (the time to serve a single request) and throughput (the number of requests served per unit time)
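For a simple program, such a measurement can also be taken in code; a minimal sketch in Python, where do_work is a hypothetical stand-in for the program under test:

```python
# Measuring performance as wall-clock execution time, analogous to
# running the program under the time command.
import time

def do_work() -> int:
    # Hypothetical stand-in for the program being measured.
    return sum(i * i for i in range(1_000_000))

start = time.perf_counter()
do_work()
elapsed = time.perf_counter() - start
print(f"execution time: {elapsed:.3f} s")
```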
Positive Correlation: in a simple, sequential system, improving one metric improves the other
A faster CPU reduces the latency of each request, which in turn allows the server to handle more requests per second (higher throughput)
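A minimal sketch of that sequential relationship, assuming a server that handles exactly one request at a time (the latency values are hypothetical): throughput is simply the reciprocal of latency.

```python
# In a strictly sequential server, throughput = 1 / latency.
def sequential_throughput(latency_s: float) -> float:
    """Requests per second when requests are handled one after another."""
    return 1.0 / latency_s

# Hypothetical latencies: each step models a faster CPU halving the latency.
for latency_ms in (100, 50, 25):
    rps = sequential_throughput(latency_ms / 1000)
    print(f"latency = {latency_ms:3d} ms -> throughput = {rps:5.1f} req/s")
```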

Negative Correlation: when parallelism is introduced (i.e., processing multiple requests concurrently), a trade-off often appears
Throughput typically improves because the system is doing more work simultaneously
Latency for each individual request may increase (get worse) due to the overhead of managing parallel tasks (e.g., context switching and contention for shared resources such as CPU or memory)
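A toy model of this trade-off (the service time and the per-worker overhead term are hypothetical assumptions, not measurements): each additional worker adds a small coordination cost to every request, so per-request latency worsens even as aggregate throughput improves.

```python
# Toy model: each request needs SERVICE_S seconds of work, and every
# additional worker adds OVERHEAD_S of coordination cost (context
# switching, contention) to every request. Both constants are hypothetical.
SERVICE_S = 0.010   # base service time per request (10 ms)
OVERHEAD_S = 0.001  # extra latency per additional worker (1 ms)

def latency_s(workers: int) -> float:
    """Per-request latency grows with parallelism overhead."""
    return SERVICE_S + OVERHEAD_S * (workers - 1)

def throughput_rps(workers: int) -> float:
    """Total requests per second across all workers."""
    return workers / latency_s(workers)

for w in (1, 2, 4, 8, 16):
    print(f"{w:2d} workers: latency = {latency_s(w) * 1000:4.1f} ms, "
          f"throughput = {throughput_rps(w):6.1f} req/s")
```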

Concorde vs Boeing 747: the Concorde delivers lower latency (a shorter trip for each passenger), while the Boeing 747 delivers higher throughput (more passengers transported per flight), so which aircraft performs better depends on the metric
Adding more parallel threads does not improve throughput indefinitely: serial portions of the work and contention overhead impose diminishing returns and an eventual ceiling
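Amdahl's law is the standard way to formalize this ceiling: if a fraction s of the work is inherently serial, no thread count can push speedup past 1/s. A short sketch (the 10% serial fraction is a hypothetical assumption):

```python
# Amdahl's law: with serial fraction s, speedup on n threads is
# 1 / (s + (1 - s) / n), which is bounded above by 1 / s.
SERIAL_FRACTION = 0.10  # hypothetical: 10% of the work cannot be parallelized

def amdahl_speedup(threads: int, s: float = SERIAL_FRACTION) -> float:
    return 1.0 / (s + (1.0 - s) / threads)

for n in (1, 2, 4, 8, 16, 64, 1024):
    print(f"{n:5d} threads -> speedup {amdahl_speedup(n):5.2f}x")
# Output approaches but never exceeds 1/s = 10x, no matter how many
# threads are added.
```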