Q2 2026 LLM inference benchmark across vLLM, TGI, SGLang, and Triton — throughput, p50/p99 TTFT (time to first token) and TPOT (time per output token), KV-cache efficiency, and which engine wins per workload class.
How OpenAI's o3 family scales reasoning at inference time — chain-of-thought RL, verifier models, cost curves, and when test-time compute beats pre-training.