Mercurial comparison: asyncio_threads/inference/README.md @ 48:46daba6e3cf4

A few Python scripts to show how to use asyncio.

| author | MrJuneJune <me@mrjunejune.com> |
|---|---|
| date | Sat, 13 Dec 2025 14:23:02 -0800 |
# Inference Questions

## Context

You are tasked with building a simplified inference engine component responsible for handling incoming user requests for a large language model (LLM). To optimize throughput and GPU utilization, the engine must batch multiple requests together, run the inference call once per batch, and then deconstruct the results to return token-level output to the individual users.

## Objective

Complete the provided Python class, `BatchInferenceEngine`, by implementing the methods necessary to:

- Queue incoming user requests.
- Process a batch when the queue reaches a defined batch size.
- Simulate the token-level output from an LLM and correctly associate each generated token with its original request.

## Task Requirements

1. Implement the logic for `enqueue_request`.
2. Implement the logic for `_process_batch`.
3. Demonstrate the usage by creating 7 unique requests and enqueueing them one by one. Show the state of the queue and the processed tokens after each batch run.
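The requirements above can be sketched roughly as follows. This is one possible shape of a solution, not the reference implementation: the `BatchInferenceEngine`, `enqueue_request`, and `_process_batch` names come from the task statement, while the batch size of 3, the `(request_id, prompt)` queue entries, and the fake "LLM" that just splits each prompt into whitespace tokens are illustrative assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class BatchInferenceEngine:
    """Batches queued requests and fans token-level results back out."""
    batch_size: int = 3
    queue: list = field(default_factory=list)    # pending (request_id, prompt) pairs
    results: dict = field(default_factory=dict)  # request_id -> list of tokens

    def enqueue_request(self, request_id, prompt):
        """Queue a request; run a batch once the queue reaches batch_size."""
        self.queue.append((request_id, prompt))
        if len(self.queue) >= self.batch_size:
            self._process_batch()

    def _process_batch(self):
        """Run one "inference" call over the whole batch, then associate
        each output with the request that produced it."""
        batch, self.queue = self.queue[:self.batch_size], self.queue[self.batch_size:]
        prompts = [prompt for _, prompt in batch]
        # Stand-in for the single batched LLM call: tokenize each prompt.
        outputs = [p.split() for p in prompts]
        for (request_id, _), tokens in zip(batch, outputs):
            self.results[request_id] = tokens


# Demonstration: 7 unique requests enqueued one by one, printing the queue
# state and the completed request ids after each step.
engine = BatchInferenceEngine(batch_size=3)
for i in range(7):
    engine.enqueue_request(i, f"prompt number {i}")
    print(f"after request {i}: queued={len(engine.queue)} done={sorted(engine.results)}")
```

With 7 requests and a batch size of 3, two full batches run (after requests 2 and 5) and the seventh request remains queued, waiting for a batch it can never fill on its own; a real engine would typically add a flush-on-timeout to avoid stranding such leftovers.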