| 1 | <!DOCTYPE html> |
| 2 | <html> |
| 3 | <head> |
| 4 | <meta http-equiv='content-type' content='text/html; charset=utf-8'> |
| 5 | <meta name='viewport' content='width=device-width, initial-scale=1.0'> |
| 6 | <style type='text/css'> |
| 7 | body { margin: 1em 15%; } |
| 8 | </style> |
| 9 | </head> |
| 10 | <body> |
| 11 | <div class='story-header'> |
| 12 | <h2><a href='0000763603.html'>[$] Measuring (and fixing) I/O-controller throughput loss</a></h2> |
| 13 | <div class='details'>([Kernel] Aug 29, 2018 21:20 UTC (Wed) (corbet))</div> |
| 14 | <br/> |
| 15 | <div class='content' style='text-align: justify'> |
Many services, from web hosting and video streaming to cloud storage, need to move data to and from storage. They also often require that each per-client I/O flow be guaranteed a non-zero amount of bandwidth and a bounded latency. An expensive way to provide these guarantees is to over-provision storage resources, keeping each resource underutilized and thus leaving plenty of bandwidth available for the few I/O flows dispatched to each medium. Alternatively, one can use an I/O controller. Linux provides two mechanisms designed to throttle some I/O streams to allow others to meet their bandwidth and latency requirements. These mechanisms work, but they come at a cost: a loss of as much as 80% of total available I/O bandwidth. I have run some tests to demonstrate this problem; some upcoming improvements to the bfq I/O scheduler [1] promise to make the situation considerably better.<br/><br/>[1] https://lwn.net/Articles/601799/
| 17 | </div> |
| 18 | <hr/> |
| 19 | </div> |
</body>
</html>