The server UI was reading 1024 bytes at a time and then sleeping for 0.25
seconds. Since most new LogRecord events are larger than this, data builds up
and is only processed slowly, creating a bottleneck and slowing down all
bitbake processes.
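
Roughly, the old pattern looked like the sketch below (illustrative names,
not the actual UI helper code; it assumes 'pipe' is a non-blocking file
object carrying the event stream): each pass pulls at most 1024 bytes and
then sleeps, capping intake at about 4 KB/s, which a burst of LogRecord
events easily exceeds.

    import time

    def old_poll_loop(pipe):
        queue = b""
        while True:
            try:
                data = pipe.read(1024)      # at most 1 KB per iteration
            except (OSError, IOError):
                data = None                 # nothing available yet
            if data:
                queue += data
            # ... dispatch any complete events currently in 'queue' ...
            time.sleep(0.25)                # caps intake at roughly 4 KB/s
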
Thanks to Dongxiao Xu <dongxiao.xu@intel.com> for the great work in debugging
this. A large but finite value has been left in for the read() call to keep
some fairness in process handling if a task logs truly huge amounts of data
to the server or goes crazy, and to ensure the main loop doesn't stall.
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
     def read(self):
         start = len(self.queue)
         try:
-            self.queue = self.queue + self.input.read(1024)
+            self.queue = self.queue + self.input.read(102400)
         except (OSError, IOError):
             pass
         end = len(self.queue)
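
The bounded read preserves the trade-off above: an unbounded read() could
let one runaway task monopolise an iteration, while 102400 bytes per pass
drains a backlog quickly but still limits the work done before the loop
moves on. A minimal sketch of that idea (assumed names, not the BitBake UI
helper API; the pipe is put into non-blocking mode so read() never stalls):

    import fcntl
    import os

    def make_nonblocking(fd):
        # Sketch-only helper: make read() return whatever is available
        # instead of blocking the main loop.
        flags = fcntl.fcntl(fd, fcntl.F_GETFL)
        fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

    def drain(pipe, queue=b""):
        try:
            data = pipe.read(102400)   # large but bounded chunk per pass
        except (OSError, IOError):
            data = None                # nothing available right now
        if data:
            queue += data
        return queue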