Hi,
this is due to multiple web spiders (Bing, Baidu, ...) hitting the Trac and cgit instances. Patches for the robots.txt are welcome.
btw, could you send me the log file for such a case? I would guess they hit some high-resource pages that could easily be excluded via robots.txt but were simply missed in the 'deny' rules ...
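For reference, a robots.txt along these lines could look roughly as follows. The paths below are assumptions for a typical Trac/cgit layout (the dynamic, expensive-to-render views), not the actual configuration of this server, so they would need to be adjusted to the real URL scheme:

```
User-agent: *
# Trac: dynamic views that are costly to render
Disallow: /timeline
Disallow: /changeset
Disallow: /log
Disallow: /search
Disallow: /report
# cgit: per-commit diffs and on-the-fly snapshot tarballs
Disallow: /cgit/*/diff
Disallow: /cgit/*/snapshot
```

Note that robots.txt is only advisory; well-behaved crawlers like Bingbot honor it, but misbehaving spiders would still need to be blocked at the web server level.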
Cheers,
Sylvain