path: root/Cargo.lock
author Daniel Schadt <kingdread@gmx.de> 2020-07-17 16:07:39 +0200
committer Daniel Schadt <kingdread@gmx.de> 2020-07-17 16:13:38 +0200
commit 420869dab5dc73e86a7915f5cc29da4a9291a586 (patch)
tree a3215be4d2d61c540d02666bc457240991aa2e83 /Cargo.lock
parent f511152cb33743026503297a18a74f118dec4bc6 (diff)
download ezau-420869dab5dc73e86a7915f5cc29da4a9291a586.tar.gz
         ezau-420869dab5dc73e86a7915f5cc29da4a9291a586.tar.bz2
         ezau-420869dab5dc73e86a7915f5cc29da4a9291a586.zip
retry uploading
As it turns out, uploading is often the reason why the process crashes/exits. This is bad because it means that

1) we lose links to logs (as they are not being uploaded), leading to incomplete reporting, and
2) we rely on an external watchdog to keep the service alive (and I'd rather just not have ezau crashing, especially on Windows, where we usually don't supervise it with systemd).

Therefore, a configuration setting has been added that lets ezau retry the upload process. This is not 100% good and failsafe, because

1) it always waits a hardcoded number of seconds (instead of e.g. using a proper backoff timer),
2) it blocks the rest of the process, so no logs will be compressed while a single log is being retried, and
3) after those retries, the process will still exit.

But it is a good first approximation, and the aforementioned issues can be fixed "relatively easily" (e.g. by moving the whole per-log logic into a separate thread(pool) and handling failures even better).
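The retry behaviour described above (a fixed hardcoded wait between attempts, blocking the calling thread, and giving up after a configured number of retries) can be sketched roughly as follows. This is a minimal illustration, not ezau's actual code: the `upload_with_retries` function, the `UploadError` type, and all parameter names are hypothetical.

```rust
use std::thread;
use std::time::Duration;

/// Hypothetical error type standing in for whatever the real upload
/// backend returns.
#[derive(Debug)]
struct UploadError(String);

/// Call `upload` until it succeeds, retrying up to `retries` times and
/// sleeping a fixed `delay` between attempts. This mirrors the commit's
/// trade-offs: no backoff, and the calling thread is blocked while waiting.
fn upload_with_retries<F>(
    mut upload: F,
    retries: u32,
    delay: Duration,
) -> Result<(), UploadError>
where
    F: FnMut() -> Result<(), UploadError>,
{
    let mut attempt = 0;
    loop {
        match upload() {
            Ok(()) => return Ok(()),
            Err(e) if attempt < retries => {
                attempt += 1;
                eprintln!("upload failed ({:?}), retry {}/{}", e, attempt, retries);
                thread::sleep(delay);
            }
            // Retries exhausted: propagate the error, so the caller
            // (and, as the message notes, possibly the process) fails.
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    // Simulated upload that fails twice before succeeding.
    let mut calls = 0;
    let result = upload_with_retries(
        || {
            calls += 1;
            if calls < 3 {
                Err(UploadError("network down".into()))
            } else {
                Ok(())
            }
        },
        5,
        Duration::from_millis(10),
    );
    assert!(result.is_ok());
    assert_eq!(calls, 3);
}
```

Moving this loop into a worker thread(pool), as the message suggests, would remove the blocking issue without changing the retry logic itself.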
Diffstat (limited to 'Cargo.lock')
0 files changed, 0 insertions, 0 deletions