The biggest adaptation was the update to matrix-sdk 0.4, as some types got
shuffled around and the API is now imported from ruma. The
relates_to.is_none() check no longer seems to work, as Relation::_Custom
is used instead, so we now explicitly check that the message is
neither a reply nor a replacement (on a related note, it is unclear to
me whether we need to find the first original message or whether we could
also set a replacement on the last message in the chain).
serenity, tokio and reqwest, on the other hand, needed no API updates.
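A minimal sketch of what this explicit check could look like, assuming
the Relation enum that matrix-sdk 0.4 re-exports from ruma; the Reply and
Replacement variant names (and any feature flags they may need) are
assumptions and not necessarily ezau's actual code:

    // Treat a message as original only if it is neither a reply nor a
    // replacement, instead of relying on relates_to.is_none().
    use matrix_sdk::ruma::events::room::message::Relation;

    fn is_original_message(relates_to: Option<&Relation>) -> bool {
        !matches!(
            relates_to,
            Some(Relation::Reply { .. } | Relation::Replacement(_))
        )
    }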
|
Similar to Discord posting, this now allows ezau to post a message to
the given Matrix room for every log.
The text handling is still pretty bad and should be reworked, but so
should the Discord one. This is just the initial support; now that the
actual posting works, we can add some tests and proper text parsing and
unify some of the logic between Discord and Matrix.
Note that this currently only works for unencrypted rooms!
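For reference, sending a plain-text message with the matrix-sdk 0.4-era
API looks roughly like the sketch below; the exact type names and the
room lookup are assumptions and may not match ezau's actual code:

    use matrix_sdk::{
        ruma::{
            events::{room::message::MessageEventContent, AnyMessageEventContent},
            RoomId,
        },
        Client,
    };

    // Post a plain-text notification to an (unencrypted) joined room.
    async fn post_log(client: &Client, room_id: &RoomId, text: &str) -> matrix_sdk::Result<()> {
        let content =
            AnyMessageEventContent::RoomMessage(MessageEventContent::text_plain(text));
        if let Some(room) = client.get_joined_room(room_id) {
            // The second argument is an optional transaction id.
            room.send(content, None).await?;
        }
        Ok(())
    }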
|
If you use ezau on Windows, you might prefer arcDPS's built-in zipping
functionality over relying on ezau to do this job. However, combining
the two leads to weird interactions: arcDPS still creates the temporary
file in the watched folder, and PowerShell races with ezau to zip and
delete this temporary file. Zipping can therefore now be disabled in the
configuration.
To keep existing (& working) configurations intact - and to stay true to
the name - zipping remains enabled by default if not set otherwise in
the configuration.
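A minimal sketch of how such an enabled-by-default setting could be
modelled with serde; the struct and field names are illustrative, not
ezau's actual configuration types:

    use serde::Deserialize;

    #[derive(Debug, Deserialize)]
    pub struct Config {
        // Zip new logs before uploading; defaults to true when the setting
        // is absent, so existing configurations keep their behaviour.
        #[serde(default = "default_zip")]
        pub zip: bool,
    }

    fn default_zip() -> bool {
        true
    }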
|
As it turns out, uploading is often the reason why the process
crashes or exits. This is bad because it means that 1) we lose links to
logs (as they are not being uploaded), leading to incomplete reporting,
and 2) we rely on an external watchdog to keep the service alive (and
I'd rather just not have ezau crashing, especially on Windows, where we
usually don't supervise it with systemd).
Therefore, a configuration setting has been added that lets ezau retry
the upload (as sketched below). This is not completely failsafe, because
1) it always waits a hardcoded number of seconds (instead of e.g. using
a proper backoff timer),
2) it blocks the rest of the process, so no logs will be compressed
while it is retrying a single log, and
3) after those retries, the process will still exit.
But it is a good first approximation, and the aforementioned issues can
be fixed "relatively easily" (e.g. by moving the whole per-log logic
into a separate thread(pool) and handling failures more gracefully).
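A minimal sketch of such a blocking retry loop, assuming a fixed delay
and a configurable retry count; upload_with_retries, upload_log and
RETRY_DELAY are hypothetical names, not ezau's actual API:

    use std::{thread, time::Duration};

    // Hypothetical fixed delay between attempts.
    const RETRY_DELAY: Duration = Duration::from_secs(30);

    // Retry a fallible upload up to `retries` extra times, sleeping between
    // attempts. This blocks the calling thread, which is exactly the
    // limitation described above.
    fn upload_with_retries<T, E>(
        retries: u32,
        mut upload_log: impl FnMut() -> Result<T, E>,
    ) -> Result<T, E> {
        let mut attempt = 0;
        loop {
            match upload_log() {
                Ok(value) => return Ok(value),
                Err(_) if attempt < retries => {
                    attempt += 1;
                    thread::sleep(RETRY_DELAY);
                }
                Err(err) => return Err(err),
            }
        }
    }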
|
ezau's watching functionality is nice, but for scripts you sometimes
want the old "upload this single log and post it to Discord"
functionality. As such, ezau has now been split into two subcommands
(which share the same core):
ezau watch runs the inotify-based directory watcher to zip and upload
new logs. Additionally, it now respects the "upload = ..." config
setting, which means you can also use it as a zipper only, without
having every log uploaded.
ezau upload performs a single-shot upload with the Discord notification.
Furthermore, the Discord auth token and channel id have been moved to a
configuration file. Switches to override this for single runs might be
added in the future, but for now, it seems more sensible to keep them
in a persistent configuration.
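A minimal sketch of the two-subcommand layout, here using clap's derive
API; the parser choice, struct names and arguments are assumptions, not
ezau's actual command-line interface:

    use clap::{Parser, Subcommand};

    #[derive(Parser)]
    #[command(name = "ezau")]
    struct Cli {
        #[command(subcommand)]
        command: Command,
    }

    #[derive(Subcommand)]
    enum Command {
        /// Watch the log directory, zipping (and optionally uploading) new logs.
        Watch,
        /// Zip and upload a single log, then post the Discord notification.
        Upload {
            /// Path to the log file to upload.
            log: std::path::PathBuf,
        },
    }

    fn main() {
        match Cli::parse().command {
            Command::Watch => println!("would start the inotify-based watcher"),
            Command::Upload { log } => println!("would upload {}", log.display()),
        }
    }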
|