Age | Commit message | Author
|
-1 is not always the right choice, e.g. when the previous update script
has the same alembic version. Therefore, we actually need to make the
effort to track the "previous alembic" version.
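For illustration, each update script could then carry both revisions
explicitly; the attribute and function names below are assumptions for
the sake of the sketch, not the actual updater format:

    # hypothetical update script module
    update_id = "add-track-cache"
    previous = ["initial-setup"]

    # Alembic revision this update migrates the database *to* ...
    alembic_revision = "rev_b"
    # ... and the revision the previous update left it at. Tracking this
    # explicitly avoids guessing "one step back", which breaks when two
    # consecutive updates share the same Alembic revision.
    previous_alembic_revision = "rev_a"

    def upgrade(config):
        ...

    def downgrade(config):
        ...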
|
This makes a lot of changes already. What is definitely not working yet
are the tests, as they still try to put the data into the database - but
that should be easy to fix.
The convenience methods Track.{length,uphill,...} were a bit awkward to
fix, as our template code assumed that those attributes can simply be
accessed. As a fix, I've introduced a new class that "re-introduces"
those attributes and can lazily load them from disk in case the cache
does not exist. This works pretty well, but is not too nice - we end up
with a lot of "proxy" properties.
Other than that, I'm positively surprised how well this has worked so
far: the upgrade scripts seem to be doing their job, and serving the
file straight from the disk seems to work nicely as well. What isn't
tested yet, however, are the edge cases, e.g. when a data directory goes
missing, ...
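For reference, a minimal sketch of the "proxy property" approach; the
class name, the cache fields, and the use of gpxpy are illustrative
assumptions rather than the actual Fietsboek code:

    from dataclasses import dataclass
    from pathlib import Path

    import gpxpy  # assumption: GPX parsing via gpxpy

    @dataclass
    class TrackCache:
        length: float
        uphill: float

    class TrackWithMetadata:
        """Wraps a track and lazily loads length/uphill from the GPX file
        on disk when no cached values exist."""

        def __init__(self, track, gpx_path: Path):
            self._track = track
            self._gpx_path = gpx_path
            self._cache: TrackCache | None = getattr(track, "cache", None)

        def _ensure_cache(self) -> TrackCache:
            if self._cache is None:
                gpx = gpxpy.parse(self._gpx_path.read_text(encoding="utf-8"))
                self._cache = TrackCache(
                    length=gpx.length_3d() or 0.0,
                    uphill=gpx.get_uphill_downhill().uphill,
                )
            return self._cache

        # "Proxy" properties so templates can keep using track.length etc.
        @property
        def length(self) -> float:
            return self._ensure_cache().length

        @property
        def uphill(self) -> float:
            return self._ensure_cache().uphill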
|
Since we plan on moving the GPX data (and the original copy) into the
data directory, it makes more sense to have a per-track "handle" instead
of having all methods of DataManager take a track_id parameter.
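A rough sketch of what such a handle could look like; TrackDataDir and
the method names are assumptions, not necessarily the real API:

    from pathlib import Path

    class TrackDataDir:
        """Handle for a single track's data directory."""

        def __init__(self, track_id: int, path: Path):
            self.track_id = track_id
            self.path = path

        def gpx_path(self) -> Path:
            return self.path / "track.gpx.gz"

        def backup_path(self) -> Path:
            return self.path / "original.gpx"

    class DataManager:
        def __init__(self, base_dir: Path):
            self.base_dir = base_dir

        def open(self, track_id: int) -> TrackDataDir:
            # Callers get a per-track handle instead of passing track_id
            # to every single method.
            return TrackDataDir(track_id, self.base_dir / str(track_id))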
|
This tuple was basically the same as TileLayerConfig, just without the
validation. You could see in the old _extract_user_layers that the two
were otherwise doing exactly the same job. Therefore, it made sense to
remove TileSource and instead rewrite the code to use TileLayerConfig
directly.
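Roughly, the overlap looked like this; the field names and the use of
pydantic for validation are simplified assumptions:

    from collections import namedtuple
    from pydantic import BaseModel  # assumption: validation via pydantic

    # The removed tuple: same shape, but no validation at all.
    TileSource = namedtuple("TileSource", "layer_id name url")

    class TileLayerConfig(BaseModel):
        layer_id: str
        name: str
        url: str

    # _extract_user_layers can simply construct TileLayerConfig objects
    # directly instead of going through the TileSource tuple first.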
|
The default session timeout is 15 minutes, which can be rather short.
Therefore, we now have a "Remember me" option, which optionally saves
the authentication in a cookie (signed, of course, so nobody can tamper
with it). This cookie is set to basically never expire, keeping the user
logged in without messing with the session timeout (which is also used
for other things like flash messages).
We might think about removing the session authentication completely and
doing everything with cookies, but we'll see about that. We definitely
want to keep two separate timeouts, but the cookie helper doesn't seem
to provide a way to have individual cookies last longer (short of having
a second helper, like we currently do).
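A minimal sketch of the signed-cookie idea using only the standard
library; Fietsboek's actual cookie helper, names, and serialization
format will differ:

    import base64
    import hashlib
    import hmac

    SECRET = b"secret-from-the-config"  # assumption: loaded from the settings

    def sign(user_id: str) -> str:
        mac = hmac.new(SECRET, user_id.encode(), hashlib.sha256).digest()
        return base64.urlsafe_b64encode(user_id.encode() + b"|" + mac).decode()

    def verify(cookie_value: str) -> str | None:
        raw = base64.urlsafe_b64decode(cookie_value.encode())
        user_id, _, mac = raw.rpartition(b"|")
        expected = hmac.new(SECRET, user_id, hashlib.sha256).digest()
        return user_id.decode() if hmac.compare_digest(mac, expected) else None

    # The cookie itself would then be set with a far-future max_age, e.g.
    # response.set_cookie("remember_me", sign("42"), max_age=10 * 365 * 24 * 3600)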
|
See https://github.com/tox-dev/tox/issues/2636 - without the rename, tox
fails to recognize the configuration for flake8, as there is a
(non-testenv) section with the same name.
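For illustration, the clash looks roughly like this; the concrete
environment and section names here are assumptions:

    [tox]
    envlist = py310, style

    # renamed testenv so it no longer collides with the tool's own section
    [testenv:style]
    deps = flake8
    commands = flake8 fietsboek

    # flake8's own (non-testenv) configuration section
    [flake8]
    max-line-length = 100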
|
The Poetry FAQ[1] gives some options on how tox and poetry can be used
together, since both of them want to manage the virtual envs. Since we
mostly want to use tox as a venv manager and as an easy way to run
multiple linters, and we want poetry to do the dependency management,
the method of explicitly running `poetry install` (sketched below) seems
to be the most reasonable. This means we don't have to generate a
requirements.txt file or duplicate the listing of our dependencies in
tox.ini.
[1]: https://python-poetry.org/docs/master/faq/#is-tox-supported
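A rough sketch of what the tox configuration then looks like; the exact
environment contents are illustrative:

    [testenv]
    # Let poetry install the project and its dependencies into the
    # tox-managed venv instead of duplicating them in tox.ini.
    allowlist_externals = poetry
    skip_install = true
    commands_pre =
        poetry install
    commands =
        poetry run pytest tests/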
|
There seems to be an issue with the latest one.
|
This seems like something we should do sooner rather than later. Using
black takes away the pain of manually formatting the code and adhering
to the style guidelines, and it removes bikeshedding over minor things.
|
It would be nice to gradually improve the typing situation in Fietsboek.
At least the parts that do not do heavy metaprogramming should have
types. For most of the API, we already have types in the doc strings, so
those could then be removed.
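As an illustration, the migration would mostly look like this
(hypothetical function, not an actual Fietsboek API):

    # Before: types only in the doc string
    def human_size(num_bytes):
        """Formats a size in bytes as a human-readable string.

        :param num_bytes: The size in bytes.
        :type num_bytes: int
        :rtype: str
        """
        ...

    # After: types as annotations, doc string without the redundant fields
    def human_size(num_bytes: int) -> str:
        """Formats a size in bytes as a human-readable string."""
        ...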
|
While it should be fine the way it was, we might want to introduce more
"secret keys" (like for additional cookies), for which we would need
more secrets.
|
We might re-introduce Makefiles, but for different purposes (SASS or
TypeScript compilation).
|
We forgot to include the CSRF token here.
|
This takes away the pain of dealing with default values or value
conversions in main().
|
This is the first step; in the next step, we should actually use
request.config.
|
This way, we not only save the decompression time, we can also save
bandwidth! We *might* even consider using brotli, which seems to be
widely supported and has even better compression ratios, but brotli
compression at full efficiency is also slow.
Ideally, we'd save a "fast compressed" version of the GPX file on
upload, and then have a slower background queue re-compress it with
higher settings. That, however, should probably wait until we move the
GPX data out of the database(?!); then we can even serve the data
straight with a FileResponse.
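A rough sketch of the idea; FileResponse is Pyramid's, the function
names and paths are illustrative:

    import gzip
    import shutil
    from pathlib import Path

    from pyramid.response import FileResponse

    def store_gpx(upload_path: Path, target: Path) -> None:
        # On upload: store a quickly gzip-compressed copy (low level = fast).
        # A background queue could later re-compress it with higher settings.
        with open(upload_path, "rb") as src, gzip.open(target, "wb", compresslevel=1) as dst:
            shutil.copyfileobj(src, dst)

    def serve_gpx(request, gpx_gz: Path):
        # Serve the pre-compressed file as-is; the client decompresses it.
        response = FileResponse(str(gpx_gz), request=request,
                                content_type="application/gpx+xml")
        response.headers["Content-Encoding"] = "gzip"
        return response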
|