Conversation
I'm not such a fan of caching data that may change. If the JSON-LD context is updated with important changes, all the frontends will be broken for up to 6 hours. If the context were super stable, I could consent to that, but currently it is frequently changed. I see two possible alternatives.
Mh, I see. Maybe we could set a very short cache time e.g. a minute or two, ... or less? We might also improve that on the frontend side and do some caching there...
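The frontend-side caching idea above amounts to a small time-bounded cache: reuse a fetched context until a short TTL expires, which bounds both the number of extra requests and how stale the context can get. A minimal sketch in Python (the actual frontend stack isn't specified here, so this is just the shape of the idea; `TTLCache` and its API are hypothetical names):

```python
import time


class TTLCache:
    """Tiny time-bounded cache: a fetched value is reused until the TTL
    expires, bounding both redundant requests and staleness."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._value = None
        self._fetched_at = None

    def get(self, fetch):
        """Return the cached value, calling fetch() only when expired."""
        now = time.monotonic()
        if self._fetched_at is None or now - self._fetched_at > self.ttl:
            self._value = fetch()
            self._fetched_at = now
        return self._value
```

With a TTL of a minute or two, repeated lookups within that window never hit the network, while a changed context is picked up within the TTL.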
I like the idea of using Redis here, but it has caused me quite some significant annoyances while debugging things, so I have some concerns as well. ETags could be nice too, but they still incur a delay since an HTTP request is sent anyway, if I understand correctly?
If we added a similar mechanism on the frontend, it would surely reduce the number of calls. But then there would still be the problem of invalidating the cache... 🤔 With 1-2 minutes, there would be less risk indeed.
In dev mode, the Redis cache is disabled by default, so that helps.
I think my concern is mainly for developers, because the usage differs from prod mode. I want to note also that the JSON-LD context is loaded dynamically: whenever a new ontology is registered, it is added to the JSON-LD context. So if the client fetches the JSON-LD context before all the services are started, it can end up caching an incomplete version. In the end it's a matter of evaluating what is more annoying: some extra unneeded requests, or hard-to-debug problems for developers due to an invalid cache? For me, the second one is 100x worse.
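One way to reconcile the two concerns in this thread would be to vary the caching policy by environment: disable it entirely in dev, where the context changes as ontologies get registered, and use a short `max-age` in prod to bound staleness. A minimal sketch (the `dev_mode` flag and the 120-second default are assumptions, not something decided here):

```python
def context_cache_control(dev_mode: bool, max_age: int = 120) -> str:
    """Cache-Control value for context.jsonld responses.

    In dev mode the context changes whenever an ontology is registered,
    so caching is disabled entirely (no-store); in prod a short max-age
    bounds how long frontends can see a stale context.
    """
    if dev_mode:
        return "no-store"
    return f"public, max-age={max_age}"
```

`no-store` also avoids the hard-to-debug stale-cache scenario for developers, since the browser never reuses a cached context at all.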
Add a `Cache-Control: public, max-age=21600` header to `context.jsonld` responses by default (advises the browser to cache for 6 hours).
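For reference, the proposed default boils down to one constant header on the context responses (21600 s = 6 × 3600 s); a minimal sketch, with the helper name being an illustration only:

```python
SIX_HOURS = 6 * 60 * 60  # 21600 seconds

def context_headers() -> dict:
    """Default response headers for context.jsonld under the proposal:
    advise the browser (and any shared caches, via 'public') to reuse
    the context for up to 6 hours."""
    return {
        "Content-Type": "application/ld+json",
        "Cache-Control": f"public, max-age={SIX_HOURS}",
    }
```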