chore: Pin models and increase max tokens in e2e/canary tests #1763
Luca Forstner (lforst) merged 4 commits into main from
Conversation
Just realized that the e2e tests continue to be relatively flaky even with retry, because retry doesn't affect beforeAll, where we run the LLM calls.
Abhijeet Prasad (AbhiPrasad)
left a comment
I assume we'll do a follow-up to add a retry to withScenarioHarness? Then we can remove per-assertion retries in favour of just retrying the entire scenario.
# Conflicts:
#	e2e/scenarios/ai-sdk-instrumentation/scenario.impl.mjs
Yeah, I am vibing on a more proper but elaborate fix here, where we migrate to more granular testing and put everything into the tests rather than the beforeAll hook. A secondary reason is that the tests are starting to take up 20 minutes, and I would like to shard them, which only really makes sense if we do the whole scenario shebang in the tests and not in the beforeAll hook.
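To illustrate the idea being discussed: a minimal sketch of a scenario-level retry wrapper. The name `withRetry` and its signature are hypothetical (the real `withScenarioHarness` API is not shown in this thread); the point is that retrying the whole scenario function covers failures in the setup phase, which per-assertion retries cannot reach when setup runs in a `beforeAll` hook.

```typescript
// Hypothetical sketch: retry an entire async scenario (setup + assertions).
// Because the setup runs inside `fn`, a transient LLM-call failure during
// setup is retried along with the assertions, unlike a `beforeAll` hook,
// which sits outside any per-test retry mechanism.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // remember the failure and try the whole scenario again
    }
  }
  throw lastError;
}
```

With a wrapper like this, each test owns its full scenario (LLM calls included), which also makes sharding straightforward since tests no longer share `beforeAll` state.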
Closes #1760