This approach of using the image-loader HTTP endpoints misses out on the benefits of queue-based ingestion and could fall over with giant images (and run sub-optimally for smaller images). Instead, I would suggest:
1. pre-produce the metadata JSON payloads as files, with the mediaId as the filename (using `offset` rather than `since` for paging in migration #1 though!)
2. give `GetObject` permission to the *source* image bucket, iterate the files from Step 1, performing an S3 copy to the *destination* grid's ingestion queue bucket - blitz through those as fast as possible (we should add support for `uploadTime` S3 metadata on the queue bucket, such that it gets written to the file which ends up in the image bucket after ingestion)
3. wait for all the images to be ingested, then perform the metadata updates similarly to the current script, using the JSON files from Step 1
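Step 1 could look something like the sketch below: page through the images API with `offset` and write one JSON file per image, named by mediaId. The page size and the `fetch_page` callable are placeholders, not the real API client.

```python
import json
from pathlib import Path

PAGE_SIZE = 200  # assumed page size, tune for the real API


def dump_metadata(fetch_page, out_dir):
    """Page through the images API using `offset` (not `since`) and write
    one JSON payload per image, named after its mediaId.

    `fetch_page(offset, length)` is a stand-in for the real API call and
    should return a list of image dicts, each with at least an `id` field.
    Returns the number of files written.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    offset = 0
    written = 0
    while True:
        page = fetch_page(offset, PAGE_SIZE)
        if not page:
            break
        for image in page:
            # mediaId as the filename, so the later S3-copy and
            # metadata-update steps can iterate these files directly
            (out / f"{image['id']}.json").write_text(json.dumps(image))
            written += 1
        offset += len(page)
    return written
```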
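For Step 2, a hedged boto3 sketch: `copy_object` with `MetadataDirective="REPLACE"` is what lets the copy carry the proposed `uploadTime` metadata. The bucket names and the exact metadata key are assumptions, not confirmed names.

```python
from datetime import datetime, timezone


def build_copy_request(src_bucket, dest_bucket, media_id, upload_time):
    """Build boto3 `copy_object` kwargs for pushing one image from the
    source grid's image bucket to the destination grid's ingestion queue
    bucket, attaching the proposed `uploadTime` S3 metadata.

    Bucket names and the `uploadTime` metadata key are illustrative.
    """
    return {
        "Bucket": dest_bucket,
        "Key": media_id,
        "CopySource": {"Bucket": src_bucket, "Key": media_id},
        # REPLACE is required, otherwise S3 keeps the source metadata
        # and silently drops the Metadata argument
        "MetadataDirective": "REPLACE",
        "Metadata": {
            "uploadTime": upload_time.astimezone(timezone.utc).isoformat()
        },
    }


# Usage, assuming credentials with GetObject on the source bucket:
#
#   import boto3
#   s3 = boto3.client("s3")
#   s3.copy_object(**build_copy_request(
#       "source-grid-images", "dest-grid-queue", media_id, upload_time))
```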
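And for Step 3, the "wait for all the images to be ingested" gate might be a simple poll loop before the metadata updates run. The `is_ingested` callable here is a placeholder for however the destination grid is checked (e.g. a GET against its media API).

```python
import time


def wait_for_ingestion(is_ingested, media_ids, timeout=600, interval=5):
    """Block until every media id reports as ingested, or raise.

    `is_ingested(media_id)` is a stand-in for checking the destination
    grid; `timeout` and `interval` are in seconds.
    """
    pending = set(media_ids)
    deadline = time.monotonic() + timeout
    while pending:
        # re-check only the ids that have not appeared yet
        pending = {m for m in pending if not is_ingested(m)}
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"{len(pending)} images not ingested in time")
        time.sleep(interval)
    return True
```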
grid-tools/migrate.py
Line 22 in ea88fc2