Don't read full image media into RAM on copying (#419)
author Sebastian Spaeth <Sebastian@SSpaeth.de>
Wed, 19 Dec 2012 13:18:03 +0000 (14:18 +0100)
committer Sebastian Spaeth <Sebastian@SSpaeth.de>
Tue, 8 Jan 2013 13:51:41 +0000 (14:51 +0100)
We copy uploaded media from the queue store to the local workbench
and then to its final destination. The latter step was done by simply
calling dst.write(src.read()), which is of course evil, as it reads
the whole file content into RAM. That *might* arguably still be OK
for images, but you never know.

Make use of the storage backend's copy_local_to_storage() method,
which copies in chunks, rather than opening and fiddling with the
files ourselves.
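
For illustration, a chunked copy loop of the kind such storage
methods use might look like this (a minimal sketch; the chunked_copy
name and CHUNK_SIZE value are assumptions for this example, not
MediaGoblin's actual code):

    CHUNK_SIZE = 64 * 1024  # copy 64 KiB at a time

    def chunked_copy(src, dst, chunk_size=CHUNK_SIZE):
        # Read and write one bounded chunk per iteration, so memory
        # use stays constant no matter how large the file is.
        chunk = src.read(chunk_size)
        while chunk:
            dst.write(chunk)
            chunk = src.read(chunk_size)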

Signed-off-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
mediagoblin/media_types/image/processing.py

index bdb2290f5fe36f660b4c306b85f8412cddc16b26..bf4640699b5a35746fad183b3997cc197b7f3583 100644
@@ -120,17 +120,10 @@ def process_image(entry):
     else:
         medium_filepath = None
 
-    # we have to re-read because unlike PIL, not everything reads
-    # things in string representation :)
-    queued_file = file(queued_filename, 'rb')
-
-    with queued_file:
-        original_filepath = create_pub_filepath(
+    # Copy our queued local workbench to its final destination
+    original_filepath = create_pub_filepath(
             entry, name_builder.fill('{basename}{ext}'))
-
-        with mgg.public_store.get_file(original_filepath, 'wb') \
-            as original_file:
-            original_file.write(queued_file.read())
+    mgg.public_store.copy_local_to_storage(queued_filename, original_filepath)
 
     # Remove queued media file from storage and database
     mgg.queue_store.delete_file(queued_filepath)
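
For context, a storage backend could implement copy_local_to_storage()
on top of such a chunked loop roughly as follows (a hedged sketch that
assumes the get_file() interface shown in the diff above and the
hypothetical chunked_copy helper from the commit message; this is not
the actual StorageInterface implementation):

    def copy_local_to_storage(self, filename, filepath):
        # Stream the local file into the storage backend chunk by
        # chunk instead of slurping it into RAM with a single read().
        with open(filename, 'rb') as source_file:
            with self.get_file(filepath, 'wb') as dest_file:
                chunked_copy(source_file, dest_file)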