Upload Files to Google Cloud Instance Fast

Are you experiencing slow transfer speeds between your GCE VM and a Cloud Storage bucket? Then read on to learn how to maximize your upload and download throughput for Linux and Windows VMs!

Overview

I recently had a DoiT International customer ask why the data on their Google Compute Engine (henceforth referred to as GCE) Windows Server's local SSD was uploading to a Cloud Storage bucket much slower than expected. At first, I began with what I thought would be a simple benchmarking of gsutil commands to demonstrate the effectiveness of using the ideal gsutil arguments recommended by the GCP documentation. Instead, my 'quick' look into the issue turned into a full-blown investigation into data transfer performance between GCE and GCS, as my initial findings were unusual and very much unexpected.

If you are simply interested in knowing the ideal methods for moving data between GCE and GCS on a Linux or Windows machine, go ahead and scroll all the way down to "Effective Transfer Tool Use Conclusions".

If you instead want to scratch your head over the bizarre, often counter-intuitive throughput rates achievable through commonly used commands and arguments, stay with me as we dive into the details of what led to my complex summary of recommendations at the end of this article.

Linux VM performance with gsutil: Large files

Although the customer's request involved data transfer on a Windows server, I first performed basic benchmarking where I felt the most comfortable:
Linux, via the "Debian GNU/Linux 10 (buster)" GCE public image.

Since the customer was already attempting file transfers from local SSDs and I wanted to minimize the odds that networked disks would impact transfer speeds, I configured two VM sizes, n2-standard-4 and n2-standard-80, each with one local SSD attached on which we will perform benchmarking.

The GCS bucket I will use, as well as all VMs described in this article, are created as regional resources located in us-central1.
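For reference, a regional bucket and a benchmark VM like the ones used here can be created along these lines. The bucket name matches the one referenced throughout this article, while the VM name, zone, and image family are illustrative placeholders rather than the exact values used for these tests:

# Regional bucket in us-central1
gsutil mb -l us-central1 gs://doit-speed-test-bucket/

# Debian 10 VM with one local SSD attached over NVMe
gcloud compute instances create linux-speed-test \
  --zone=us-central1-a \
  --machine-type=n2-standard-4 \
  --image-project=debian-cloud --image-family=debian-10 \
  --local-ssd interface=NVME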

To simulate the customer's large file upload experience, I created an empty file 30 GB in size:

fallocate -l 30G temp_30GB_file

From here, I tested two commonly recommended gsutil parameters:

  • -m: Used to perform parallel, multi-threaded copies. Useful for transferring a large number of files in parallel, not for the upload of individual files.
  • -o GSUtil:parallel_composite_upload_threshold=150M: Used to split large files exceeding the specified threshold into parts that are uploaded in parallel and composed into a single object once all parts finish uploading (see the configuration sketch just below this list).
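As a side note, if you do not want to pass -o on every invocation, the same threshold can also be set persistently in gsutil's boto configuration file. A minimal sketch, assuming the default ~/.boto location on Linux:

# ~/.boto (or the file referenced by the BOTO_CONFIG environment variable)
[GSUtil]
parallel_composite_upload_threshold = 150M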

The estimated max performance for the local SSD on both VMs is as follows:

[Image: Local SSD read/write throughput limits]

We should therefore be able to achieve up to 660 MB/s read and 350 MB/s write throughput with gsutil. Let's see what the upload benchmarks revealed:

time gsutil cp temp_30GB_file gs://doit-speed-test-bucket/
# n2-standard-4:  2m21.893s, 216.50 MB/s
# n2-standard-80: 2m11.676s, 233.30 MB/s

time gsutil -m cp temp_30GB_file gs://doit-speed-test-bucket/
# n2-standard-4:  2m48.710s, 182.09 MB/s
# n2-standard-80: 2m29.348s, 205.69 MB/s

time gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp temp_30GB_file gs://doit-speed-test-bucket/
# n2-standard-4:  1m40.104s, 306.88 MB/s
# n2-standard-80: 0m52.145s, 589.13 MB/s

time gsutil -m -o GSUtil:parallel_composite_upload_threshold=150M cp temp_30GB_file gs://doit-speed-test-bucket/
# n2-standard-4:  1m44.579s, 293.75 MB/s
# n2-standard-80: 0m51.154s, 600.54 MB/s

As expected based on GCP's gsutil documentation, large file uploads benefit from including -o GSUtil:parallel_composite_upload_threshold=150M. When more vCPUs are made available to assist in the parallel upload of file parts, upload time improves dramatically, to the point that with a consistent 600 MB/s upload speed on the n2-standard-80 we come close to achieving the SSD's max read throughput of 660 MB/s. Including -m for just one file only changes upload time by a few seconds. So far, we've seen nothing out of the ordinary.

Let's check out the download benchmarks:

time gsutil cp gs://doit-speed-test-bucket/temp_30GB_file .
# n2-standard-4:  8m3.186s, 63.58 MB/s
# n2-standard-80: 6m13.585s, 82.23 MB/s

time gsutil -m cp gs://doit-speed-test-bucket/temp_30GB_file .
# n2-standard-4:  7m57.881s, 64.28 MB/s
# n2-standard-80: 6m20.131s, 80.81 MB/s
[Image: "Hol up"]

Download performance on the 80 vCPU VM achieved only 23% of the maximum local SSD write throughput. Additionally, despite multi-threading with -m not improving performance for this single file download, and despite both machines operating well under their maximum network throughput (10 Gbps for the n2-standard-4, 32 Gbps for the n2-standard-80), evidently using a higher-tier machine within the same family leads to a ~30% improvement in download speed. Weird, but not as weird as getting only 1/4th of local SSD write throughput with an absurdly expensive VM.

What is going on?

After much searching around on this issue, I found no answers but instead discovered s5cmd, a tool designed to dramatically improve uploads to and downloads from S3 buckets. It claims to run 12X faster than the equivalent AWS CLI commands (e.g. aws s3 cp) due in large part to being written in Go, a compiled language, versus the AWS CLI, which is written in Python. It just so happens that gsutil is also written in Python. Perhaps gsutil is severely hampered by its language choice, or simply optimized poorly? Given that GCS buckets can be configured with S3 API interoperability, is it possible to speed up uploads and downloads with s5cmd simply by working with a compiled tool?

Linux VM performance with s5cmd: Large files

It took a little while to get s5cmd working, mostly because I had to discover the hard way that GCS interoperability doesn't support S3's multipart upload API, and given that this tool is written with only AWS in mind, it will fail on large file uploads in GCP. You must provide -p=1000000, an argument that forces multi-part upload to be avoided. See s5cmd issues #1 and #2 for more info.

Note that s5cmd also offers a -c parameter for setting the number of concurrent parts/files transferred, with a default value of 5.
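One more setup note: because s5cmd speaks the S3 API, it authenticates to GCS with interoperability HMAC keys rather than your normal gcloud credentials. A rough sketch of that setup, where the service account email and key values are placeholders:

# Create an HMAC key for a service account (the access ID and secret are shown once)
gsutil hmac create transfer-sa@my-project.iam.gserviceaccount.com

# s5cmd reads standard AWS-style credentials, so export the HMAC key pair through them
export AWS_ACCESS_KEY_ID="GOOG1EXAMPLEACCESSID"
export AWS_SECRET_ACCESS_KEY="exampleHmacSecret"

# Then point s5cmd at the GCS interoperability endpoint
s5cmd --endpoint-url https://storage.googleapis.com ls s3://doit-speed-test-bucket/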

With those two arguments in mind, I performed the following Linux upload benchmarks:

time s5cmd --endpoint-url https://storage.googleapis.com cp -c=1 -p=1000000 temp_30GB_file s3://doit-speed-test-bucket/
# n2-standard-4:  6m7.459s, 83.60 MB/s
# n2-standard-80: 6m50.272s, 74.88 MB/s

time s5cmd --endpoint-url https://storage.googleapis.com cp -p=1000000 temp_30GB_file s3://doit-speed-test-bucket/
# n2-standard-4:  7m18.682s, 70.03 MB/s
# n2-standard-80: 6m48.380s, 75.22 MB/s

As expected, large file uploads perform considerably worse than gsutil given the lack of a multi-part upload strategy as an option. We are seeing 75–85 MB/s uploads compared to gsutil's 200–600 MB/s. Providing a concurrency of 1 vs. the default 5 has only a minor impact on performance. Thus, due to s5cmd's treatment of AWS as a first-class citizen without consideration for GCP, we cannot improve uploads by using s5cmd.

Below are the s5cmd download benchmarks:

time s5cmd --endpoint-url https://storage.googleapis.com cp -c=1 -p=1000000 s3://doit-speed-test-bucket/temp_30GB_file .
# n2-standard-4:  1m56.170s, 264.44 MB/s
# n2-standard-80: 1m46.196s, 289.28 MB/s

time s5cmd --endpoint-url https://storage.googleapis.com cp -c=1 s3://doit-speed-test-bucket/temp_30GB_file .
# n2-standard-4:  3m21.380s, 152.55 MB/s
# n2-standard-80: 3m45.414s, 136.28 MB/s

time s5cmd --endpoint-url https://storage.googleapis.com cp -p=1000000 s3://doit-speed-test-bucket/temp_30GB_file .
# n2-standard-4:  2m33.148s, 200.59 MB/s
# n2-standard-80: 2m48.071s, 182.78 MB/s

time s5cmd --endpoint-url https://storage.googleapis.com cp s3://doit-speed-test-bucket/temp_30GB_file .
# n2-standard-4:  1m46.378s, 288.78 MB/s
# n2-standard-80: 2m1.116s, 253.64 MB/s

What a dramatic improvement! While there is some variability in download time, it seems that by leaving out -c and -p, keeping them at their defaults, we achieve optimal speed. We are unable to reach the max write throughput of 350 MB/s, but ~289 MB/s on an n2-standard-4 is much closer to that than the ~64 MB/s provided by gsutil on the same machine. That is a 4.5X increase in download speed simply by swapping out the data transfer tool used.

Summarizing all of the above findings, for Linux:

  • Given that s5cmd cannot enable multi-part uploads when working with GCS, it makes sense to continue using gsutil for uploads to GCS so long as you include -o GSUtil:parallel_composite_upload_threshold=150M.
  • s5cmd with its default parameters blows gsutil out of the water in download performance. Simply using a data transfer tool written in a compiled language yields dramatic (4.5X) performance improvements.

Windows VM performance with gsutil: Large files

If you thought the above wasn't unusual enough, buckle in as we go off the deep end with Windows. Since the DoiT customer was dealing with Windows Server, after all, it was time to set out on benchmarking that OS. I began to suspect their problem was not going to be between the keyboard and the chair.

Having confirmed that, for Linux, gsutil works great for uploads when given the right parameters and s5cmd works great for downloads with default parameters, it was time to try these commands on Windows, where I would once again be humbled by my lack of experience with PowerShell.

I eventually managed to gather benchmarks from an n2-standard-4 machine with a local SSD attached, running the "Windows Server version 1809 Datacenter Core for Containers, built on 20200813" GCE VM image. Due to the per-vCPU licensing fees that Windows Server incurs, I opted not to gather metrics from an n2-standard-80 in this experiment.

An important side note before we dive into the metrics:
The GCP documentation on attaching local SSDs recommends that for "All Windows Servers" you should use the SCSI driver to attach your local SSD rather than the NVMe driver you would typically use for a Linux machine, as SCSI is better optimized for achieving maximum throughput performance. I went ahead and provisioned two VMs with a local SSD attached, one via NVMe and one via SCSI, determined to compare their performance alongside the various tools and parameters I had been investigating thus far.
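For reference, the only difference between the two test VMs is the local SSD interface flag passed at creation time. A sketch with gcloud, where the VM names, zone, and Windows image family are placeholders:

# Local SSD attached through the NVMe interface
gcloud compute instances create win-nvme-test \
  --zone=us-central1-a --machine-type=n2-standard-4 \
  --image-project=windows-cloud --image-family=windows-2019-core \
  --local-ssd interface=NVME

# Identical VM, but with the local SSD attached through SCSI
gcloud compute instances create win-scsi-test \
  --zone=us-central1-a --machine-type=n2-standard-4 \
  --image-project=windows-cloud --image-family=windows-2019-core \
  --local-ssd interface=SCSI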

Below are the upload speed benchmarks:

Measure-Command {gsutil cp temp_30GB_file gs://doit-speed-test-bucket/}
# NVMe: 3m50.064s, 133.53 MB/s
# SCSI: 4m7.256s, 124.24 MB/s

Measure-Command {gsutil -m cp temp_30GB_file gs://doit-speed-test-bucket/}
# NVMe: 3m59.462s, 128.29 MB/s
# SCSI: 3m34.013s, 143.54 MB/s

Measure-Command {gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp temp_30GB_file gs://doit-speed-test-bucket/}
# NVMe: 5m54.046s, 86.77 MB/s
# SCSI: 6m13.929s, 82.15 MB/s

Measure-Command {gsutil -m -o GSUtil:parallel_composite_upload_threshold=150M cp temp_30GB_file gs://doit-speed-test-bucket/}
# NVMe: 5m55.751s, 86.40 MB/s
# SCSI: 5m58.078s, 85.79 MB/s
[Image: There are no words to convey my emotions right now]

With no arguments provided to gsutil, upload throughput is ~60% of the throughput achieved on a Linux machine. Providing any combination of arguments degrades performance. When multi-part upload is enabled — which led to a 42% improvement in upload speed on Linux — the upload speed drops by 35%. You may also notice that when -m is not provided and gsutil is allowed to upload a single large file more optimally, the upload from the NVMe drive completes more quickly than from the SCSI drive, the latter of which supposedly has drivers better optimized for Windows Server. What is going on?!

Low upload performance of around 80–85 MB/s was the exact range the DoiT customer was experiencing, so their problem was at least reproducible. By removing the GCP-recommended argument
-o GSUtil:parallel_composite_upload_threshold=150M for large file uploads, the customer could remove a 35% performance penalty. 🤷

Download benchmarking tells a more harrowing tale:

Measure-Command {gsutil cp gs://doit-speed-test-bucket/temp_30GB_file .}
# NVMe 1st attempt: 11m39.426s, 43.92 MB/s
# NVMe 2nd attempt: 9m1.857s, 56.69 MB/s
# SCSI 1st attempt: 8m54.462s, 57.48 MB/s
# SCSI 2nd attempt: 10m1.023s, 51.05 MB/s

Measure-Command {gsutil -m cp gs://doit-speed-test-bucket/temp_30GB_file .}
# NVMe 1st attempt: 8m52.537s, 57.69 MB/s
# NVMe 2nd attempt: 22m4.824s, 23.19 MB/s
# NVMe 3rd attempt: 8m50.202s, 57.94 MB/s
# SCSI 1st attempt: 7m29.502s, 68.34 MB/s
# SCSI 2nd attempt: 9m9.652s, 55.89 MB/s

I could not obtain consistent download benchmarks due to the following behavior:

  • Each download operation would hang for up to 2 minutes before initiating
  • The download would begin and progress at about 68–70 MB/s, until…
  • It sometimes paused again for an indeterminate amount of time

The process of hanging and re-initiating the download would continue back and forth, causing same-VM, same-disk download speed averages to range from 23 MB/s to 58 MB/s. It was madness trying to determine whether NVMe or SCSI was more optimal for downloads with these random, extended hang-ups in the download process. More on this verdict later.

Windows VM performance with s5cmd: Large files

Frustrated with the wild and wacky gsutil download performance, I quickly moved on to s5cmd — perhaps it might solve or reduce the impact of the hang-ups?

Let's cover the s5cmd upload benchmarks first:

Measure-Command {s5cmd --endpoint-url https://storage.googleapis.com cp -c=1 -p=1000000 temp_30GB_file s3://doit-speed-test-bucket/}
# NVMe: 6m21.780s, 80.46 MB/s
# SCSI: 7m14.162s, 70.76 MB/s

Measure-Command {s5cmd --endpoint-url https://storage.googleapis.com cp -p=1000000 temp_30GB_file s3://doit-speed-test-bucket/}
# NVMe: 12m56.066s, 39.58 MB/s
# SCSI: 8m12.255s, 62.41 MB/s

As with s5cmd uploads on Linux, performance is hampered by the inability to use multi-part uploads. Upload performance with concurrency set to 1 is comparable to that achieved by the same tool on a Linux machine, but leaving concurrency at its default value of 5 causes dramatic drops (and swings) in performance. The severity of the concurrency setting's impact is unusual, but since s5cmd upload performance continues to be markedly worse than gsutil upload performance (strange, given that this holds even when both are not using multi-part uploads), we don't want to use s5cmd for uploads anyway; let's just ignore s5cmd's upload concurrency oddity.

Moving on to the s5cmd download benchmarks:

Measure-Command {s5cmd --endpoint-url https://storage.googleapis.com cp -c=1 -p=1000000 s3://doit-speed-test-bucket/temp_30GB_file .}
# NVMe 1st attempt: 2m17.954s, 222.68 MB/s
# NVMe 2nd attempt: 1m44.718s, 293.36 MB/s
# SCSI 1st attempt: 3m9.581s, 162.04 MB/s
# SCSI 2nd attempt: 1m52.500s, 273.07 MB/s

Measure-Command {s5cmd --endpoint-url https://storage.googleapis.com cp -c=1 s3://doit-speed-test-bucket/temp_30GB_file .}
# NVMe 1st attempt: 3m18.006s, 155.15 MB/s
# NVMe 2nd attempt: 4m2.792s, 126.53 MB/s
# SCSI 1st attempt: 3m37.126s, 141.48 MB/s
# SCSI 2nd attempt: 4m9.657s, 123.05 MB/s

Measure-Command {s5cmd --endpoint-url https://storage.googleapis.com cp -p=1000000 s3://doit-speed-test-bucket/temp_30GB_file .}
# NVMe 1st attempt: 2m17.151s, 223.99 MB/s
# NVMe 2nd attempt: 1m47.217s, 286.52 MB/s
# SCSI 1st attempt: 4m39.120s, 110.06 MB/s
# SCSI 2nd attempt: 1m42.159s, 300.71 MB/s

Measure-Command {s5cmd --endpoint-url https://storage.googleapis.com cp s3://doit-speed-test-bucket/temp_30GB_file .}
# NVMe 1st attempt: 2m48.714s, 182.08 MB/s
# NVMe 2nd attempt: 2m41.174s, 190.60 MB/s
# SCSI 1st attempt: 2m35.480s, 197.58 MB/s
# SCSI 2nd attempt: 2m40.483s, 191.42 MB/s

While there are some hang-ups and variability with downloads, as with gsutil, s5cmd is once again far more performant than gsutil at downloads. It also experienced shorter and/or less frequent hang-ups. These strange hang-ups still remain an occasional event, though.

In contrast to how I achieved maximum performance on a Linux VM by leaving out the -c and -p parameters, optimal performance here seems to have been achieved by including both of them with -c=1 -p=1000000. It is difficult to declare their inclusion the most optimal configuration given the random hang-ups dogging my benchmarks, but it seems to run well enough with these arguments. As with gsutil, it is also challenging to determine whether NVMe or SCSI is better optimized, due to the hang-ups.

In an attempt to better understand download speeds on NVMe and SCSI with optimal s5cmd arguments, I wrote a function that reports the Avg, Min, and Max runtime from 20 repeated downloads, with the goal of averaging out the momentary hangups:

Measure-CommandAvg {s5cmd --endpoint-url https://storage.googleapis.com cp -c=1 -p=1000000 s3://doit-speed-test-bucket/temp_30GB_file .}
### With 20 sample downloads
# NVMe:
# Avg: 1m48.014s, 284.41 MB/s
# Min: 1m23.411s, 368.30 MB/s
# Max: 3m10.989s, 160.85 MB/s
# SCSI:
# Avg: 1m47.737s, 285.14 MB/s
# Min: 1m24.784s, 362.33 MB/s
# Max: 4m44.807s, 107.86 MB/s

There continues to be variability in how long the same download takes to complete, but it is evident that SCSI does not provide an advantage over NVMe in general for large file downloads, despite supposedly being the ideal driver to use with a local SSD on a Windows VM.

Let's also validate whether uploads are more performant via NVMe using the same averaging function over 20 repeated uploads:

Measure-CommandAvg {gsutil cp temp_30GB_file gs://doit-speed-test-bucket/}
# NVMe:
# Avg: 3m23.216s, 151.17 MB/s
# Min: 2m31.169s, 203.22 MB/s
# Max: 4m13.943s, 121.42 MB/s
# SCSI:
# Avg: 5m1.570s, 101.87 MB/s
# Min: 3m2.649s, 168.19 MB/s
# Max: 35m3.276s, 14.61 MB/s

We see validation of our earlier individual runs, which indicated NVMe might be more performant than SCSI for uploads. In this 20-sample run, NVMe is considerably more performant.

Thus, with Windows VMs, in contrast to the GCP docs, not only should we avoid using -o GSUtil:parallel_composite_upload_threshold=150M with gsutil when uploading to GCS, we should also avoid SCSI and prefer NVMe as our local SSD driver to improve uploads and possibly downloads. We also see that for both downloads and uploads there are frequent, unpredictable pauses that range from 1–2 minutes to as much as 10–30 minutes.

What do I tell the customer…

At this point, I informed the customer that there are data transfer limitations inherent in using a Windows VM; however, these could be partially mitigated by:

  • Leaving optional arguments at their defaults for gsutil cp large file uploads, in spite of the GCP documentation suggesting otherwise
  • Using s5cmd -c=1 -p=1000000 instead of gsutil for downloads
  • Using the NVMe driver instead of SCSI for local SSD storage to improve upload and possibly download speeds, in spite of the GCP documentation suggesting otherwise

However, I also informed the customer that uploads and downloads would be dramatically improved simply by avoiding the Windows-based hang-ups entirely: move the data over to a Linux machine via disk snapshots, then perform data sync operations with GCS from the Linux-attached disk (a sketch of this workflow follows below). That ultimately proved to be the quickest way to gain the expected throughput between a GCE VM and GCS, and it led to a satisfied customer who was nevertheless left frustrated by nonsensical performance problems on their Windows Server.
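A rough sketch of that snapshot workflow with gcloud, assuming the data sits on a persistent disk (local SSDs cannot be snapshotted directly, so data would first need to be copied onto one); the disk and VM names are placeholders, and mounting NTFS on Debian relies on the ntfs-3g package:

# Snapshot the Windows data disk and rehydrate it as a new disk
gcloud compute disks snapshot win-data-disk --zone=us-central1-a --snapshot-names=win-data-snap
gcloud compute disks create linux-copy-disk --zone=us-central1-a --source-snapshot=win-data-snap

# Attach the new disk to an existing Linux VM
gcloud compute instances attach-disk linux-worker --zone=us-central1-a --disk=linux-copy-disk

# On the Linux VM: mount the NTFS volume read-only (device name is illustrative) and sync to GCS
sudo mount -t ntfs-3g -o ro /dev/sdb1 /mnt/windata
gsutil -m -o GSUtil:parallel_composite_upload_threshold=150M cp -r /mnt/windata gs://doit-speed-test-bucket/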

My takeaway from the experience was this: not only is gsutil woefully unoptimized for operations on Windows servers, there appears to be an underlying issue with GCS's ability to transfer data to and from Windows, as delays and hang-ups exist with both gsutil and s5cmd for both download and upload operations.

The customer's issue was solved… and yet my curiosity had not been sated. What bandwidth banditry might I find when trying to transfer a large number of small files instead of a small number of large files?

Linux VM performance with gsutil: Small files

Moving back to Linux, I split the large 30 GB file into 50K (well, 50,001) files:

mkdir parts
split -b 644245 temp_30GB_file
mv x* parts/

And proceeded to benchmark upload performance with gsutil:

nohup bash -c 'time gsutil cp -r parts/* gs://doit-speed-test-bucket/smallparts/' &
# n2-standard-4:  71m30.420s, 7.16 MB/s
# n2-standard-80: 69m32.803s, 7.36 MB/s

nohup bash -c 'time gsutil -m cp -r parts/* gs://doit-speed-test-bucket/smallparts/' &
# n2-standard-4:  9m7.045s, 56.16 MB/s
# n2-standard-80: 3m41.081s, 138.95 MB/s

As expected, providing -m to engage in multi-threaded, parallel upload of files dramatically improves upload speed — do not attempt to upload a large folder of files without it. The more vCPUs your machine possesses, the more file uploads you can engage in simultaneously.

Below are the download performance benchmarks with gsutil:

nohup bash -c 'time gsutil cp -r gs://doit-speed-test-bucket/smallparts/ parts/' &
# n2-standard-4:  61m24.516s, 8.34 MB/s
# n2-standard-80: 56m54.841s, 9.00 MB/s

nohup bash -c 'time gsutil -m cp -r gs://doit-speed-test-bucket/smallparts/ parts/' &
# n2-standard-4:  7m42.249s, 66.46 MB/s
# n2-standard-80: 3m38.421s, 140.65 MB/s

Once again, providing -m is a must — do not attempt to download a large folder of files without it. As with uploads, gsutil performance improves with parallel file transfers via -m and numerous available vCPUs.

I found nothing out of the ordinary on Linux with gsutil-based mass small file downloads and uploads.

Linux VM performance with s5cmd: Small files

Having already established that s5cmd should not be used for file uploads to GCS, I will only report the Linux download benchmarks below:

nohup bash -c 'time s5cmd --endpoint-url https://storage.googleapis.com cp s3://doit-speed-test-bucket/smallparts/* parts/' &
# n2-standard-4:  1m19.531s, 386.26 MB/s
# n2-standard-80: 1m31.592s, 335.40 MB/s

nohup bash -c 'time s5cmd --endpoint-url https://storage.googleapis.com cp -c=80 s3://doit-speed-test-bucket/smallparts/* parts/' &
# n2-standard-80: 1m29.837s, 341.95 MB/s

On the n2-standard-4 machine we see a 6.9X speedup in mass small file download speed compared to gsutil. It makes sense, then, to use s5cmd for downloading many small files as well as for downloading larger files.

Nothing out of the ordinary was observed on Linux with s5cmd-based mass small file downloads.

Windows VM performance with s5cmd: Small files (and additional large file tests)

Given that s5cmd has significantly faster downloads than gsutil on any OS, I will only consider s5cmd for the Windows download benchmarks with small files:

Measure-CommandAvg {s5cmd --endpoint-url https://storage.googleapis.com cp s3://doit-speed-test-bucket/smallparts/* parts/}
# NVMe:
# Avg: 2m39.540s, 192.55 MB/s
# Min: 2m35.323s, 197.78 MB/s
# Max: 2m44.260s, 187.02 MB/s
# SCSI:
# Avg: 2m45.431s, 185.70 MB/s
# Min: 2m40.785s, 191.06 MB/s
# Max: 2m50.930s, 179.72 MB/s

We see that downloading 50K smaller files to a Windows VM performs better and more predictably than downloading much larger files. NVMe outperforms SCSI by only a sliver.

There is an odd consistency and lack of long hang-ups with the data sync in this use case vs. the individual large file copy commands seen earlier. Just to fully verify that the hang-up trend is more likely to occur with large files, I ran the averaging function over 20 repeated downloads of the 30 GB file:

Measure-CommandAvg {s5cmd --endpoint-url                    https://storage.googleapis.com                      cp -p=million          s3://doit-speed-examination-bucket/temp_30GB_file .} ### With twenty sample downloads # NVMe: # Avg: 3m3.770s, 167.17 MB/southward # Min: 1m34.901s, 323.70 MB/southward # Max: 10m34.575s, 48.41 MB/s # SCSI:  # Avg: 2m20.131s, 219.22 MB/s # Min: 1m31.585s, 335.43 MB/s # Max: 3m43.215s, 137.63 MB/s

We see that the Windows download runtime on NVMe ranges from roughly 1m35s to 10m35s, whereas with 50K small files on the same OS the download time ranged only from 2m35s to 2m44s. Thus, there appears to be a Windows- or GCS-specific issue with large file transfers on a Windows VM.

Also note that the average NVMe download time appears to be about 73% longer (3m3s vs 1m46s) than when running s5cmd on Linux.

It is tempting to say that SCSI might be more advantageous than NVMe for downloads based on the results above, but with random hang-ups in the download process skewing the averages, I'm going to stick with NVMe as the preferred driver, given its proven effectiveness at uploading mass small files (see below) and its roughly equal performance at downloading large files as shown earlier.

Windows VM performance with gsutil: Small files

Below are the metrics for uploading many small files from a Windows VM:

Measure-CommandAvg {gsutil -q -m cp -r parts gs://doit-speed-test-bucket/smallparts/}
# NVMe:
# Avg: 16m36.562s, 30.83 MB/s
# Min: 16m22.914s, 31.25 MB/s
# Max: 17m0.299s, 30.11 MB/s
# SCSI:
# Avg: 17m29.591s, 29.27 MB/s
# Min: 17m5.236s, 29.96 MB/s
# Max: 18m3.469s, 28.35 MB/s

We see NVMe outperforms SCSI, and we continue to see that speeds are much slower than on a Linux machine. On Linux, mass small file uploads took about 9m7s, making the average Windows NVMe upload time of 16m36s nearly 82% slower than Linux.

Benchmark Reproducibility

If you are interested in performing your own benchmarks to replicate my findings, below are the shell and PowerShell scripts I used, along with comments summarizing the throughput I observed:

Linux benchmarking script

Windows benchmarking script

Effective Transfer Tool Use Conclusions

All in all, there is far more complexity than there should be in determining the best method for transferring data between GCE VMs — with data located on a local SSD — and GCS.

[Image: The performance differences between various GCE OSs and GCS are all related somehow, I just know it]

Windows servers experience drastic reductions in both download and upload speeds, for reasons as of yet unknown, when compared to the best equivalent commands on a Linux machine. These performance drops are substantial, typically 70–80% slower than the equivalent best command run on Linux. Large file transfers are impacted more significantly than the transfer of many small files.

Thus, if you need to migrate TBs of data or especially large files from Windows to GCS in a time-sensitive manner, you may want to bypass these performance issues by taking a disk snapshot, attaching a disk created from that snapshot to a Linux machine, and uploading the data from that OS.

Separate from the Windows Server issues, the default data transfer tool available on GCE VMs, gsutil, is inadequate for high-throughput downloads on any OS. By using s5cmd instead, you can achieve several-fold improvements in download speed.

To help you sort through the myriad of tool and argument choices, below is a summary of my recommendations for maximizing throughput based on the benchmarks covered in this article:

Linux — Download

  • Single large file: s5cmd --endpoint-url https://storage.googleapis.com cp s3://your_bucket/your_file .
  • Multiple files, small or large: s5cmd --endpoint-url https://storage.googleapis.com cp s3://your_bucket/path* your_path/

Linux — Upload

  • Single large file: gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp your_file gs://your_bucket/
  • Multiple files, small or large: gsutil -m -o GSUtil:parallel_composite_upload_threshold=150M cp -r your_path/ gs://your_bucket/

Windows Server — Download

  • Use NVMe, not SCSI, when attaching a local SSD
  • Single large file: s5cmd --endpoint-url https://storage.googleapis.com cp -c=1 -p=1000000 s3://your_bucket/your_file .
  • Multiple files, small or large: s5cmd --endpoint-url https://storage.googleapis.com cp s3://your_bucket/path* your_path/

Windows Server — Upload

  • Use NVMe, not SCSI, for connecting a local SSD
  • Single large file: gsutil cp your_file gs://your_bucket/
  • Multiple files, small or large: gsutil -m cp -r your_path/ gs://your_bucket/your_path/


Source: https://www.doit-intl.com/optimize-data-transfer-between-compute-engine-and-cloud-storage/
