Google Cloud has announced Filestore High Scale, a new storage option and tier of its existing Filestore service, aimed at workloads that can take advantage of distributed high-performance file storage.
With Filestore High Scale, users can deploy shared file systems delivering hundreds of thousands of IOPS, tens of GB/s of throughput, and capacities of hundreds of TBs. The new tier is based on technology Google acquired when it bought Elastifile in 2019.
“Virtual screening allows us to computationally screen billions of small molecules against a target protein in order to discover potential treatments and therapies much faster than traditional experimental testing methods,” says Christoph Gorgulla, a postdoctoral research fellow at Harvard Medical School’s Wagner Lab, which has already put the new service through its paces.
Gorgulla added, “As researchers, we hardly have the time to invest in learning how to set up and manage a needlessly complicated file system cluster or to constantly monitor the health of our storage system. We needed a file system that could handle the load generated concurrently by thousands of clients, which have hundreds of thousands of vCPUs.”
The standard Google Cloud Filestore service already supports some of these use cases, but Google Cloud noted that it built Filestore High Scale specifically for high-performance computing (HPC) workloads, with the announcement highlighting biotech use cases around COVID-19.
Filestore High Scale is meant to support tens of thousands of concurrent clients, which is not a typical use case, but developers who need this kind of scale can now get it on Google Cloud.
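As a rough illustration, a High Scale instance can be provisioned with the gcloud CLI and then mounted over NFS on each client VM like any other Filestore share. The instance name, zone, share name, capacity, and IP address below are hypothetical placeholders, and the exact tier name and minimum capacity should be checked against the current documentation:

```shell
# Create a Filestore instance on the High Scale tier
# (tier name and 60TB capacity are assumptions; verify in the docs).
gcloud filestore instances create hpc-share \
    --zone=us-central1-a \
    --tier=HIGH_SCALE_SSD \
    --file-share=name=vol1,capacity=60TB \
    --network=name=default

# On each client VM, mount the share over standard NFS
# (10.0.0.2 stands in for the instance's actual IP address).
sudo mkdir -p /mnt/filestore
sudo mount -o rw,hard 10.0.0.2:/vol1 /mnt/filestore
```

Because the share is exposed over plain NFS, the thousands of concurrent clients Gorgulla describes need no special client-side software beyond a standard NFS mount.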
Google Cloud also announced that all Filestore tiers now provide beta support for NFS IP-based access controls. This is an important new feature for those companies that have advanced security requirements on top of their need for a high-performance, fully managed file storage service.
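To sketch what IP-based access control looks like, the Filestore API lets a file share carry NFS export options that restrict which client IP ranges may mount it and with what permissions. The snippet below is a hedged example of that configuration shape; the specific ranges and share name are invented for illustration:

```json
{
  "fileShares": [
    {
      "name": "vol1",
      "capacityGb": 61440,
      "nfsExportOptions": [
        {
          "ipRanges": ["10.0.0.0/24"],
          "accessMode": "READ_WRITE",
          "squashMode": "NO_ROOT_SQUASH"
        },
        {
          "ipRanges": ["10.0.1.0/24"],
          "accessMode": "READ_ONLY",
          "squashMode": "ROOT_SQUASH"
        }
      ]
    }
  ]
}
```

In this sketch, one subnet gets full read-write access while another is limited to read-only with root squashing, which is the kind of policy companies with stricter security requirements would apply.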