docker run -p 9000:9000 -v /path/to/credentials.json:/credentials.json -e "GOOGLE_APPLICATION_CREDENTIALS=/credentials.json" -e "MINIO_ACCESS_KEY=minioaccountname" -e "MINIO_SECRET_KEY=minioaccountkey" minio/minio gateway gcs yourprojectid
export GOOGLE_APPLICATION_CREDENTIALS=/path/credentials.json
export MINIO_ACCESS_KEY=minioaccesskey
export MINIO_SECRET_KEY=miniosecretkey
minio gateway gcs yourprojectid
Minio object storage brings complete Amazon S3 v4/v2 API support to Google Cloud Platform.
Minio aggregates multiple persistent disks into a single large pool, with erasure coding and bitrot protection. Alternatively, you can store objects natively on Google Cloud Storage. The latter lets you use other GCP services, such as BigQuery and Cloud Vision (including its OCR capabilities).
When deployed via the Helm chart, Minio configuration and credentials are managed with Kubernetes ConfigMaps and Secrets. In addition, you may use Kubernetes autoscaling for Minio lambda functions.
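As a sketch of what such a Helm deployment can look like (the release name, chart name, and credential values below are illustrative assumptions, not fixed requirements):

```shell
# Install the community Minio chart, passing credentials as chart values;
# the chart stores them in a Kubernetes Secret rather than in plain text.
helm install my-minio stable/minio \
  --set accessKey=minioaccesskey \
  --set secretKey=miniosecretkey

# Inspect the generated Secret and ConfigMap (names assume the release above).
kubectl get secret my-minio -o yaml
kubectl get configmap my-minio -o yaml
```

Keeping credentials in a Secret means they can be rotated with `helm upgrade` or `kubectl` without rebuilding images or editing pod specs.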
Just as Docker and Kubernetes abstract cloud computing infrastructure, Minio abstracts cloud storage infrastructure. This enables cloud-native applications to adopt any compute infrastructure without rewrites or migration.
The Minio server can be deployed directly on Google Cloud Platform, or run as a layer above Google Cloud Storage that provides an Amazon S3-compatible API.
Native Kubernetes support on GCP opens up the opportunity to use Google Compute Engine persistent disks as Kubernetes persistent volumes and to deploy Minio on top of your Kubernetes cluster, just as you would deploy any other application. With Kubernetes running on GCP, you can scale your Minio deployment to massive proportions.
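A minimal sketch of that pattern, assuming a GKE cluster whose default StorageClass dynamically provisions GCE persistent disks (all names, sizes, and credential values below are illustrative assumptions):

```shell
# Claim a GCE persistent disk via a PVC, then run Minio on top of it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: minio/minio
        args: ["server", "/data"]
        env:
        - name: MINIO_ACCESS_KEY
          value: minioaccesskey
        - name: MINIO_SECRET_KEY
          value: miniosecretkey
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: minio-pvc
EOF
```

From Minio's point of view the GCE disk is just a mounted filesystem, which is what lets the same manifest carry over to any Kubernetes cluster with dynamic volume provisioning.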
By default, the Minio server listens on port 9000. Use Kubernetes autoscaling and load-balancer services to scale on demand. Minio provides tools and SDKs for various popular languages, and you can also use the AWS SDK to access the Minio server.
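As one hedged example of AWS tooling talking to Minio, the AWS CLI can be pointed at a local Minio server by overriding the endpoint; the credentials below are assumptions that must match whatever the server was started with:

```shell
# Credentials matching the Minio server's access and secret keys (assumed values).
export AWS_ACCESS_KEY_ID=minioaccesskey
export AWS_SECRET_ACCESS_KEY=miniosecretkey

# Direct S3 calls at Minio's default port 9000 instead of Amazon S3.
aws --endpoint-url http://localhost:9000 s3 mb s3://test-bucket
aws --endpoint-url http://localhost:9000 s3 cp ./hello.txt s3://test-bucket/
aws --endpoint-url http://localhost:9000 s3 ls s3://test-bucket
```

Because Minio speaks the S3 API, only the endpoint changes; the rest of an existing S3 workflow can stay as-is.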