Accessing Object Storage in AWS


This tutorial demonstrates how to access AWS S3 from the Neuro Platform. You will set up a new Neuro project, create an S3 bucket, and make it accessible from Neuro Platform jobs.

Make sure you have the Neuro CLI and cookiecutter installed.

Creating a Neuro Project

To create a new Neuro project and build an image, run:

$ cookiecutter gh:neuro-inc/cookiecutter-neuro-project --checkout release
$ cd <project-slug>
$ neuro-flow build myimage

Creating an AWS IAM User

Follow Creating an IAM User in Your AWS Account.

In the AWS Console, open the "Services" drop-down list and choose "IAM" (Identity and Access Management). On the left-hand panel, choose "Access management" -> "Users", click the "Add user" button, and go through the wizard. As a result, you'll have a new user added.

Ensure that this user has "AmazonS3FullAccess" in the list of permissions.

Then, you'll need to create an access key for the newly created user. To do that, open the user's description page, go to the "Security credentials" tab, and press the "Create access key" button.

Save these credentials to a file ~/aws-credentials.txt in your home directory.
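The file should use the standard AWS shared-config format. One way to create it is shown below; the two key values are placeholders, not real credentials, so substitute the access key ID and secret access key generated for your IAM user:

```shell
# Write the credentials in the standard AWS shared-config format.
# YOUR_ACCESS_KEY_ID and YOUR_SECRET_ACCESS_KEY are placeholders.
cat > ~/aws-credentials.txt <<'EOF'
[default]
aws_access_key_id=YOUR_ACCESS_KEY_ID
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
EOF
```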


Set appropriate permissions on the secret file:

chmod 600 ~/aws-credentials.txt
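You can confirm the result with `stat`. The check below runs on a scratch file for illustration; the same command applies to ~/aws-credentials.txt:

```shell
# chmod 600 leaves only the owner's read/write bits set.
f=$(mktemp)
chmod 600 "$f"
stat -c '%a' "$f"    # GNU coreutils; on macOS use: stat -f '%Lp' "$f"
# prints: 600
rm -f "$f"
```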

Set up the Neuro Platform to use this file and check that the secret is now available:

$ neuro secret add aws-key @~/aws-credentials.txt
$ neuro secret ls

Open .neuro/live.yaml, find the remote_debug section under jobs, and add the following lines at the end of remote_debug:

     secret_files: '["secret:aws-key:/var/secrets/aws.txt"]'
     additional_env_vars: '{"AWS_CONFIG_FILE": "/var/secrets/aws.txt"}'
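After this edit, the remote_debug job definition should contain lines like the following (the job's other fields, which depend on your project template, are omitted here):

```yaml
jobs:
  remote_debug:
    # ... other fields of the job ...
    secret_files: '["secret:aws-key:/var/secrets/aws.txt"]'
    additional_env_vars: '{"AWS_CONFIG_FILE": "/var/secrets/aws.txt"}'
```

This mounts the secret as a file inside the job and points the aws CLI at it via the standard AWS_CONFIG_FILE environment variable.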

Creating a Bucket and Granting Access

Now, create a new S3 bucket. Remember: bucket names are globally unique.
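The commands below assume the bucket name is stored in the BUCKET_NAME variable. The value here matches the name used later in this tutorial, but since bucket names are globally unique you will most likely need your own variation:

```shell
# Example bucket name; S3 bucket names are globally unique, so pick your own.
BUCKET_NAME=my-neuro-bucket-42
echo "$BUCKET_NAME"
```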

aws s3 mb s3://$BUCKET_NAME/


Create a file and upload it to the S3 bucket:

echo "Hello World" | aws s3 cp - s3://$BUCKET_NAME/hello.txt

Change the default preset to cpu-small in .neuro/live.yaml to avoid consuming GPU resources for this test:

  preset: cpu-small

Run a development job and connect to the job's shell:

$ neuro-flow run remote_debug

In your job's shell, use the aws CLI to access your bucket:

aws s3 cp s3://my-neuro-bucket-42/hello.txt -

This should print Hello World.

To close the remote terminal session, press ^D or type exit.

Please don't forget to terminate the job when you're done working with it:

$ neuro-flow kill remote_debug
