
# Amazon S3

Transfer files using Amazon Simple Storage Service (S3)

The S3 transport transfers files from the localhost to an Amazon S3 bucket, or from a remote bucket to the localhost (transfer of directories is not supported yet).

This transport requires an AWS account. You'll have to choose a region, create a bucket, and obtain an access key ID and secret key. Refer to the AWS S3 documentation [to create an S3 bucket](http://docs.aws.amazon.com/en_us/AmazonS3/latest/gsg/GetStartedWithS3.html).

## Bucket policy

If you plan on using the credentials of your primary AWS account, no further configuration is needed, since that account owns your S3 buckets.

However, a more secure approach is to create a dedicated user with IAM and attach an access policy to the bucket. The S3 transport only requires a limited set of permissions: list the bucket contents, and get, put, and delete an object.

```json
{
  "Version": "2008-10-17",
  "Id": "ARBITRARY_POLICY_ID",
  "Statement": [
    {
      "Sid": "ARBITRARY_STMT_ID",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AWS_ACCOUNT_ID:user/IAM_USER"
      },
      "Action": [
        "s3:ListBucket",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::BUCKET_NAME/*",
        "arn:aws:s3:::BUCKET_NAME"
      ]
    }
  ]
}
```

## copy

Unlike the `move` verb, `copy` preserves the input resource during the transfer. This also means that subsequent agents can assume the resource is still available at its original location.
### Copy a file from the localhost to a remote bucket

```ruby
job "daily-report" do
  resource "file", path: "/var/daily/report"
  copy to: "my-bucket/path/to/directory", using: "s3", region: "eu-central-1", access_key_id: "ACCESS_KEY_ID", secret_key: "SECRET"
end
```

### Copy a file from a remote bucket to the localhost

```ruby
job "daily-report" do
  resource "s3_object", bucket: "my-bucket", path: "path/to/directory", region: "eu-central-1"
  copy to: "localhost", using: "s3", access_key_id: "ACCESS_KEY_ID", secret_key: "SECRET"
end
```

## move

> **Destructive action:** Unlike the `copy` verb, `move` destroys the input resource once the file has been copied to the target location.

### Move a file from the localhost to a remote bucket

```ruby
job "daily-report" do
  resource "file", path: "/var/daily/report"
  move to: "my-bucket/path/to/directory", using: "s3", region: "eu-central-1", access_key_id: "ACCESS_KEY_ID", secret_key: "SECRET"
end
```

### Move a file from a remote bucket to the localhost

```ruby
job "daily-report" do
  resource "s3_object", bucket: "my-bucket", path: "path/to/directory", region: "eu-central-1"
  move to: "localhost", using: "s3", access_key_id: "ACCESS_KEY_ID", secret_key: "SECRET"
end
```

## Options

| Option | Description | Required / Default |
|---|---|---|
| `access_key_id` | The access key ID of the AWS user | Required: yes |
| `region` | The AWS region (data center). One of: `us-east-1`, `us-west-1`, `us-west-2`, `eu-west-1`, `eu-central-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`, `sa-east-1` | Defaults to: `us-east-1` |
| `secret_key` | The secret key used to authenticate the AWS user | Required: yes |

You'll likely want to [specify default values](doc:specify-default-values) globally for access to your S3 buckets (`access_key_id`, `secret_key`, `region`).
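Since `region` defaults to `us-east-1`, the option can be omitted entirely for buckets hosted in that region. A minimal sketch (the job and bucket names are hypothetical):

```ruby
job "us-daily-report" do
  resource "file", path: "/var/daily/report"
  # No region option given: the transport falls back to the default, us-east-1.
  copy to: "my-us-east-bucket/reports", using: "s3", access_key_id: "ACCESS_KEY_ID", secret_key: "SECRET"
end
```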