Upload the entire Bitbucket repository to S3 using Bitbucket Pipelines

I am using Bitbucket Pipelines. I want the entire contents of my repository (which is very small) to be pushed to S3. I don't want to zip it up, push the archive to S3, and then unzip it there. I just want it to take the existing file/folder structure in my Bitbucket repository and put that on S3.

What should the YAML file and the .py file look like?

Here is the current YAML file:

image: python:3.5.1

pipelines:
  branches:
    master:
      - step:
          script:
            # - apt-get update # required to install zip
            # - apt-get install -y zip # required if you want to zip repository objects
            - pip install boto3==1.3.0 # required for s3_upload.py
            # the first argument is the name of the existing S3 bucket to upload the artefact to
            # the second argument is the artefact to be uploaded
            # the third argument is the bucket key
            # html files
            - python s3_upload.py my-bucket-name html/index_template.html html/index_template.html # run the deployment script
            # Example command line parameters. Replace with your values
            #- python s3_upload.py bb-s3-upload SampleApp_Linux.zip SampleApp_Linux # run the deployment script

And here is my current Python script:

from __future__ import print_function
import os
import sys
import argparse
import boto3
from botocore.exceptions import ClientError

def upload_to_s3(bucket, artefact, bucket_key):
    """
    Uploads an artefact to Amazon S3
    """
    try:
        client = boto3.client('s3')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False
    try:
        client.put_object(
            Body=open(artefact, 'rb'),
            Bucket=bucket,
            Key=bucket_key
        )
    except ClientError as err:
        print("Failed to upload artefact to S3.\n" + str(err))
        return False
    except IOError as err:
        print("Failed to access artefact in this directory.\n" + str(err))
        return False
    return True


def main():

    parser = argparse.ArgumentParser()
    parser.add_argument("bucket", help="Name of the existing S3 bucket")
    parser.add_argument("artefact", help="Name of the artefact to be uploaded to S3")
    parser.add_argument("bucket_key", help="Name of the S3 Bucket key")
    args = parser.parse_args()

    if not upload_to_s3(args.bucket, args.artefact, args.bucket_key):
        sys.exit(1)

if __name__ == "__main__":
    main()

This requires me to list every single file in the repo as a separate command in the YAML file. I just want it to capture everything and upload it to S3.

5 answers

I modified the Python script, 's3_upload.py', so that it can upload a whole folder as well as a single file:

from __future__ import print_function
import os
import sys
import argparse
import boto3
#import zipfile
from botocore.exceptions import ClientError

def upload_to_s3(bucket, artefact, is_folder, bucket_key):
    try:
        client = boto3.client('s3')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False
    if is_folder == 'true':
        for root, dirs, files in os.walk(artefact, topdown=False):
            print('Walking it')
            for file in files:
                #add a check like this if you just want certain file types uploaded
                #if file.endswith('.js'):
                try:
                    print(file)
                    client.upload_file(os.path.join(root, file), bucket, os.path.join(root, file))
                except ClientError as err:
                    print("Failed to upload artefact to S3.\n" + str(err))
                    return False
                except IOError as err:
                    print("Failed to access artefact in this directory.\n" + str(err))
                    return False
                #else:
                #    print('Skipping file:' + file)
    else:
        print('Uploading file ' + artefact)
        client.upload_file(artefact, bucket, bucket_key)
    return True


def main():

    parser = argparse.ArgumentParser()
    parser.add_argument("bucket", help="Name of the existing S3 bucket")
    parser.add_argument("artefact", help="Name of the artefact to be uploaded to S3")
    parser.add_argument("is_folder", help="True if its the name of a folder")
    parser.add_argument("bucket_key", help="Name of file in bucket")
    args = parser.parse_args()

    if not upload_to_s3(args.bucket, args.artefact, args.is_folder, args.bucket_key):
        sys.exit(1)

if __name__ == "__main__":
    main()
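
One detail worth noting about the walk above: it passes os.path.join(root, file) as both the local path and the S3 key, so the keys end up mirroring the repository paths exactly (for example lambda/handlers/foo.js). If you would rather choose the prefix yourself and keep keys relative to the folder being uploaded, a minimal variation could look like this (upload_folder_with_prefix is a hypothetical helper, not part of the answer above):

import os
import boto3

def upload_folder_with_prefix(bucket, folder, prefix):
    """Upload every file under 'folder', storing it under 'prefix' in the bucket."""
    client = boto3.client('s3')
    for root, dirs, files in os.walk(folder):
        for name in files:
            local_path = os.path.join(root, name)
            # key = prefix + path of the file relative to the uploaded folder
            relative_path = os.path.relpath(local_path, folder).replace(os.sep, '/')
            key = prefix.rstrip('/') + '/' + relative_path if prefix else relative_path
            client.upload_file(local_path, bucket, key)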

bitbucket-pipelines.yml:

---
image: python:3.5.1

pipelines:
  branches:
    dev:
      - step:
          script:
            - pip install boto3==1.4.1 # required for s3_upload.py
            - pip install requests
            # the first argument is the name of the existing S3 bucket to upload the artefact to
            # the second argument is the artefact to be uploaded
            # the third argument is if the artefact is a folder
            # the fourth argument is the bucket_key to use
            - python s3_emptyBucket.py dev-slz-processor-repo
            - python s3_upload.py dev-slz-processor-repo lambda true lambda
            - python s3_upload.py dev-slz-processor-repo node_modules true node_modules
            - python s3_upload.py dev-slz-processor-repo config.dev.json false config.json
    stage:
      - step:
          script:
            - pip install boto3==1.3.0 # required for s3_upload.py
            - python s3_emptyBucket.py staging-slz-processor-repo
            - python s3_upload.py staging-slz-processor-repo lambda true lambda
            - python s3_upload.py staging-slz-processor-repo node_modules true node_modules
            - python s3_upload.py staging-slz-processor-repo config.staging.json false config.json
    master:
      - step:
          script:
            - pip install boto3==1.3.0 # required for s3_upload.py
            - python s3_emptyBucket.py prod-slz-processor-repo
            - python s3_upload.py prod-slz-processor-repo lambda true lambda
            - python s3_upload.py prod-slz-processor-repo node_modules true node_modules
            - python s3_upload.py prod-slz-processor-repo config.prod.json false config.json

dev "", dev-slz-processor-repo

, "s3_emptyBucket", :

from __future__ import print_function
import os
import sys
import argparse
import boto3
#import zipfile
from botocore.exceptions import ClientError

def empty_bucket(bucket):
    try:
        resource = boto3.resource('s3')
    except ClientError as err:
        print("Failed to create boto3 resource.\n" + str(err))
        return False
    print("Removing all objects from bucket: " + bucket)
    resource.Bucket(bucket).objects.delete()
    return True


def main():

    parser = argparse.ArgumentParser()
    parser.add_argument("bucket", help="Name of the existing S3 bucket to empty")
    args = parser.parse_args()

    if not empty_bucket(args.bucket):
        sys.exit(1)

if __name__ == "__main__":
    main()
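
Both scripts rely on boto3 picking up credentials from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables that Bitbucket injects from the repository's secured variables. If those variables are missing, boto3 only fails with a NoCredentialsError once the first request is made; as an optional guard (not part of the original answer), you could fail fast at the top of main() with something like this:

import os
import sys

def require_aws_env():
    """Exit early with a clear message if the AWS credential variables are not set."""
    missing = [name for name in ('AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY')
               if not os.environ.get(name)]
    if missing:
        print('Missing environment variables: ' + ', '.join(missing))
        sys.exit(1)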

Alternatively, the whole deployment can live in the YAML file by using a Docker image that ships with the AWS CLI, such as cgswong/aws, rather than a Bitbucket-specific image like the one mentioned in another answer here (abesiyo/s3).

image: cgswong/aws

pipelines:
  branches:
    master:
      - step:
          script:
            - aws s3 --region "us-east-1" sync public/ s3://static-site-example.activo.com --cache-control "public, max-age=14400" --delete

A few notes on this setup:

  1. Make sure the S3 bucket already exists (and, for a static site, is configured for website hosting).
  2. The command syncs the contents of the public/ folder to the root '/' of the bucket; adjust the source path to whatever you want uploaded.
  3. The '--delete' option removes objects from the bucket that no longer exist in the source folder.
  4. '--cache-control' sets the Cache-Control header on the uploaded S3 objects, which is useful for static sites (a boto3 sketch for setting this without the CLI follows after these notes).
  5. Define the AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) as secured environment variables in the repository's Pipelines settings.

Full write-up: Bitbucket Pipelines, S3 and CloudFront
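
If you prefer to stay with the boto3 script from the question instead of the AWS CLI, the same Cache-Control header can be attached per object through the ExtraArgs parameter of upload_file. A minimal sketch (the file, bucket and key names are placeholders):

import boto3

client = boto3.client('s3')
client.upload_file(
    'public/index.html',            # local file (placeholder)
    'my-static-site-bucket',        # bucket name (placeholder)
    'index.html',                   # S3 key (placeholder)
    ExtraArgs={
        'CacheControl': 'public, max-age=14400',
        'ContentType': 'text/html',
    }
)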


Another option is the s3 Docker image: https://hub.docker.com/r/abesiyo/s3/

bitbucket-pipelines.yml:

image: abesiyo/s3

pipelines:
  default:
    - step:
        script:
          - s3 --region "us-east-1" rm s3://<bucket name>
          - s3 --region "us-east-1" sync . s3://<bucket name>

Also set up the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables in the repository's Pipelines settings.


For deploying to Amazon S3, my bitbucket-pipelines.yml looks like this:

image: attensee/s3_website

pipelines:
  default:
    - step:
        script:
          - s3_website push

I'm using the attensee/s3_website Docker image because it has the s3_website tool installed. The s3_website configuration file (s3_website.yml), created in the root of the repository in Bitbucket, looks like this:

s3_id: <%= ENV['S3_ID'] %>
s3_secret: <%= ENV['S3_SECRET'] %>
s3_bucket: bitbucket-pipelines
site: .

We have to define the S3_ID and S3_SECRET environment variables as secured variables in the Bitbucket Pipelines settings.

Thanks to https://www.savjee.be/2016/06/Deploying-website-to-ftp-or-amazon-s3-with-BitBucket-Pipelines/


Atlassian now offers Pipes to simplify the configuration of common tasks, and there is one for deploying to S3 as well.

There is no need to use a special Docker image for it:

image: node:8

pipelines:
  branches:
    master:
      - step:
          script:
            - pipe: atlassian/aws-s3-deploy:0.2.1
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: "us-east-1"
                S3_BUCKET: "your.bucket.name"
                LOCAL_PATH: "dist"
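
For the original question (pushing the whole repository rather than a build folder), LOCAL_PATH can point at the repository root instead of "dist"; keep in mind that the clone directory also contains .git, so check the pipe's documentation for a way to pass exclude arguments to the underlying sync.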

Source: https://habr.com/ru/post/1654856/

