APPLIES TO: Azure Data Factory · Azure Synapse Analytics

This article outlines how to use the Copy activity to copy data from Amazon Simple Storage Service (Amazon S3) and how to use Data Flow to transform data in Amazon S3. To learn more, read the introductory articles for Azure Data Factory and Azure Synapse Analytics.

To learn more about the data migration scenario from Amazon S3 to Azure Storage, see Migrate data from Amazon S3 to Azure Storage .

Supported capabilities

This Amazon S3 connector is supported for the following capabilities:

Supported capabilities                      IR
Copy activity (source/-)                    ① ②
Mapping data flow (source/-)                ①
Lookup activity                             ① ②
GetMetadata activity                        ① ②
Delete activity                             ① ②

① Azure integration runtime ② Self-hosted integration runtime

Specifically, this Amazon S3 connector supports copying files as is or parsing files with the supported file formats and compression codecs. You can also choose to preserve file metadata during copy. The connector uses AWS Signature Version 4 to authenticate requests to S3.

If you want to copy data from any S3-compatible storage provider, see Amazon S3 Compatible Storage.

Required permissions

To copy data from Amazon S3, make sure you've been granted the following permissions for Amazon S3 object operations: s3:GetObject and s3:GetObjectVersion.

If you use the Data Factory UI to author, the additional s3:ListAllMyBuckets and s3:ListBucket/s3:GetBucketLocation permissions are required for operations like testing a connection to the linked service and browsing from the root. If you don't want to grant these permissions, you can choose the "Test connection to file path" or "Browse from specified path" options in the UI.

For the full list of Amazon S3 permissions, see Specifying Permissions in a Policy on the AWS site.
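For illustration, the following is a minimal sketch of an IAM policy that grants the permissions listed above. The bucket ARN is a placeholder, and you may scope the resources differently in your own account.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [ "s3:GetObject", "s3:GetObjectVersion" ],
                "Resource": "arn:aws:s3:::<your-bucket-name>/*"
            },
            {
                "Effect": "Allow",
                "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ],
                "Resource": "arn:aws:s3:::<your-bucket-name>"
            },
            {
                "Effect": "Allow",
                "Action": "s3:ListAllMyBuckets",
                "Resource": "*"
            }
        ]
    }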

Getting started

To perform the Copy activity with a pipeline, you can use one of the following tools or SDKs:

  • The Copy Data tool
  • The Azure portal
  • The .NET SDK
  • The Python SDK
  • Azure PowerShell
  • The REST API
  • The Azure Resource Manager template
    Create an Amazon Simple Storage Service (S3) linked service using UI

    Use the following steps to create an Amazon S3 linked service in the Azure portal UI.

    1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace, select Linked services, and then select New.
    2. Search for Amazon and select the Amazon S3 connector.
    3. Configure the service details, test the connection, and create the new linked service.

    Connector configuration details

    The following sections provide details about properties that are used to define Data Factory entities specific to Amazon S3.

    Linked service properties

    The following properties are supported for an Amazon S3 linked service:

    authenticationType: Specify the authentication type used to connect to Amazon S3. You can choose to use access keys for an AWS Identity and Access Management (IAM) account or temporary security credentials. Allowed values are AccessKey (default) and TemporarySecurityCredentials.

    accessKeyId: ID of the secret access key.

    secretAccessKey: The secret access key itself. Mark this field as a SecureString to store it securely, or reference a secret stored in Azure Key Vault.

    sessionToken: Applicable when using temporary security credentials authentication. Learn how to request temporary security credentials from AWS. Note that AWS temporary credentials expire after between 15 minutes and 36 hours, depending on settings. Make sure your credential is valid when the activity runs, especially for operationalized workloads; for example, you can refresh it periodically and store it in Azure Key Vault. Mark this field as a SecureString to store it securely, or reference a secret stored in Azure Key Vault.

    serviceUrl: Specify the custom S3 endpoint https://<service url>. Change it only if you want to try a different service endpoint or want to switch between https and http.

    connectVia: The integration runtime to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime.

    Example: using access key authentication

    "name": "AmazonS3LinkedService", "properties": { "type": "AmazonS3", "typeProperties": { "accessKeyId": "<access key id>", "secretAccessKey": { "type": "SecureString", "value": "<secret access key>" "connectVia": { "referenceName": "<name of Integration Runtime>", "type": "IntegrationRuntimeReference"

    Example: using temporary security credential authentication

    "name": "AmazonS3LinkedService", "properties": { "type": "AmazonS3", "typeProperties": { "authenticationType": "TemporarySecurityCredentials", "accessKeyId": "<access key id>", "secretAccessKey": { "type": "SecureString", "value": "<secret access key>" "sessionToken": { "type": "SecureString", "value": "<session token>" "connectVia": { "referenceName": "<name of Integration Runtime>", "type": "IntegrationRuntimeReference"

    Dataset properties

    For a full list of sections and properties available for defining datasets, see the Datasets article.

    Azure Data Factory supports the following file formats. Refer to each article for format-based settings.

  • Avro format
  • Binary format
  • Delimited text format
  • Excel format
  • JSON format
  • ORC format
  • Parquet format
  • XML format
    The following properties are supported for Amazon S3 under location settings in a format-based dataset:

    folderPath: The path to the folder under the given bucket. If you want to use a wildcard to filter the folder, skip this setting and specify that in the activity source settings.

    fileName: The file name under the given bucket and folder path. If you want to use a wildcard to filter files, skip this setting and specify that in the activity source settings.

    version: The version of the S3 object, if S3 versioning is enabled. If it's not specified, the latest version will be fetched.

    Example:

    "name": "DelimitedTextDataset", "properties": { "type": "DelimitedText", "linkedServiceName": { "referenceName": "<Amazon S3 linked service name>", "type": "LinkedServiceReference" "schema": [ < physical schema, optional, auto retrieved during authoring > ], "typeProperties": { "location": { "type": "AmazonS3Location", "bucketName": "bucketname", "folderPath": "folder/subfolder" "columnDelimiter": ",", "quoteChar": "\"", "firstRowAsHeader": true, "compressionCodec": "gzip"

    Copy activity properties

    For a full list of sections and properties available for defining activities, see the Pipelines article. This section provides a list of properties that the Amazon S3 source supports.

    Amazon S3 as a source type

    Azure Data Factory supports the following file formats. Refer to each article for format-based settings.

  • Avro format
  • Binary format
  • Delimited text format
  • Excel format
  • JSON format
  • ORC format
  • Parquet format
  • XML format
    The following properties are supported for Amazon S3 under storeSettings settings in a format-based copy source:

    OPTION 1: static path
    Copy from the bucket or folder/file path specified in the dataset. If you want to copy all files under a bucket or folder, additionally specify wildcardFileName as *.

    OPTION 2: S3 prefix
    prefix: Prefix for the S3 key name under the given bucket configured in a dataset to filter source S3 files. S3 keys whose names start with bucket_in_dataset/this_prefix are selected. It uses S3's service-side filter, which provides better performance than a wildcard filter.
    When you use prefix and copy to a file-based sink while preserving hierarchy, note that the sub-path after the last "/" in the prefix is preserved. For example, if you have the source bucket/folder/subfolder/file.txt and configure the prefix as folder/sub, the preserved file path is subfolder/file.txt.

    OPTION 3: wildcard
    wildcardFolderPath: The folder path with wildcard characters under the given bucket configured in a dataset to filter source folders. Allowed wildcards are * (matches zero or more characters) and ? (matches zero or a single character). Use ^ to escape if your folder name has a wildcard or this escape character inside. See more examples in Folder and file filter examples.
    wildcardFileName: The file name with wildcard characters under the given bucket and folder path (or wildcard folder path) to filter source files. Allowed wildcards are * (matches zero or more characters) and ? (matches zero or a single character). Use ^ to escape if your file name has a wildcard or this escape character inside. See more examples in Folder and file filter examples.

    OPTION 4: a list of files
    fileListPath: Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset. When you're using this option, do not specify a file name in the dataset. See more examples in File list examples.

    Additional settings:

    recursive: Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. Allowed values are true (default) and false. This property doesn't apply when you configure fileListPath.

    deleteFilesAfterCompletion: Indicates whether the binary files are deleted from the source store after successfully moving to the destination store. The file deletion is per file, so when the copy activity fails, you will see that some files have already been copied to the destination and deleted from the source, while others still remain in the source store. This property is valid only in the binary files copy scenario. The default value is false.

    modifiedDatetimeStart: Files are filtered based on the last modified attribute. Files are selected if their last modified time is greater than or equal to modifiedDatetimeStart and less than modifiedDatetimeEnd. The time is applied to a UTC time zone in the format "2018-12-01T05:00:00Z". The properties can be NULL, which means no file attribute filter is applied to the dataset. When modifiedDatetimeStart has a datetime value but modifiedDatetimeEnd is NULL, the files whose last modified attribute is greater than or equal to the datetime value are selected. When modifiedDatetimeEnd has a datetime value but modifiedDatetimeStart is NULL, the files whose last modified attribute is less than the datetime value are selected. This property doesn't apply when you configure fileListPath.

    modifiedDatetimeEnd: Same as above.

    enablePartitionDiscovery: For files that are partitioned, specify whether to parse the partitions from the file path and add them as additional source columns. Allowed values are false (default) and true.

    partitionRootPath: When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.
    If it is not specified, by default:
    - When you use the file path in the dataset or a list of files on the source, the partition root path is the path configured in the dataset.
    - When you use a wildcard folder filter, the partition root path is the sub-path before the first wildcard.
    - When you use prefix, the partition root path is the sub-path before the last "/".
    For example, assuming you configure the path in the dataset as "root/folder/year=2020/month=08/day=27":
    - If you specify the partition root path as "root/folder/year=2020", the copy activity generates two more columns, month and day, with the values "08" and "27" respectively, in addition to the columns inside the files.
    - If the partition root path is not specified, no extra columns are generated.

    maxConcurrentConnections: The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.
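    For instance, a minimal sketch of storeSettings that combines OPTION 2 (prefix) with the last-modified filter could look like the following; the prefix value and the time window are illustrative placeholders only.

    "storeSettings": {
        "type": "AmazonS3ReadSettings",
        "prefix": "folder/subfolder/report_",
        "modifiedDatetimeStart": "2018-12-01T05:00:00Z",
        "modifiedDatetimeEnd": "2018-12-01T06:00:00Z"
    }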

    Example:

    "activities":[
            "name": "CopyFromAmazonS3",
            "type": "Copy",
            "inputs": [
                    "referenceName": "<Delimited text input dataset name>",
                    "type": "DatasetReference"
            "outputs": [
                    "referenceName": "<output dataset name>",
                    "type": "DatasetReference"
            "typeProperties": {
                "source": {
                    "type": "DelimitedTextSource",
                    "formatSettings":{
                        "type": "DelimitedTextReadSettings",
                        "skipLineCount": 10
                    "storeSettings":{
                        "type": "AmazonS3ReadSettings",
                        "recursive": true,
                        "wildcardFolderPath": "myfolder*A",
                        "wildcardFileName": "*.csv"
                "sink": {
                    "type": "<sink type>"
    

    Folder and file filter examples

    This section describes the resulting behavior of the folder path and file name with wildcard filters.

    In the following examples, the bucket is named bucket and the source folder structure is:

    bucket
        FolderA
            File1.csv
            File2.json
            Subfolder1
                File3.csv
                File4.json
                File5.csv
        AnotherFolderB
            File6.csv

    Wildcard filter Folder*/* with recursive = false retrieves:
        FolderA/File1.csv
        FolderA/File2.json

    Wildcard filter Folder*/* with recursive = true retrieves:
        FolderA/File1.csv
        FolderA/File2.json
        FolderA/Subfolder1/File3.csv
        FolderA/Subfolder1/File4.json
        FolderA/Subfolder1/File5.csv

    Wildcard filter Folder*/*.csv with recursive = false retrieves:
        FolderA/File1.csv

    Wildcard filter Folder*/*.csv with recursive = true retrieves:
        FolderA/File1.csv
        FolderA/Subfolder1/File3.csv
        FolderA/Subfolder1/File5.csv

    File list examples

    This section describes the resulting behavior of using a file list path in a Copy activity source.

    Assume that you have the following source folder structure and want to copy the files listed in FileListToCopy.txt:

    bucket
        FolderA
            File1.csv
            File2.json
            Subfolder1
                File3.csv
                File4.json
                File5.csv
        Metadata
            FileListToCopy.txt

    Content of FileListToCopy.txt:

    File1.csv
    Subfolder1/File3.csv
    Subfolder1/File5.csv

    Configuration in the dataset:
    - Bucket: bucket
    - Folder path: FolderA

    Configuration in the Copy activity source:
    - File list path: bucket/Metadata/FileListToCopy.txt

    The file list path points to a text file in the same data store that includes a list of files you want to copy, one file per line, with the relative path to the path configured in the dataset.
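    As a rough sketch, a Copy activity source that consumes this file list might be configured like the following; delimited text is just one example format, and fileListPath is the setting specific to this scenario.

    "source": {
        "type": "DelimitedTextSource",
        "formatSettings": {
            "type": "DelimitedTextReadSettings"
        },
        "storeSettings": {
            "type": "AmazonS3ReadSettings",
            "fileListPath": "bucket/Metadata/FileListToCopy.txt"
        }
    }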

    Preserve metadata during copy

    When you copy files from Amazon S3 to Azure Data Lake Storage Gen2 or Azure Blob storage, you can choose to preserve the file metadata along with data. Learn more from Preserve metadata.
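    As a rough sketch, assuming a binary copy from Amazon S3 to Azure Data Lake Storage Gen2, the Copy activity opts in through the preserve setting in its type properties; see the Preserve metadata article for the authoritative syntax.

    "typeProperties": {
        "source": {
            "type": "BinarySource",
            "storeSettings": {
                "type": "AmazonS3ReadSettings",
                "recursive": true
            }
        },
        "sink": {
            "type": "BinarySink",
            "storeSettings": {
                "type": "AzureBlobFSWriteSettings"
            }
        },
        "preserve": [
            "Attributes"
        ]
    }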

    Mapping data flow properties

    When you're transforming data in mapping data flows, you can read files from Amazon S3 in the following formats:

  • Delimited text
  • Delta
  • Excel
  • Parquet
    Format-specific settings are located in the documentation for that format. For more information, see Source transformation in mapping data flow.

    Source transformation

    In source transformation, you can read from a bucket, folder, or individual file in Amazon S3. Use the Source options tab to manage how the files are read.

    Wildcard paths: Using a wildcard pattern will instruct the service to loop through each matching folder and file in a single source transformation. This is an effective way to process multiple files within a single flow. Add multiple wildcard matching patterns with the plus sign that appears when you hover over your existing wildcard pattern.

    From your source bucket, choose a series of files that match a pattern. Only a bucket can be specified in the dataset. Your wildcard path must therefore also include your folder path from the root folder.

    Wildcard examples:

  • * Represents any set of characters.

  • ** Represents recursive directory nesting.

  • ? Replaces one character.

  • [] Matches one of the characters in the brackets.

  • /data/sales/**/*.csv Gets all .csv files under /data/sales.

  • /data/sales/20??/**/ Recursively gets all files under folders whose names match 20?? (years 2000 through 2099).

  • /data/sales/*/*/*.csv Gets .csv files two levels under /data/sales.

  • /data/sales/2004/*/12/[XY]1?.csv Gets all .csv files from December 2004 whose names start with X or Y followed by a two-digit number that begins with 1.

    Partition root path: If you have partitioned folders in your file source with a key=value format (for example, year=2019), then you can assign the top level of that partition folder tree to a column name in your data flow's data stream.

    First, set a wildcard to include all paths that are the partitioned folders plus the leaf files that you want to read.

    List of files: This is a file set. Create a text file that includes a list of relative path files to process. Point to this text file.

    Column to store file name: Store the name of the source file in a column in your data. Enter a new column name here to store the file name string.

    After completion: Choose to do nothing with the source file after the data flow runs, delete the source file, or move the source file. The paths for the move are relative.

    To move source files to another location post-processing, first select "Move" for file operation. Then, set the "from" directory. If you're not using any wildcards for your path, then the "from" setting will be the same folder as your source folder.

    If you have a source path with wildcard, your syntax will look like this:

    /data/sales/20??/**/*.csv

    You can specify "from" as:

    /data/sales

    And you can specify "to" as:

    /backup/priorSales

    In this case, all files that were sourced under /data/sales are moved to /backup/priorSales.

    File operations run only when you start the data flow from a pipeline run (a pipeline debug or execution run) that uses the Execute Data Flow activity in a pipeline. File operations do not run in Data Flow debug mode.

    Filter by last modified: You can filter which files you process by specifying a date range of when they were last modified. All datetimes are in UTC.

    Lookup activity properties

    To learn details about the properties, check Lookup activity.

    GetMetadata activity properties

    To learn details about the properties, check GetMetadata activity.

    Delete activity properties

    To learn details about the properties, check Delete activity.

    Legacy models

    The following models are still supported as is for backward compatibility. We suggest that you use the new model mentioned earlier. The authoring UI has switched to generating the new model.

    Legacy dataset model

    bucketName: The S3 bucket name. The wildcard filter is not supported. Required for the Copy or Lookup activity, not required for the GetMetadata activity.

    key: The name or wildcard filter of the S3 object key under the specified bucket. Applies only when the prefix property is not specified. The wildcard filter is supported for both the folder part and the file name part. Allowed wildcards are: * (matches zero or more characters) and ? (matches zero or a single character).
    - Example 1: "key": "rootfolder/subfolder/*.csv"
    - Example 2: "key": "rootfolder/subfolder/???20180427.txt"
    See more examples in Folder and file filter examples. Use ^ to escape if your actual folder or file name has a wildcard or this escape character inside.

    prefix: Prefix for the S3 object key. Objects whose keys start with this prefix are selected. Applies only when the key property is not specified.

    version: The version of the S3 object, if S3 versioning is enabled. If a version is not specified, the latest version will be fetched.

    modifiedDatetimeStart: Files are filtered based on the last modified attribute. Files are selected if their last modified time is greater than or equal to modifiedDatetimeStart and less than modifiedDatetimeEnd. The time is applied to the UTC time zone in the format "2018-12-01T05:00:00Z".
    Be aware that enabling this setting affects the overall performance of data movement when you want to filter huge amounts of files.
    The properties can be NULL, which means no file attribute filter is applied to the dataset. When modifiedDatetimeStart has a datetime value but modifiedDatetimeEnd is NULL, the files whose last modified attribute is greater than or equal to the datetime value are selected. When modifiedDatetimeEnd has a datetime value but modifiedDatetimeStart is NULL, the files whose last modified attribute is less than the datetime value are selected.

    modifiedDatetimeEnd: Same as modifiedDatetimeStart; together they define the window on the last modified time, with the same performance consideration.

    format: If you want to copy files as is between file-based stores (binary copy), skip the format section in both input and output dataset definitions. If you want to parse or generate files with a specific format, the following file format types are supported: TextFormat, JsonFormat, AvroFormat, OrcFormat, ParquetFormat. Set the type property under format to one of these values. For more information, see the Text format, JSON format, Avro format, ORC format, and Parquet format sections. Not required (only for the binary copy scenario).

    compression: Specify the type and level of compression for the data. For more information, see Supported file formats and compression codecs.
    Supported types are GZip, Deflate, BZip2, and ZipDeflate.
    Supported levels are Optimal and Fastest.

    To copy all files under a folder, specify bucketName for the bucket and prefix for the folder part.

    To copy a single file with a given name, specify bucketName for the bucket and key for the folder part plus file name.

    To copy a subset of files under a folder, specify bucketName for the bucket and key for the folder part plus wildcard filter.

    Example: using prefix

    "name": "AmazonS3Dataset", "properties": { "type": "AmazonS3Object", "linkedServiceName": { "referenceName": "<Amazon S3 linked service name>", "type": "LinkedServiceReference" "typeProperties": { "bucketName": "testbucket", "prefix": "testFolder/test", "modifiedDatetimeStart": "2018-12-01T05:00:00Z", "modifiedDatetimeEnd": "2018-12-01T06:00:00Z", "format": { "type": "TextFormat", "columnDelimiter": ",", "rowDelimiter": "\n" "compression": { "type": "GZip", "level": "Optimal"

    Example: using key and version (optional)

    "name": "AmazonS3Dataset", "properties": { "type": "AmazonS3", "linkedServiceName": { "referenceName": "<Amazon S3 linked service name>", "type": "LinkedServiceReference" "typeProperties": { "bucketName": "testbucket", "key": "testFolder/testfile.csv.gz", "version": "XXXXXXXXXczm0CJajYkHf0_k6LhBmkcL", "format": { "type": "TextFormat", "columnDelimiter": ",", "rowDelimiter": "\n" "compression": { "type": "GZip", "level": "Optimal"

    Legacy source model for the Copy activity

    recursive: Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when recursive is set to true and the sink is a file-based store, an empty folder or subfolder will not be copied or created at the sink. Allowed values are true (default) and false.

    maxConcurrentConnections: The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.

    Example:

    "activities":[
            "name": "CopyFromAmazonS3",
            "type": "Copy",
            "inputs": [
                    "referenceName": "<Amazon S3 input dataset name>",
                    "type": "DatasetReference"
            "outputs": [
                    "referenceName": "<output dataset name>",
                    "type": "DatasetReference"
            "typeProperties": {
                "source": {
                    "type": "FileSystemSource",
                    "recursive": true
                "sink": {
                    "type": "<sink type>"
    

    Next steps

    For a list of data stores that the Copy activity supports as sources and sinks, see Supported data stores.
