Amazon S3 Utilities

Overview

The Amazon S3 Utilities plug-in uses the AWS SDK for Java to connect to Amazon S3 to store and retrieve files.

Key Features & Functionality

The following smart services are included:

  • Upload documents to AWS S3
  • Download documents from AWS S3
  • Create folders in AWS S3
  • Delete documents from AWS S3

The plug-in also includes a function:

  • getPreSignedURLForS3: generates a V4 pre-signed URL that expires after 5 seconds. This allows a short-term access grant to a secured resource. It can be used in a Web API object to redirect a user from Appian to a resource on S3.
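The redirect pattern described above can be sketched outside Appian as well. The following is a minimal, illustrative Python sketch (not the plug-in's implementation): the Web API object generates the short-lived URL and immediately answers with an HTTP 302 pointing at it, so the 5-second expiry only needs to cover the redirect itself. `presigned_url` here is a placeholder for the value returned by getPreSignedURLForS3.

```python
def redirect_to_s3(presigned_url: str) -> dict:
    """Build a minimal HTTP 302 response that sends the caller on to a
    short-lived pre-signed S3 URL. Because the URL expires quickly,
    the redirect must be issued right after the URL is generated."""
    return {
        "status": 302,
        "headers": {"Location": presigned_url},
        "body": "",
    }
```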

Amazon S3 Utilities supports the following Amazon S3 features:

Note:  The plug-in requires the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files when using client-side encryption.

(https://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html)

The Appian Secure Credential Store holds the credentials used to integrate with Amazon S3. Before executing the plug-in, create a new secure credential store with the following three attributes. These values are obtained from the AWS IAM console.

  • accesskeyid: the access key ID for connecting to AWS S3
  • accesskeysecret: the access key secret for connecting to AWS S3
  • kmscmkid: only required when using AWS client-side encryption
  • Josh, what is the need for a custom timer? Is it for shortening or extending? Extending the expiration beyond a minimum opens a security hole, since the links can be shared and accessed by anyone. As explained in my previous response, the 5-second timer exists only to issue a redirect to the link once it's signed by AWS, which is more than long enough to do so (it could be argued it's still too long).

  • jeank0002, in case you or anyone else is interested, I used getawsv4signature from the Cryptographic Hash Functions plug-in to create a pre-signed S3 URL with a custom expiration value (a constant in our case). Here's the code I used:

    if(a!isNullOrEmpty(ri!fileURI), 
      null, 
      a!localVariables(
        local!resourceURI: if(left(ri!fileURI, 1) = "/", ri!fileURI, "/"&ri!fileURI),
        local!currentTime: now(),
        local!dateTimeStamp: rule!TRSY_formatXAmzDateTime(dateTime: local!currentTime, isDateOnly: false),
        local!dateStamp: rule!TRSY_formatXAmzDateTime(dateTime: local!currentTime, isDateOnly: true),
        local!bucketName: cons!AWS_S3_PROFILE_IMAGE_BUCKET_NAME,
        local!hostname: local!bucketName & "." & cons!AWS_S3_DOMAIN_NAME,
        
        /* CredentialScope = <dateStamp>/<aws-region>/<aws-service>/aws4_request */
        local!credentialScope: joinarray(
          {
            local!dateStamp,
            cons!AWS_S3_REGION,
            cons!AWS_S3_SERVICE_NAME,
            cons!AWS_SIGNATURE_VERSION
          },
          "/"
        ),
        /* Query parameters */
        local!amzParams: {
          a!map(param: "X-Amz-Algorithm", value: cons!AWS_SIGNING_ALGORITHM),
          a!map(param: "X-Amz-Credential", value: cons!AWS_S3_PROFILE_IMAGE_ACCESS_KEY_ID&"/"&local!credentialScope),
          a!map(param: "X-Amz-Date", value: local!dateTimeStamp),
          a!map(param: "X-Amz-Expires", value: cons!AWS_S3_PROFILE_IMAGE_PRESIGNED_URL_EXPIRATION_TIME),
          a!map(param: "X-Amz-SignedHeaders", value: "host")
        },
        local!queryString: joinarray(
          a!forEach(
            items: local!amzParams,
            expression: urlencode(fv!item.param)&"="&urlencode(fv!item.value)
          ),
          "&"
        ),
        
        /* Step 1: Canonical request */
        local!canonicalRequest: a!localVariables(
          local!headers: "host:"&local!hostname,
          joinarray(
            {
              "GET",
              local!resourceURI,
              local!queryString,
              local!headers&char(10),
              "host",
              "UNSIGNED-PAYLOAD"
            },
            char(10)
          )
        ),
        
        /* Step 2: String to sign */
        local!stringToSign: joinarray(
          {
            cons!AWS_SIGNING_ALGORITHM,
            local!dateTimeStamp,
            local!credentialScope,
            sha256hash(local!canonicalRequest)
          },
          char(10)
        ),
        
        /* Step 3: Signature */
        /* Step 3: Signature. The region must match the one used in the
           credential scope above, or the signature will not validate. */
        local!signature: getawsv4signature(
          key: "",
          scsValue: {
            cons!AWS_S3_READONLY_PROFILE_IMAGE_SCSFIELD_EXTERNALFIELD,
            cons!AWS_S3_READONLY_PROFILE_IMAGE_SCSFIELD_FIELDNAME_SECRETACCESSKEY
          },
          dateStamp: local!dateStamp,
          regionName: cons!AWS_S3_REGION,
          serviceName: cons!AWS_S3_SERVICE_NAME,
          string: local!stringToSign
        ),
        
        /* Return the full presigned URL */
        "https://"&local!hostname&local!resourceURI&"?"&local!queryString&"&X-Amz-Signature="&local!signature
      )
    )

    And here's the code for the TRSY_formatXAmzDateTime rule:

    if( 
      or(
        a!isNullOrEmpty(ri!isDateOnly),
        not(ri!isDateOnly)
      ),
      text(gmt(ri!dateTime,"America/Chicago"), "yyyymmddThhmmss")&"Z",
      text(gmt(ri!dateTime,"America/Chicago"), "yyyymmdd")
    )
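For readers who want to follow the three signing steps above outside Appian, here is a stdlib-only Python sketch of the same V4 query-string signing flow (canonical request, string to sign, derived signing key). It is illustrative, not the plug-in's implementation; the bucket, key, and credential arguments are placeholders you would supply yourself.

```python
import hashlib
import hmac
from datetime import datetime, timezone
from urllib.parse import quote

def _sign(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def presign_s3_get(bucket, key, access_key_id, secret_access_key,
                   region="us-east-1", expires=5, when=None):
    """Build a SigV4 pre-signed GET URL for an S3 object (UNSIGNED-PAYLOAD)."""
    when = when or datetime.now(timezone.utc)
    amz_date = when.strftime("%Y%m%dT%H%M%SZ")   # e.g. 20130524T000000Z
    date_stamp = when.strftime("%Y%m%d")
    host = f"{bucket}.s3.amazonaws.com"
    canonical_uri = quote("/" + key.lstrip("/"))  # keeps "/" unescaped
    scope = f"{date_stamp}/{region}/s3/aws4_request"

    # Query parameters must appear in sorted order in the canonical request.
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key_id}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(params.items())
    )

    # Step 1: canonical request (the headers block ends with a blank line).
    canonical_request = "\n".join(
        ["GET", canonical_uri, query, f"host:{host}", "", "host",
         "UNSIGNED-PAYLOAD"]
    )

    # Step 2: string to sign.
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode("utf-8")).hexdigest(),
    ])

    # Step 3: derive the signing key from the secret, then sign.
    k = _sign(("AWS4" + secret_access_key).encode("utf-8"), date_stamp)
    k = _sign(k, region)
    k = _sign(k, "s3")
    k = _sign(k, "aws4_request")
    signature = hmac.new(k, string_to_sign.encode("utf-8"),
                         hashlib.sha256).hexdigest()

    return f"https://{host}{canonical_uri}?{query}&X-Amz-Signature={signature}"
```

Note that the signing-key derivation uses the same region as the credential scope; mixing regions between the two is a common cause of signature-mismatch errors.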

  • Hi Mike,

    Thanks for responding so quickly. I am asking Appian Support about the stack trace. The plugin is on version 1.0.0.9 in two of our environments, both of which are on 22.2. It works in one environment, but is throwing the above error in the other.

    I will get more details on stack trace ASAP.


  • Do you have the stack trace? Was the environment upgraded?

  • Hi Mike,

    I am getting the following error when executing the "Download Objects" by key action in a process:

    [title=Integration Execution Error, message=com.appiancorp.suiteapi.content.exceptions.InvalidContentException: Invalid Content ID, detail=Please review logs for stack trace.]

    I have been communicating with Appian Support, but they don't see anything in the logs. This integration was working previously, and nothing in the code has changed since it was working. What does this error mean, and which logs can I reference to troubleshoot with Appian Support more extensively?

    Thank you,

    Walker

  • v1.3.4 Release Notes
    • Security Updates
  • You will need to develop a component plug-in to upload files directly from SAIL into S3

  • Thanks, Mike, for your answer. In our use case there is no predefined URL. We need to upload files larger than 1 GB from an application developed with Appian to the final repository.

  • That depends on the nature of the transfer: if it is a direct download from S3 using a pre-signed URL, there is technically no limit. Otherwise, limits will apply. What's your use case?

  • Hi, I would like to know whether this plug-in can be used to transfer files larger than 1 GB, and which repository in AWS it should integrate with.
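On the large-file question raised in this thread: uploads of objects over 1 GB to S3 are normally done with S3's multipart upload, which has fixed constraints (every part except the last must be at least 5 MiB, at most 10,000 parts per upload, and objects can be up to 5 TiB). As an illustrative sketch, not part of this plug-in, a helper that picks a compliant part size for a given object might look like this:

```python
MIB = 1024 * 1024

# S3 multipart-upload constraints (AWS service limits).
MIN_PART_SIZE = 5 * MIB   # every part except the last must be >= 5 MiB
MAX_PARTS = 10_000        # at most 10,000 parts per upload

def choose_part_size(object_size: int, preferred: int = 8 * MIB) -> int:
    """Pick a part size that satisfies S3's multipart constraints."""
    if object_size <= 0:
        raise ValueError("object_size must be positive")
    part = max(preferred, MIN_PART_SIZE)
    # Grow the part size until the object fits within MAX_PARTS parts.
    while (object_size + part - 1) // part > MAX_PARTS:
        part *= 2
    return part
```

For a 1 GB file the default 8 MiB parts suffice (128 parts); only at the multi-terabyte end does the helper need to grow the part size.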