Salesforce data warehouse on Amazon Redshift on AWS

Partner Solution Deployment Guide


October 2022
Harvey James, Keyrus
Troy Ameigh and Andrew Glenn, AWS Integration & Automation team

Refer to the GitHub repository to view source files, report bugs, submit feature ideas, and post feedback about this Partner Solution. To comment on the documentation, refer to Feedback.

This Partner Solution was created by Keyrus in collaboration with Amazon Web Services (AWS). Partner Solutions are automated reference deployments that help people deploy popular technologies on AWS according to AWS best practices. If you’re unfamiliar with AWS Partner Solutions, refer to the AWS Partner Solution General Information Guide.

Overview

This guide covers the information you need to deploy the Salesforce data warehouse on Amazon Redshift Partner Solution in the AWS Cloud.

Costs and licenses

There is no cost to use this Partner Solution, but you will be billed for any AWS services or resources that it deploys. For more information, refer to the AWS Partner Solution General Information Guide.

No licenses are required to deploy this Partner Solution.

Architecture

Deploying this Partner Solution with default parameters builds the following Salesforce data warehouse environment in the AWS Cloud.

Figure 1. Partner Solution architecture for Salesforce data warehouse on AWS

As shown in Figure 1, this Partner Solution sets up the following:

  • A highly available architecture that spans two Availability Zones.*

  • A virtual private cloud (VPC) configured with public and private subnets, according to AWS best practices, to provide you with your own virtual network on AWS.*

  • In the public subnets, managed network address translation (NAT) gateways to allow outbound internet access for resources in the private subnets.*

  • In the private subnets, an Amazon Redshift cluster of Dense Compute (DC) nodes, one in each subnet. These nodes handle all the Amazon Redshift database work.

  • Amazon AppFlow to map and transfer Salesforce data to Amazon Redshift. The mapping creates all destination fields. This solution includes a sample workflow-mapping .swl file for account, contact, case, lead, and opportunity objects.

  • AWS Lambda to create the required database objects, schema, and tables in the Amazon Redshift cluster and to invoke Amazon AppFlow.

  • Amazon Simple Storage Service (Amazon S3) to store the Amazon Redshift logs.

  • AWS Secrets Manager to store and manage Amazon Redshift credentials.

  • AWS Key Management Service (AWS KMS) to encrypt the data within Amazon Redshift.

* The template that deploys this Partner Solution into an existing VPC skips the components marked by asterisks and prompts you for your existing VPC configuration.
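The Lambda-driven setup described above can be sketched as follows. This is a minimal illustration, not the solution's actual function: the schema name, object list, and column types are assumptions, and a real deployment would derive column definitions from Salesforce field metadata and then invoke Amazon AppFlow (for example, via the boto3 `start_flow` API) after the tables exist.

```python
# Illustrative sketch of the table-creation step a setup Lambda might
# perform. Object and column names are hypothetical; the deployed
# solution's schema may differ.
SALESFORCE_OBJECTS = {
    "account": ["id", "name", "industry"],
    "contact": ["id", "account_id", "email"],
    "opportunity": ["id", "account_id", "stage_name", "amount"],
}

def build_create_table(schema: str, obj: str, fields: list) -> str:
    """Return a CREATE TABLE statement for one Salesforce object.
    All columns are VARCHAR here for simplicity; real code would map
    Salesforce field types to Redshift types."""
    cols = ",\n    ".join(f"{f} VARCHAR(256)" for f in fields)
    return f"CREATE TABLE IF NOT EXISTS {schema}.{obj} (\n    {cols}\n);"

if __name__ == "__main__":
    for obj, fields in SALESFORCE_OBJECTS.items():
        print(build_create_table("salesforce", obj, fields))
```

After running statements like these against the cluster, the function would hand off to Amazon AppFlow to start the data transfer.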

Deployment options

This Partner Solution provides the following deployment options:

  • Deploy the Salesforce data warehouse into a new VPC. This option builds a new AWS environment that consists of the VPC, subnets, NAT gateways, and other infrastructure components, and then deploys the Salesforce data warehouse into this new VPC.

  • Deploy the Salesforce data warehouse into an existing VPC. This option provisions the Salesforce data warehouse in your existing AWS infrastructure.

This Partner Solution provides separate templates for these options. It also lets you configure Classless Inter-Domain Routing (CIDR) blocks, instance types, and Salesforce data warehouse settings.

Predeployment steps

Complete the following steps before you deploy the Partner Solution.

Sign up for a Salesforce account

If you don’t already have a Salesforce account, sign up for one, and then configure your Salesforce connection. For more information, refer to the Salesforce website for developers.

Connect Amazon AppFlow to your Salesforce account

  1. Sign in to the AWS Management Console, and open the Amazon AppFlow console.

  2. In the navigation pane, choose Connections, and then choose Salesforce as your connector.

  3. Choose Create connection. If prompted to log in to Salesforce, do so.

  4. Choose Salesforce from the dropdown list of connections, and choose Connect. The Connect to Salesforce screen appears, as shown in Figure 2.

    Figure 2. "Connect to Salesforce" screen in the Amazon AppFlow console
  5. Choose the desired settings.

    1. For Salesforce environment, choose Production or Sandbox. Before you choose the sandbox environment, confirm that your Salesforce administrator has set up a sandbox instance.

    2. For PrivateLink (AWS PrivateLink), choose Enabled or Disabled. Before you enable PrivateLink, confirm that your Salesforce administrator has set up Private Connect. For more information, refer to Secure Cross-Cloud Integrations with Private Connect.

    3. For Connection name, enter any name you like. Make a note of it; you’ll need to enter it in the AWS CloudFormation template when you deploy this solution.

  6. Log in to Salesforce. When prompted to allow Amazon AppFlow access to Salesforce, choose Allow.

Deployment steps

  1. Sign in to your AWS account, and launch this Partner Solution, as described under Deployment options. The AWS CloudFormation console opens with a prepopulated template.

  2. Choose the correct AWS Region, and then choose Next.

  3. On the Create stack page, keep the default setting for the template URL, and then choose Next.

  4. On the Specify stack details page, change the stack name if needed. Review the parameters for the template. Provide values for the parameters that require input. For all other parameters, review the default settings and customize them as necessary. When you finish reviewing and customizing the parameters, choose Next.

    Unless you are customizing the Partner Solution templates for your own projects, don’t change the default settings for the following Amazon Simple Storage Service (Amazon S3) parameters: Partner Solution S3 bucket name, Partner Solution S3 bucket Region, and Partner Solution S3 key prefix. Changing these settings automatically updates code references to point to a new Partner Solution location. For more information, refer to the AWS Partner Solution Contributor’s Guide.
  5. On the Configure stack options page, you can specify tags (key-value pairs) for resources in your stack and set advanced options. When you finish, choose Next.

  6. On the Review page, review and confirm the template settings. Under Capabilities, select all of the check boxes to acknowledge that the template creates AWS Identity and Access Management (IAM) resources and that it might require the ability to automatically expand macros.

  7. Choose Create stack. The stack takes about 15 minutes to deploy.

  8. Monitor the stack's status. When the status is CREATE_COMPLETE, the Salesforce data warehouse on Amazon Redshift deployment is ready.

  9. To view the created resources, choose the Outputs tab.

Postdeployment steps

(Optional) Change the flow trigger

By default, Amazon AppFlow runs the flow on a recurring schedule, refreshing the data once per hour. To change this interval, to run the flow on demand, or to run the flow in response to a SaaS-application event, change the flow trigger as follows.

  1. Sign in to the AWS Management Console, and open the Amazon AppFlow console.

  2. In the navigation pane, choose Flows, choose a flow name, and choose Edit.

  3. In the navigation pane, choose Edit flow configuration, and choose Flow trigger. The Flow trigger screen appears, as shown in Figure 3.

    Figure 3. Flow-trigger settings in the Amazon AppFlow console
  4. Make the desired changes.
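If you manage the flow as infrastructure code instead of through the console, the same setting lives in the flow's trigger configuration. The fragment below is an illustrative AWS::AppFlow::Flow trigger section, not taken from this solution's template; the schedule expression and pull mode shown are assumptions to verify against the AppFlow CloudFormation reference.

```yaml
# Illustrative trigger section of an AWS::AppFlow::Flow resource.
TriggerConfig:
  TriggerType: Scheduled          # alternatives: OnDemand, Event
  TriggerProperties:
    ScheduleExpression: "rate(1hours)"   # assumed hourly-refresh syntax
    DataPullMode: Incremental            # or Complete for full reloads
```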

Troubleshooting

For troubleshooting common Partner Solution issues, refer to the AWS Partner Solution General Information Guide and Troubleshooting CloudFormation.

Customer responsibility

After you deploy a Partner Solution, confirm that your resources and services are updated and configured—including any required patches—to meet your security and other needs. For more information, refer to the Shared Responsibility Model.

Feedback

To submit feature ideas and report bugs, use the Issues section of the GitHub repository for this Partner Solution. To submit code, refer to the Partner Solution Contributor’s Guide. To submit feedback on this deployment guide, use the following GitHub links:

Notices

This document is provided for informational purposes only. It represents current AWS product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS products or services, each of which is provided "as is" without warranty of any kind, whether expressed or implied. This document does not create any warranties, representations, contractual commitments, conditions, or assurances from AWS, its affiliates, suppliers, or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

The software included with this paper is licensed under the Apache License, version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at https://aws.amazon.com/apache2.0/ or in the accompanying "license" file. This code is distributed on an "as is" basis, without warranties or conditions of any kind, either expressed or implied. Refer to the License for specific language governing permissions and limitations.