Audience & Campaigns

Templates for syncing audience data and reloading campaign segments from SFTP.

Audience and campaigns templates let you manage audience lists and campaign-related customer data in Engage+. Each template provides a pre-configured block sequence for a specific use case. Select a template, configure the block details for your environment, and deploy the dataflow.

The following templates are available for audience and campaigns.

Audience reload from SFTP

Picks up refreshed files from a defined SFTP location at specified intervals and automatically updates the matching audience list in Engage+. The template reads the updated audience file from SFTP, pushes it to the configured S3 bucket, and calls the Iris audience reload API to update the list.

Prerequisites

  • The audience CSV file must be available in the SFTP location.
  • An audience list must be created in the Engage+ Audience Manager before configuring this template. The audience list name must match the name used in the template configuration and must not contain any spaces. For example, FTP_Test is a valid name; FTP Test is not.
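The naming rule above can be expressed as a small validation helper. This is a hypothetical illustration, not part of Engage+ or the template:

```python
def is_valid_audience_name(name: str) -> bool:
    """Return True if the audience list name is non-empty and contains no spaces.

    Hypothetical helper illustrating the Engage+ naming rule described above.
    """
    return bool(name) and " " not in name
```

For example, is_valid_audience_name("FTP_Test") returns True, while is_valid_audience_name("FTP Test") returns False because of the space.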

Use case

A brand runs a recurring campaign that targets a customer segment updated daily from an external data source outside the Capillary database. The updated customer list is generated in Databricks each day and placed in an SFTP location. Using this template, the brand configures a scheduled dataflow that picks up the new file, pushes it to S3, and triggers a reload of the corresponding Engage+ audience list. The campaign then runs against the refreshed audience, removing the need for manual list updates before each send.
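Conceptually, the scheduled dataflow chains three steps. The sketch below mirrors that flow with injected callables standing in for the three blocks; the function and parameter names are hypothetical, since the real blocks run inside the dataflow engine rather than as user code:

```python
from typing import Callable

def run_audience_reload(
    read_sftp: Callable[[], bytes],           # stands in for Connect-to-source
    write_s3: Callable[[str, bytes], None],   # stands in for Push-to-S3
    trigger_reload: Callable[[str], dict],    # stands in for Destination-Iris-audience
    object_key: str,
    audience_list: str,
) -> dict:
    """Illustrative mirror of the template's block sequence (hypothetical names)."""
    data = read_sftp()                    # 1. read the updated audience CSV from SFTP
    write_s3(object_key, data)            # 2. push it to the configured S3 bucket
    return trigger_reload(audience_list)  # 3. call the Iris audience reload API
```

In the deployed dataflow these steps are pre-configured blocks; the sketch only shows the order in which data moves.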

Block configuration

The following table lists the blocks in the Audience reload from SFTP template, describes what each block does, and provides the configured values for each field.

| Block Name | Configuration Field | Configured Value |
| --- | --- | --- |
| Connect-to-source (Type: sftp_read) | Hostname | data.capillarydata.com |
| | Username | capillary |
| | Password | Redacted |
| | Source directory | /capillary testing/vtest/source |
| | Filename pattern | .*.csv (matches all CSV files) |
| | Processed directory | /capillary testing/vtest/process |
| | API error file path | /capillary testing/vtest/error |
| | Port | 22 |
| | File delimiter | , |
| | Report status code | all |
| Push-to-S3 (Type: s3_write) | S3 object key | ${'$(unknown)'} (uses the source filename) |
| | S3 bucket name | newValue |
| | Content type | application/octet-stream |
| | AWS region | us-east-1 |
| Destination-Iris-audience (Type: iris_ftp_audience_refresh) | API method | POST |
| | API URL | / |
| | Organization ID header | Not configured |
| | Capillary token | Not configured |
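The filename pattern .*.csv in the Connect-to-source block is a regular expression. The exact matching semantics depend on the dataflow engine; the sketch below assumes full-name matching, as a standalone illustration only:

```python
import re

# The Connect-to-source block matches filenames against the pattern .*.csv.
# Note the unescaped dot before "csv": as a regex it matches any character,
# so a stricter pattern would be .*\.csv (literal dot). Matching semantics
# here (full-name match) are an assumption about the engine's behavior.
pattern = re.compile(r".*.csv")

def matches(filename: str) -> bool:
    """Return True if the whole filename matches the configured pattern."""
    return pattern.fullmatch(filename) is not None
```

With this pattern, audience.csv and FTP_Test.csv match, while audience.txt does not.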