Configure blocks

This page provides you with information on configuring blocks in Connect+.

Connect+ has various blocks that are used to create a dataflow. The blocks in a dataflow are based on the selected template and can differ from template to template.

Connect to source

This block enables you to define the source location of the data to be ingested. The Connect to source block has the following fields:

| Field name | Description |
| --- | --- |
| Hostname | URL (address) of the SFTP/FTP server where the source file is available. For example, data.capillarydata.com. |
| Username and Password | Credentials to access the SFTP/FTP server. These provide read/write access to the files on the server. |
| Source Directory | The directory path where the source file is available. For example, /APAC2Cluster/A_Connect. Connect+ supports any text-based file with a delimiter, irrespective of extension (.txt, .csv, .dat, and so on). The .ok file format is also supported. |
| Filename Pattern | File names in the regex pattern (see the sketch below). For example, if you give the filename pattern file.*.csv, the application picks up any file whose name starts with file. |
| Processed Directory | The path where files to be processed are saved. The data to process is captured from the processed file. Processing is the operation of formatting and transforming a given set of data to extract the required information in the appropriate format. For example, /APAC2Cluster/Process. |
| Unzip Files | If the files are compressed, select this option to unzip them and then select the original file. |
| API Error Filepath | The SFTP/FTP path where the API error file is saved. This file includes details of the errors that occurred during API calls. This field is applicable only where API endpoints are required (data ingestion). |
| Search Directory Recursively | Select this checkbox to search for the defined file anywhere under the root folder of the server. For example, if there are multiple folders inside /APAC2-Cluster/A_Connect, the application looks for files matching the pattern inside each folder under /APAC2-Cluster/A_Connect. |
| Port | The source SFTP/FTP port number. Generally, 22 for SFTP and 21 for FTP. |

Source directory file path

Copy directory file path
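
For illustration, here is a minimal Python sketch (not part of Connect+) of how a regex Filename Pattern such as file.*.csv selects files from the source directory. The file names below are hypothetical, and the exact matching semantics Connect+ uses may differ.

```python
import re

# The Filename Pattern field is a regex; this uses the doc's example.
pattern = re.compile(r"file.*.csv")

# Hypothetical listing of the Source Directory.
listing = ["file_2024-01-01.csv", "file_2024-01-02.csv", "summary.csv"]

# re.match anchors at the start of the name, so only files whose names
# start with "file" and end in csv are picked up for ingestion.
matched = [name for name in listing if pattern.match(name)]
print(matched)  # ['file_2024-01-01.csv', 'file_2024-01-02.csv']
```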

Defining column headers for the transform block

You can upload a CSV file with the column headers of the input source file or define the header data manually. This makes mapping fields in the transformation block easier. Once you define the headings, they are automatically retrieved in the transformation blocks and you can map corresponding API fields against them.

The maximum supported size of a CSV file is 5 MB. After attaching the file, you can delete it by clicking the delete icon. The system allows you to switch to the manual option after uploading the CSV file. The values should be separated by a comma only.

If you do not have a CSV file, you can select the Add manually option and define the headings manually in the transformation block. You can also select Add manually and choose the Is file headerless option to define the headings separated by a comma. These headings are retrieved in the Transformation block, and you can also add additional headings in the transformation block itself.
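
For example, if the source file contains transaction data, the uploaded CSV (or the manually defined heading list) could be a single comma-separated line such as the following; the header names here are purely hypothetical:

```
mobile,transaction_id,transaction_date,amount,source
```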

Decrypt data

This block enables decryption of an encrypted source file. It is an optional feature that can be enabled if the source file is encrypted. All the fields within this block become mandatory once you enable the Decrypt data action block. You can use the Is Enabled drop-down to control the activation of this block.

| Field name | Description |
| --- | --- |
| Encryption Algorithm | The algorithm that was used to encrypt the file. At present, only files encrypted using the PGP algorithm can be decrypted; hence, PGP encryption is always selected by default. |
| Private Key File | The private key in base64-encoded format. |
| Private Passphrase | The password required to unlock the private key. |
| Is Enabled | Enable or disable the decrypt function. |
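
As an illustration of what this block does, the following is a minimal sketch using the python-gnupg library. Connect+ performs the decryption internally; the file paths, key file, and passphrase below are placeholders.

```python
import base64
import os
import gnupg  # python-gnupg

# Use a throwaway keyring directory for the sketch.
os.makedirs("/tmp/gnupg", exist_ok=True)
gpg = gnupg.GPG(gnupghome="/tmp/gnupg")

# The Private Key File field expects the key in base64; decode it first.
with open("private_key.b64") as f:
    key_ascii = base64.b64decode(f.read()).decode()
gpg.import_keys(key_ascii)

# Decrypt the encrypted source file using the Private Passphrase.
with open("source_file.csv.pgp", "rb") as encrypted:
    result = gpg.decrypt_file(encrypted, passphrase="my-passphrase",
                              output="source_file.csv")
print(result.ok, result.status)
```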

Connect to destination

This block enables you to define the details of the v2 API used for running the dataflow. Enter the following details:

| Field name | Description |
| --- | --- |
| Client Key | API client key. For information on creating an API client key, refer to Creating API client key and secret. |
| Client Secret | API client secret. For information on creating an API client secret, refer to Creating API client key and secret. |
| API EndPoint | API endpoint. For example, /v2/customers. |
| API Base Url | API cluster URL of the org. For more information on cluster URLs, refer to the documentation on host URLs. |
| OAuth Base Url | Enter the same URL as the API Base Url. |
| API Method | API method. For example, POST. |
| Bulk Support | Use the default value. |
| Request Split Path | Use the default value. |
| Response Split Path | Use the default value. |
| Recoverable Error Codes | Use the default value. |
| Parse Path Map | Use the default value. |
| Yielding Error Codes | Use the default value. |
| Max Retries | Use the default value. |
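
To show how these fields fit together, here is a hedged Python sketch of the call the dataflow makes. The token route, payload fields, and URLs are illustrative assumptions, not the platform's documented API; use the values configured in this block.

```python
import requests

API_BASE_URL = "https://apac2.example-api.com"  # API Base Url (placeholder)
CLIENT_KEY = "your-client-key"                  # Client Key
CLIENT_SECRET = "your-client-secret"            # Client Secret

def get_oauth_token() -> str:
    # Hypothetical helper: exchanges the client key/secret for a bearer
    # token at the OAuth Base Url (the same URL as the API Base Url).
    # The real token endpoint is defined in the OAuth documentation.
    resp = requests.post(f"{API_BASE_URL}/oauth/token",
                         json={"key": CLIENT_KEY, "secret": CLIENT_SECRET})
    resp.raise_for_status()
    return resp.json()["accessToken"]

# API Method POST against API EndPoint /v2/customers.
resp = requests.post(
    f"{API_BASE_URL}/v2/customers",
    headers={"Authorization": f"Bearer {get_oauth_token()}"},
    json={"source": "INSTORE"},  # payload fields depend on the template
)
print(resp.status_code)
```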

Transform data

Data transformation in Connect+ is the process of converting the ingested data to JSON format. To achieve this, map the fields in the source files to the appropriate API fields in this block. This enables Connect+ to convert the data to JSON, call the APIs successfully, and upload the data. The fields in this Connect+ block are parameters of the JSON payload of the API associated with the template you selected for the dataflow.

Perform the following to map the fields:

  1. If the workspace is linked to multiple orgs, select the org ID from the Intouch Org drop-down.

  2. In the File Delimiter text box, enter the delimiter used to separate the headings in the source file. By default, the delimiter is set as a comma.

  3. Map the API fields with the appropriate columns in the source file. To map, perform the following:

    1. If the column header fields were not uploaded or defined, enter the column header name in the File header text box. This is the name of the column header in the source file that contains information for an API field. For example, a column named source can have values for the API field called source. You can also click Add more headers to add more headers, or add headers using an expression.

    2. Select the API field name from the API fields drop-down that corresponds to the column header specified in the File header text box. The number of mandatory fields that need to be mapped is displayed next to the parent API field.

    3. If required, in the Filter data text box, use the expression language to add a filter condition that includes or excludes data from the source file. For example, consider a transaction dataflow that includes a Financial status header with values such as paid, refunded, or not paid. To process only transactions that are either paid or not paid, you can use the filter expression ${header_value:equals('paid'):or(${header_value:equals('not paid')})}, as illustrated in the sketch below.
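
Rendered in Python, the example filter above behaves like the following predicate, assuming header_value holds the Financial status column of the current record:

```python
# Python rendering of:
# ${header_value:equals('paid'):or(${header_value:equals('not paid')})}
def include_record(header_value: str) -> bool:
    return header_value == "paid" or header_value == "not paid"

print(include_record("paid"))      # True  -> record is processed
print(include_record("refunded"))  # False -> record is filtered out
```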

Adding a header using an expression

If the value of an API Field is the same for all customers, you can use const{value}. For example, to categorize loyaltyType as loyalty for all customers, enter const{loyalty} against the API field loyaltyType. For more information, refer to the documentation on using expressions.

To add a header using an expression,

  1. Click Add header using expression.
  2. Enter the expression in the text box.
  3. From the API field drop-down, select the API field you want to map to the header.

Adding an additional header

If you selected Add manually as the Column headers input method (with or without header names) for transforming data in the Connect to source block, you have the option to add additional headers. You can add an additional header while mapping the fields or while editing.

🚧 Note: If you added the header information through the CSV file method, you cannot add additional headers.

Join data

This block enables the merging of two or more files and adds the merged file to the platform. Enter the details of the files that need to be identified, compared, and merged:

| Field | Description |
| --- | --- |
| Files 1-2 Join Types | The type of SQL join command: Inner join, Left outer join, Right outer join, or Outer join. See the descriptions and the sketch following this table. |
| All Files Delimiter | The file delimiter (comma or tab) that separates fields in the .csv file. By default, the delimiter is a comma (,). |
| File 1 Regex | The name of your first file in the format filename.*.fileformat, where the asterisk is a wildcard. For example, to search for files that begin with "BILL_LEVEL_MERGE" followed by any sequence of characters, enter "BILL_LEVEL_MERGE.*.csv". NOTE: Files are handled in FCFS order; the files that are uploaded first and second are merged first. |
| File 2 Regex | The name of your second file in the format filename.*.fileformat, where the asterisk is a wildcard. For example, to search for files that begin with "BILL_LEVEL_MERGE" followed by any sequence of characters, enter "BILL_LEVEL_MERGE.*.csv". NOTE: Files are handled in FCFS order; the files that are uploaded first and second are merged first. |
| File 1 Headers | The column header name of File 1 that needs to be compared. This is compared with the column headers in File 2. To match columns based on multiple headers, enter the header names separated by a comma. |
| File 2 Headers | The column header name of File 2 that needs to be compared. This is compared with the column headers in File 1. To match columns based on multiple headers, enter the header names separated by a comma. |
| File Join Use Case | From the dropdown, select TRANSACTION_LINE_ITEM. This is an inner-join operation where one row in the bill maps to multiple line items. NOTE: The option None is not applicable. |
| Output Filename | The output filename. By default, the filename format is merge_yyyy-mm-dd, where yyyy-mm-dd is the date on which the file is created. |
| Alphabetical Sort | Select this checkbox to ensure accurate sorting of the files in alphanumeric order. By default, this is always enabled. NOTE: It is recommended to always select the checkbox before proceeding further. |
| Merge based on common Name | Make sure that this checkbox is always selected. This ensures that the system identifies the correct files to be merged. |
| Merge based on common name Template File One | The filename using the common tag. For example, if there are files named TktABC.dat and TktDocument_ABC.dat, and you enter the value TktDocument.dat, the system looks for the file and selects TktDocument_ABC.dat for merging. Here, ABC is the common tag among the files. |
| Merge based on common Name Template File Two | The filename using the common tag. For example, if there are files named TktABC.dat and TktDocument_ABC.dat, and you enter the value TktDocument.dat, the system looks for the file and selects TktDocument_ABC.dat for merging. Here, ABC is the common tag among the files. |
| File One is Headerless | Select the checkbox if the columns in file one do not have a header. |
| File Two is Headerless | Select the checkbox if the columns in file two do not have a header. |
| File One Mention Header names | Define the heading order of the columns in file one, separated by commas. |
| File Two Mention Header names | Define the heading order of the columns in file two, separated by commas. |
| Match All Regex | Select True from the dropdown to keep only the matching files on the server and delete the files that do not match. By default, the value is set to False. |
| All Unique Match | Select True to identify and delete duplicate files. |

The join types work as follows:

  1. Inner join - Returns only the rows that have matching values in both tables being joined. For example, suppose you have two tables, "Line item table" and "Bill level table," and you want to determine which items were purchased in each bill and calculate the total cost of each item in every bill. With an Inner Join, the merged table includes only the records that have matching BillNumbers in both data sources.

  2. Left outer join - Returns all the rows from the left table and the matching rows from the right table. For example, consider two tables, "Customers" and "Orders." A Left Outer Join retrieves all customer records along with any matching order records. If a customer has no orders, the customer record is still included, with NULL values in the order-related columns.

  3. Right outer join - Returns all the rows from the right table and the matching rows from the left table. For example, consider two tables, "Orders" and "Customers." A Right Outer Join retrieves all order records along with any matching customer records. If an order has no associated customer, the order record is still included, with NULL values in the customer-related columns.

  4. Outer join - Also known as a Full Outer Join; returns all rows from both tables, including matching rows and those with no matches.
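
To illustrate the join types on the bill/line-item example above, here is a small pandas sketch; the column names and table contents are made up:

```python
import pandas as pd

# Hypothetical bill-level and line-item data sharing a BillNumber column.
bills = pd.DataFrame({"BillNumber": [1, 2], "StoreCode": ["S1", "S2"]})
items = pd.DataFrame({"BillNumber": [1, 1, 3],
                      "Item": ["pen", "book", "lamp"],
                      "Cost": [2.0, 10.0, 25.0]})

# how='inner' keeps only rows whose BillNumber appears in both files;
# 'left', 'right', and 'outer' correspond to the other join types.
merged = pd.merge(bills, items, on="BillNumber", how="inner")
print(merged)  # only BillNumber 1 survives the inner join
```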

FTP file upload instructions

When uploading files to the FTP path:

  • Ensure that all files that need to be merged are added simultaneously.
  • If you are adding different sets of files that need to be merged, maintain a gap of at least 10 minutes between each upload. For example, if you want to merge files A and B, and also C and D, do not add files A, B, C, and D together. Add files A and B first, and then add files C and D only after at least 10 minutes.

If you need to re-add any files:

  • Remove the existing file from the process folder first.
  • Then, add the new files to the input folder for proper handling.

Rebuild headers / Define headers and transform data

| Field | Description |
| --- | --- |
| Headers Mapping | Map the headers according to the input file. |
| Output Headers Order | The header name order in which the output file is generated. |
| Input File Delimiter | The file delimiter (comma or tab) that separates fields in the input file. |
| Output File Delimiter | The file delimiter (comma or tab) that separates fields in the output file. |
| Output Filename | The name of the output file. |
| Mention header names (for use in mapping and expressions) | Define the heading order of the columns in the file, separated by commas. |
| Is the file headerless? | Select the checkbox if the columns in the file do not have a header. |

CSV-To-XML-Converter

Enter the following details.

| Field | Description |
| --- | --- |
| XML Top | Header declaration of the XML file. This includes the XML version, character encoding, and so forth. |
| XML file name | The name of the XML file. |
| XML Repeating Component | The repeating elements of the XML file. This is the section of the XML file where the data is present. |
| Input File Delimiter | The file delimiter. This template supports commas (,) and pipes (\|). |
| XML Bottom | The closing tags of the XML file. |
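
As a rough illustration of the conversion (not the converter's actual implementation), the following Python sketch wraps each CSV row in a repeating component between an XML top and bottom; all tag and file names are placeholders:

```python
import csv

# Placeholder values for XML Top, the repeating component, and XML Bottom.
xml_top = '<?xml version="1.0" encoding="UTF-8"?>\n<transactions>'
xml_bottom = "</transactions>"

# Read the source file using the configured Input File Delimiter.
with open("input.csv", newline="") as f:
    rows = list(csv.DictReader(f, delimiter=","))

parts = [xml_top]
for row in rows:
    # Each CSV row becomes one instance of the repeating component,
    # with each column emitted as a child element.
    fields = "".join(f"<{k}>{v}</{k}>" for k, v in row.items())
    parts.append(f"<transaction>{fields}</transaction>")
parts.append(xml_bottom)

with open("output.xml", "w") as f:  # XML file name
    f.write("\n".join(parts))
```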

Push to S3

Enter the following details:

  • S3 Object name/Filename - The name given to the file(s) uploaded to S3. Use the default value. You can also use the expression language to define the file name. For the "Audience reload from FTP" template, enter the object name in the format <audience_Group_Name>/${filename}, where audience_Group_Name is the audience name used in Engage+.

  • S3 Bucket Name - Enter the S3 bucket name.

  • Content Type - Use the default value application/octet-stream. This enables the copying of all file types.

  • Access Key & Secret Key - Enter the S3 access key and secret key.

  • AWS region - Enter the AWS region to which you are transferring the files.

    • US East 1 - For the bucket in the India region
    • EU West 1 - For the bucket in the EU region
    • AP Southeast 1 - For the bucket in the Singapore region
    • US East 2 - For the bucket in the US regions

Note: If you do not have information on the access key, secret key, and AWS region, raise a ticket with the Sustenance team.
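
For reference, the upload this block performs is roughly equivalent to the following boto3 sketch; the bucket, key, region, and credentials are placeholders for the configured values:

```python
import boto3

# Credentials and region correspond to the Access Key, Secret Key,
# and AWS region fields of this block (placeholders shown).
s3 = boto3.client(
    "s3",
    region_name="ap-southeast-1",        # AWS region
    aws_access_key_id="ACCESS_KEY",      # Access Key
    aws_secret_access_key="SECRET_KEY",  # Secret Key
)

with open("merged_output.csv", "rb") as f:
    s3.put_object(
        Bucket="my-bucket",                       # S3 Bucket Name
        Key="audienceGroup/merged_output.csv",    # S3 Object name/Filename
        Body=f,
        ContentType="application/octet-stream",   # Content Type
    )
```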

Encrypt data

This block enables the encryption of the generated output file using the PGP algorithm. All the fields within this block become mandatory once you enable the Encrypt data action block. You can use the Is Enabled drop-down to control the activation of this block.

| Field name | Description |
| --- | --- |
| Encryption Algorithm | The algorithm to encrypt the file. At present, only the PGP algorithm is supported; hence, PGP encryption is always selected by default. |
| Public User Id | Public user ID. |
| Public Key File | Server location of the public key file. |
| Is Enabled | Enable or disable the encrypt function. |
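
As with decryption, the encryption can be sketched with the python-gnupg library; Connect+ performs this internally, and the key file, user ID, and filenames below are placeholders.

```python
import os
import gnupg  # python-gnupg

# Use a throwaway keyring directory for the sketch.
os.makedirs("/tmp/gnupg", exist_ok=True)
gpg = gnupg.GPG(gnupghome="/tmp/gnupg")

# Import the public key from its file location (Public Key File).
with open("public_key.asc") as f:
    gpg.import_keys(f.read())

# Encrypt the output file for the recipient's Public User Id.
with open("output_file.csv", "rb") as plain:
    result = gpg.encrypt_file(plain, recipients=["user@example.com"],
                              always_trust=True,  # trust the imported key
                              output="output_file.csv.pgp")
print(result.ok, result.status)
```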

Schedule trigger

This block enables you to schedule the time at which the dataflow should be triggered.

You can configure the dataflow activity to execute daily, hourly, or on a minute basis.

🚧 Note: The minimum triggering interval for the dataflow is 5 minutes; even if a smaller value, such as 1 minute, is entered, the dataflow still triggers at 5-minute intervals.

Schedule trigger at specified minutes

Select Minutes to schedule a trigger at specific minute intervals. The minimum duration allowed is 5 minutes.

Schedule hourly trigger

  • Select Every hour(s) to schedule the dataflow trigger at regular intervals, such as every 2 hours. Specify the frequency in the "Every ___ hour(s)" field.
  • Select At to set a specific time for dataflow execution, such as 13:00. Configure the time using the provided hour and minute drop-down menus.

Schedule daily trigger

  • Select Every day(s) to schedule the dataflow trigger at regular intervals, such as every 2 days. Specify the frequency in the "Every ___ day(s)" field.
  • Select Every week day to run the dataflow at a specific time on weekdays, such as 13:00. Configure the time using the provided hour and minute drop-down menus.
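
The 5-minute floor described in the note above can be expressed as a small sketch; the scheduling logic itself is internal to Connect+.

```python
from datetime import datetime, timedelta

# A requested interval below 5 minutes is still executed every 5 minutes.
MIN_INTERVAL_MINUTES = 5

def next_trigger(last_run: datetime, requested_minutes: int) -> datetime:
    interval = max(requested_minutes, MIN_INTERVAL_MINUTES)
    return last_run + timedelta(minutes=interval)

print(next_trigger(datetime(2024, 1, 1, 13, 0), 1))   # 13:05, not 13:01
print(next_trigger(datetime(2024, 1, 1, 13, 0), 30))  # 13:30
```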