Data Extract


    Configure the export of raw transactional and fixed data from the reporting database. Users can extract data from standard sources such as call attempts at specified times and for selected Campaign groups or campaigns.

    1. Go to Menu > Reports > Data Extraction.

    2. In the Select Campaign screen, perform the following:

      1. Click Add Data Extract.

      2. Add a Name and a Description of the Data Extract.

      3. Select the source to extract the data from: Master or Transactional.

      4. Select the Data Source from the dropdown list. This is active only for Transactional data extraction.

      5. Select the Campaign Group. The available campaign groups are listed based on the selected Data Source.

      6. Select the Campaigns. The available campaigns are listed based on the selected Data Source.

    3. Click Next.

    4. Select a Data Source from the list. A Data Source is a set of fields available for selection. Standard data sources are listed based on the selected Data type.

    5. Move the Available Fields to the Selected Fields. Example: Call Activity is one data source. The data source provides details about call attempts made and the results of these attempts.

    6. Click Next.

    7. In the Edit Schedule Configuration, perform the following:

      1. Select the required Run Type from Regular Intervals, Scheduled Time, and On Demand. If the selected run type is On Demand, enter the Start Date and End Date.

      2. Select the Run Days. You can select multiple days.

      3. Select the Time for EOD.

      4. Enter the File Name.

      5. Select the File Extension from csv and txt. If the selected file extension is txt, select the Column Separator from the dropdown.

        Note:

        If data extracted from any table has JSON string, use the txt format to save the file. For example, the Audit Log table contains data in a JSON string.

      6. Enable the Table Specific File Creation. This appends the table name to the data extract file. You cannot disable this switch. Enable the other options if needed; the options visible depend on the selected Data Source.

      7. Enable the File Header Required if you need a file header.

      8. The Empty File Required option is enabled automatically when Campaign Specific File Creation toggle is ON. This writes a file with no records. If you do not require an empty file, turn this OFF. This is visible only if the selected data type is Master.

      9. Enable the Add Double Quote to include double quotes. Data for each field is embedded with double quotes.

      10. Enable the Append Date Time to append the server time. The file is saved with the server time appended to the file name.

        Note:

        This is mandatory if you select the Run Type as On Demand. Even for other Run Types, we recommend using the Append Date Time option. This avoids accidental overwriting of extracted files.

    8. Click Save.
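    The txt format with a column separator (step 7.5), together with the Add Double Quote option (step 7.9), keeps JSON-string fields such as Audit Log records intact. A minimal sketch of why this matters; the field names and table data here are illustrative, not the product's schema:

```python
import csv
import io
import json

# Illustrative rows: the Detail column holds a JSON string whose embedded
# commas would break a plain comma-separated export.
rows = [
    {"AuditId": 1, "Detail": json.dumps({"action": "login", "user": "agent1"})},
    {"AuditId": 2, "Detail": json.dumps({"action": "logout", "user": "agent1"})},
]

buf = io.StringIO()
# delimiter="|" ~ txt Column Separator; QUOTE_ALL ~ Add Double Quote
writer = csv.writer(buf, delimiter="|", quoting=csv.QUOTE_ALL)
writer.writerow(["AuditId", "Detail"])  # ~ File Header Required
for r in rows:
    writer.writerow([r["AuditId"], r["Detail"]])

# Reading the file back, the JSON string survives as a single column.
parsed = list(csv.reader(io.StringIO(buf.getvalue()), delimiter="|"))
assert parsed[1][1] == rows[0]["Detail"]
```

    Because every field is quoted and the separator is not a comma, the embedded JSON punctuation never splits a column.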

    Storage Destination

    The Storage Destination screen allows you to configure where the data extraction file is stored.

    1. Go to Reports > Storage Destination. By default, Shared Drive is selected and the fields below are populated.

    2. Select the Storage Type from Shared Drive, S3, and Google Cloud Storage.

    3. If you select S3 storage, then perform the following:

      1. Enter the S3 Path that stores your extraction data. This is the absolute path on the Amazon S3 bucket where you intend to store the extraction data. Example, bucket:\DE\.

      2. Select the Is Role based Authentication checkbox, if required.

      3. Enter the AWS Region End Point. This is the region that your AWS S3 bucket is located in.

      4. Enter the AWS Access Key. This is the key to access your AWS S3 bucket. Access keys are used to sign the requests you send to Amazon S3; AWS validates the key and allows access.

      5. Enter the KMS Encryption if you want the data to be encrypted using AWS KMS encryption.

      6. Enter the AWS Secret Key. This is the secret key (like the password) for the AWS Access Key entered above. The combination of an access key ID and a secret access key is required for authentication.

      7. Enter the Server Side Encryption. This is the encrypt/decrypt key, indicating that the purged data is encrypted using AWS Key Management Service (KMS) encryption.

      8. Enter the KMS Key. This is the key to decrypt the data on S3 bucket.

      9. Enter the Archive Path that stores your archived data. Example, bucket:\DE\archive\.

        Note:

        When giving the path, do not include any slash or backslash at the beginning. For example, if you require your data to be archived in the LCMArchive folder of the machine with IP address 172.20.3.74, enter the Path as LCMArchive. If you are using a subfolder under LCMArchive, specify the correct path, for example LCMArchive\PurgeData.

      10. Click Save.

    4. If you select Shared Drive storage, then perform the following:

      1. Enter the IP/Host Name of the device that stores your archived data.

      2. Enter the User ID and the Password of the user who accesses the drive to store the data. This should be a combination of domain and username. Example, <domain>\User ID.

      3. Enter the Extraction Path of the shared drive where your data is to be extracted.

      4. Enter the Archive Path of the shared drive where your data is to be archived.

        Note:

        When adding a path, do not include any slash or backslash at the beginning. For example, if you require your data to be archived in the LCMArchive folder of the machine with IP address 172.xx.x.xx, enter the Path as LCMArchive. If you are using a subfolder under LCMArchive, specify the correct path, for example LCMArchive\PurgeData.

      5. Click Save.

    5. If you select Google Cloud storage, then perform the following:

      1. Enter the Data Extraction Path of Google Cloud Storage that stores your extraction data. This is the absolute path on the Google Cloud Platform where you intend to store the extraction data.

      2. Enter the Account Type. This is the account type used to access the Google Cloud Storage. Use service_account as the default account type.

      3. Enter the Private Key of the Google Account to access the Google Cloud Storage to place the archived data.

      4. Enter the Client Email of the Google Cloud Platform client account used to access the Google Cloud Storage.

      5. Enter the Archive Path of Google Cloud Storage where the application stores the archived data.

    6. Click Save.

      Note:

      Do not use any special characters in file names, such as /, \, :, *, ?, <, >, and |.
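    The file-name rule in the note above can be checked with a small sketch; the function name is hypothetical, not part of the product:

```python
# Reject the characters the note forbids in extract file names:
# / \ : * ? < > and |
FORBIDDEN = set('/\\:*?<>|')

def is_valid_extract_name(name: str) -> bool:
    """Return True when the file name is non-empty and contains no forbidden character."""
    return bool(name) and not (FORBIDDEN & set(name))

assert is_valid_extract_name("call_activity_2024")
assert not is_valid_extract_name("call*activity")
```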

    Edit Data Extract

    1. Select the Data Extract and click Edit under Action.

    2. Update the parameters and click Save.

    3. Enable the Activate switch to activate the data extraction process.

    Delete Data Extract

    1. Select the Data Extract and click Delete under Action.

    2. Click Ok on the confirmation pop-up.

    Fields

    The following list describes each field:

    Name

    Name of the data extract configuration.

    Description

    Description of the data extract configuration.

    File Name

    Name of the file that saves the extracted data.

    Job History

    Job History of the data extract configuration. To access the job history details, click the adjacent button to expand the dropdown history details.

    Master Type

    Type of source. This extracts data fields from Master data sources.

    Transactional

    Type of source. This extracts data fields from Transactional data sources.

    Note:

    Select Campaign Groups, Campaigns, or both only if you select Transactional.

    Campaign Group

    List of Campaign groups based on the selected data source.

    Campaign

    List of Campaigns based on the selected data source.

    Data Source

    List of Data Sources. There are standard data sources available in the system.

    Regular Intervals Run Type

    Run the Data Extraction at the configured regular intervals. Use the number panel or type a value to complete the Time Intervals in Mins field. You can select intervals of 30 minutes. The Data Extraction is generated periodically at the interval configured here.

    Scheduled Time Run Type

    Schedule the Data Extraction generation at a specific time each day.

    On Demand Run Type

    Generates the Data Extract on demand.

    Run Days

    Days on which the data extraction runs. You can select multiple days.

    IP/Host Name

    Displays the IP address or the host name of the device that stores your archived data.

    User ID

    Displays the user ID of the user that accesses the above drive to store the data. This must be a combination of domain and username. For example, <domain>\User ID.

    Password

    Displays the password for the above user to access the shared drive.

    Extraction Path

    Displays the path on the shared drive where your data is to be extracted.

    Archive Path

    Displays the path on the shared drive where your data is to be archived.

    S3 Path

    S3 Path that stores your extraction data. This is the absolute path on the Amazon S3 bucket where you intend storing the extraction data.

    Is Role based Authentication

    Allows role-based authentication.

    AWS Region End Point

    This is the region that your AWS S3 bucket is located in.

    AWS Access Key

    Key to access your AWS S3 bucket. Access keys are used to sign the requests you send to Amazon S3; AWS validates the key and allows access.

    KMS Encryption

    AWS KMS encryption allows you to encrypt the data.

    AWS Secret Key

    This is the secret key (like the password) for the AWS Access Key entered. The combination of an access key ID and a secret access key is required for authentication.

    Server-Side Encryption

    This is the encrypt or decrypt key, indicating that the purged data is encrypted using AWS Key Management Service (KMS) encryption.

    KMS Key

    This is the key to decrypt the data on the S3 bucket.

    Archive Path

    Path to store your archived data.

    Account Type

    This is the account type used to access the Google Cloud Storage. Use service_account as the default account type.

    Private Key

    This is the Private Key of the Google Account to access the Google Cloud Storage to place the archived data.

    Client Email

    This is the email address of the Google Cloud Platform client account used to access the Google Cloud Storage.

    Archive Path

    This is the path on Google Cloud Storage where the application stores the archived data.
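    The Account Type, Private Key, and Client Email fields above correspond to entries in a standard Google Cloud service-account credential. A minimal sketch with placeholder values; the project name and email address are hypothetical:

```python
# Mapping of the Google Cloud Storage fields to the standard
# service-account credential structure. All values are placeholders.
gcs_credentials = {
    "type": "service_account",  # Account Type field
    "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",  # Private Key field
    "client_email": "extract-writer@example-project.iam.gserviceaccount.com",  # Client Email field (hypothetical)
}

assert gcs_credentials["type"] == "service_account"
```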

    Notes:

    • The report is extracted from the beginning of the day to the scheduled time configured, and the file is placed at the configured storage location.

    • When you extract this report a second time, the file containing the first data extraction is moved to the Archive Path configured. The latest extraction is placed in the configured storage location.

    • When you extract this report a third time, the file containing the second iteration is moved to the Archive Path configured, and the first iteration file is deleted. The third iteration data is placed in the configured storage location.

    • All the above three conditions apply only when the Campaign Specific File Creation and Append Date Time switches are OFF.
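    The archiving behaviour described in these notes can be sketched as follows; the helper name and file layout are assumptions for illustration, not the product's implementation:

```python
import pathlib
import tempfile

def rotate_extract(storage: pathlib.Path, archive: pathlib.Path,
                   new_data: str, file_name: str) -> None:
    """Latest extraction goes to storage; the previous one moves to the
    Archive Path; the older archived copy is deleted (one copy kept)."""
    current = storage / file_name
    if current.exists():
        for old in archive.glob(file_name):   # delete the older archived copy
            old.unlink()
        current.replace(archive / file_name)  # previous extraction -> archive
    current.write_text(new_data)              # latest extraction -> storage

# Simulate three extraction runs in a throwaway directory.
with tempfile.TemporaryDirectory() as tmp:
    storage = pathlib.Path(tmp) / "extract"
    archive = pathlib.Path(tmp) / "archive"
    storage.mkdir()
    archive.mkdir()
    for i in range(3):
        rotate_extract(storage, archive, f"iteration {i}", "calls.csv")
    latest = (storage / "calls.csv").read_text()
    archived = (archive / "calls.csv").read_text()
```

    After the third run, the storage location holds the third iteration, the archive holds the second, and the first iteration file is gone, matching the notes above.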

    Data Extraction Fields

    The Data Extract service enables the user to download data from standard data sources, such as call attempts and agent activities, from the selected platform at specified times and for selected campaign groups or campaigns.

    Transaction Field Details

    The following tables list the extracted fields and their details:

    • Call Activity

    • Agent Activity

    • Global Upload

    • List Upload

    • Scrub List Info

    • Audit Log

    • Audit Trail

    • Anonymous Inbound SMS

    • SMS Inbound Session

    • SMS Outbound Session

    • SMS Delivery Status

    • Upload Error

    • Global Upload Error

    • API Upload Error

    • Non-Call Activity

    • Contact Business Data

    • List Info

    • Upload History

    • Call Trace

    Master Field Details

    The following tables list the extracted fields and their details:

    • Agents

    • Campaign Filter Groups

    • Outcomes

    • Campaign Groups

    • Channels

    • Categories

    • Campaign

    • Modes

    • Users

    • Campaign Business Fields

    • Contact Status

    • Dial Plan Details

    • Profile

    The table contains the following columns:

    • Source Table: The table name on the application/reporting database that is the data source.

    • Column Name: The field name of the extracted data.

    • Display Name: The field name as displayed on the UI.

    • Data Type: The data type for the field.

    • Description: The description for the field.

    • Platform: The dialer platform that the data is extracted for.

