Data Extract
Configure the export of raw transactional and fixed data from the reporting database. Users can extract data from standard sources such as call attempts at specified times and for selected Campaign groups or campaigns.
Go to Menu > Reports > Data Extraction. The page displays the following information:
Name – Specifies the name of the data extraction configuration.
Description – Provides additional context or details about the configuration.
File Name – Defines the name of the output file that will be generated by the report.
Activate – Indicates whether the data extraction configuration is currently active and running.
Job History – Opens a popup that displays the history of data extraction jobs with the following information:
The popup also displays the time zone selected for the data extract report; dates and times in the report are based on this time zone.
Report Start Time – Displays when the data extraction process starts.
Report End Time – Displays when the data extraction process ends.
Scheduled Time – Shows the time the report is scheduled to be generated.
Status – Indicates the current status of the report (for example, Processing, Completed, or Failed).
Records – Shows the number of records included in the generated report.
Action – Provides a download button to download the report file. This appears only when the report status is Completed.
Search Option – Allows users to search and filter the job history list to quickly find specific entries.
Action – Allows the user to edit or delete the data extraction configuration.
Storage Destination
The Storage Destination screen allows the user to configure where the data extraction file is stored.
Go to Reports > Storage Destination. By default, Shared Drive is selected and the fields below are populated.
Select the Storage Type from Shared Drive, S3, and Google Cloud Storage.
If you select S3 storage, then perform the following:
Enter the S3 Path that stores your extraction data. This is the absolute path on the Amazon S3 bucket where you intend to store the extraction data. For example, bucket:\DE\.
Select the Is Role based Authentication checkbox, if required.
Enter the AWS Region End Point. This is the region that your AWS S3 bucket is located in.
Enter the AWS Access Key. This is the key used to access your AWS S3 bucket. Access keys sign the API requests that you send to Amazon S3; AWS validates the key and allows access.
Enter the KMS Encryption if you want the data to be encrypted using AWS KMS encryption.
Enter the AWS Secret Key. This is the secret key (like the password) for the AWS Access Key entered above. The combination of an access key ID and a secret access key is required for authentication.
Enter the Server Side Encryption. This is the encrypt/decrypt key, specifying that the data is encrypted using AWS Key Management Service (KMS) encryption.
Enter the KMS Key. This is the key to decrypt the data on the S3 bucket.
Enter the Archive Path that stores your archived data. For example, bucket:\DE\archive\.
Note:
When giving the path, do not include any slash or backslash at the beginning. For example, if you require your data to be archived in the LCMArchive folder of the machine with IP address 172.20.3.74, enter the Path as LCMArchive. If you are using a subfolder under LCMArchive, specify the full path, for example, LCMArchive\PurgeData.
Click Save.
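For reference, the S3 fields above map onto standard AWS client parameters. The following is a minimal Python sketch using boto3; the region, bucket, key prefix, and KMS key ARN are illustrative placeholders, and the application performs this upload for you once the destination is saved.

```python
import boto3

# How the S3 storage fields correspond to boto3 parameters (illustrative values).
s3 = boto3.client(
    "s3",
    region_name="us-east-1",               # AWS Region End Point
    aws_access_key_id="AKIA...",           # AWS Access Key
    aws_secret_access_key="...",           # AWS Secret Key
)
# With "Is Role based Authentication" selected, omit the explicit keys and
# let boto3 resolve credentials from the attached IAM role:
#   s3 = boto3.client("s3", region_name="us-east-1")

with open("extract.csv", "rb") as body:
    s3.put_object(
        Bucket="bucket",                   # bucket from the S3 Path
        Key="DE/extract.csv",              # folder from the S3 Path, e.g. bucket:\DE\
        Body=body,
        ServerSideEncryption="aws:kms",    # Server Side Encryption
        SSEKMSKeyId="arn:aws:kms:...",     # KMS Key
    )
```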
If you select Shared Drive storage, then perform the following:
Enter the IP/Host Name of the device that stores your archived data.
Enter the User ID and the Password of the user who accesses the drive to store the data. This should be a combination of domain and username, for example, <domain>\User ID.
Enter the Extraction Path of the shared drive where your data is to be extracted.
Enter the Archive Path of the shared drive where your data is to be archived.
Note:
When adding a path, do not include any slash or backslash at the beginning. For example, if you require your data to be archived in the LCMArchive folder of the machine with IP address 172.xx.x.xx, enter the Path as LCMArchive. If you are using a subfolder under LCMArchive, specify the full path, for example, LCMArchive\PurgeData.
Click Save.
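For reference, writing to such a share with domain credentials can be sketched in Python using the third-party smbprotocol package; the host, share, file, and account names below are illustrative placeholders, and the application performs this step for you.

```python
import smbclient  # provided by the smbprotocol package

# Authenticate against the shared drive (illustrative values).
smbclient.register_session(
    "172.20.3.74",                     # IP/Host Name
    username=r"EXAMPLE\extract_user",  # <domain>\User ID
    password="...",                    # Password
)

# Extraction Path "LCMArchive": no leading slash or backslash, per the note above.
with smbclient.open_file(r"\\172.20.3.74\LCMArchive\extract.csv", mode="w") as f:
    f.write("id,phone\n1,5550100\n")
```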
If you select Google Cloud storage, then perform the following:
Enter the Data Extraction Path of Google Cloud Storage that stores your extraction data. This is the absolute path on the Google Cloud Platform where you intend to store the extraction data.
Enter the Account Type. This is the account type used to access the Google Cloud Storage. Use service_account as the default account type.
Enter the Private Key of the Google Account to access the Google Cloud Storage to place the archived data.
Enter the Client Email of the Google Cloud Platform client account used to access the Google Cloud Storage.
Enter the Archive Path of Google Cloud Storage where the application stores the archived data.
Click Save.
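For reference, the Google Cloud fields above map onto a standard service-account client. The following is a minimal Python sketch using the google-cloud-storage library; the project, bucket, key, and email values are illustrative placeholders.

```python
from google.cloud import storage
from google.oauth2 import service_account

# How the Google Cloud fields correspond to a service-account credential.
info = {
    "type": "service_account",                                  # Account Type
    "private_key": "-----BEGIN PRIVATE KEY-----\n...",          # Private Key
    "client_email": "extract@example.iam.gserviceaccount.com",  # Client Email
    "token_uri": "https://oauth2.googleapis.com/token",
}
creds = service_account.Credentials.from_service_account_info(info)
client = storage.Client(project="example-project", credentials=creds)

# Upload the extract to the bucket and prefix from the Data Extraction Path.
client.bucket("example-bucket").blob("DE/extract.csv").upload_from_filename("extract.csv")
```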
Note:
Do not use special characters in file names, such as /, \, :, *, ?, <, >, and |.
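As an illustration, this restriction can be expressed as a small Python check; the helper name is hypothetical.

```python
import re

# Characters the note above disallows in file names.
FORBIDDEN = re.compile(r"[/\\:*?<>|]")

def is_valid_file_name(name: str) -> bool:
    # True when the name contains none of the forbidden characters.
    return FORBIDDEN.search(name) is None

assert is_valid_file_name("call_activity_extract.csv")
assert not is_valid_file_name("call*activity?.csv")
```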
Edit Data Extract
Select the Data Extract and click Edit under Action.
Update the parameters and click Save.
Enable the Activate switch to activate the data extraction process.
To delete a Data Extract, click Delete under Action. Click Ok on the confirmation pop-up.
Fields
Fields | Description |
---|---|
Name | Name of the data extract configuration. |
Description | Description of the data extract configuration. |
File Name | Name of the file in which the extracted data is saved. |
Job History | Job History of the data extract configuration. To access the job history details, click the adjacent button to expand the dropdown history details. |
Master Type | Type of source. This extracts data fields from Master data sources. |
Transactional | Type of source. This extracts data fields from Transactional data sources. |
Campaign Group | List of Campaign groups based on the selected data source. |
Campaign | List of Campaigns based on the selected data source. |
Data Source | List of data sources. There are standard data sources available in the system. |
Regular Intervals Run Type | Runs the Data Extraction at regular configured intervals. Use the number panel or enter a value to complete the Time Intervals in Mins field; intervals can be selected in steps of 30 minutes. The Data Extraction is generated periodically at the interval configured here (see the interval sketch after this table). |
Scheduled Time Run Type | Schedule the Data Extraction generation at a specific time each day. |
On Demand Run Type | Generates the Data Extract on demand. |
Run Days | Start day for data extraction. |
IP/Host Name | Displays the IP address or the host name of the device that stores your archived data. |
User ID | Displays the user ID of the user that accesses the above drive to store the data. This must be a combination of domain and username. For example, <domain>\User ID. |
Password | Displays the password for the above user to access the shared drive. |
Extraction Path | Displays the path on the shared drive where your data is to be extracted. |
Archive Path | Displays the path on the shared drive where your data is to be archived. |
S3 Path | S3 Path that stores your extraction data. This is the absolute path on the Amazon S3 bucket where you intend storing the extraction data. |
Is Role based Authentication | Allows role-based authentication. |
AWS Region End Point | This is the region that your AWS S3 bucket is located in. |
AWS Access Key | Key used to access your AWS S3 bucket. Access keys sign the API requests that you send to Amazon S3; AWS validates the key and allows access. |
KMS Encryption | AWS KMS encryption allows you to encrypt the data. |
AWS Secret Key | This is the secret key (like the password) for the AWS Access Key entered. The combination of an access key ID and a secret access key is required for authentication. |
Server-Side Encryption | This is the encrypt or decrypt key, specifying that the data is encrypted using AWS Key Management Service (KMS) encryption. |
KMS Key | This is the key to decrypt the data on the S3 bucket. |
Archive Path | Path to store your archived data. |
Account Type | This is the account type used to access the Google Cloud Storage. Use service_account as the default account type. |
Private Key | This is the Private Key of the Google Account to access the Google Cloud Storage to place the archived data. |
Client Email | This is the Email address of the Google Cloud Platform client account used to access the Google Cloud Storage. |
Archive Path | This is the path on Google Cloud Storage where the application stores the archived data. |
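To illustrate the Regular Intervals run type from the table above, the following sketch lists the run times a 30-minute interval produces from the start of the day; the function name and times are illustrative.

```python
from datetime import datetime, timedelta

def run_times(interval_mins, now):
    # Yield each run time from midnight up to "now" at the configured interval.
    t = now.replace(hour=0, minute=0, second=0, microsecond=0)
    while t <= now:
        yield t
        t += timedelta(minutes=interval_mins)

for t in run_times(30, datetime(2025, 1, 1, 1, 45)):
    print(t.strftime("%H:%M"))  # 00:00, 00:30, 01:00, 01:30
```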
Notes:
The report is extracted from the beginning of the day to the scheduled time configured, and the file is placed at the configured storage location.
When you extract this report a second time, the file containing the first data extraction is moved to the Archive Path configured. The latest extraction is placed in the configured storage location.
When you extract this report a third time, the file containing the second iteration is moved to the Archive Path configured, and the first iteration file is deleted. The third iteration data is placed in the configured storage location.
All the above three conditions apply only when the Campaign Specific File Creation and Append Date Time switches are OFF.
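Taken together, these notes describe a one-deep archive rotation. A minimal Python sketch of that behavior, assuming both locations are plain directory paths and using the hypothetical helper name place_extract:

```python
import shutil
from pathlib import Path

def place_extract(new_file: Path, extraction_dir: Path, archive_dir: Path) -> None:
    # The file name is stable because Campaign Specific File Creation and
    # Append Date Time are OFF, so each run overwrites the same two slots.
    current = extraction_dir / new_file.name
    archived = archive_dir / new_file.name
    if archived.exists():
        archived.unlink()                         # drop the oldest iteration
    if current.exists():
        shutil.move(str(current), str(archived))  # previous extract to archive
    shutil.move(str(new_file), str(current))      # latest extract to storage
```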
Data Extraction Fields
The Data Extract service enables the user to download data from standard data sources, such as call attempts and agent activities, from the selected platform at specified times and for selected campaign groups or campaigns.
Transaction Field Details
The following tables list the extracted fields and their details:
Call Activity
Agent Activity
Global Upload
List Upload
Scrub List Info
Audit Log
Audit Trail
Anonymous Inbound SMS
SMS Inbound Session
SMS Outbound Session
SMS Delivery Status
Upload Error
Global Upload Error
API Upload Error
Non-Call Activity
Contact Business Data
List Info
Upload History
Call Trace
Master Field Details
The following tables list the extracted fields and their details:
Agents
Campaign Filter Groups
Outcomes
Campaign Groups
Channels
Categories
Campaign
Modes
Users
Campaign Business Fields
Contact Status
Dial Plan Details
Profile
The table contains the following columns:
Source Table: The table name in the application/reporting database that is the data source.
Column Name: The field name of the extracted data.
Display Name: The field name as displayed on the UI.
Data Type: The data type for the field.
Description: The description for the field.
Platform: The dialer platform that the data is extracted for.