Data cleaning (also called data cleansing or scrubbing) is a critical step in data analytics, ensuring that results are accurate, consistent, and reliable. Removing duplicate records is a key cleaning technique that directly improves data quality; a short code sketch of the technique follows the list below.

Why Eliminating Duplicate Records is the Correct Answer?
Improves Data Integrity – Prevents misleading results caused by duplicate values.
Enhances Data Accuracy – Ensures that analytics are based on unique and valid information.
Optimizes Performance – Reduces redundancy, improving processing speed and efficiency.
Prevents Reporting Errors – Ensures accurate insights for decision-making.
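As an illustration, here is a minimal sketch of duplicate removal using pandas. The `records` DataFrame, its columns, and the choice of `customer_id` as a deduplication key are all hypothetical; the core call is `DataFrame.drop_duplicates`, used once for exact duplicates and once with a `subset` argument for key-based duplicates.

```python
import pandas as pd

# Hypothetical customer dataset containing one exact duplicate row (101 / a@x.com).
records = pd.DataFrame({
    "customer_id": [101, 102, 101, 103],
    "email": ["a@x.com", "b@x.com", "a@x.com", "c@x.com"],
    "amount": [250.0, 99.5, 250.0, 40.0],
})

# Drop rows that are identical across all columns, keeping the first occurrence.
exact_dedup = records.drop_duplicates(keep="first")

# Business-rule variant (an assumption here): treat rows sharing the same key
# column as duplicates even when other fields differ.
key_dedup = records.drop_duplicates(subset=["customer_id"], keep="first")

print(f"rows before: {len(records)}, after exact dedup: {len(exact_dedup)}, "
      f"after key dedup: {len(key_dedup)}")
```

Exact-duplicate removal is generally safe; key-based removal encodes a business rule, so the chosen key should match how the dataset defines a unique record.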
Why Not the Other Options?

A. Deploy a data visualization tool – Visualization tools help interpret data, but they do not clean it.
B. Adopt standardized data analysis software – Standardized software supports analysis but does not, by itself, eliminate duplicate records.
C. Define analytics objectives and establish outcomes – This step is important for analysis strategy, but it does not clean data.
IIA References:

IIA’s GTAG on Data Analytics – Emphasizes the importance of data cleansing in ensuring reliable analytics.
COBIT 2019 (Data Management Framework) – Highlights duplicate removal as a best practice in data governance.
ISO 8000-110 (Data Quality Standard) – Recommends eliminating duplicate records for high-quality analytics.
✅ Final Answer: D. Eliminate duplicate records.