Pass the ISACA Advanced in AI Audit (AAIA) Questions and Answers with CertsForce

Viewing page 1 out of 3 pages
Viewing questions 1-10
Question # 1:

When auditing a research agency's use of generative AI models for analyzing scientific data, which of the following is MOST critical to evaluate in order to prevent hallucinatory results and ensure the accuracy of outputs?

Options:

A.

The effectiveness of data anonymization processes that help preserve data quality


B.

The algorithms for generative AI models designed to detect and correct data bias before processing


C.

The frequency of data audits verifying the integrity and accuracy of inputs


D.

The measures in place to ensure the appropriateness and relevance of input data for generative AI models


Question # 2:

Which of the following AI system characteristics would BEST help an IS auditor evaluate the system's algorithm?

Options:

A.

The AI system algorithm uses training data to inform decision output.


B.

The AI system provides multiple options for model training.


C.

The AI system provides transparent justification of decisions.


D.

The AI system uses archived transaction data to provide decisions.


Question # 3:

An IS auditor is testing an AI-based fraud detection system that flags suspicious transactions and finds that the system has a high false positive rate. Which of the following testing methods should be prioritized to BEST optimize the detection rate?

Options:

A.

Regression testing


B.

Cross-validation testing


C.

Substantive testing


D.

Benford's Law analysis
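
For readers unfamiliar with the technique named in option B, cross-validation repeatedly holds out a different slice of the data for testing, which helps tune a detection threshold without overfitting. A minimal, illustrative k-fold split in pure Python (all names are ours, not from the exam):

```python
# Illustrative sketch of k-fold cross-validation index splitting.
# Each fold serves once as the held-out test set.

def k_fold_indices(n_samples, k):
    """Yield (train, test) index lists for k roughly equal folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        test_set = set(test)
        train = [i for i in range(n_samples) if i not in test_set]
        yield train, test
        start += size

folds = list(k_fold_indices(10, 5))  # 5 folds over 10 samples
```

Evaluating the fraud model's false positive rate on each held-out fold, rather than on the data it was trained on, gives a more honest estimate of detection performance.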


Question # 4:

The PRIMARY purpose of maintaining an audit trail in AI systems is to:

Options:

A.

Facilitate transparency and traceability of decisions.


B.

Analyze model accuracy and fairness.


C.

Measure computational efficiency.


D.

Ensure compliance with regulatory standards for AI.


Question # 5:

Which of the following is MOST important to have in place when initially populating data into a data frame for an AI model?

Options:

A.

The box charts, histograms, scatterplots, and Venn diagrams that identify correlations and outliers


B.

The code for separating data into training and testing data sets


C.

An analysis of exploratory data that checks for incorrect data types, null values, and duplicate entries


D.

An approved risk assessment for including, excluding, or subsequently dropping data attributes from the model
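
The exploratory checks described in option C can be sketched in a few lines. This is a pure-Python stand-in for what would typically be a pandas workflow; the field names and sample rows are assumptions for illustration:

```python
# Illustrative pre-load checks for incorrect data types, null values,
# and duplicate entries before populating a model's data frame.

rows = [
    {"id": 1, "score": 0.9},
    {"id": 2, "score": None},    # null value
    {"id": 1, "score": 0.9},     # duplicate entry
    {"id": 3, "score": "high"},  # incorrect data type
]

nulls = [r for r in rows if any(v is None for v in r.values())]
n_duplicates = len(rows) - len({tuple(sorted(r.items())) for r in rows})
bad_types = [r for r in rows
             if not isinstance(r["score"], (int, float, type(None)))]
```

Flagging these issues before the data enters the frame is cheaper than tracing model errors back to them later.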


Question # 6:

An organization uses an AI-powered tool to detect and respond to cybersecurity threats in real time. An IS auditor finds that the tool produces excessive false positives, increasing the workload of the security team. Which of the following techniques should the auditor recommend to BEST evaluate the tool's effectiveness in managing this issue?

Options:

A.

Use a log analysis tool to examine the types and frequency of alerts generated.


B.

Implement a benchmarking tool to compare the system's alerting capability with industry standards.


C.

Conduct penetration testing to assess the system's ability to detect genuine threats.


D.

Deploy a machine learning (ML) validation tool to increase the model's accuracy and performance.
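
The log-analysis approach in option A amounts to tallying alerts by type and measuring how many were false positives. A minimal sketch, assuming a simple labeled-alert log format (the fields and values are illustrative):

```python
# Illustrative alert-log summary: frequency by type and overall
# false positive rate, as an auditor might compute from triage records.
from collections import Counter

alerts = [
    {"type": "malware",  "verdict": "false_positive"},
    {"type": "phishing", "verdict": "true_positive"},
    {"type": "malware",  "verdict": "false_positive"},
    {"type": "exfil",    "verdict": "false_positive"},
]

alerts_by_type = Counter(a["type"] for a in alerts)
fp_rate = sum(a["verdict"] == "false_positive" for a in alerts) / len(alerts)
```

A breakdown like this shows which alert categories drive the analyst workload, which is the evidence needed to evaluate the tool's effectiveness.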


Question # 7:

Which of the following is MOST important to review in order to gain assurance that an AI model is performing without biases?

Options:

A.

AI training data


B.

AI development environment


C.

AI model adaptability


D.

AI model temperature


Question # 8:

During an audit of an investment organization's AI-powered software, an IS auditor identifies a potential security risk. What is the GREATEST risk associated with staff exfiltrating organizational data to a generative AI tool?

Options:

A.

Data contamination due to biased AI model outputs


B.

Unauthorized data disclosure


C.

Potential business disruptions


D.

Excessive reliance on AI-generated insights


Question # 9:

Which of the following is the PRIMARY reason IS auditors must be aware that generative AI may return different investment recommendations from the same set of data?

Options:

A.

Limitations can arise in the quantification of risk profiles.


B.

Neural node access varies each time the process is executed.


C.

Computational logic is based on probabilities.


D.

Servers are reconfigured periodically.
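
The probabilistic computation mentioned in option C is worth unpacking: generative models sample each output from a probability distribution, so identical inputs can produce different recommendations across runs. A toy sketch of temperature-scaled sampling (the labels and logit values are purely illustrative):

```python
# Illustrative softmax sampling with temperature: the same logits can
# yield different outputs on different runs because the result is drawn
# from a probability distribution, not computed deterministically.
import math
import random

logits = {"buy": 2.0, "hold": 1.5, "sell": 0.5}

def sample(logits, temperature=1.0, rng=random):
    scaled = {k: v / temperature for k, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {k: math.exp(v) / z for k, v in scaled.items()}
    r = rng.random()
    cumulative = 0.0
    for label, p in probs.items():
        cumulative += p
        if r < cumulative:
            return label
    return label  # guard against floating-point rounding

random.seed(1)
outcomes = {sample(logits) for _ in range(50)}
```

Lower temperatures sharpen the distribution toward the highest-logit option; higher temperatures flatten it, increasing run-to-run variability.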


Question # 10:

Which of the following controls helps mitigate the risk of competitors poisoning data utilized by a machine learning (ML) model performing sentiment analysis of product reviews?

Options:

A.

Peer reviewing code that acquires product reviews from social media posts


B.

Hiring a marketing firm to text links to customers requesting product reviews for monetary compensation


C.

Augmenting the unbalanced product review data set with the use of oversampling by the model developer


D.

Requiring customers to authenticate access to their accounts prior to writing product reviews
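
Option C's technique, oversampling an unbalanced data set, can be sketched simply: duplicate randomly chosen minority-class rows until each class is the same size. The review data and labels below are assumptions for illustration:

```python
# Illustrative random oversampling: minority-class rows are duplicated
# at random until every label matches the majority-class count.
import random

reviews = [("great", "pos")] * 8 + [("awful", "neg")] * 2

def oversample(data, label_of=lambda row: row[1], rng=random):
    by_label = {}
    for row in data:
        by_label.setdefault(label_of(row), []).append(row)
    target = max(len(rows) for rows in by_label.values())
    balanced = []
    for rows in by_label.values():
        balanced.extend(rows)
        balanced.extend(rng.choice(rows) for _ in range(target - len(rows)))
    return balanced

balanced = oversample(reviews)
```

Note that oversampling addresses class imbalance, not poisoning: duplicating reviews cannot tell genuine reviews from fabricated ones, which is why it differs from controls that restrict who can submit reviews in the first place.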

