In PMI’s treatment of AI in customer-facing environments, responsible AI, privacy, and regulatory compliance are consistently framed as high-impact risk areas. For a telecommunications company using AI chatbots for customer service, any breach of customer data privacy is not just a technical issue but a legal, regulatory, and reputational threat. It may trigger regulatory investigations, fines, lawsuits, and loss of customer trust.
While scalability risks (such as the chatbot failing to handle volume) and integration risks (such as poor connection with existing platforms) may degrade service quality, they are usually remediable through technical improvements, capacity upgrades, or refactoring. By contrast, PMI’s AI governance perspective emphasizes that violations of data protection laws can inflict effectively non-recoverable damage: sanctions, forced shutdown of systems, and long-term brand erosion. Therefore, the possibility that “the solution might breach customer data privacy regulations, leading to legal consequences” is typically assessed as a higher-order risk than operational challenges.
PMI-CPMAI content stresses implementing privacy-by-design, strict access controls, encryption, and compliance checks early in the solution lifecycle. In a feasibility and risk assessment, data privacy and regulatory compliance therefore represent the highest-risk category, making option D the most appropriate answer.