The greatest concern is the lack of documentation of the basic assumptions and formulas used to develop the decision logic. In AI and algorithmic systems, undocumented assumptions, logic, and model foundations undermine transparency, explainability, validation, governance, and auditability, as well as the ability to assess whether outputs are reliable and appropriate. ISACA's AI audit guidance specifically emphasizes questioning assumptions and identifying conditions that are taken for granted within AI systems.
Option D is correct because without documented assumptions and formulas, neither management nor the auditor can effectively understand how the AI system reaches conclusions, validate whether it is working as intended, assess bias or error, or confirm whether controls are adequate. ISACA’s AI-related guidance highlights that assumptions embedded in AI systems must be examined because changing environments, unreliable data, or hidden logic can create serious control weaknesses.
Option A is not the greatest concern by itself. Outsourcing development to vendors may introduce third-party risk, but vendor use is common and can be controlled through contracts, review, validation, and oversight. The mere fact of outsourcing does not create as severe a control weakness as undocumented decision logic.
Option B is a potential concern because annual review might be too infrequent depending on the system’s risk and rate of change. However, even with infrequent review, the organization would still have documented policy and decision logic to examine. A complete lack of documentation of assumptions and formulas is more serious because it prevents meaningful review altogether.
Option C is also a concern, but vendor maintenance access can be managed through access controls, logging, approvals, segregation of duties, and monitoring. Access by contracted developers is not inherently unacceptable. The more fundamental weakness is the absence of documented logic foundations, which undermines governance and auditability at their core.
Therefore, the greatest concern is D: undocumented assumptions and formulas make the AI system insufficiently transparent and far harder to audit, validate, and control.
References (Official ISACA):
ISACA Journal, AI Risk and Mitigation: Tips and Tricks for Auditing in the AI Era — stresses questioning assumptions and identifying taken-for-granted conditions in AI systems.
ISACA Journal, Algorithms and the Auditor — supports the auditor’s role in questioning algorithmic reliability and possible errors.
ISACA White Paper, Leveraging COBIT for Effective AI System Governance — highlights risk assessment and governance throughout the AI life cycle.