The emergence of data analytics allows auditors to test entire populations of data drawn from clients’ information systems, rather than relying solely on sampling methods. While full population testing increases the sufficiency (quantity) of evidence examined, it does not necessarily improve the appropriateness (quality) of that evidence. In particular, full population testing typically relies heavily on client-internal data, which are vulnerable to management manipulation, potentially reducing their appropriateness. Therefore, auditors must remain skeptical when subsequent, more appropriate evidence from external sources contradicts a client’s financial reporting. In this study, we examine whether auditors employing full population testing mistakenly substitute their assessment of evidence sufficiency for their evaluation of evidence appropriateness, leading them to view client-internal evidence as more appropriate than auditors using sample testing do. Consequently, auditors using full population testing may be less likely to act skeptically when subsequent, more appropriate external evidence indicates that a fraud red flag is present. In an experiment, we find that auditors using full population testing, compared to sample testing, are less likely to take skeptical actions when a subsequent external industry growth trend reveals a fraud red flag. We also posit that this unintended consequence is exacerbated when full population testing results are visualized (versus tabulated), a format typically used to present data analytic tests in practice. However, our findings do not support this prediction.
Attendance at this seminar is possible by invitation only. Please send an e-mail to secbs-abs@uva.nl if you are interested in attending this seminar.