Human-AI collaboration increasingly drives decision-making across industries. While AI systems promise efficiency gains by providing automated suggestions for human review, these workflows can trigger cognitive biases that degrade performance. This paper examines the psychological factors that determine when such collaborations succeed or fail. We conducted an experiment with 2,784 participants. A subset first completed a survey measuring their attitudes toward AI and automation. One week later, all participants were shown tables from corporate greenhouse gas emissions reports and asked to verify whether values extracted by an AI system were accurate. Participants could accept the AI’s suggestion or flag it as incorrect. We manipulated three aspects of the task: whether the AI’s suggestions for the first three tables were all correct or all erroneous (testing whether early impressions of AI reliability shape later behavior); whether participants who flagged an error were also required to enter the correct value (adding effort to the act of correcting the AI); and whether participants were offered a bonus payment for high accuracy. Two patterns emerged that challenge conventional assumptions. First, when flagging an AI error required the additional step of typing a corrected value, participants made fewer corrections overall and more often accepted incorrect suggestions. Second, participants’ pre-existing attitudes toward AI were the strongest predictor of performance, outweighing demographic factors. Participants skeptical of AI detected errors more reliably and achieved higher accuracy, whereas those favorable toward automation more often accepted incorrect AI suggestions. Neither the bonus payment nor the accuracy of the AI’s early suggestions meaningfully affected performance.
These findings reveal that successful human-AI collaboration depends not only on algorithmic performance but also on who reviews AI outputs and how review processes are structured.