Academic writing in English can be challenging for non-native English speakers (NNESs). AI-powered rewriting tools can potentially improve NNESs’ writing outcomes at low cost. However, whether and how NNESs make valid assessments of the revisions these tools recommend remains unclear. We report a study in which NNESs used an AI-powered rewriting tool, Langsmith, to polish drafts of their academic essays. We examined the participants’ interactions with the tool through user studies and interviews. Our data reveal that most participants used Langsmith in combination with other tools, such as machine translation (MT), and that those who used MT understood and evaluated Langsmith’s suggestions differently from those who did not. Based on these findings, we argue that NNESs’ quality assessment of AI-powered rewriting tools is shaped by the simultaneous use of multiple tools, offering insights into the design of future rewriting tools for NNESs.