A review published Mar. 11 in the Journal of Hematology & Oncology explores how generative artificial intelligence could support oncologists in interpreting complex genomic data, matching patients to clinical trials, and synthesizing patient information, while emphasizing the need for strict human oversight.
The topic matters because cancer medicine is becoming increasingly data-intensive, leaving clinicians to manage large volumes of genomic and clinical information. Generative AI tools may help by assisting with data interpretation and report drafting, but concerns remain about their reliability and safety.
The review summarizes literature on integrating generative AI into precision oncology practice. It highlights how large language models can help interpret genetic mutations and screen patients for clinical trial eligibility. The authors also discuss the use of vision-language models to draft diagnostic reports that combine imaging, pathology, genomic, and clinical data. However, they caution against unsupervised use of AI due to risks such as "AI hallucination," where models may generate incorrect or fabricated information.
Several examples are presented in the review. For instance, TrialGPT achieved strong agreement with expert assessments (87.3% accuracy) when evaluating patient suitability for clinical trials and reduced processing time by an average of 42.6%. Flamingo-CXR matched or exceeded board-certified radiologists' performance in 94% of chest X-ray cases without clinically relevant findings and produced diagnostic reports equal to or better than human experts in 77.7% of evaluated cases.
Despite these benefits, the review notes that both AI- and human-generated reports had clinically significant errors in nearly a quarter of evaluated radiology cases. To mitigate such risks, the authors recommend "Human-in-the-Loop" workflows requiring expert review before clinical implementation and advocate for Retrieval-Augmented Generation techniques that ensure AI outputs are grounded in current medical knowledge.
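The two safeguards named above can be combined in a single workflow: retrieved evidence is attached to every AI draft, and nothing is released without explicit expert sign-off. The sketch below is a toy illustration of that pattern, not the review's implementation; all function names are hypothetical, and a production system would use embedding-based retrieval over a curated medical knowledge base and a real language model rather than keyword overlap.

```python
# Toy sketch of Retrieval-Augmented Generation (RAG) plus a
# Human-in-the-Loop approval gate. All names are hypothetical;
# real systems use embedding search and an actual LLM.

# Stand-in for a curated store of current medical knowledge.
KNOWLEDGE_BASE = [
    "EGFR exon 19 deletions predict response to EGFR tyrosine kinase inhibitors.",
    "KRAS G12C mutations can be targeted with sotorasib in non-small cell lung cancer.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank knowledge snippets by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def draft_report(finding: str) -> dict:
    """Ground a draft report by attaching retrieved evidence to the finding."""
    evidence = retrieve(finding)
    return {"draft": f"Finding: {finding}", "evidence": evidence, "approved": False}

def human_review(report: dict, approve: bool) -> dict:
    """Human-in-the-Loop gate: the report stays unapproved until an expert signs off."""
    report["approved"] = approve
    return report

report = draft_report("KRAS G12C mutation detected in lung biopsy")
report = human_review(report, approve=True)  # expert reviews and signs off
```

The key design point is that the `approved` flag defaults to false, so no draft can reach a clinical record without passing through the human gate, and every draft carries the evidence it was grounded in for the reviewer to check.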
The authors conclude that while AI can serve as a valuable assistant to oncologists by rapidly synthesizing complex data, it should not be given autonomous decision-making authority at this stage. They emphasize the importance of established privacy standards, addressing demographic biases in training datasets, and maintaining continuous human oversight as essential steps toward safe adoption.