
LLMs & Safety Limits in Toxicology

In toxicological risk assessment, accuracy and transparency are not optional; they are prerequisites for protecting public health and ensuring regulatory trust. Among the growing applications of artificial intelligence (AI) in the life sciences, Large Language Models (LLMs) such as ChatGPT can now support one of the most demanding tasks in toxicology: the derivation of safety limits, in particular health-based guidance values like the Permitted Daily Exposure (PDE). PDEs define the maximum daily exposure to a substance considered unlikely to cause adverse effects in humans over a lifetime. Their derivation requires identifying a critical study, determining the relevant point of departure (e.g., NOAEL or LOAEL), and applying appropriate uncertainty factors. This process demands systematic evidence collection, expert interpretation, and defensible documentation. These are areas where LLMs can provide valuable and efficient support.
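
To make the derivation step concrete, here is a minimal sketch of the conventional PDE formula (as used, for example, in ICH Q3C): PDE = (NOAEL × weight adjustment) / (F1 × F2 × F3 × F4 × F5). The function name and the example factor values are illustrative only; in practice each modifying factor must be justified case by case by the assessing toxicologist.

```python
# Minimal sketch of the conventional PDE calculation (ICH Q3C-style).
# Function name and example values are illustrative, not a validated tool.

def derive_pde(noael_mg_per_kg_day: float,
               body_weight_kg: float = 50.0,  # standard adult weight adjustment
               f1: float = 1.0,   # interspecies extrapolation
               f2: float = 10.0,  # inter-individual variability
               f3: float = 1.0,   # study duration
               f4: float = 1.0,   # severity of the observed toxicity
               f5: float = 1.0    # LOAEL-to-NOAEL extrapolation
               ) -> float:
    """Return the Permitted Daily Exposure in mg/day."""
    return (noael_mg_per_kg_day * body_weight_kg) / (f1 * f2 * f3 * f4 * f5)

# Example: NOAEL of 5 mg/kg/day from a chronic rat study,
# with F1 = 5 for rat-to-human extrapolation.
pde = derive_pde(5.0, f1=5.0, f2=10.0)
print(f"PDE = {pde:.2f} mg/day")  # PDE = 5.00 mg/day
```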

The promise of LLMs

LLMs can assist in the discovery, extraction, and organization of toxicological evidence from vast amounts of unstructured data. In exploratory case studies, ChatGPT demonstrated the ability to identify critical information, summarize evidence, and highlight potential data gaps with impressive speed. Beyond efficiency, AI-assisted approaches can broaden evidence discovery, helping experts locate less visible but relevant documentation and capture underlying biological correlations or mechanistic insights.
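
As an illustration of how such extraction support might be wired up, the hypothetical Python sketch below asks a model for structured study parameters while requiring a verbatim source quote for each value, so a reviewer can verify every extracted field. The prompt, the JSON schema, and the `call_llm` stand-in are assumptions, not a validated extraction protocol.

```python
# Hypothetical sketch of LLM-assisted evidence extraction. `call_llm` stands
# in for any chat-completion client; prompt and schema are illustrative.
import json

EXTRACTION_PROMPT = """\
From the study report below, extract as JSON:
  species, route, study_duration, critical_effect,
  point_of_departure (NOAEL or LOAEL, mg/kg/day), source_quote.
Return null for any field not explicitly stated; do not infer values.

Study report:
{report_text}
"""

def extract_point_of_departure(report_text: str, call_llm) -> dict:
    """Request structured study parameters, keeping the verbatim source
    quote so each extracted value can be checked against the report."""
    raw = call_llm(EXTRACTION_PROMPT.format(report_text=report_text))
    return json.loads(raw)
```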

Limitations of LLMs

However, the limitations of LLMs must be acknowledged. Variability in responses, incomplete references, and “black-box” reasoning can undermine regulatory confidence in LLM-derived conclusions. The human-in-the-loop principle therefore remains essential: while LLMs can assist in data processing, subject matter experts retain full responsibility for interpreting results and validating conclusions. Structured workflows that enforce transparency, traceability, and accountability help ensure the robustness of LLM-assisted toxicological risk assessments by documenting the sources, assumptions, and reasoning steps behind each AI-supported conclusion.
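
One minimal way to encode such traceability, assuming a Python-based workflow, is a per-step record that stores the sources, the verbatim model output, and an expert sign-off. The field names below are illustrative, not a regulatory schema.

```python
# Illustrative traceability record for an LLM-assisted assessment step:
# every AI-supported conclusion carries its sources and an expert sign-off.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AssessmentStep:
    description: str    # e.g. "NOAEL identified from 90-day rat study"
    sources: list[str]  # citations or document identifiers
    model_output: str   # verbatim LLM response used in this step
    reviewed_by: str | None = None  # expert who validated the step
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    @property
    def validated(self) -> bool:
        """A step counts only once a named expert has signed it off."""
        return self.reviewed_by is not None
```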

Human-centered AI

AI in toxicology serves not as a substitute for expert judgment but as an instrument to strengthen it. When applied with a clear understanding of their strengths and limitations, and with appropriate measures to mitigate those limitations, LLMs can enable faster, more accurate, and more transparent derivations of safety limits such as PDEs. By automating repetitive evidence-gathering tasks and time-consuming summarizations, LLMs allow experts to concentrate on what truly matters: critical interpretation and high-level scientific reasoning. The result is a process that combines the speed and scalability of AI with the rigor and review of the expert. This hybrid model ensures that AI contributes meaningfully to the scientific process while maintaining the integrity, accountability, and trust that define regulatory science.

Let’s work together

Whether you want to explore the Onesum Platform or engage our expert consultants, we’d love to learn about your thoughts and challenges.

Onesum Portal

contact@onesumportal.com
